Allison Koenecke, assistant professor of information science at Cornell University, studies fairness in algorithmic systems. She says AI systems do not perform equally for all English speakers, which could negatively impact the hiring process.
Koenecke says:
“Automated speech recognition systems are increasingly being used as a cost-cutting mechanism across domains from hiring screens to court transcriptions. As of now, speech-to-text technology does not perform at the same level for all English speakers. Our research found that AI-based speech-to-text services made nearly twice as many errors when transcribing Black speakers relative to white speakers of ‘standard English’ – and this phenomenon was consistent across many of the top commercial speech-to-text vendors. AI systems also make disproportionately many errors when transcribing non-native English speakers, speakers with regional dialects (e.g., Southern or Scottish accents), speakers with speech impediments, and so on.
“For the most part, human transcribers are more accurate at understanding diverse speakers than an AI speech-to-text service. This is because commercial algorithms were only recently (if at all) specifically trained to understand non-‘standard English’ speakers.
“If left unchecked, using out-of-the-box AI technology to evaluate a job candidate can perpetuate the biases of algorithms known to perform poorly on non-‘standard English’ speakers – who make up about one third of Americans today. Audits are necessary to ensure that the use of AI doesn’t further perpetuate existing harms in hiring for individuals whom we can understand, but AI cannot.”
Media contact: Kaitlyn Serrao
cell: 607-882-1140
Cornell University has dedicated television and audio studios available for media interviews.

- 30 -