False election information from AI chatbots: expert explains what to guard against

Anyone who uses artificial intelligence (AI)-driven chatbots or voice assistants for election information should know these tools might provide misleading or false information. During tests conducted earlier this year, Amazon’s Alexa struggled to correctly identify the winner of the 2020 U.S. presidential election, while Google’s AI overview wrongly described Barack Obama as the first Muslim president.

“Although we call them intelligent, AI tools don’t actually know the answers to our questions,” said Virginia Tech digital literacy expert Julia Feerrar. “Chatbots like ChatGPT or Gemini generate predictive text in response to our prompts, and their ability to do that is built on training data that includes a lot of biased, misleading, or even incorrect information. We need to be mindful about what we ask of these tools and what we do with the information they share.”

Feerrar made recommendations for sorting AI-generated fiction from fact. “First and foremost, I want people to know if and when the information they’re looking at is coming from a generative AI tool. Just being aware that you may see an AI overview at the top of Google search results is a useful starting point. If you do see an AI overview, and it cites a particular source, I’d recommend going to that source directly to get more context,” she said.

“Especially with high-stakes decisions like voting, it’s important to seek out trustworthy information sources. Look directly to professional news organizations that go through thorough fact-checking processes and local government websites, for example,” Feerrar said. 

“When vetting any kind of information, voters can ask themselves: Who created this information? What stake do those creators have in getting it right? Where did it come from originally?” Feerrar said. “Fact-checking websites like Snopes or PolitiFact can also be useful tools when you’re trying to assess the accuracy of a claim you saw online.”

The challenges of sifting through AI-produced content won’t go away anytime soon, she said. “Unfortunately, I think we can expect to see even more questionable online content as we approach the election, whether that content is AI-generated or not. One of the most powerful things we can all do as digital citizens is to notice any time something online sparks a big reaction, pause, and look for more information before taking action.”

About Feerrar 
Julia Feerrar is a librarian and digital literacy educator. She is an associate professor at the University Libraries at Virginia Tech and head of the Digital Literacy Initiatives. Her interests include digital well-being, combatting mis/disinformation, and digital citizenship.

Schedule an interview    
To schedule an interview, contact Mike Allen in the media relations office at [email protected] or 540.400.1700.
