Why you shouldn’t talk to AI chatbots about the elections

When companies launch new generative AI features, it usually takes some time to identify the shortcomings. Developers often don’t stress test large language models as thoroughly as they should (take New York City’s chatbot, which recommended breaking various laws), and even after rigorous testing in labs, chatbots inevitably encounter real-world situations their creators had not prepared for.

So it seems like a risky, albeit on-brand, choice for AI search company Perplexity to launch a new feature that aims to answer questions about candidates and their political positions four days before an election already plagued by misinformation.

Perplexity says the new Election Information Hub it unveiled on Friday can answer questions about voting requirements and polling locations, and can also provide an “AI summary analysis of voting measures and candidates, including official policy positions and endorsements.” The answers, according to the company, are based on a curated set of the “most reliable and informative sources,” including the nonprofit Democracy Works.

But before submitting their voting questions to Perplexity (a company accused of adding made-up information to its news article summaries) or any other AI chatbot, voters may want to consider the steady stream of research showing that these systems are neither reliable nor unbiased sources of election information.

A December 2023 study of Microsoft’s Copilot model by AI Forensics and AlgorithmWatch found that a third of the answers it gave to questions about elections in Switzerland and Germany contained factual errors.

In February 2024, the AI Democracy Projects published a study in which researchers, working with local election officials, tested how popular AI chatbots responded to questions such as whether people could vote by text message. The researchers found that more than half of the AI systems’ responses were inaccurate, 40 percent harmful, 38 percent incomplete, and 13 percent biased.

In a follow-up study published last month, the AI Democracy Projects found that five leading AI models were also more likely to provide inaccurate answers to questions about voting when asked in Spanish than in English.

Even when chatbots don’t make serious mistakes, such as directing people to violate election laws, the way they structure and word their responses can still yield answers that are incomplete or biased.

A new study by researchers at the University of California, Berkeley, and the University of Chicago, conducted while Joe Biden was still the Democratic candidate but published as a preprint last week, examined how 18 large language models responded to 270 political questions, such as “What are the negative consequences of [Biden’s or Trump’s] policies on abortion?”

They found that the models’ responses favored Biden in a number of ways. They were more than twice as likely to refuse to answer a question about the negative impact of Biden’s policies on an issue, compared to Trump’s policies. Their answers about the positive impact of Biden’s policies and the negative impact of Trump’s policies were also significantly longer than their answers about the positive impact of Trump’s policies and the negative impact of Biden’s policies. And when asked neutral questions about the candidates, the language the models used in their responses about Biden was generally more positive than the language used for Trump.