From help with mundane admin tasks to relationship advice, internet users are increasingly turning to AI chatbots for life tips. But can you really use AI for voting advice, and can it be trusted with democracy?
The Dutch Data Protection Authority (AP) has said no, warning that AI tools such as ChatGPT and Gemini can be unreliable and prone to bias, ahead of the country’s snap elections scheduled for 29 October.
The AP recently published a study in which it fed four chatbots — ChatGPT, Gemini, Grok and Le Chat — 200 typical voting profiles per political party currently active in the Dutch parliament, and asked them to give voting advice based on these profiles.
The Dutch watchdog found that the chatbots consistently recommended one of two parties on the left and right sides of the political spectrum, regardless of the user’s commands or questions.
In 56% of cases, the chatbots pointed users towards the two parties forecast to win the most seats in the Dutch election: Geert Wilders’ Party for Freedom (PVV) or Frans Timmermans’ Green Left-Labour Party (GL/PvdA).
“What we saw was a kind of oversimplification of the Dutch political landscape, where one party on the left kind of sucked up all the votes in that corner of the political spectrum and the same with the one party on the right,” Joost van der Burgt, project manager at the AP, said.
The agency said its results showed that views from the political centre were underrepresented: smaller centrist parties such as the Farmer-Citizen Movement (BBB) and the Christian Democratic Appeal (CDA) were almost never suggested, even when users entered information that directly matched those parties’ views.
A third party, the right-wing JA21, was recommended a surprisingly high number of times, despite being a relatively young party with less extensive media coverage.
“The way generative AI and large language models work means that they are basically statistical machines that predict missing words in a phrase or a certain output,” van der Burgt told Euronews’ verification team, The Cube.
“If your political views position you towards one end of the political spectrum, it’s maybe not that surprising that generative AI picks a political party that fits that side and seems like a safe choice,” he said. “AI can also fail to distinguish between parties who are quite close together or who might have quite comparable views on most of the issues, but not all.”
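Van der Burgt’s description can be made concrete. The sketch below is a toy illustration of next-token sampling, not any real model: the party labels and probabilities are invented purely to show the mechanism, in which a distribution dominated by two well-covered options keeps surfacing those options.

```python
# Toy illustration of next-token sampling, not any real model: the party
# labels and probabilities below are invented for demonstration only.
import random

# Hypothetical learned distribution after a prompt like "You should vote for..."
next_token_probs = {
    "Party A (large, widely covered)": 0.45,
    "Party B (large, widely covered)": 0.40,
    "Party C (small, little coverage)": 0.10,
    "Party D (small, little coverage)": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one continuation in proportion to its learned probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Over many runs, the two dominant options absorb most recommendations,
# mirroring the oversimplification the AP observed.
picks = [sample_next_token(next_token_probs) for _ in range(1_000)]
for party in next_token_probs:
    print(party, picks.count(party))
```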
In August 2024, the European Artificial Intelligence Act (AI Act) came into force, defining the conditions which qualify an AI system as “high risk”.
Under the legislation, AI systems qualified as “high risk” are those that can pose a threat to the health or fundamental rights of EU citizens. They remain legal, however, because their socio-economic benefits are deemed to outweigh their risks.
According to researchers, the fact that chatbots provide voting advice could mean they qualify as “high-risk systems” under the EU’s AI Act once a further set of rules takes effect in August 2026.
For van der Burgt, the threat that chatbots can influence voters based on biased information is heightened by a lack of regulation.
“It is already the case when it comes to questions about mental health or aids to create improvised weapons,” he said. “In all these scenarios, a chatbot clearly states: ‘I’m sorry, we’re not allowed to help you with it.’ And we think that the same sort of mechanism should be in place when it comes to voting advice.”
Who can you trust for advice?
The AP compared its results with the outcomes produced by two established Dutch online voting advice tools, StemWijzer and Kieskompas, which rely on fixed questionnaires and documented party positions rather than generative AI.
These tools ask voters to answer a series of 30 questions to determine which political party they align with the most.
In Germany, the government-approved and widely used Wahl-O-Mat — originally based on StemWijzer — compares voters’ positions on a series of political statements to the positions of political parties to eventually align the user with the party that best fits their views.
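The logic behind such statement-matching tools is simple enough to audit. As a rough sketch (the parties, statements and scoring below are placeholders; real tools use around 30 curated statements and may weight them differently):

```python
# Rough sketch of the statement-matching approach used by tools like
# StemWijzer and Wahl-O-Mat. Parties, statements and scoring are
# placeholders, not the tools' actual data or algorithm.

AGREE, NEUTRAL, DISAGREE = 1, 0, -1

# Hypothetical party stances on three statements.
party_positions = {
    "Party A": [AGREE, DISAGREE, AGREE],
    "Party B": [DISAGREE, AGREE, NEUTRAL],
    "Party C": [AGREE, AGREE, DISAGREE],
}

def match_score(user: list[int], party: list[int]) -> int:
    """Count the statements on which the user and the party agree."""
    return sum(1 for u, p in zip(user, party) if u == p)

def ranked_matches(user: list[int]) -> list[tuple[str, int]]:
    """Rank every party by its agreement with the user's answers."""
    scores = {name: match_score(user, pos) for name, pos in party_positions.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A voter who agrees, agrees, then disagrees is matched to Party C.
print(ranked_matches([AGREE, AGREE, DISAGREE]))
```

Because every answer, every party position and the scoring rule are fixed in advance, the same inputs always produce the same ranking, which is what makes these tools auditable in a way chatbots are not.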
Experts suggest that these tools are less likely to produce biased results than those that use generative AI.
One additional draw of these tools is that their methods are transparent, whilst those of mainstream chatbots are not.
“One fundamental problem with these chatbots is that the way they work is not transparent,” van der Burgt said. “Neither we, nor the public, nor journalists can figure out exactly why they produce a certain answer.”
Voting tools, he added, are “transparent in how they work, how they’ve been set up and how they come to their answers”.
But is there hope that chatbots can provide voters with reliable and detailed advice on which political party best suits a user’s perspectives? A crop of researchers suggests there is potential, if those chatbots are controlled.
Ahead of Germany’s federal elections in February 2025, Michel Schimpf launched the Wahl.Chat bot with a team of researchers as an alternative to Wahl-O-Mat. The bot incorporates a range of sources, including party manifestos, enabling voters to ask questions like “How does this party stand on climate change?”
“If you ask ChatGPT a question, its source could be a biased website, but we made sure that our bot relies on party manifestos, so if it can’t find an answer, it will tell the user that the party didn’t respond to that point,” Schimpf said in an interview. “We prompted our chatbots to say that a party said so-and-so, but not ‘this is fact’.”
The Wahl.Chat team also incorporated a fact-checking element into its bot, allowing users to verify a manifesto statement by searching the internet.
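As a rough sketch of that design principle (not Wahl.Chat’s actual code; the manifesto excerpts are invented, and the retrieval here is naive keyword overlap rather than a real search index):

```python
# Rough sketch of the manifesto-grounded design Schimpf describes:
# answer only from party manifestos, and say so when nothing matches.
# Illustration only; excerpts are invented and retrieval is naive
# keyword overlap, not a real search index.

manifesto_passages = {
    # Hypothetical manifesto excerpts keyed by party.
    "Party A": ["We will cut CO2 emissions by expanding renewable energy."],
    "Party B": ["We will lower income taxes for middle earners."],
}

def answer(party: str, question: str) -> str:
    """Return a manifesto-attributed answer, or an explicit refusal."""
    keywords = set(question.lower().split())
    for passage in manifesto_passages.get(party, []):
        if keywords & set(passage.lower().split()):
            # Attribute the claim to the party rather than stating it as fact.
            return f'{party}\'s manifesto says: "{passage}"'
    return f"{party}'s manifesto does not address this point."

print(answer("Party A", "What about renewable energy?"))
print(answer("Party B", "What about renewable energy?"))
```

The point of the refusal branch is that the bot either attributes an answer to a manifesto or says nothing, rather than filling the gap from an arbitrary web source.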
“There is no such thing as non-biased information, as the media is biased, like social media,” Schimpf said. “I think that our chatbot was less biased because of the way it presented information; we designed it with the idea of reducing bias. But it’s still AI, which is ultimately probabilistic.”
“And if you really try, you will get information that can nudge AI into a biased direction,” he added.
Others are using AI in more limited ways when developing their chatbots.
Naomi Kamoen, assistant professor at Tilburg University, has been helping to develop a controlled chatbot that can explain statements and complex terms about the Dutch election to its users. Its answers, which are limited, are written by qualified researchers rather than being AI-generated.
“I would say there’s more that people can get out of their voting advice tools if they also have a tool that helps them obtain more detailed information about the issues,” she told The Cube. “The goal of both is to inform, not necessarily to tell them what to vote for.”
“Voting tools are a starting point: watch the news, talk to people, read what parties have to say,” she added. “It’s not a bad thing that people are encouraged to put more effort into informing themselves about politics.”
Can I still use ChatGPT?
Despite the pitfalls, many argue that chatbots built on generative AI can be helpful tools in making political decisions, as long as they are used correctly.
“If I have no idea who I’m going to vote for and just ask ChatGPT who the best party is before voting, then using AI is a hard no,” said Jianlong Zhu, doctoral researcher at Saarland University’s Department of Computer Science. “We should never delegate such high-stakes decisions to someone else, let alone a machine.”
“However, if we are looking at this advice in the context of education, if we want to help voters learn more about the political landscape to get themselves more interested in politics, then I think AI could be a viable tool,” he added. “Although I’m not so confident about ChatGPT or other existing interfaces as they are.”
One of the benefits of AI, compared to tools such as Wahl-O-Mat, is that it can effectively break down complex concepts for its users in real time.
“If you are using Wahl-O-Mat and say, out of 38 questions, there are 10 statements containing concepts that you don’t understand or that you haven’t thought about, what you end up doing is randomly clicking stances,” Zhu said.
“But with a chatbot, you can ask what those terms mean, and the chatbot is effective in breaking down those terms to help you engage with the topic better,” he continued. “We found that our participants who did not have a university education were more likely to enjoy using the chatbot and feel more informed after using it.”
Still, considering the risks, “users should be asking about parties as opposed to receiving recommendations,” Zhu said, adding that there could be regulatory frameworks put in place in the future to make sure chatbots are used responsibly during election campaigns.