This year, half of the world’s population will go to the polls. More than 4 billion people can vote in one of the roughly 40 elections being held. Not all of those elections are likely to be democratic, as in Russia. But there are particular concerns about how technology such as AI could undermine elections.
The results of this year’s elections are expected to have significant, global implications for the future of democracy, but also for issues such as human rights, security and climate action. In Bangladesh and Taiwan, voters have already gone to the polls to exercise their democratic rights. World powers such as the US, India and the United Kingdom will follow later this year.
I have twice been part of a political party’s campaign team during an election campaign. This introduced me to the role that technology plays in campaigning, such as:
Big data analytics to get to know voters, and targeted ads on social media to reach specific target groups with a tailored message.
But in an age of AI, even the most powerful democracies are frightened by the idea of the negative impact technology can have on voting. So scared, in fact, that my colleagues at the World Economic Forum consider ‘AI-generated disinformation’ to be the biggest risk for 2024.
AI-generated disinformation
Media such as the New York Times and the BBC warned about this as early as 2018. And countless scientific articles show the unprecedented negative effect the spread of disinformation via social media can have on elections, by manipulating online debate and eroding trust.
We’ve seen this before, of course. Cambridge Analytica, a political data analytics company, collected personal data from approximately 87 million Facebook users without consent. This data was used to create psychological profiles and distribute targeted political advertisements. The company claimed to have played a role in influencing approximately 200 elections worldwide.
AI weapons for mass disinformation
“Save your vote for the November elections.” Last month, a robocall went out impersonating US President Joe Biden. The message was aimed at voters in the US state of New Hampshire, advising them not to vote in the state’s presidential primary.
The voice, generated by AI, sounded incredibly real. This illustrates the major problem with rapidly developing AI technology: we are no longer talking about photoshopping minor tweaks to someone’s appearance, but about the large-scale creation and dissemination of very real-looking information, which any technical layperson can fabricate.
An online test by the New York Times showed that many people blindly believe this type of fabricated information. Readers were invited to look at ten images and try to determine which were real and which were AI-generated. Much like what Lubach does in his recurring segment ‘MP or AI’.
The test showed how difficult it is to distinguish between real and AI-generated images. This was supported by multiple academic studies, which found that “faces of white people created by AI systems were perceived as more realistic than real photos,” according to journalist Stuart Thompson.
The democratization of disinformation
Social media reduced the cost of spreading disinformation; AI reduces the cost of producing it. The easy-to-use interfaces of LLM-based AI models have already enabled an explosion of falsified information and so-called “synthetic” content, from advanced voice cloning to counterfeit websites.
The technology is thoroughly democratizing disinformation, putting highly sophisticated tools in the hands of any citizen who wants to promote their favorite candidate by spreading whatever messages they like. People no longer need to be developers or Photoshop wizards to generate text, images or video, nor do they have to work for a Russian or Chinese troll farm to sow chaos. In that respect, anyone can become a creator of political content and try to influence voters or the media.
It’s a global problem
In addition to the Biden example in the US, we have seen other concrete examples worldwide. Venezuelan state media, for instance, spread pro-government messages through AI-generated videos featuring newsreaders from a non-existent international English-language channel. They were generated by Synthesia, a company that produces custom AI avatar videos.
In the recent elections in Slovakia, AI-generated audio recordings circulated on Facebook, impersonating a liberal candidate discussing plans to raise alcohol prices and rig the election. During the Nigerian elections last February, an AI-manipulated audio clip wrongly implicated a presidential candidate in plans to manipulate ballots.
Old fraud, new tricks
Many of the types of fraud we see are not new. Scammers have been using voice cloning and deepfakes to mislead people for years. The big difference now is how simple it has become for any citizen of the world to do this with freely available tools. I previously wrote about startups that can ‘bring back to life’ a deceased person as a 3D hologram, based on a short voice recording, some photos and video footage, with which you can even hold a conversation. In my view, that is a great example of the power of converging technologies. But it can also be misused. According to the Financial Times, the number of scams in the financial sector is growing at a record pace.
Recently, in a masterclass on the use of AI, I showed the participants how easy it is to create a fake ID, in response to a Reddit post that went viral. Amazement quickly turned to astonishment. At many organizations, such as banks, you can now open an account entirely online. To complete the process, you identify yourself by uploading an ID and a selfie. A layperson can now generate a fake ID using tools such as Midjourney. What does this mean for all those Know Your Customer (KYC) processes that depend so heavily on such checks?
Incidentally, the warnings around elections are not limited to the negative impact of generative AI. A recent report from Freedom House shows that more and more governments are also using AI to censor their populations, for example by controlling internet access. Something we have already seen in Turkey, Iran and Ethiopia.
Empty gesture or firm hand?
Normally, large technology companies in the United States are largely left alone. But in October last year, US President Biden signed an executive order that, among other things, mandates the watermarking of AI-generated content. Unfortunately, many experts from both government and the technology companies have indicated that it is not even clear what is meant by ‘watermarks’.
Fortunately, early this year, 20 major tech companies (including Google, Meta, Microsoft, OpenAI, TikTok, X, Amazon and Adobe) pledged to help prevent AI misuse from impacting global elections. They signed the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’, a ‘voluntary’ agreement with eight specific commitments to deploy technology against harmful AI content. Think of detection mechanisms for this type of content, both when it is generated and when it is distributed on their platforms.
How can we ensure that AI and technology are forces for good rather than chaos? – Ravi Agrawal, Editor-in-Chief, Foreign Policy
Actions by Google, OpenAI and Microsoft
In addition, the companies have each announced numerous measures of their own. Google requires political advertisers to make it very clear whether content has been digitally modified or generated with AI. It has also limited the election-related questions its chatbot Gemini will answer. Sister company YouTube will require video creators to disclose whether they post AI-generated content.
ChatGPT owner OpenAI has indicated that it will introduce authentication tools that allow users to see immediately whether an image can be trusted. It has also said it will ban politicians and political campaigns from using its tools. I am curious what this will look like in practice: the Dutch political party BBB, for example, has already indicated that it partly wrote its election manifesto with GPT.
Microsoft, OpenAI’s largest investor, is introducing similar measures to verify content. It also says it has upgraded its search engine Bing to give users results from authoritative, verified sources.
Social media channels
Over the past year, X has been in the news frequently because of the large amount of misinformation spread on the platform. This is partly because new owner Elon Musk fired the platform’s election integrity team, which, according to Musk, was itself undermining ‘election integrity’. X has positioned ‘Community Notes’ as its most important (publicly announced) tool to combat disinformation. However, many experts criticize it as flawed, error-prone and inadequate.
Since the Cambridge Analytica scandal, Facebook parent Meta has been more or less forced to prevent its platforms from being misused to influence elections. Meta also requires political advertisers to clearly state their names and to indicate clearly whether content is AI-generated.
Timestamp it with the trust machine
As I wrote before, I really see the convergence of several new technologies as the next big step forward. You see the coolest examples, such as:
drones powered by IoT and machine learning to spray crops, and
blockchain and AI reinforcing each other: AI checking blockchain code (smart contracts) for bugs, and blockchain storing AI data decentrally while rewarding the owners in crypto when their data is used.
While AI is currently generating a lot of controversy and, because of all the cases described earlier, eroding trust in the technology, blockchain is often described as the ‘trust machine’. By its technological nature, data that has been written to a blockchain cannot be changed afterwards. This underpins confidence in data across countless industries worldwide.
For example, blockchain can authenticate content with so-called ‘timestamping’, giving the viewer confidence that it has not been tampered with. The Amsterdam startup Wordproof won a prestigious European innovation prize and has, for example, already timestamped all articles of NRC. Across the ocean, Fox (parent of Fox News, among others) uses blockchain technology (Polygon) to offer the same solution.
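To make the principle concrete, here is a minimal Python sketch of how content timestamping works. It is purely illustrative: a plain dictionary stands in for the blockchain (Wordproof and Fox use real chains and their own tooling). The content is hashed, the hash is recorded with a timestamp, and any later edit to the content breaks the match.

```python
import hashlib
import time

# Toy stand-in for an on-chain ledger: in a real system the hash would be
# written to a public blockchain via a timestamping service; here we keep
# it in a local dict purely to illustrate the principle.
ledger: dict[str, float] = {}

def timestamp_content(text: str) -> str:
    """Hash the content and record the hash with the current time."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    ledger[digest] = time.time()  # on a real chain this entry is immutable
    return digest

def verify_content(text: str) -> bool:
    """Re-hash the content; any edit changes the hash, so the lookup fails."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return digest in ledger

article = "Breaking: candidate X announces platform Y."
timestamp_content(article)

print(verify_content(article))                # True: content is untouched
print(verify_content(article + " (edited)"))  # False: content was tampered with
```

Note that the blockchain never stores the article itself, only its hash, so the proof works without publishing the content.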
In Guatemala, last year’s elections were even put on the blockchain using this technology, something I wrote about back in 2019 and which more and more governments are now experimenting with.
Robo-volunteers and conversation partners
It’s not all doom and gloom surrounding the elections. In that respect, I also see plenty of areas where technology can have a positive impact.
For example, I recently gave a workshop on AI to a group of politicians and used building your own chatbot as an example. I showed two (pre-made) examples, with a minimal sketch of the idea after this list:
A bot into which I had uploaded the election manifestos of all parties, allowing me to have a very simple, but effective, discussion about the topics closest to my heart.
But also one simply for a party itself. You can upload anything here: voting records, blogs, newspaper articles, and so on. Based on this, you can give a potential voter the opportunity to have a conversation as deep as they want, at a time and place of their choosing.
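Under the hood, such a ‘manifesto bot’ boils down to retrieving the most relevant passages for a question and handing them to an LLM as context. The sketch below assumes nothing about the tools I actually used: the party names, the snippets and the naive word-overlap scoring are invented placeholders, and a real bot would ingest full manifestos and use proper embeddings.

```python
# Minimal sketch of a manifesto bot: retrieve the manifesto passages most
# relevant to a voter's question and build a grounded prompt for an LLM.
# The manifesto snippets below are invented placeholders.
manifestos = {
    "Party A": "We invest in renewable energy and public transport.",
    "Party B": "We lower taxes for small businesses and farmers.",
}

def retrieve(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Rank manifestos by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        manifestos.items(),
        key=lambda kv: -len(q_words & set(kv[1].lower().split())),
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Assemble the retrieved passages and the question into one prompt."""
    context = "\n".join(f"{party}: {text}" for party, text in retrieve(question))
    return (
        "Answer the voter's question using only these manifesto excerpts:\n"
        f"{context}\n\nQuestion: {question}"
    )

# The resulting prompt would be sent to any chat-completion API.
print(build_prompt("Which party wants to invest in public transport?"))
```

Grounding the answer in uploaded documents this way is also what keeps such a bot from simply improvising party positions.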
Those are plain chatbots, but some politicians also deploy virtual volunteers this way. A good example is Shamaine Daniels, a Democratic candidate from the US state of Pennsylvania, who uses Ashley, an AI campaign volunteer, for her campaign. Democratic US presidential candidate Dean Phillips has also launched the ‘Dean.Bot’.
And the answer to fake IDs? Here, people often point to the much-criticized Worldcoin project, which I wrote about earlier. By the design of its technology, it is supposed to provide a genuine ‘proof of humanity’. Experts have their doubts.
The liar’s dividend
Tristan Harris, co-founder of the Center for Humane Technology, has been warning widely about the negative impact of technologies such as AI on elections.
One of the challenges is that campaign speech is protected expression. In most democratic countries, candidates can in principle say almost anything they want without risking legal consequences. Even if their statements are clearly false, they almost always get away with it and are rarely held to account by, say, a judge.
Another challenge is that, according to research, disinformation thrives best on smaller platforms: platforms that, unlike their bigger brothers, have not yet announced how they will tackle disinformation and that often have far less budget for things such as content moderation.
An interesting study from 2018 also shows evidence of the so-called ‘liar’s dividend’. It suggests that as the public becomes more aware that AI can generate convincing video and audio, bad actors will start labeling authentic, real content as ‘fake’, which, according to the authors, makes the information environment for voters even murkier, with all the consequences that entails.
Will AI have the same or an even worse impact on this year’s elections? Time will tell. I still have hope that technology can help prevent abuse by ‘bad actors’. Over the coming year we will see whether all these measures have helped and the great fear turns out to have been unfounded.
Nurture is being replaced by an algorithm. I’m calling out the real-time experiment that’s being run on us and democracy right now. – Ian Bremmer, president of the Eurasia Group