Multiple tech giants have pledged to “detect and counter harmful AI content,” but is controlling AI a “hallucination”?

A worrying trend is starting to take shape. Every time a new technological leap forward falls on an election year, the US elects Donald Trump.

Of course, we haven’t got enough data to confirm a pattern, yet. However, it’s impossible to deny the role that tech-enabled election interference played in the 2016 presidential election. One presidential election later, efforts to tame that interference in 2020 were largely successful. The idea that new technologies can swing an election before being compensated for in the next is a troubling one. Some experts believe that the past could suggest the shape of things to come as generative AI takes centre stage.

Social media in 2016 versus 2020

This is all very speculative, of course. Not to mention that there are many other factors that contribute to the winner of an election. There is evidence, however, that the 2016 Trump campaign utilised social media in ways that had not been seen previously. This generational leap in targeted advertising unquestionably worked to the Trump campaign’s advantage.

It was also revealed that foreign interference across social media platforms had a tangible impact on the result. As reported in the New York Times, “Russian hackers pilfered documents from the Democratic National Committee and tried to muck around with state election infrastructure. Digital propagandists backed by the Russian government” were also active across Facebook, Instagram, YouTube and elsewhere. As a result, concerted efforts to “erode people’s faith in voting or inflame social divisions” had a measurable effect.

In 2020, by contrast, foreign interference via social media and cyber attack was largely stymied. “The progress that was made between 2016 and 2020 was remarkable,” Camille François, chief innovation officer at social media manipulation analysis company Graphika, told the Times.

One of the key reasons for this shift is that tech companies moved to acknowledge and cover their blind spots. Their repositioning was successful, but the cost was nevertheless four years of, well, you know. 

Now, the US faces a third pivotal election involving Donald Trump (I’m so tired). Much like in 2020, unless radical action is taken, another unregulated, poorly understood technology threatens to upset an election through misinformation and direct interference.

Will generative AI steal the 2024 election? 

The influence of online information sharing on democratic elections has become clearer and clearer over the years. Populist leaders, predominantly on the right, have leveraged social media to boost their platforms. Short-form content and content algorithms tend to favour style and controversy over substantive discourse. This has, according to anthropologist Dominic Boyer, made social media the perfect breeding ground and logistical staging area for fascism.

“In the era of social media, those prone to fascist sympathies can now easily hear each other’s screams, echo them and organise,” Boyer wrote of the January 6th insurrection.

Generative AI is not inextricably entangled with social media. However, many fear that the technology will be (and already is being) leveraged by those wishing to subvert the democratic process.

Joshua A. Tucker, a Senior Geopolitical Risk Advisor at Kroll, said as much in an op-ed last year. He notes that ChatGPT “took less than six months to go from a marvel of technological sophistication to quite possibly the next great threat to democracy.”

He added, most pertinently, that “just as social media reduced barriers to the spread of misinformation, AI has now reduced barriers to the production of misinformation. And it is exactly this combination that should have everyone concerned.” 

AI is a perfect election interference tool

While a Brookings report notes that, “a year after this initial frenzy, generative AI has yet to alter the information landscape as much as initially anticipated,” recent developments in multi-modal AI that allow for easier and more powerful conversion of media from one form into another, including video, have undeniably raised the level of risk.

In elections throughout Europe and Asia this year, the influence of AI-powered disinformation is already being felt. A report from the Associated Press also highlighted the democratisation of the process. They note that anyone with a smartphone and a devious imagination can now “create fake – but convincing – content aimed at fooling voters.” The ease with which people can now create disinformation marks “a quantum leap” compared with just a few years ago, “when creating phony photos, videos or audio clips demanded serious application of resources.”

“You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” Henry Ajder, an expert in generative AI based in Cambridge, England, told the AP.

Brookings’ report also admits that “even at a smaller scale, wholly generated or significantly altered content can still be—and has already been—used to undermine democratic discourse and electoral integrity in a variety of ways.” 

The question remains, then. What can be done about it, and is it already too late? 

Continues in Part Two.

