Can a coalition of 20 tech giants save the 2024 US elections from the generative AI threat they created?

Continued from Part One.

In February 2024, 262 days before the US presidential election, leading tech firms assembled at the Munich Security Conference to discuss the future of AI’s relationship to democracy.

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Brad Smith, vice chair and president of Microsoft, in a statement. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.” 

Collectively, 20 tech companies—mostly involved in social media, AI, or both—including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, pledged to work in tandem to “detect and counter harmful AI content” that could affect the outcome at the polls. 

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections

What they came up with is a set of commitments to “deploy technology countering harmful AI-generated content.” The aim is to stop AI from being used to deceive and unfairly influence voters in the run-up to the election.

The signatories pledged to collaborate on tools to detect and counter the distribution of AI-generated content. Alongside these tools, they committed to driving educational campaigns and providing transparency to the public, among other concrete but as yet undefined steps.

The participating companies agreed to eight specific commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate
  • Assessing models in scope of this Accord to understand the risks they may present regarding Deceptive AI Election Content
  • Seeking to detect the distribution of this content on their platforms
  • Seeking to appropriately address this content detected on their platforms
  • Fostering cross-industry resilience to Deceptive AI Election Content
  • Providing transparency to the public regarding how the company addresses it
  • Continuing to engage with a diverse set of global civil society organisations and academics
  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

The complete list of signatories: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.

“Democracy rests on safe and secure elections,” Kent Walker, President of Global Affairs at Google, said in a statement. However, he also stressed the importance of not letting “digital abuse” threaten the “generational opportunity” AI represents. According to Walker, the risk AI poses to democracy is outweighed by its potential to “improve our economies, create new jobs, and drive progress in health and science.”

Democracy’s “biggest year ever”

Many have welcomed the world’s largest tech companies’ vocal efforts to control the negative effects of their own creation. However, others are less than convinced. 

“Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises,” Nora Benavidez, senior counsel for the open internet advocacy group Free Press, told NBC News. She added that “voluntary promises” like the accord “simply aren’t good enough to meet the global challenges facing democracy.”

The stakes are high, as 2024 is being called the “biggest year for democracy in history”. 

This year, elections are taking place in seven of the world’s 10 most populous countries. As well as the US presidential election in November, India, Russia and Mexico will all hold national votes. Indonesia, Pakistan and Bangladesh have already held national elections since December. In total, more than 50 nations will head to the polls in 2024.

Will the accord work? Whether big tech even cares is the $1.3 trillion question

The generative AI market could be worth $1.3 trillion by 2032. If the technology played a prominent role in the erosion of democracy—in the US and abroad—it could cast very real doubt over its use in the economy at large. 

In November 2023, a report by cybersecurity firm SlashNext identified generative AI as a major driver of cybercrime, blaming it for a 1,265% increase in malicious phishing emails and a 967% rise in credential phishing. Data published by European cybersecurity training firm SoSafe found that 78% of recipients opened phishing emails written by generative AI. More alarmingly, 21% went on to click the malicious content those emails contained.

Of course, phishing and disinformation aren’t a one-to-one comparison. However, it’s impossible to deny the speed and scale at which generative AI has been deployed for nefarious social engineering. If the efforts of the technology’s creators prove insufficient, mass disinformation and social engineering campaigns powered by generative AI could have a troubling impact.

“There are reasons to be optimistic,” writes Joshua A. Tucker, Senior Geopolitical Risk Advisor at Kroll.

He adds that tools of the kind promised by the accord’s signatories may make detecting AI-generated text and images easier as we head into the 2024 election season. The US response has also included a rapidly drafted FCC ban on AI-generated robocalls of the kind used to discourage voters.

However, Tucker admits that, given the “longstanding patterns of the cat-and-mouse dynamics of political advantages from technological developments,” we will “still be dependent on the decisions of a small number of high-reach platforms.”
