What AI Will Do to Elections

By Rishi Iyengar

Ahead of India’s last national election in 2019, internal teams at Twitter came across a rumor spreading on the platform that the indelible ink with which the country tags voters’ fingernails contained pig blood.

“That was a disinformation tactic that was intended primarily to disenfranchise Muslims and dissuade them from voting, and it wasn’t true,” Yoel Roth, the social media platform’s then-head of site integrity in charge of elections, recalled in an interview. “So Twitter adapted its policies to say that advancing that type of false voter suppression narrative is a violation of the site’s rules, and the posts would be removed, and the users would be sanctioned.”

Were that to happen today, Roth worries that the platform’s response would be quite different—or worse, nonexistent.

“Twitter,” for one, no longer really exists. The platform is now called X, renamed by billionaire Elon Musk soon after he paid $44 billion for it in 2022. Musk promptly laid off half the company’s employees, including most of the trust and safety teams Roth led—teams that kept misleading and harmful content off the platform. Roth himself resigned from Twitter (as it was then still known) in November 2022, less than a month after Musk took over. Musk’s policy of unfettered free speech, along with an overhaul of the verification system that previously helped users identify authoritative accounts, has led to a flood of disinformation and hate speech on the platform. (Roth himself has faced much of it.)

Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement over the initial Musk-era setup, in which the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022—several of whom worked on trust and safety—while many YouTube employees working on misinformation policy were impacted by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

“At a high level, I’m worried that the large platforms are less prepared for election security threats in 2024 than they have been for any major cycle of global elections since 2016.”

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar change to its political ad rules in 2022. And as precedent has shown, the platforms tend to provide even less coverage outside the West, where major blind spots in local languages and context make misinformation and hate speech not only more pervasive but also more dangerous.

“The people who are working on digital rights in this region have been raising this for a very long time,” said Nighat Dad, a Pakistani digital rights advocate who is also on Meta’s Oversight Board—an independent appeals body created by the company to rule on its content moderation decisions. “The way these companies give importance to the elections in Western democracies, we don’t get that kind of attention.”

“It’s not that they are not doing anything. … I think that they are not doing enough,” Dad said. And she’s far from alone in that feeling: Roth has a similarly grim prognosis.

“I think the landscape of threats facing social media platforms and the voters that use them hasn’t changed that much, but the level of preparation at the large social media platforms has,” he said. “At a high level, I’m worried that the large platforms are less prepared for election security threats in 2024 than they have been for any major cycle of global elections since 2016.”

Álvaro Bernis illustration for Foreign Policy

And that’s all without even mentioning artificial intelligence.

The technology has been around in various forms for years but hit a watershed moment in November 2022, when California-based OpenAI launched ChatGPT. The chatbot and its subsequent upgrades, capable of generating paragraphs of text on command within seconds, kicked off a tech hype cycle for the ages and a quest for supremacy among companies worldwide. Meta, Google, and Microsoft (which has also invested billions of dollars in OpenAI) launched chatbots of their own, as did newer AI firms such as Anthropic, Inflection, and Cohere, as well as Chinese big tech peers Baidu, Tencent, and Alibaba.

It’s not just text, either. OpenAI also owns DALL-E, one of many AI image generators on the market, and tailoring audio and video to one’s needs has also been made infinitely easier by so-called generative AI tools. Those tools—thus named because of their ability to generate a wide variety of online content across various mediums in response to user prompts—unfortunately have the potential to supercharge social media disinformation in a way that hasn’t been seen before.

“It just makes the generation and dissemination of very realistic deepfakes and realistic-seeming deepfakes much, much easier,” said Rumman Chowdhury, a responsible AI fellow at Harvard University’s Berkman Klein Center for Internet & Society and co-founder of the AI safety nonprofit Humane Intelligence. Chowdhury has a more informed perspective than most on the intersection of AI, disinformation, and social media, having previously led Twitter’s ethical AI team—another casualty of Musk’s gutting of the company’s safety workforce. She has spent the time since her forced departure in November 2022 organizing “red-teaming” exercises, including one backed by the White House, in which participants try to push the limits of AI models to see what harms they can generate in a controlled environment.

The dissemination implications may be particularly troubling, Chowdhury said, citing her recent research with UNESCO on AI-enabled gender-based violence and a red-teaming exercise with climate scientists. Generative AI models can help bad actors tailor disinformation to the audiences with whom it is most likely to resonate: mothers of young children, for instance, or people sympathetic to particular causes. Those actors can then design campaigns that not only map out whom to send which posts to but also write the code to send them.

“And again, all of this is very doable,” Chowdhury said. “We literally went and did it for these reports.”

Companies and governments are scrambling to put in place guardrails. Meta, TikTok, Microsoft, and YouTube have all imposed some form of requirements on creators and political advertisers to disclose when their content was created using AI. Government and multilateral initiatives setting regulatory frameworks on the technology include the Biden administration’s recent executive order on AI; the AI Safety Summit, held in the United Kingdom last November; a new AI advisory board at the United Nations; and the European Union’s AI Act, expected to come into force by 2025.

Alondra Nelson, a key player in at least three of those initiatives, expressed cautious optimism about the efforts. As a former director of the White House Office of Science and Technology Policy and deputy assistant to President Joe Biden, Nelson led the Biden administration’s 2022 Blueprint for an AI Bill of Rights and spoke to Foreign Policy days after attending the U.K. summit and being appointed to the U.N. advisory board. “There had been a sense of frustration, of spinning wheels” among many global policymakers on the lack of regulatory action around AI, she said. “So it feels like the dam has broken, but it’s also still just early stages.”

The question, not just among policymakers but also in industry and civil society, is whether those guardrails can be put in place fast enough—and whether everyone can even agree on what to guard against.

OpenAI has been at the forefront of warning of how damaging its tools could be, even as it continues to turbocharge their capabilities. In July 2023, the company formed a partnership called the Frontier Model Forum with Anthropic, Google, and Microsoft to work together on AI safety, and Sam Altman—OpenAI’s longtime leader and chief evangelist—has been among the loudest voices calling for government regulation of his industry.

Just days after he hosted OpenAI’s first developer conference, in which the company unveiled its new, souped-up chatbot as well as the ability to create custom GPTs for specific purposes, Altman issued a particularly dire warning about AI and elections at the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco. “The dangerous thing there is not what we already understand … but it’s all the new stuff—the known unknowns, the unknown unknowns,” he said during a panel on Nov. 16. “There’s a whole bunch of other things that we don’t know because we haven’t all seen what, you know, generative video or whatever can do, and that’s going to come fast and furious during an election year.”

In what was perhaps a recognition of its tools’ potential for misuse, OpenAI appointed Becky Waite, a former Meta executive, as its head of global elections in September. Waite’s mandate and OpenAI’s election strategy remain unclear, however. (Waite did not respond to multiple requests for comment, and OpenAI declined to make her or any other executive available for this story.)

Less than a day after Altman’s onstage election warning, OpenAI was plunged into the kind of chaos everyone worries its products will unleash on democracy. On Nov. 17, the company’s board of directors announced that Altman was being fired, effective immediately, because “he was not consistently candid in his communications with the board.” OpenAI President Greg Brockman quit in protest shortly after, and the duo’s ouster reportedly set off a large-scale mutiny among the company’s employees. Altman and Brockman returned to OpenAI with a new board five days and two interim CEOs later, putting to bed—for now—one of Silicon Valley’s most chaotic news cycles in years.

The OpenAI upheaval was perhaps a grim reminder of another uncomfortable truth: As impactful a technology as AI is, for better or worse, it is largely at the whims of company boards or eccentric billionaires. “There’s information asymmetry around AI models, AI data that the companies hold, and so even if governments don’t want to have a partnership with companies, they must with the big industry players at the very least,” Nelson said. “On the other hand, the partnership by design has to be somewhat adversarial, particularly at a moment when products are being released to the public without the sort of duty of care that I think many would agree needs to happen.”

While government action on AI appears to be moving somewhat quicker than it has with previous technologies, the billionaires running the show still have a disproportionate amount of power.

“We have certainly seen governments try to stifle dissenters by using laws that are supposed to protect us against online radicalization, so the question then becomes, who is the arbiter of truth? Who are the people that get to decide what is and isn’t true?” Chowdhury said. “And right now, it’s people who run social media companies.”

Álvaro Bernis illustration for Foreign Policy

For those keen to be devil’s advocates to the AI doomsayers, there are three somewhat compelling arguments.

The first is AI’s role in the potential solution. Social media firms have been leaning more on automated detection tools as an early warning system for disinformation and hate speech, reducing the amount of content that human reviewers must look at. According to YouTube’s transparency report for April to June 2023, those tools detected 93 percent of the videos ultimately taken down for violating the platform’s policies. For TikTok, that number was around 62 percent. Meta has stepped up the use of AI tools for content moderation since 2020 and also says its technology detects more than 90 percent of content violating Meta’s terms before users report it. “AI is the sword as well as the shield,” Chris Cox, Meta’s chief product officer, said during the APEC panel.
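To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a triage pipeline can work in principle: a machine-learning classifier assigns each post a policy-violation score, clear-cut violations are removed automatically, and borderline cases are routed to human reviewers. The thresholds, data structures, and keyword-matching stand-in classifier below are illustrative assumptions, not a description of any platform’s actual system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class ModerationDecision:
    post_id: str
    action: str   # "remove", "human_review", or "allow"
    score: float  # estimated probability that the post violates policy

def triage(posts: List[Post],
           classify: Callable[[str], float],
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> List[ModerationDecision]:
    """Route each post based on a policy-violation score from a classifier."""
    decisions = []
    for post in posts:
        score = classify(post.text)
        if score >= remove_threshold:
            action = "remove"        # high-confidence violation: automated takedown
        elif score >= review_threshold:
            action = "human_review"  # uncertain case: escalate to trust and safety staff
        else:
            action = "allow"
        decisions.append(ModerationDecision(post.post_id, action, score))
    return decisions

if __name__ == "__main__":
    # Keyword-matching stand-in for a trained model, used only for illustration.
    fake_classifier = lambda text: 0.97 if "pig blood" in text.lower() else 0.10
    sample = [
        Post("1", "The election ink contains pig blood"),
        Post("2", "Polls open at 8 a.m. tomorrow"),
    ]
    for decision in triage(sample, fake_classifier):
        print(decision)
```

Where the thresholds sit is exactly the tradeoff the detection figures above gesture at: automating more takedowns lightens the load on human reviewers but raises the odds of the errors described next.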

But there’s also significant risk from leaning too much on AI, whether it’s accidentally taking down legitimate speech or missing key linguistic and local context. “I think it’s where a lot of the platforms think content moderation is going to go, or at least that’s their hope—that it’s going to be AI fighting AI,” said Harbath, the former Facebook elections lead. “Neither of those things on either side of the AI fight are to the level of sophistication yet where you’re just letting them fight it out.”

“We haven’t yet seen the sort of doomsday scenario that everybody imagines, which is: A video circulates, nobody can figure out if it’s true or false, then it swings an election.”

The second argument is that AI-generated misinformation may not land in the way that bad actors might intend it to. That’s in part because the repeated warnings about doctored images and deepfake videos have made social media users extra skeptical and vigilant (what Altman referred to as “societal antibodies”) and also because a lot of the content just isn’t that convincing yet. AI-generated images frequently show up with extra fingers or limbs, and deepfake videos still have some significant tells. “We haven’t yet seen the sort of doomsday scenario that everybody imagines, which is: A video circulates, nobody can figure out if it’s true or false, then it swings an election. People are able to respond to and debunk this type of content,” said Roth, the former Twitter trust and safety head. The troubling flip side, he added, is authentic videos being wrongly claimed as deepfakes and thus muddying the waters further, but platforms need to have the right systems to tackle misleading content no matter how it’s generated. “It doesn’t matter whether it’s AI or not—you have to do that work,” Roth said.

That leads to the third argument: that bad actors in many countries don’t need AI to be effective. Take India, for example, where the encrypted messaging platform WhatsApp is by far the most dominant, with more than half a billion users. The misinformation shared both privately and publicly—much of it by political parties and their supporters—still tends to be hurriedly edited images taken out of context, according to Indian researchers and fact-checkers. “You can produce a million tweets, but if only two people see it, who cares?” said Kiran Garimella, a professor at Rutgers University who researches online misinformation in the global south. “My belief is that the difference that artificial intelligence makes is not going to be significant because it is conditioned on the delivery mechanisms.” In other words, if your WhatsApp forwarding game isn’t strong enough, it won’t matter whether you used AI or Photoshop.

There’s every incentive for misinformation purveyors to step up their output. India has added some 250 million new internet users since its 2019 election, according to government figures. This surge was enabled by an explosion of cheap smartphones and cheaper mobile data but came without much digital literacy for first-time internet users. “One thing that we’ve been talking about is whether tech companies are going to be ready for the deluge of misinformation,” said Sumitra Badrinathan, a professor at American University who studies political misinformation in India. “Even if the format hasn’t changed, the style hasn’t changed. … Just the pure amount that’s out there, or the engagement it gets because there [are] so many more users online, is one thing to look out for.”

Álvaro Bernis illustration for Foreign Policy

The revelation that Russia interfered in the 2016 U.S. presidential election—particularly through Facebook—was a major wake-up call for social media platforms and the U.S. intelligence community alike, adding an alarming new dimension to the online misinformation debate and framing much of how platforms approach election integrity. Russia’s online disinformation efforts continued in the 2020 election, according to a declassified national intelligence report. Another key finding of that report? “We assess that China did not deploy interference efforts and considered but did not deploy influence efforts intended to change the outcome” of the election.

That is unlikely to hold true this time around—and not just for the U.S. election. Government officials, cybersecurity experts, and tech companies are warning that China’s willingness to conduct information warfare has shifted in a major way. China increasingly deploys “propaganda, disinformation, and censorship” around the world and “spends billions of dollars annually on foreign information manipulation efforts,” the State Department’s Global Engagement Center wrote in a report last September. Threat analysts at Microsoft reached a similar conclusion: As the Chinese Communist Party “has scaled its propaganda and disinformation capabilities, China has grown more provocative pursuing election influence,” they wrote in a November report.

And while most eyes will be on U.S. polls this November, one of the year’s earliest elections could be the most consequential. Taiwan is set to elect a new president on Jan. 13 in a decision that is bound to have massive geopolitical repercussions. Tensions around the island, which Beijing regards as a renegade province to be reunited with the mainland, have spiked in the past year amid China’s increasingly aggressive foreign policy. China has stepped up economic and military pressure on Taiwan, and that pressure is likely to manifest itself in efforts to sow division within the Taiwanese electorate, where major political ideologies are largely defined by their willingness to engage with China.

“Especially closer to election day, the volume of disinformation explodes and infiltrates every corner,” Taiwan’s Central Election Commission (CEC) said in a written response to Foreign Policy on the biggest challenges it faces. “Actively searching for and checking disinformation is a heavy burden for the commission, and the cleared facts and responses may be diluted within the large amount of disinformation.”

“Especially closer to election day, the volume of disinformation explodes and infiltrates every corner.”

The CEC approaches its role as very much a collaborative one, partnering with major social media companies and engaging with counterparts in other countries through its membership in the Association of World Election Bodies. It has also hosted U.S., U.K., and European delegations.

One saving grace is Taiwan’s incredibly tech-savvy population, which has been dealing with Chinese online pressure for years and is more attuned than most to disinformation campaigns. A flood of disinformation during the island’s last election in 2020 did not stop current President Tsai Ing-wen—whose Democratic Progressive Party is far less pro-China than the main opposition Kuomintang—from winning reelection by a landslide. “A lot of these disinformation campaigns don’t really work, or at least they don’t work in the way that [China] intends them to,” said Lev Nachman, a professor at National Chengchi University in Taipei. “There’s at the very least more self-awareness than there’s been in a very long time, especially when it’s on something related to China.”

Washington will be watching closely. “What I can tell you is that the Taiwanese themselves are some of the foremost experts on countering disinformation, have some of the foremost analytical capability, and actually we are in dialogue with them about their information space,” Elizabeth Allen, the U.S. undersecretary of state for public diplomacy and public affairs, said in an interview. “It is in our interest for there to be a free and fair election.”

Harbath also warns that China’s use of disinformation as a geopolitical tool has potential far-reaching implications beyond Taiwan or the United States. “They’ll do enough to stir people up in the U.S. and the EU, but their real efforts are going to be in South America and Africa,” she said.

In today’s fraught geopolitical climate, as the schism between democracy and autocracy deepens, the stakes for election misinformation have never been higher.

“Erosion of trust in election integrity erodes confidence in democracy itself,” Eileen Donahoe, the State Department’s special envoy and coordinator for digital freedom, told Foreign Policy. “We are in the midst of a growing global phenomenon where … trust and confidence in all three realms—information, elections, and democratic governance—are being undermined. This is an inherently trans-border, global challenge.”
