The Year Policymakers Woke Up to AI

By Rishi Iyengar, Dec. 25, 2023

The year's best stories

To get a sense of the capabilities, contradictions, and chaos that have defined artificial intelligence in the past year, one need only look at the technology’s most high-profile champion.

California-based company OpenAI set the tone for 2023 in November 2022 by rolling out ChatGPT, the versatile chatbot that thrust AI into the public conversation and spurred a global race to develop more powerful models. The company ended the year by abruptly firing its talismanic CEO, Sam Altman, only to bring him back five days later with a new board of directors and seemingly consolidated power at the helm.

ChatGPT and the feeding frenzy that followed perturbed policymakers around the world. Governments in the United States, Europe, and China all moved quickly to try to place regulatory guardrails around artificial intelligence, and multilateral efforts at the G-7 and G-20 meetings, the United Nations, and the United Kingdom’s AI Safety Summit sought to broaden them.

There is good reason to be concerned. AI has the potential to reshape economics, society, and democracy around the world, and it has significant military applications—some of which are already in use. The race to develop artificial intelligence is taking place not only between companies but also between countries.

The starkest warnings have come from those making the technology. Altman—like many in the AI industry—has oscillated between being an evangelist and a doomsayer, calling for more regulation in Washington and beyond. For much of 2023, he was also the industry’s most prominent diplomat, meeting leaders from India, Australia, and the Middle East. That, in turn, has prompted consternation about how much influence large companies wield over the technology’s development and oversight.

Meanwhile, China has outlined its goal of becoming a global leader in AI by the end of the decade and has made significant progress on several fronts. Many of Washington’s recent major foreign-policy moves—such as extensive curbs on semiconductors and outbound tech investment—are aimed at slowing Beijing’s progress. There are some signs of a willingness to engage on mitigating global harms, including discussions between U.S. President Joe Biden and Chinese President Xi Jinping in San Francisco in November.

The big questions about AI in 2024 are whether regulations can be effective and adaptable enough to keep up with rapidly evolving capabilities—and ultimately whether countries around the world can even agree on those guardrails. With national elections in many of the world’s biggest democracies next year, the stakes couldn’t be higher.

By Paul Scharre, June 19

Although the early focus on AI competition was on the industry itself, the conversation quickly expanded to how the transformative technology will impact geopolitics, which countries are best poised to take advantage of the moment, and what determines who comes out ahead.

Foreign Policy’s summer print issue was an early attempt to understand those debates, anchored by a lead essay by Paul Scharre, the executive vice president and director of studies at the Center for a New American Security (CNAS). Scharre argued that governments cannot afford to sit on the sidelines while big companies supercharge AI models to do bigger and better—or worse—things.

Many of Scharre’s predictions and prescriptions have come to pass, with several national and transnational regulatory efforts coming together in recent months. But 2024 will bring new challenges for regulators, including the possibility of AI making it easier to conduct malicious cyberattacks or helping to build bioweapons. Given how quickly such risks can evolve, it’s important to have a solid foundation.

By Matt Sheehan, Sept. 12

One of the biggest questions around AI is “Who’s winning?”—particularly between the United States and China. In another essay for FP’s summer print issue, Mariano-Florentino Cuéllar and Matt Sheehan of the Carnegie Endowment for International Peace highlighted why that may be the wrong question, and how both countries should focus on reining in the technology and avoiding “AI accidents” between them.

In this later piece from September, Sheehan argued that Beijing’s stringent AI regulations can provide some lessons for Washington. Chief among them is the willingness to iterate and adapt quickly to the technology rather than aiming for umbrella legislation. “China has picked out specific applications that it was concerned about and developed a series of regulations to tackle those concerns,” Sheehan wrote. “That has allowed it to steadily build up new policy tools and regulatory know-how with each new regulation.”

By Bill Drexel and Michael Depp, June 13

Given the paradigm-shifting nature of artificial intelligence and the technology’s potential for catastrophic mishaps, it isn’t surprising that much of the geopolitical discourse has compared it to nuclear weapons. U.N. Secretary-General António Guterres was among the voices calling for a global AI governance regime modeled after the International Atomic Energy Agency (IAEA), the body created in 1957 to ensure nuclear nonproliferation.

Although that model could provide some guidance, there are a few reasons it isn’t a perfect fit for AI regulation, as Bill Drexel and Michael Depp of CNAS argued in June. AI is more wide-ranging in its applications and moving too fast for a regime like the IAEA to be truly effective. “Treaties and multilateral agreements tend to move much more slowly than AI,” Drexel and Depp wrote. As 2023 comes to a close, a few multilateral efforts that decidedly don’t resemble the IAEA have begun to take shape, but there’s a long road ahead.

By Bhaskar Chakravorti, Aug. 4

As AI regulation gathers momentum around the world, it’s worth asking who the loudest voices at the table are and what impact that has on everyone else. In August, Bhaskar Chakravorti, the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy, highlighted how AI could exacerbate disparities in how big tech companies dedicate resources to content moderation outside the West.

The companies must tend to the “squeakiest of wheels” in the United States and Europe, where calls for AI regulation are loudest and where the bulk of their profits lie, Chakravorti argued. That could leave dangerous gaps across much of the global south going into a pivotal election year, when disinformation and hate speech could proliferate more than ever.

By Rishi Iyengar, Aug. 15

We’ve published a lot this year about the dangers of artificial intelligence and the fight to contain its harmful effects, but Foreign Policy also got a rare peek behind the curtain at how that fight is playing out in practice.

In mid-August, I traveled to Las Vegas for one of the world’s biggest hacker conferences, where eight major AI companies teamed up with the U.S. government to open up their models to a so-called red teaming exercise. The goal was to push those models to do harmful things, such as teaching a user how to stalk someone or generating misleading information.

The ease with which many of the 2,000-plus attendees succeeded—albeit in a controlled environment—highlighted the stakes of getting regulation right and why the White House has taken such a large role. Expand that scenario to the rest of the world, with hundreds of languages and cultural contexts, and it turns into a more daunting proposition.
