The generative AI boom has sent governments worldwide scrambling to regulate the emerging technology, but it also has raised the risk of upending a European Union push to approve the world’s first comprehensive artificial intelligence rules.

The 27-nation bloc’s Artificial Intelligence Act has been hailed as a pioneering rule book. But with time running out, it’s uncertain if the EU’s three branches of government can thrash out a deal Wednesday in what officials hope is a final round of closed-door talks.

Europe’s yearslong efforts to draw up AI guardrails have been bogged down by the recent emergence of generative AI systems like OpenAI’s ChatGPT, which have dazzled the world with their ability to produce humanlike work but raised fears about the risks they pose.

Those concerns have driven the U.S., U.K., China, and global coalitions like the Group of 7 major democracies into the race to regulate the rapidly developing technology, though they’re still catching up to Europe.

Besides regulating generative AI, EU negotiators need to resolve a long list of other thorny issues, such as a full ban on police use of facial recognition systems, which have stirred privacy concerns.

Chances of clinching a political agreement between EU lawmakers, representatives from member states, and executive commissioners “are pretty high partly because all the negotiators want a political win” on a flagship legislative effort, said Kris Shrishak, a senior fellow specializing in AI governance at the Irish Council for Civil Liberties.

“But the issues on the table are significant and critical, so we can’t rule out the possibility of not finding a deal,” he said.

Some 85% of the technical wording in the bill already has been agreed on, Carme Artigas, AI and digitalization minister for Spain, which holds the rotating EU presidency, said at a press briefing Tuesday in Brussels.

If a deal isn’t reached in the latest round of talks, starting Wednesday afternoon and expected to run late into the night, negotiators will be forced to pick it up next year. That raises the odds the legislation could get delayed until after EU-wide elections in June—or go in a different direction as new leaders take office.

One of the major sticking points is foundation models, the advanced systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot.

Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

The AI Act was intended as product safety legislation, like similar EU regulations for cosmetics, cars, and toys. It would grade AI uses according to four levels of risk—from minimal or no risk posed by video games and spam filters to unacceptable risk from social scoring systems that judge people based on their behavior.

The new wave of general purpose AI systems released since the legislation’s first draft in 2021 spurred European lawmakers to beef up the proposal to cover foundation models.

Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks, or creation of bioweapons. They act as basic structures for software developers building AI-powered services so that “if these models are rotten, whatever is built on top will also be rotten—and deployers won’t be able to fix it,” said Avaaz, a nonprofit advocacy group.

France, Germany, and Italy have resisted the update to the legislation and are calling instead for self-regulation—a change of heart seen as a bid to help homegrown generative AI players, such as French startup Mistral AI and Germany’s Aleph Alpha, compete with big U.S. tech companies like OpenAI.

Brando Benifei, an Italian member of the European Parliament who is co-leading the body’s negotiating efforts, was optimistic about resolving differences with member states.

There’s been “some movement” on foundation models, though there are “more issues on finding an agreement” on facial recognition systems, he said.

By Kelvin Chan, Associated Press

Can Europe actually lead the world on AI regulation?

06.12.2023

© Fast Company

