Governments appear keener to claim the role of alpha custodians of artificial intelligence (AI) governance than to strike a balance by putting guard-rails around AI without stifling innovation. Their upping of the ante to rein in tech companies that are building and releasing AI and generative AI (GenAI) models at a frenetic pace is creditable, since it pressures them to develop responsible AI models, but the use of terms like “world leader in AI safety” also smacks of one-upmanship in AI-related geopolitics. Consider these press announcements. On 30 October, the US government said President Joe Biden was issuing a “landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of AI.” On 1 November, the UK government followed by announcing at its just-concluded AI Summit at Bletchley Park that “leading AI nations” had reached a “world-first agreement” on “the opportunities and risks posed by frontier AI” (jargon for big foundational models like GPT-4). A day later, the UK announced an ‘AI Safety Institute’ to “cement the UK’s position as a world leader in AI safety.”

The fact is that about 60 countries, including India, which is already council chair of the Global Partnership on Artificial Intelligence (GPAI), have national AI strategies. And last week, besides the Bletchley Declaration and the US executive order on AI, the Group of Seven too introduced guiding principles and a code of conduct. Chest-thumping by any country on this topic is a needless distraction. Moreover, existing guidelines, reports, white papers and working groups on AI regulation abound. The US alone published a Blueprint for an AI Bill of Rights in October 2022 and issued an executive order directing agencies to combat algorithmic discrimination this February, before issuing last week’s order. Among other things, the new directive mandates that companies developing any foundation model must notify the US government when training it, and “must share the results of all red-team safety tests.” A red team would identify areas where a model could potentially pose a serious risk to national security, economic security or public health and safety. Such moves could add to the bureaucracy of decision-making.

Further, the US-centric order aims at protecting the privacy and security of the US government, its agencies and citizens, but it is not clear what it means for enterprises around the world, including in India, that have begun building solutions based on application programming interfaces (APIs) provided by foundation AI models and large language models (LLMs) built by US-based companies. Will APIs based on foundation models and LLMs that protect US interests be suitable for companies in other countries too? Meanwhile, while misinformation, AI’s impact on jobs, ‘AI weaponization’ and safety remain key concerns for policymakers, targeting just the big or so-called frontier AI models misses an important point: an AI model’s size no longer defines its utility, or even its capability for that matter. Rather, it is critical to see how apps are being integrated with these models and how LLMs are being compressed to run on mobile devices, increasing their efficacy.

That AI can’t go unregulated is a given, and governments must put guard-rails in place. But given the complexity of the issue, they would do well to avoid hubris in their declarations. It’ll make their intentions more credible.

Global declarations about AI regulation may need a hubris check

05.11.2023

© Livemint