All of a sudden there is a flurry of activity around artificial intelligence policy. On Oct. 30, President Joe Biden issued an executive order on the topic. An AI safety summit is being held in the UK. And recently the U.S. Senate held a closed-door forum on research and development in AI.

I spoke at the Senate forum, convened by Majority Leader Chuck Schumer. Here's an outline of what I told the panel about how the U.S. can boost progress in AI and improve its national security.

The U.S. should allow in many more high-skilled foreign citizens, most of all those who work in AI and related fields. Many of the key contributors to AI progress such as Geoffrey Hinton (British-Canadian) and Mira Murati (Albanian) come from abroad. Perhaps the U.S. will never be able to compete with China when it comes to assembling raw computing power, but many of the world's best and brightest would prefer to live in America. The government should make their path as easy as possible.

Artificial intelligence also means that science probably is going to move faster in the future. That applies not only to AI but also to the sciences and practices that will benefit, such as computational biology and green energy. The U.S. cannot afford the luxury of its current slow procurement and funding cycles. Biomedical science funding should be more like the nimble National Science Foundation and less like the bureaucratic National Institutes of Health. Better yet, the DARPA (Defense Advanced Research Projects Agency) model could be applied more broadly to give program managers greater authority to take risks with their grants.

The U.S. should also speed up permitting reform. Construction of more and better semiconductor plants is a priority, both for national security and for AI progress more generally, as recognized by the CHIPS Act. Yet the need for multiple layers of permits and environmental review slows down this process and raises costs.

As the rate of scientific progress increases, regulation may need to adapt. Many critics have charged that FDA approval processes are too slow and conservative. That problem could become much worse if the number of new candidate drugs were to increase by two or three times.

In the short run, the U.S. can rely on what is sometimes called "modular regulation." If an AI were to issue health or diagnostic advice, for example, it would be covered by current regulatory bodies. At all levels, those institutions need to make significant changes. Now is the time to start those reappraisals.

What if an AI gives diagnostic advice that is better than that of human doctors, but still not perfect? Should the AI company be subject to medical malpractice law? I would prefer a "user beware" approach, as currently exists for googling medical advice. But obviously this issue requires deeper consideration. The same concern applies to AI legal advice: Plenty of current laws apply, but they need to be revised to match new technologies.

The U.S. should not regulate or license AI services as entities unto themselves. Obviously current AI services fall under extant laws, including laws against violence and fraud.

People will eventually figure out what exactly AIs, including large language models, are best used for. Industry structure may become relatively stable, and risks will be better known.

At that point, the U.S. might consider more general regulations for AI. Market experimentation has the highest return now, when we are debating the best and most appropriate use cases for AI. It is unrealistic to expect bureaucrats, few of whom have any AI expertise, to figure out answers to these questions.

In the meantime, it does not work to license AIs on the condition that they prove they will not cause any harm, or are very unlikely to. The technology is very general, its future uses are hard to predict, and some harms could be the fault of the users, not the company behind the service. It would not have been wise to make similar demands of the printing press or of automation in their early days. And licensing regimes have an unfortunate tendency to devolve into bureaucratic or political squabbling.

In any case: The time to act is now. The U.S. needs to get on with it.

It's not the time to regulate AI - Bloomberg
05.11.2023