When it comes to artificial intelligence, one of the most commonly debated issues in the technology community is safety — so much so that it helped lead to the ouster of OpenAI's co-founder, Sam Altman.

Those concerns boil down to a single, truly unfathomable question: Will AI kill us all? Allow me to set your mind at ease: Artificial intelligence is no more dangerous than the many other existential risks facing humanity, from supervolcanoes to stray asteroids to nuclear war.

I am sorry if you don't find that reassuring. But it is far more optimistic than what someone like AI researcher Eliezer Yudkowsky believes, namely that humanity has entered its last hour. In his view, AI will be smarter than us and will not share our goals, and soon enough we humans will go the way of the Neanderthals. Others have called for a six-month pause in AI development, so we humans can get a better grasp of what is going on.

The AI safety debate after OpenAI CEO's ouster - Tyler Cowen

20.11.2023

© The Japan Times
