Did you hear? Google has been accused of having a secret vendetta against white people. Elon Musk posted about the conspiracy on X more than 150 times over the past week, all regarding portraits generated by Google’s new AI chatbot Gemini.

Ben Shapiro, The New York Post and Musk were driven apoplectic over how diverse the images were: Female popes! Black Nazis! Indigenous founding fathers! Google apologised, and has paused the feature.

Google boss Sundar Pichai is in the crosshairs over the search giant’s AI chatbot Gemini. Credit: Bloomberg

In reality, the issue is that the company did a shoddy job overcorrecting on tech that used to skew racist. No, its chief executive officer Sundar Pichai hasn’t been infected by the woke mind virus. Rather, he’s too obsessed with growth and is neglecting the proper checks on his products.

Three years ago, Google got in trouble when its photo-tagging tool started labelling some Black people as apes. It shut the feature down, and then made the problem worse by firing two of its leading AI ethics researchers. These were the people whose job was to make sure that Google’s technology was fair in how it depicted women and minorities. Not overly diverse like the new Gemini, but equitable and balanced.

When Gemini started producing images of German World War II soldiers who were Black and Asian this week, it was a sign that the ethics team hadn’t become more powerful, as Musk and others suggest, but that it was being ignored amid Google’s race against Microsoft and OpenAI to dominate generative web search. Proper investment would have led to a smarter approach to diversity in image generation, but Google was neglecting that work.

The signs have been there for the past year. People who test artificial intelligence systems for safety are outnumbered 30 to 1 by those whose job is to make those systems bigger and more capable, according to an estimate from the Centre for Humane Technology. Often they are shouting into a void and told not to get in the way.

Google’s earlier chatbot Bard was so faulty that it made factual errors in its marketing demo. Employees had sounded warnings about that, but managers wouldn’t listen. One posted on an internal message board that Bard was “worse than useless: please do not launch,” and many of the 7000 staffers who viewed the message agreed, according to a Bloomberg News investigation.

Not long after, engineers who’d carried out a risk assessment told their Google superiors that Bard could cause harm and wasn’t ready. You can probably guess what Google did next: It released Bard to the public.

Google’s AI isn’t too woke. It’s too rushed - Parmy Olson, 29.02.2024

© The Sydney Morning Herald