In a city notorious for its cynicism, few things are quite as unsettling as an interest group whose members truly believe they’re the good guys.

As Washington grapples with the rise of artificial intelligence, a small army of adherents to “effective altruism” has descended on the nation’s capital and is dominating how the White House, Congress and think tanks approach the technology.

The Silicon Valley-based movement is backed by tech billionaires and began as a rationalist approach to alleviating human suffering. But some observers say it has morphed into a cult obsessed with the coming AI doomsday.

The most ardent advocates of effective altruism, or EA, believe researchers are only months or years away from building an AI superintelligence able to outsmart the world’s collective efforts to control it. Acting of its own volition or at the direction of terrorists seeking to develop deadly bioweapons, such an AI could wipe out humanity, they say. And some, including noted EA thinker Eliezer Yudkowsky, believe even a nuclear holocaust would be preferable to an unchecked AI future.

If stopping malignant AI required war between nuclear-armed nations, Yudkowsky argues, that would be a price worth paying.

“It’s really kind of a ridiculous idea that, you know, risking starting nuclear war might be better because humans will probably survive that and rebuild — versus AI, which will destroy us to the last person,” said Zach Graves, executive director at the Foundation for American Innovation and a longtime observer of EA’s advance into the nation’s capital.

Most EAs aren’t quite so militant. But to varying degrees and on disparate timelines, nearly all of them believe AI poses an existential threat to the human race.



As scores of tech-funded EAs spread across key policy nodes in Washington, they’re triggering a culture clash — landing in the city’s incremental, detail-oriented culture with a fervor more akin to religious converts than policy professionals.

Regulators in Washington usually dwell in a world of practical disputes, like how AI could promote racial profiling, spread disinformation, undermine copyright or displace workers. But EAs, energized by a uniquely Northern Californian mix of awe and fear at the pace of technology, dwell in an existential realm.

“The EA people stand out as talking about a whole different topic, in a whole different style,” said Robin Hanson, an economist at George Mason University and former effective altruist. “They’re giving pretty abstract arguments about a pretty abstract concern, and they’re ratcheting up the stakes to the max.”

From their newfound perches on Capitol Hill, in federal agencies and at key think tanks, EAs are pressing lawmakers, agency officials and seasoned policy professionals to support sweeping laws that would “align” AI with human goals and values.

Virtually all the policies that EAs and their allies are pushing — new reporting rules for advanced AI models, licensing requirements for AI firms, restrictions on open-source models, crackdowns on the mixing of AI with biotechnology or even a complete “pause” on “giant” AI experiments — are in furtherance of that goal.

“This shouldn’t be grouped in the same sort of vein as saying, ‘Well, this is just another tech issue. We’ve dealt with tech issues for a really long time, we have time to deal with this.’ Because we really don’t,” said Emilia Javorsky, director of the futures program at the Future of Life Institute — an organization founded by EA luminaries and funded in part by a foundation financed by tech billionaire Elon Musk, who calls EA a “close match” to his philosophy.

“If we don’t start drawing the lines now, the genie’s out of the bottle — and it will be almost impossible to put it back in,” Javorsky warned.

The prophets of the AI apocalypse are boosted by an avalanche of tech dollars, much of it flowing through Open Philanthropy, a major funder of effective altruist causes founded and financed by billionaire Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna. The group has pumped hundreds of millions of dollars into influential think tanks and into programs that place staffers in key congressional offices and at federal agencies.



“It’s an epic infiltration,” said one biosecurity researcher in Washington, granted anonymity to avoid blowback from EA-linked funders.

EAs are particularly fixated on the possibility that future AI systems could combine with gene synthesis tools and other technologies to create bioweapons that kill billions of people — a phenomenon that’s given more traditional AI and biosecurity researchers a front-row seat as Silicon Valley’s hot new philosophy spreads across Washington.

Many of those researchers claim that EA’s billionaire backers — who often possess close personal and financial ties to companies like OpenAI and Anthropic — are trying to distract Washington from examining AI’s real-world impact, including its tendency to promote racial or gender bias, undermine privacy and weaken copyright protections.

They also worry that EA’s tech industry funders are acting in their self-interest, working to wall off leading AI firms from competition by promoting rules that, in the name of “AI safety,” lock down access to the technology.

“Many [EAs] do think that fewer players who are more carefully watched is safer, from their point of view,” said Hanson. “So they are not that eager to reduce concentration in this industry, or the centralization of power in this industry.”

The generally white and privileged backgrounds of EA adherents have also prompted suspicion in Washington, particularly among Black lawmakers concerned about how existing AI systems can harm marginalized communities.

“I don’t mean to create stereotypes of tech bros, but we know that this is not an area that often selects for diversity of America,” Sen. Cory Booker (D-N.J.) told POLITICO in September.

“This idea that we’re going to somehow get to a point where we’re going to be living in a Terminator nightmare — yeah, I’m concerned about those existential things,” Booker said. “But the immediacy of what we’ve already been using — most Americans don’t realize that AI is already out there, from résumé selection to what ads I’m seeing on my phone.”

Despite those concerns, the sheer amount of money being funneled into Washington by Open Philanthropy and other EA-linked groups has given the movement significant leverage over the capital’s AI and biosecurity debate.

“The money is overwhelmingly lopsided,” said Hanson, referring to support for AI-specific policy fellows and staff members.

AI and biosecurity staffers funded by Open Philanthropy are embedded in congressional offices at the forefront of potential AI rules, including all three of the Senate offices tapped by Majority Leader Chuck Schumer to investigate the technology. And the more than half-dozen skeptical AI and biosecurity researchers who spoke with POLITICO say the dense network of Capitol Hill and agency staffers — financed by hundreds of millions of EA dollars — is skewing how policymakers discuss AI safety, which otherwise remains a relatively niche field in Washington.

One AI and biosecurity researcher in Washington said lawmakers and other policy professionals are being pushed toward a focus on existential AI risks by sheer force of repetition.

“It’s more just the object permanence of having that messaging constantly in your face,” said the researcher, who was also granted anonymity to avoid losing funding.



The researcher warned that the sweeping EA influence campaign is causing much of Washington to take as a given that existential AI risks are likely or inevitable — often with little evidence.

“We skipped entirely over the body of risk research that asks, ‘Is there risk?’” the researcher said.

Effective altruism’s newfound pull at influential groups like the RAND Corp. — the venerable policy think tank that, after receiving more than $15 million in AI and biosecurity grants from Open Philanthropy this year, played a crucial role in drafting President Joe Biden’s October executive order on AI — shows how the movement is already notching significant wins.
