What War by A.I. Actually Looks Like

By David Wallace-Wells

Opinion Writer

In November the left-wing Israeli outlets +972 magazine and Local Call published a disturbing investigation by the journalist Yuval Abraham into the Israel Defense Forces’ use of an artificial intelligence system for identifying targets in Gaza — which one former intelligence official described as a “mass assassination factory.”

Toward the end of a year clouded by visions of an A.I. apocalypse — visions that sometimes featured autonomous weapons systems going rogue — you might have expected an enormous and alarmed response. Instead, the report that a war was being conducted partly by A.I. made only a small ripple in debates over Israel’s conduct in Gaza.

Perhaps that was partly because — to an unnerving degree — experts accept that forms of A.I. are already in widespread use among the world’s leading militaries, including in the United States, where the Pentagon has been developing A.I. for military purposes at least since the Obama administration. According to Foreign Affairs, at least 30 countries now operate defense systems that have autonomous modes. Many of us still regard artificial intelligence wars as visions from a science-fiction future, but A.I. is already stitched into global military operations as surely as it’s stitched into the fabric of our everyday lives.

It’s not just out-of-control A.I. that poses a threat. Under-control systems can do harm, too. The Washington Post called the war in Ukraine a “super lab of invention,” marking a “revolution in drone warfare using A.I.” The Pentagon is developing a response to A.I.-powered drone swarms, a threat that has become less distant as it counters drone attacks by Yemeni Houthis in the Red Sea. According to The Associated Press, some analysts have suggested that it is only a matter of time before “drones will be used to identify, select and attack targets without help from humans.” Such swarms, directed by systems operating too fast for human oversight, “are about to change the balance of military power,” Elliot Ackerman and Admiral James Stavridis, a former allied commander of NATO, predicted in The Wall Street Journal last month. Others have suggested that future is here.

Like the invasion of Ukraine, the ferocious offensive in Gaza has looked at times like a throwback, in some ways more closely resembling a 20th-century total war than the counterinsurgencies and smart campaigns to which Americans have grown more accustomed. By December, nearly 70 percent of Gaza’s homes and more than half its buildings had been damaged or destroyed. Today fewer than one-third of its hospitals remain functioning, and 1.1 million Gazans are facing “catastrophic” food insecurity, according to the United Nations. It may look like an old-fashioned conflict, but the Israel Defense Forces’ offensive is also an ominous hint of the military future — both enacted and surveilled by technologies arising only since the war on terrorism began.

Last week +972 and Local Call published a follow-up investigation by Abraham, which is very much worth reading in full. (The Guardian also published a piece drawing from the same reporting, under the headline “The Machine Did It Coldly.” The reporting has been brought to the attention of John Kirby, the U.S. national security spokesman, and been discussed by Aida Touma-Sliman, an Israeli Arab member of the Knesset, and by the United Nations secretary general, António Guterres, who said he was “deeply troubled” by it.) The November report describes a system called Habsora (the Gospel), which, according to the current and former Israeli intelligence officers interviewed by Abraham, identifies “buildings and structures that the army claims militants operate from.” The new investigation, which has been contested by the Israel Defense Forces, documents another system, known as Lavender, used to compile a “kill list” of suspected combatants. The Lavender system, he writes, “has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war.”

Functionally, Abraham suggests, the destruction of Gaza — the killing of more than 30,000 Palestinians, a majority of them civilians, including more than 13,000 children — offers a vision of war waged by A.I. “According to the sources,” he writes, “its influence on the military’s operations was such that they essentially treated the outputs of the A.I. machine ‘as if it were a human decision,’” though the algorithm had an acknowledged 10 percent error rate. One source told Abraham that humans would normally review each recommendation for just 20 seconds — “just to make sure the Lavender-marked target is male” — before giving the recommendation a “rubber stamp.”

The more abstract questions raised by the prospect of A.I. warfare are unsettling on the matters of not just machine error but also ultimate responsibility: Who is accountable for an attack or a campaign conducted with little or no human input or oversight? But while one nightmare about military A.I. is that it is given control of decision making, another is that it helps armies become simply more efficient about the decisions being made already. And as Abraham describes it, Lavender is not wreaking havoc in Gaza on its own misfiring accord. Instead it is being used to weigh likely military value against collateral damage in very particular ways — less like a black box oracle of military judgment or a black hole of moral responsibility and more like the revealed design of the war aims of the Israel Defense Forces.

At one point in October, Abraham reports, the Israel Defense Forces targeted junior combatants identified by Lavender only if the likely collateral damage could be limited to 15 or 20 civilian deaths — a shockingly large number, given that no collateral damage had been considered acceptable for low-level combatants. More senior commanders, Abraham reports, would be targeted even if it meant killing more than 100 civilians. A second program, called Where’s Daddy?, was used to track the combatants to their homes before targeting them there, Abraham writes, because doing so at those locations, along with their families, was “easier” than tracking them to military outposts. And increasingly, to avoid wasting smart bombs to target the homes of suspected junior operatives, the Israel Defense Forces chose to use much less precise dumb bombs instead.

This is not exactly the dark A.I. magic of science fiction. It’s more like a Wizard of Oz phenomenon: What appears at first to be an otherworldly spectacle turns out to be a man behind a curtain, fiddling with the switches. In fact, in its response to the new report, the Israel Defense Forces said that it “does not use an artificial intelligence system that identifies terrorist operatives,” writing that “information systems are merely tools for analysts in the target identification process.” The Israel Defense Forces previously bragged about its use of A.I. in targeting Hamas and, according to Haaretz, established broad “kill zones” in Gaza, with anyone crossing into them assumed to be a terrorist and shot. (The Israel Defense Forces denied that it has defined kill zones.) On CNN, the analyst Barak Ravid told Anderson Cooper of a conversation he had with an Israeli reserve officer who told him that “the orders are basically — from the commanders on the ground — shoot every man in fighting age,” a description that matches comments last week by the former C.I.A. director and secretary of defense Leon Panetta, who said, “In my experience, the Israelis usually fire and then ask questions.”

Where do things go from here? The question applies not just to Israel’s conduct in Gaza or the drone escalation in Ukraine, where pilotless dogfights have already shaped the course of the war, where Russia has extensively deployed electronic warfare tools to jam Ukrainian drones and where, according to an analysis in War on the Rocks, Russia is “attempting to make strides to automate the entire kill chain.”

In a February essay, “The Perilous Coming Age of A.I. Warfare,” Paul Scharre of the Center for a New American Security sketches a number of possible near-term futures, from autonomous swarms going to battle with one another as independently as high-frequency trading bots to the possibility that A.I. may be given authority over existing nuclear arsenals.

He also floats a proactive and possibly optimistic five-point plan: that governments agree to human supervision of military A.I., that they ban autonomous weapons that target people, that they develop a best-practices protocol for preventing accidents, that nations restrict control over nuclear weapons to humans and that countries adopt a conventional guidebook for the conduct of drones. “Without limits, humanity risks barreling toward a future of dangerous, machine-driven warfare,” Scharre writes, and the window to take action is “closing fast.”

Not everyone agrees that we are approaching the equivalent of a military singularity, past which war will become unrecognizable, rather than a slower evolution, with more changes below the surface. “Military revolutions have often been less radical than initially presumed by their advocates,” the military scholar Anthony King writes for War on the Rocks. And while he believes that we aren’t all that close to the end of human oversight and calls it “very unlikely” that we find ourselves in a world of truly autonomous warfare anytime soon, he also believes that “data and A.I. are a — maybe even the — critical intelligence function for contemporary warfare.” In fact, “any military force that wants to prevail on the battlefields of the future will need to harness the potential of big data — it will have to master digitized information flooding through the battle space,” he writes. “Humans simply do not have the capacity to do this.” Presumably, A.I. will.


© The New York Times

