By Josh Tyrangiel

Columnist

March 6, 2024 at 7:15 a.m. EST

(Joan Wong for The Washington Post; iStock)

My awakening began in the modern fashion — late at night, on YouTube. Months later the video still has just 3,900 views, so I’d better describe it.

A few dozen people have gathered to watch a presentation. It’s capably produced — like a midsize college professor’s audition for a TED Talk. The presenter, in a patterned blazer and blue oxford, is retired four-star general Gustave Perna. “I spent 40 years in the Army,” Perna begins, the hard edges of his New Jersey accent clanging a little in the room. “I was an average infantry officer. I was a great logistician.”

It’s a leisurely start. And yet the closest comparison I have for what comes next is Star Wars. Because once he gets through his slow-crawl prologue, Perna tells a story so tense and futuristic that, by the end, it’s possible to glimpse a completely different way in which we might live as citizens. Also, there’s warp speed.

Perhaps Perna’s name sounds familiar. It should. He oversaw the effort to produce and distribute the first coronavirus vaccines — a recent triumph of U.S. policy that’s been erased by the stupidity of U.S. politics. Perna was a month from retirement in May 2020 when he got a Saturday morning call from the chairman of the Joint Chiefs. Arriving in Washington two days later to begin Operation Warp Speed, his arsenal consisted of three colonels, no money and no plan.

The audience is focusing now. Perna tells them that what he needed more than anything was “to see myself.” On the battlefield this means knowing your troops, positions and supplies. It means roughly the same thing here, except the battlefield is boundaryless. Perna needed up-to-the-minute data from all the relevant state and federal agencies, drug companies, hospitals, pharmacies, manufacturers, truckers, dry ice makers, etc. Oh, and that data needed to be standardized and operationalized for swift decision-making.

It’s hard to comprehend, so let’s reduce the complexity to just a single physical material: plastic. Perna had to have eyes on the national capacity to produce and supply plastic — for syringes, needles, bags, vials. Otherwise, with thousands of people dying each day, he could find himself with hundreds of millions of vaccine doses and nothing to put them in.

To see himself, Perna needed a real-time digital dashboard of an entire civilization.

This being Washington, consultants lined up at his door. Perna gave each an hour, but none could define the problem let alone offer a credible solution. “Excruciating,” Perna tells the room, and here the Jersey accent helps drive home his disgust. Then he met Julie and Aaron. They told him, “Sir, we’re going to give you all the data you need so that you can assess, determine risk, and make decisions rapidly.” Perna shut down the process immediately. “I said great, you’re hired.”

Julie and Aaron work for Palantir, a company whose name curdles the blood of progressives and some of the military establishment. We’ll get to why. But Perna says Palantir did exactly what it promised. Using artificial intelligence, the company optimized thousands of data streams and piped them into an elegant interface. In a few short weeks, Perna had his God view of the problem. A few months after that, Operation Warp Speed delivered vaccines simultaneously to all 50 states. When governors called panicking that they’d somehow been shorted, Perna could share a screen with the precise number of vials in their possession. “‘Oh, no, general, that’s not true.’ Oh, yes. It is.”

The video cuts off with polite applause. The audience doesn’t seem to understand they’ve just been transported to a galaxy far, far away.

When Joe Biden delivers his State of the Union on March 7, he’ll likely become the first president to use the phrase “artificial intelligence” in the address. The president has been good on AI. His executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” threw a switch activating the federal bureaucracy’s engagement. He’s delegating to smart people and banging the drum about generative AI’s ability to create misinformation and harm national security. That’s plenty for a speech.

But the vision remains so small compared with the possibilities. This is technology that could transform almost everything about our society, yet neither the president nor his political rivals have imagined how it might do the same for the government itself. So allow me.

According to a year-end 2023 Gallup poll, Americans’ confidence in 15 institutions — covering things such as health care, education and regulation — is at historic lows. The poll’s conclusion is that government is suffering an acute crisis of legitimacy. We no longer trust it to fix important things in our lives. If confidence in the effectiveness of government keeps eroding at this pace, how much longer do you think we can remain united? How easy do we want to make our dismantling for the nihilists already cheering it on?

Properly deployed, AI can help blaze a new path to the shining city on a hill. In 2023, the National Taxpayer Advocate reported that the IRS answered only 29 percent of its phone calls during tax season. Human-based eligibility decisions for the Supplemental Nutrition Assistance Program have a 44 percent error rate. Large-language-model-powered chatbots could already be providing better service — at all hours, in all languages, at less cost — for people who rely on the federal government for veterans benefits, student loans, unemployment, Social Security and Medicare. That’s table stakes.

Now think about Warp Speeding entire agencies and functions: the IRS, which in 2024 still makes you guess how much you owe it; public health surveillance and response; traffic management; maintenance of interstates and bridges; disaster preparedness and relief. AI can revolutionize the relationship between citizens and the government. We have the technology. We’ve already used it.

Mention Operation Warp Speed to skeptics and they’ll wave you off. It doesn’t count. In a crisis the great sloth of government can sprint, but in regular times procurement rules, agency regulators and the endless nitpicking of politics make big things impossible. All true.

There’s another strain of skepticism that goes like this: Are you insane? AI might create all kinds of efficiency, but it’s also been known to have systemic biases that could get encoded into official government systems, lack transparency that could undermine public trust, make loads of federal jobs obsolete, and be vulnerable to data breaches that compromise privacy and sensitive information. If AI were a Big Pharma product the ads would be 10 minutes long.

We can put guardrails around how the government uses AI — anonymizing personal data as they do in the European Union, creating oversight bodies for continuous monitoring — but I’m not naive. Some things will still go wrong. Which leaves us to weigh the risks of the cure against the deadliness of the disease.

To check my premise, I set up a Zoom call with Perna. He was in sweats at his home in Alabama, and if he missed carrying the weight of the world he did a great job hiding it. He consults a little for Palantir now, but mostly he was excited to talk about grandkids, the Yankees and the best New York City slice joints. His mood shifted when I asked what government could improve if it embraced AI. “Everything,” he snapped, before the question was fully out. “I don’t understand how we’re not using it for organ donation right now. We should be ashamed. Why do we need 80,000 new people at the IRS? We could revolutionize the budget process. I tell Palantir, why are you playing around with the Department of Defense? Think bigger.”

What Palantir does has long been draped in mystery. It’s a software company that works with artificial intelligence and is named for the indestructible crystal balls in The Lord of the Rings, so they’re not exactly discouraging it. But the foundation of its products is almost comically dull.

Imagine all of an organization’s data sources as a series of garden hoses in your backyard. Let’s say the organization is a hospital. There are hoses for personnel, equipment, drugs, insurance companies, medical supplies, scheduling, bed availability and probably dozens of other things. Many of the hoses connect up to vendors and many connect to patients. No one can remember what some of them are supposed to connect to. All were bought at different times from different manufacturers and are different sizes and lengths. And it’s a hospital, so hose maintenance has never been anyone’s top priority. Now look out the window. There’s a pile of knotted rubber so dense you can’t see grass.

Palantir untangles hoses.

“We’ve always been the mole people of Silicon Valley,” says Akshay Krishnaswamy, Palantir’s chief architect. “It’s like we go into the plumbing of all this stuff and come out and say, ‘Let’s help you build a beautiful ontology.’”

In metaphysics, ontology is the study of being. In software and AI, it’s come to mean the untangling of messes and the creation of a functional information ecosystem. Once Palantir standardizes an organization’s data and defines the relationships between the streams, it can build an application or interface on top of it. This combination — integrated data and a useful app — is what allows everyone from middle managers to four-star generals to have an AI co-pilot, to see themselves with the God view. “It’s the Iron Man suit for the person who’s using it,” says Krishnaswamy. “It’s like, they’re still going to have to make decisions but they feel like they’re now flying around at Mach 5.”

The most dramatic expression of Palantir’s capabilities is in Ukraine, where the company merges real-time views from hundreds of commercial satellites with communications technology and weapons data. All of that information is then seamlessly displayed on laptops and handheld dashboards for commanders on the battlefield. A senior U.S. military official told me, “The Ukrainian force is incredibly tough, but it’s not much of a fight without iPads and Palantir.”

I mentioned that progressives and some of the military establishment dislike Palantir. Each has a reason. The company was co-founded in 2003 by Peter Thiel, which explains much of the hatred from the far left. Thiel spoke at the 2016 Republican convention and endorsed Donald Trump, dislikes multiculturalism, financed a lawsuit to kill Gawker and then tried to buy its corpse. The enmity here is mutual, but also kind of trivial.

Palantir has another co-founder. His name is Alex Karp, and many people in the Pentagon find him very annoying. The quick explanation is that Karp is loud and impatient, and he’s not one of them. But it’s more troubling than that.

Karp was born in New York City to a Black mother and a Jewish father. He’s severely dyslexic, a socialist, a 2016 Hillary Clinton supporter. When we spoke in Palantir’s New York offices, it was clear that he’s whip-smart and keeps a careful accounting of the slights he’s accumulated. “Quite frankly,” Karp told me, “just because of biographical issues, I assume I am going to be screwed, right?” It was like meeting the protagonist from a book co-authored by Ralph Ellison and Philip Roth.

Thiel and Karp were law school classmates at Stanford in the early ’90s. They argued plenty, but agreed about enough to create Palantir with partial funding (less than $2 million) from In-Q-Tel, an investment arm of the CIA, and a few core beliefs. The first is that the United States is exceptional, and working to strengthen its position in the world benefits all humanity. “I’ve lived abroad,” Karp says. “I know [America] is the only country that’s remotely as fair and meritocratic as America is. And I tend to be more focused on that than the obvious shortcomings.” In a speech last year, Karp, who is CEO, explained what this means for the company: “If you don’t think the U.S. government should have the best software in the world … We respectfully ask you not to join Palantir. Not in like you’re an idiot, just we have this belief structure.”

The company’s second core belief springs from the chip on Karp’s shoulder. Like generations of Black and Jewish entrepreneurs before him, Karp presumes his company isn’t going to win any deals on the golf course. So to get contracts from Fortune 500 companies and governments, Palantir must do things other software companies won’t, and do them so fast and cheap that the results are irrefutable.

This approach has worked exceedingly well in the corporate world. Palantir’s market capitalization is $52 billion and its stock has climbed more than 150 percent in the past year, largely because of demand for its AI products. But for much of its existence, an openly patriotic company with software better, faster and cheaper than its competitors was shut out of U.S. defense contracts. In the mid-2010s this put Palantir’s survival at risk and sharpened Karp’s indignation to a fine point. Either his biography had made him paranoid or something was amiss.

In 2016, Palantir took the unprecedented step of suing the Pentagon to find out. The case alleged the Defense Department was in violation of the Federal Acquisition Streamlining Act, a 1994 law that prohibits the government from starting new bloat-filled projects if an off-the-shelf solution is available. The House Committee on Government Operations made its intent unusually clear: “The Federal Government must stop ‘reinventing the wheel’ and learn to depend on the wide array of products and services sold to the general public.”

The record of Palantir v. United States is about as one-sided as these things can be. In the Court of Federal Claims, Palantir was able to document soldiers, officers and procurement people acknowledging the supremacy and lower cost of its in-market products — and show the Pentagon was still buying a more expensive proposal, years from effective deployment, offered by a consortium of Raytheon, Northrop Grumman and Lockheed Martin. The Army’s defense can be summarized as, “Yeah, well that’s kinda how we do stuff.” Palantir’s lawyers responded with insults about structural inertia, backed with receipts. Boies, Schiller & Flexner had themselves a time.

Palantir’s victory was resounding and opened the door to what is now a more functional relationship. On Wednesday, the Army announced that Palantir had won a $178 million contract to make 10 prototypes for the next phase of its Tactical Intelligence Targeting Access Node (Titan) program. Titan is a ground station that uses sensor data from space, sky and land to improve long-range weapons precision.

Still, Karp insists rivals regularly win contracts with video presentations of unbuilt solutions over existing software from Palantir. Several people I spoke with in the Defense Department volunteered that Palantir’s software is excellent — and a few said they’d be happy if the company would go away. It challenges too many things about the procurement culture and process. One noted that Palantir’s D.C. office is in Georgetown near (gasp) a Lululemon as opposed to in the traditional valley of contractors adjacent to the Pentagon.

When I shared this little Jane Austen comedy of manners with Karp he shook his head. “You talk to the right people. Like, maybe we’re not that likable and maybe some of it is our fault. But Jesus, our stuff works.”

Palantir’s saga doesn’t prove that government employees are bad, merely that humans can tolerate limitless amounts of dysfunction, especially when everyone around them is doing the same. They’re trapped in a system where all incentives point toward the status quo. Perna wants Palantir to think bigger, but remember: The Defense Department can embrace and expedite things in the name of national security that others cannot. It’s one of the most AI-friendly parts of the government.

The challenge then is fixing a massive system that has become constitutionally resistant to solutions, particularly ones fueled by technology such as artificial intelligence. It’s a Möbius strip that no one can seem to straighten out. But Karp sees a direct line between Palantir’s experience and the peril of the current moment. “Every time I see ordinary interactions between ordinary citizens and the government, it’s very high friction for no reason,” he says. “And then there’s almost no output. Forget the dollars spent. Whether it’s immigration, health records, taxation, getting your car to work, you’re going to have a bad experience, right? And that bad experience makes you think, ‘Hmm, nothing works here. And because nothing works here I’m going to tear down the whole system.’”

A few months before Palantir sued the United States in 2016, Eric Schmidt got a call from Defense Secretary Ashton B. Carter. Carter was launching something called the Defense Innovation Board to try to get more tech thinking into the Pentagon. He wanted Schmidt, then the executive chairman of Google’s parent company Alphabet, to join. “I declined,” says Schmidt. “And Carter said, ‘Well, you know, do it anyway.’”

I’ve spoken with Schmidt several times over the years and he’s been about as predictable as a Holiday Inn. But as he recalled his time on the Defense Innovation Board there was a different tone, like the guy in a horror movie who’s been chilled by his encounter with a vaguely threatening supernatural force. The quiet one who says, “You don’t know what’s out there, man.”

Carter let the Defense Innovation Board examine everything it needed to assess how the Pentagon develops, acquires and uses technology — the 99.9 percent of the iceberg that remained out of sight in the Palantir court case. Pretty quickly Schmidt concluded that the entire federal apparatus had accidentally mutated into software’s perfect enemy. “AI is fundamentally software,” says Schmidt. “You can’t have AI in the government or the military until you solve the problem of software in the government and military.”

Most government projects work backward from an outcome — a bridge will be built from point X to point Y and cost Z. Software is an abstraction moving toward a destination that’s always changing. Google didn’t create a search box and then close up shop; it kept spending and staffing because that’s how technology gets better and more usable. Unlike a bridge, software is never done. Try selling that to bureaucrats who are told they must pay for only what they can document.

Schmidt described for me the normal course of software development — prototyping with a small group of engineers, getting lots of user feedback, endless refinement and iteration. “Every single thing I just told you is illegal,” Schmidt says.

If only this were true. We could then just make things legal and move on. In fact, Congress — though hardly blameless — has given the Defense Department countless workarounds and special authorities over the years. Most have been forgotten or ignored by public servants who are too scared to embrace them. Take one of Schmidt’s examples: You really are allowed to conduct software user surveys, but most staffers at the Office of Information and Regulatory Affairs interpret the legal guidance to mean a six-month review process is required before granting permission. A six-month wait for a product that never stops moving. That means normal software practices are worse than illegal. They’re a form of bureaucratic torture.

The Defense Innovation Board channeled its bewilderment into a masterpiece: “Software Is Never Done: Refactoring the Acquisition Code for Competitive Advantage.” I’m not being ironic. It’s the most reasonable, stylish and solutions-based critique of modern government I’ve ever read. The authors did the unglamorous work of going through the infested garden of processes and rules and called out many of the nastiest weeds. Then they made common-sense recommendations — treat software as a living thing that crosses budget lines; do cost assessments that prioritize speed, security, functionality and code quality; collect data from the department’s weapons systems and create a secure repository to evaluate their effectiveness — and urged Congress to pass them.

They also referenced the dozen previous software reports commissioned by the military dating back to 1982, all of which came to similar conclusions. The problem isn’t a lack of solutions, it’s getting Congress to approve the politically risky ones and “the frozen middle” to implement them: “We question neither the integrity nor the patriotism of this group. They are simply not incentivized [to act] the way we believe modern software should be acquired and implemented, and the enormous inertia they represent is a profound barrier to change.”

When software becomes a crisis, politicians call Jennifer Pahlka. Pahlka was deputy chief technology officer in the Obama administration and was crucial to the rescue of healthcare.gov — the most flawed, fraught and ultimately successful software project in government history. In 2020, Gavin Newsom bat-signaled her to untangle California’s unemployment insurance program as it buckled under the weight of the covid-19 response. “I come to this work,” says Pahlka, “with the assumption that people are having a f------ nervous breakdown.”

Pahlka served with Schmidt on the Defense Innovation Board, which affirmed decades of her experience at the convergence of software and government. The dysfunction loop begins when absurd processes are given to public servants who will be judged on their compliance with absurdity. If they do their jobs right, the nation purchases obsolete overpriced software. If they make a mistake or take a risk that defies the absurdity, politicians hold hearings and jump all over them — which is far simpler than fixing the process. Each recrimination drives more good people out of public service. Rinse, repeat.

What Pahlka has noticed recently is that the wave is cresting. More things are breaking, and the remaining competent public servants who understand technology are just barely hanging on. “Most of what I do on a daily basis is like therapy,” Pahlka says. “I tell people, ‘Those feelings you’re having are normal. The only way to get through them is to share them.’” The dedication in her excellent book, “Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better,” said, “To public servants everywhere. Don’t give up.” Pahlka told me, “I’ve had people come up to me and ask me to sign and they just start crying.”

It’s not just the rank and file. Schmidt ended up serving four years on the Defense Innovation Board. When we were wrapping up our conversation, he took a breath and paused for a moment. “I’m not going to make a more emotional argument, I’m just going to tell you the following: Government will perform suboptimally until it adopts the software practices of the industry.” He sounded pretty emotional.

It did not take someone with John F. Kennedy’s charisma to inspire Americans to go to the moon. The moon is big and pretty. Humanity has been dreaming about it for eons. Calvin Coolidge levels of charm would have sufficed.

The challenge of using AI for better government is very different. The excitement about a new thing is tempered by fear and confusion. To get the maximum reward from AI, the country must first go through an unprecedented vegetable-eating exercise to clean up its bureaucracy. Turning that into poetry is hard. There’s no ideal messenger, but an octogenarian whose best speeches are about grief and a septuagenarian whose speeches are barely speeches is perhaps not the optimal set of choices.

Nevertheless, the moment will not wait. So what can the president say in an AI State of the Union?

The truth. The relationship between citizens and government is fractured. It’s crucial to the republic’s survival that we stop defending the status quo. New technology can help us repair the damage and open the door to a level of service and efficiency that will make Scandinavians seethe with envy. Almost all of this AI tech has been created by American ingenuity inside American companies, and the American people deserve its benefits.

Next, say the thing Democrats don’t want to say: Not every government job should be a job for life. LLMs can provide better service and responsiveness for many day-to-day interactions between citizens and various agencies. They’re not just cheaper, they’re also faster, and, when trained right, less prone to error or misinterpretation. That means it’s possible the federal government will soon have fewer employees. But AI will never replace human judgment — about benefits, penalties or anything in between. It’s a tool to be used by Americans to make better decisions for our national well-being.

That earns you the right to say the thing reasonable Republicans don’t want to hear: their bluff is going to be called. If they continue to indulge the party’s idiotic fantasies of burning the entire federal apparatus to the ground, they’ll be left holding the ashes. They need to admit that a properly run government has an important role in people’s lives, and they need to co-sign fixing it. Without crossing their fingers behind their backs.

All this is preamble to the work — methodical demolition and joyful construction. Pahlka says the policy guidelines that govern the Defense Department equal 100 stacked copies of “War and Peace.” There are more than 7,000 pages of unemployment regulations. Luckily, untangling the United States’ hairball of fine print is the perfect job for AI. Banks already use it to deduplicate obsolete compliance rules. Pahlka is working to demonstrate its feasibility inside agencies. The Pentagon is experimenting with an AI program called Gamechanger that helps bureaucrats navigate its own bureaucracy. It’s easy to mock, and we’ll still need countless human hours of oversight — many of them from Congress — to ensure the job’s done right. But it’s exactly the kind of humble first step that deserves praise. Turbocharge these efforts, then start building. But not everywhere, at least not at first.

One of the secrets of great software is that it’s not built all at once. Projects get broken down into manageable units called sprints; teams get feedback, make adjustments in real time, then use that knowledge to tackle the next sprint. It’s a form of common sense that the industry calls agile development.

The United States should do its first agile AI sprint in its most broken place, where the breach of trust and services is the most shameful. You likely know the statistics about Veterans Affairs, but there’s one worth repeating: 6,392 veterans died by suicide in 2021, the most recent year for which numbers are available. A ProPublica review of inspector general reports found VA employees regularly “botched screenings meant to assess veterans’ risk of suicide or violence; sometimes they didn’t perform the screenings at all.”

What if we treat VA like the crisis it is? It’s not as simple as untangling hoses between veterans and the department. A lot of care is managed manually. But once we create the digital infrastructure, appointment scheduling can run on AI. A cascade of benefits would follow, such as reduced wait times, analytics that predict demand for services, and automated reminders and follow-ups so VA staff can focus on patients over paperwork. Next, make a first-alert chatbot for veterans that, only with their consent, looks for signs of crisis or suicidal thoughts, offers coping mechanisms and resources, and escalates cases to mental health providers.

The big one is personalized care. Veterans deserve to be empowered with a God view of their own treatment, and that data can be anonymized and analyzed for insights into veteran-specific conditions such as post-traumatic stress disorder and traumatic brain injuries. Is there risk? There is. Is the risk worse than an average of 18 veterans killing themselves each day? I don’t think so.

Let’s give ourselves a countdown clock: One year to make it happen. It’s a problem similar in scale, complexity and importance to Operation Warp Speed. There’s a grandpa in Alabama who might be convinced to help.

There are more questions — part of getting AI into government is realizing there will be no getting it out. It turns out that good software and good government are more similar than we knew: Neither is ever done. Over the past few decades, the federal government stopped changing. One side tried to cripple it while the other responded with smothering levels of affection and excuses. These equal and irrational forces created stasis and decay, but American lives kept moving forward with new needs and expectations.

This new era of AI has presented a once-in-a-century chance to wipe away a lot of the damage and renew the mission. Not to the moon, but to a more perfect union.

Share

Comments

Popular opinions articles

HAND CURATED

View 3 more stories

My awakening began in the modern fashion — late at night, on YouTube. Months later the video still has just 3,900 views, so I’d better describe it.

A few dozen people have gathered to watch a presentation. It’s capably produced — like a midsize college professor’s audition for a TED Talk. The presenter, in a patterned blazer and blue oxford, is retired four-star general Gustave Perna. “I spent 40 years in the Army,” Perna begins, the hard edges of his New Jersey accent clanging a little in the room. “I was an average infantry officer. I was a great logistician.”

It’s a leisurely start. And yet the closest comparison I have for what comes next is Star Wars. Because once he gets through his slow-crawl prologue, Perna tells a story so tense and futuristic that, by the end, it’s possible to glimpse a completely different way in which we might live as citizens. Also, there’s warp speed.

Perhaps Perna’s name sounds familiar. It should. He oversaw the effort to produce and distribute the first coronavirus vaccines — a recent triumph of U.S. policy that’s been erased by the stupidity of U.S. politics. Perna was a month from retirement in May 2020 when he got a Saturday morning call from the chairman of the Joint Chiefs. Arriving in Washington two days later to begin Operation Warp Speed, his arsenal consisted of three colonels, no money and no plan.

The audience is focusing now. Perna tells them that what he needed more than anything was “to see myself.” On the battlefield this means knowing your troops, positions and supplies. It means roughly the same thing here, except the battlefield is boundaryless. Perna needed up-to-the-minute data from all the relevant state and federal agencies, drug companies, hospitals, pharmacies, manufacturers, truckers, dry ice makers, etc. Oh, and that data needed to be standardized and operationalized for swift decision-making.

It’s hard to comprehend, so let’s reduce the complexity to just a single physical material: plastic. Perna had to have eyes on the national capacity to produce and supply plastic — for syringes, needles, bags, vials. Otherwise, with thousands of people dying each day, he could find himself with hundreds of millions of vaccine doses and nothing to put them in.

To see himself, Perna needed a real-time digital dashboard of an entire civilization.

This being Washington, consultants lined up at his door. Perna gave each an hour, but none could define the problem let alone offer a credible solution. “Excruciating,” Perna tells the room, and here the Jersey accent helps drive home his disgust. Then he met Julie and Aaron. They told him, “Sir, we’re going to give you all the data you need so that you can assess, determine risk, and make decisions rapidly.” Perna shut down the process immediately. “I said great, you’re hired.”

Julie and Aaron work for Palantir, a company whose name curdles the blood of progressives and some of the military establishment. We’ll get to why. But Perna says Palantir did exactly what it promised. Using artificial intelligence, the company optimized thousands of data streams and piped them into an elegant interface. In a few short weeks, Perna had his God view of the problem. A few months after that, Operation Warp Speed delivered vaccines simultaneously to all 50 states. When governors called panicking that they’d somehow been shorted, Perna could share a screen with the precise number of vials in their possession. “‘Oh, no, general, that’s not true.’ Oh, yes. It is.”

The video cuts off with polite applause. The audience doesn’t seem to understand they’ve just been transported to a galaxy far, far away.

When Joe Biden delivers his State of the Union on March 7, he’ll likely become the first president to use the phrase “artificial intelligence” in the address. The president has been good on AI. His executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” threw a switch activating the federal bureaucracy’s engagement. He’s delegating to smart people and banging the drum about generative AI’s ability to create misinformation and harm national security. That’s plenty for a speech.

But the vision remains so small compared with the possibilities. This is technology that could transform almost everything about our society, yet neither the president nor his political rivals have imagined how it might do the same for the government itself. So allow me.

According to a year-end 2023 Gallup poll, Americans’ confidence in 15 institutions — covering things such as health care, education and regulation — is at historic lows. The poll’s conclusion is that government is suffering an acute crisis of legitimacy. We no longer trust it to fix important things in our lives. If confidence in the effectiveness of government keeps eroding at this pace, how much longer do you think we can remain united? How easy do we want to make our dismantling for the nihilists already cheering it on?

Properly deployed, AI can help blaze a new path to the shining city on a hill. In 2023, the national taxpayer advocate reported that the IRS answered only 29 percent of its phone calls during tax season. Human-based eligibility decisions for the Supplemental Nutrition Assistance Program have a 44 percent error rate. Large-language-model-powered chatbots could already be providing better service — at all hours, in all languages, at less cost — for people who rely on the federal government for veterans benefits, student loans, unemployment, Social Security and Medicare. That’s table stakes.

Now think about Warp Speeding entire agencies and functions: the IRS, which, in 2024, still makes you guess how much you owe it; public health surveillance and response; traffic management; maintenance of interstates and bridges; disaster preparedness and relief. AI can revolutionize the relationship between citizens and the government. We have the technology. We’ve already used it.

Mention Operation Warp Speed to skeptics and they’ll wave you off. It doesn’t count. In a crisis the great sloth of government can sprint, but in regular times procurement rules, agency regulators and the endless nitpicking of politics make big things impossible. All true.

There’s another strain of skepticism that goes like this: Are you insane? AI might create all kinds of efficiency, but it’s also been known to have systemic biases that could get encoded into official government systems, lack transparency that could undermine public trust, make loads of federal jobs obsolete, and be vulnerable to data breaches that compromise privacy and sensitive information. If AI were a Big Pharma product, the ads would be 10 minutes long.

We can put guardrails around how the government uses AI — anonymizing personal data as they do in the European Union, creating oversight bodies for continuous monitoring — but I’m not naive. Some things will still go wrong. Which leaves us to weigh the risks of the cure against the deadliness of the disease.

To check my premise, I set up a Zoom call with Perna. He was in sweats at his home in Alabama, and if he missed carrying the weight of the world he did a great job hiding it. He consults a little for Palantir now, but mostly he was excited to talk about grandkids, the Yankees and the best New York City slice joints. His mood shifted when I asked what government could improve if it embraced AI. “Everything,” he snapped, before the question was fully out. “I don’t understand how we’re not using it for organ donation right now. We should be ashamed. Why do we need 80,000 new people at the IRS? We could revolutionize the budget process. I tell Palantir, why are you playing around with the Department of Defense? Think bigger.”

What Palantir does has long been draped in mystery. It’s a software company that works with artificial intelligence and is named for the indestructible crystal balls in The Lord of the Rings, so they’re not exactly discouraging it. But the foundation of its products is almost comically dull.

Imagine all of an organization’s data sources as a series of garden hoses in your backyard. Let’s say the organization is a hospital. There are hoses for personnel, equipment, drugs, insurance companies, medical supplies, scheduling, bed availability and probably dozens of other things. Many of the hoses connect up to vendors and many connect to patients. No one can remember what some of them are supposed to connect to. All were bought at different times from different manufacturers and are different sizes and lengths. And it’s a hospital, so hose maintenance has never been anyone’s top priority. Now look out the window. There’s a pile of knotted rubber so dense you can’t see grass.

Palantir untangles hoses.

“We’ve always been the mole people of Silicon Valley,” says Akshay Krishnaswamy, Palantir’s chief architect. “It’s like we go into the plumbing of all this stuff and come out and say, ‘Let’s help you build a beautiful ontology.’”

In metaphysics, ontology is the study of being. In software and AI, it’s come to mean the untangling of messes and the creation of a functional information ecosystem. Once Palantir standardizes an organization’s data and defines the relationships between the streams, it can build an application or interface on top of it. This combination — integrated data and a useful app — is what allows everyone from middle managers to four-star generals to have an AI co-pilot, to see themselves with the God view. “It’s the Iron Man suit for the person who’s using it,” says Krishnaswamy. “It’s like, they’re still going to have to make decisions but they feel like they’re now flying around at Mach 5.”
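To make the ontology idea concrete, here is a toy sketch of my own (not Palantir’s actual software or API): each incoming “hose” gets normalized into one shared schema, so a dashboard can ask a single question across every source. The feeds, field names and numbers below are invented for illustration.

```python
# Toy sketch of "untangling hoses": normalize disparate data feeds
# into one schema, then query a unified view. Invented fields/data.
from dataclasses import dataclass

@dataclass
class Record:
    entity: str      # e.g. "bed", "nurse", "vial"
    location: str
    quantity: int

def normalize_vendor_feed(row: dict) -> Record:
    # Each source has its own field names; map them to one schema.
    return Record(entity=row["item_name"].lower(),
                  location=row["site"].upper(),
                  quantity=int(row["qty"]))

def normalize_legacy_csv(row: dict) -> Record:
    return Record(entity=row["ENTITY"].lower(),
                  location=row["LOC"].upper(),
                  quantity=int(row["COUNT"]))

# Integrated view: every stream lands in the same shape, so a
# dashboard can answer "how many vials are in each state?"
feeds = [
    normalize_vendor_feed({"item_name": "Vial", "site": "oh", "qty": "1200"}),
    normalize_legacy_csv({"ENTITY": "VIAL", "LOC": "oh", "COUNT": "300"}),
]

totals: dict[tuple[str, str], int] = {}
for r in feeds:
    key = (r.entity, r.location)
    totals[key] = totals.get(key, 0) + r.quantity

print(totals)  # {('vial', 'OH'): 1500}
```

The hard part in practice is not the summing; it is defining the shared schema and writing a normalizer for every one of the thousands of hoses.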

The most dramatic expression of Palantir’s capabilities is in Ukraine, where the company merges real-time views from hundreds of commercial satellites with communications technology and weapons data. All of that information is then seamlessly displayed on laptops and handheld dashboards for commanders on the battlefield. A senior U.S. military official told me, “The Ukrainian force is incredibly tough, but it’s not much of a fight without iPads and Palantir.”

I mentioned that progressives and some of the military establishment dislike Palantir. Each has a reason. The company was co-founded in 2003 by Peter Thiel, which explains much of the hatred from the far left. Thiel spoke at the 2016 Republican convention, endorsed Donald Trump, dislikes multiculturalism, financed a lawsuit to kill Gawker and then tried to buy its corpse. The enmity here is mutual, but also kind of trivial.

Palantir has another co-founder. His name is Alex Karp, and many people in the Pentagon find him very annoying. The quick explanation is that Karp is loud and impatient, and he’s not one of them. But it’s more troubling than that.

Karp was born in New York City to a Black mother and a Jewish father. He’s severely dyslexic, a socialist, a 2016 Hillary Clinton supporter. When we spoke in Palantir’s New York offices, it was clear that he’s both whip-smart and keeps a careful accounting of the slights he’s accumulated. “Quite frankly,” Karp told me, “just because of biographical issues, I assume I am going to be screwed, right?” It was like meeting the protagonist from a book co-authored by Ralph Ellison and Philip Roth.

Thiel and Karp were law school classmates at Stanford in the early ’90s. They argued plenty, but agreed about enough to create Palantir with partial funding (less than $2 million) from In-Q-Tel, an investment arm of the CIA, and a few core beliefs. The first is that the United States is exceptional, and working to strengthen its position in the world benefits all humanity. “I’ve lived abroad,” Karp says. “I know [America] is the only country that’s remotely as fair and meritocratic as America is. And I tend to be more focused on that than the obvious shortcomings.” In a speech last year, Karp, who is CEO, explained what this means for the company: “If you don’t think the U.S. government should have the best software in the world … We respectfully ask you not to join Palantir. Not in like you’re an idiot, just we have this belief structure.”

The company’s second core belief springs from the chip on Karp’s shoulder. Like generations of Black and Jewish entrepreneurs before him, Karp presumes his company isn’t going to win any deals on the golf course. So to get contracts from Fortune 500 companies and governments Palantir must do things other software companies won’t, and do them so fast and cheap that the results are irrefutable.

This approach has worked exceedingly well in the corporate world. Palantir’s market capitalization is $52 billion and its stock has climbed more than 150 percent in the past year, largely because of demand for its AI products. But for much of its existence, an openly patriotic company with software better, faster and cheaper than its competitors was shut out of U.S. defense contracts. In the mid-2010s this put Palantir’s survival at risk and sharpened Karp’s indignation to a fine point. Either his biography had made him paranoid or something was amiss.

In 2016, Palantir took the unprecedented step of suing the Pentagon to find out. The case alleged the Defense Department was in violation of the Federal Acquisition Streamlining Act, a 1994 law that prohibits the government from starting new bloat-filled projects if an off-the-shelf solution is available. The House Committee on Government Operations made its intent unusually clear: “The Federal Government must stop ‘reinventing the wheel’ and learn to depend on the wide array of products and services sold to the general public.”

The record of Palantir v. United States is about as one-sided as these things can be. In the Court of Federal Claims, Palantir was able to document soldiers, officers and procurement people acknowledging the supremacy and lower cost of its in-market products — and show the Pentagon was still buying a more expensive proposal, years from effective deployment, offered by a consortium of Raytheon, Northrop Grumman and Lockheed Martin. The Army’s defense can be summarized as, “Yeah, well that’s kinda how we do stuff.” Palantir’s lawyers responded with insults about structural inertia, backed with receipts. Boies, Schiller & Flexner had themselves a time.

Palantir’s victory was resounding, and opened the door to what is now a more functional relationship. On Wednesday, the Army announced that Palantir won a $178 million contract to make 10 prototypes for the next phase of its Tactical Intelligence Targeting Access Node (Titan) program. Titan is a ground station that uses sensor data from space, sky and land to improve long-range weapons precision.

Still, Karp insists rivals regularly win contracts with video presentations of unbuilt solutions over existing software from Palantir. Several people I spoke with in the Defense Department volunteered that Palantir’s software is excellent — and a few said they’d be happy if the company would go away. It challenges too many things about the procurement culture and process. One noted that Palantir’s D.C. office is in Georgetown near (gasp) a Lululemon as opposed to in the traditional valley of contractors adjacent to the Pentagon.

When I shared this little Jane Austen comedy of manners with Karp he shook his head. “You talk to the right people. Like, maybe we’re not that likable and maybe some of it is our fault. But Jesus, our stuff works.”

Palantir’s saga doesn’t prove that government employees are bad, merely that humans can tolerate limitless amounts of dysfunction, especially when everyone around them is doing the same. They’re trapped in a system where all incentives point toward the status quo. Perna wants Palantir to think bigger, but remember: The Defense Department can embrace and expedite things in the name of national security that others cannot. It’s one of the most AI-friendly parts of the government.

The challenge then is fixing a massive system that has become constitutionally resistant to solutions, particularly ones fueled by technology such as artificial intelligence. It’s a Möbius strip that no one can seem to straighten out. But Karp sees a direct line between Palantir’s experience and the peril of the current moment. “Every time I see ordinary interactions between ordinary citizens and the government, it’s very high friction for no reason,” he says. “And then there’s almost no output. Forget the dollars spent. Whether it’s immigration, health records, taxation, getting your car to work, you’re going to have a bad experience, right? And that bad experience makes you think, ‘Hmm, nothing works here. And because nothing works here I’m going to tear down the whole system.’”

A few months before Palantir sued the United States in 2016, Eric Schmidt got a call from Defense Secretary Ashton B. Carter. Carter was launching something called the Defense Innovation Board to try to get more tech thinking into the Pentagon. He wanted Schmidt, then the executive chairman of Google’s parent company Alphabet, to join. “I declined,” says Schmidt. “And Carter said, ‘Well, you know, do it anyway.’”

I’ve spoken with Schmidt several times over the years and he’s been about as predictable as a Holiday Inn. But as he recalled his time on the Defense Innovation Board there was a different tone, like the guy in a horror movie who’s been chilled by his encounter with a vaguely threatening supernatural force. The quiet one who says, “You don’t know what’s out there, man.”

Carter let the Defense Innovation Board examine everything it needed to assess how the Pentagon develops, acquires and uses technology — the 99.9 percent of the iceberg that remained out of sight in the Palantir court case. Pretty quickly Schmidt concluded the entire federal apparatus has accidentally mutated into software’s perfect enemy. “AI is fundamentally software,” says Schmidt. “You can’t have AI in the government or the military until you solve the problem of software in the government and military.”

Most government projects work backward from an outcome — a bridge will be built from point X to point Y and cost Z. Software is an abstraction moving toward a destination that’s always changing. Google didn’t create a search box and then close up shop; it kept spending and staffing because that’s how technology gets better and more usable. Unlike a bridge, software is never done. Try selling that to bureaucrats who are told they must pay for only what they can document.

Schmidt described for me the normal course of software development — prototyping with a small group of engineers, getting lots of user feedback, endless refinement and iteration. “Every single thing I just told you is illegal,” Schmidt says.

If only this were true. We could then just make things legal and move on. In fact, Congress — though hardly blameless — has given the Defense Department countless workarounds and special authorities over the years. Most have been forgotten or ignored by public servants who are too scared to embrace them. Take one of Schmidt’s examples: you really are allowed to conduct software user surveys, but most staffers at the Office of Information and Regulatory Affairs interpret the legal guidance to mean a six-month review process is required before granting permission. A six-month wait for a product that never stops moving. That means normal software practices are worse than illegal. They’re a form of bureaucratic torture.

The Defense Innovation Board channeled its bewilderment into a masterpiece: “Software Is Never Done: Refactoring the Acquisition Code for Competitive Advantage.” I’m not being ironic. It’s the most reasonable, stylish and solutions-based critique of modern government I’ve ever read. The authors did the unglamorous work of going through the infested garden of processes and rules and called out many of the nastiest weeds. Then they made common-sense recommendations — treat software as a living thing that crosses budget lines; do cost assessments that prioritize speed, security, functionality and code quality; collect data from the department’s weapons systems and create a secure repository to evaluate their effectiveness — and urged Congress to pass them.

They also referenced the dozen previous software reports commissioned by the military dating back to 1982, all of which came to similar conclusions. The problem isn’t a lack of solutions, it’s getting Congress to approve the politically risky ones and “the frozen middle” to implement them: “We question neither the integrity nor the patriotism of this group. They are simply not incentivized to the way we believe modern software should be acquired and implemented, and the enormous inertia they represent is a profound barrier to change.”

When software becomes a crisis, politicians call Jennifer Pahlka. Pahlka was deputy chief technology officer in the Obama administration and was crucial to the rescue of healthcare.gov — the most flawed, fraught and ultimately successful software project in government history. In 2020, Gavin Newsom bat-signaled her to untangle California’s unemployment insurance program as it buckled under the weight of the covid-19 response. “I come to this work,” says Pahlka, “with the assumption that people are having a f------ nervous breakdown.”

Pahlka served with Schmidt on the Defense Innovation Board, which affirmed decades of her experience at the convergence of software and government. The dysfunction loop begins when absurd processes are given to public servants who will be judged on their compliance with absurdity. If they do their jobs right, the nation purchases obsolete overpriced software. If they make a mistake or take a risk that defies the absurdity, politicians hold hearings and jump all over them — which is far simpler than fixing the process. Each recrimination drives more good people out of public service. Rinse, repeat.

What Pahlka has noticed recently is that the wave is cresting. More things are breaking, and the remaining competent public servants who understand technology are just barely hanging on. “Most of what I do on a daily basis is like therapy,” Pahlka says. “I tell people, ‘Those feelings you’re having are normal. The only way to get through them is to share them.’” The dedication in her excellent book, “Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better,” said, “To public servants everywhere. Don’t give up.” Pahlka told me, “I’ve had people come up to me and ask me to sign and they just start crying.”

It’s not just the rank and file. Schmidt ended up serving four years on the Defense Innovation Board. When we were wrapping up our conversation, he took a breath and paused for a moment. “I’m not going to make a more emotional argument, I’m just going to tell you the following: Government will perform suboptimally until it adopts the software practices of the industry.” He sounded pretty emotional.

It did not take someone with John F. Kennedy’s charisma to inspire Americans to go to the moon. The moon is big and pretty. Humanity has been dreaming about it for eons. Calvin Coolidge levels of charm would have sufficed.

The challenge of using AI for better government is very different. The excitement about a new thing is tempered by fear and confusion. To get the maximum reward from AI, the country must first go through an unprecedented vegetable-eating exercise to clean up its bureaucracy. Turning that into poetry is hard. There’s no ideal messenger, but an octogenarian whose best speeches are about grief and a septuagenarian whose speeches are barely speeches is perhaps not the optimal set of choices.

Nevertheless, the moment will not wait. So what can the president say in an AI State of the Union?

The truth. The relationship between citizens and government is fractured. It’s crucial to the republic’s survival that we stop defending the status quo. New technology can help us repair the damage and open the door to a level of service and efficiency that will make Scandinavians seethe with envy. Almost all of this AI tech has been created by American ingenuity inside American companies, and the American people deserve its benefits.

Next, say the thing Democrats don’t want to say: Not every government job should be a job for life. LLMs can provide better service and responsiveness for many day-to-day interactions between citizens and various agencies. They’re not just cheaper, they’re also faster, and, when trained right, less prone to error or misinterpretation. That means it’s possible the federal government will soon have fewer employees. But AI will never replace human judgment — about benefits, penalties or anything in between. It’s a tool to be used by Americans to make better decisions for our national well-being.

That earns you the right to say the thing reasonable Republicans don’t want to hear: their bluff is going to be called. If they continue to indulge the party’s idiotic fantasies of burning the entire federal apparatus to the ground, they’ll be left holding the ashes. They need to admit that a properly run government has an important role in people’s lives, and they need to co-sign fixing it. Without crossing their fingers behind their backs.

All this is preamble to the work — methodical demolition and joyful construction. Pahlka says the policy guidelines that govern the Defense Department equal 100 stacked copies of “War and Peace.” There are more than 7,000 pages of unemployment regulations. Luckily, untangling the United States’ hairball of fine print is the perfect job for AI. Banks already use it to deduplicate obsolete compliance rules. Pahlka is working to demonstrate its feasibility inside agencies. The Pentagon is experimenting with an AI program called Gamechanger that helps bureaucrats navigate its own bureaucracy. It’s easy to mock, and we’ll still need countless human hours of oversight — many of them from Congress — to ensure the job’s done right. But it’s exactly the kind of humble first step that deserves praise. Turbocharge these efforts, then start building. But not everywhere, at least not at first.
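To show how mechanical the first pass can be, here is a minimal sketch of the deduplication idea, with invented rules and simple word-overlap similarity standing in for whatever banks’ actual compliance tools use.

```python
# Minimal illustration (not any agency's real pipeline): flag
# near-duplicate rules by comparing their normalized word sets.
import re

def tokens(rule: str) -> frozenset:
    # Lowercase and keep only alphabetic words.
    return frozenset(re.findall(r"[a-z]+", rule.lower()))

def similarity(a: str, b: str) -> float:
    # Jaccard similarity of word sets: 1.0 means identical vocabulary.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

rules = [
    "Claimants must file Form 12 within 30 days of separation.",
    "Within 30 days of separation, claimants must file Form 12.",
    "Benefits are recalculated quarterly.",
]

# Pair up rules whose wording overlaps almost entirely; a human
# (or Congress) still decides which copy survives.
duplicates = [(i, j) for i in range(len(rules))
              for j in range(i + 1, len(rules))
              if similarity(rules[i], rules[j]) > 0.9]
print(duplicates)  # [(0, 1)]
```

Real regulation text would need far better language models than word overlap, but the shape of the job — find the redundancies, surface them for human review — is exactly this.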

One of the secrets of great software is that it’s not built all at once. Projects get broken down into manageable units called sprints; teams get feedback, make adjustments in real-time, then use that knowledge to tackle the next sprint. It’s a form of common sense that the industry calls agile development.

The United States should do its first agile AI sprint in its most broken place, where the breach of trust and services is the most shameful. You likely know the statistics about Veterans Affairs but there’s one worth repeating: 6,392 veterans died by suicide in 2021, the most recent year numbers are available. A ProPublica review of inspector general reports found VA employees regularly “botched screenings meant to assess veterans’ risk of suicide or violence; sometimes they didn’t perform the screenings at all.”

What if we treat VA like the crisis it is? It’s not as simple as untangling hoses between veterans and the department. A lot of care is managed manually. But once we create digital infrastructure, appointment scheduling can run on AI. A cascade of benefits would follow, such as reduced wait times, analytics that predict demand for services, and automated reminders and follow-ups so VA staff can focus on patients over paperwork. Next, make a first-alert chatbot for veterans that, only with their consent, looks for signs of crisis or suicidal thoughts, offers coping mechanisms and resources, and escalates cases to mental health providers.
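To be clear about how modest the logic can start, here is a deliberately naive sketch of that consent-gated routing. The keyword list is a placeholder, not a clinical model, and a real system would need clinicians, validation and continuous auditing.

```python
# Deliberately simplified sketch of consent-gated triage routing.
# Keyword matching stands in for a real clinical risk model; an
# actual deployment would require clinical validation and oversight.
CRISIS_TERMS = {"hopeless", "suicide", "hurt myself"}

def route_message(message: str, consented: bool) -> str:
    if not consented:
        return "standard-service"       # no screening without consent
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return "escalate-to-provider"   # a human takes over immediately
    return "offer-resources"

print(route_message("I feel hopeless", consented=True))    # escalate-to-provider
print(route_message("I feel hopeless", consented=False))   # standard-service
print(route_message("How do I reschedule?", consented=True))  # offer-resources
```

The point of the sketch is the ordering: consent is checked before anything else, and escalation always routes to a person, never to more software.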

The big one is personalized care. Veterans deserve to be empowered with a God view of their own treatment, and that data can be anonymized and analyzed for insights into veteran-specific conditions such as post-traumatic stress disorder and traumatic brain injuries. Is there risk? There is. Is the risk worse than an average of 18 veterans killing themselves each day? I don’t think so.

Let’s give ourselves a countdown clock: One year to make it happen. It’s a problem similar in scale, complexity and importance to Operation Warp Speed. There’s a grandpa in Alabama who might be convinced to help.

There are more questions — part of getting AI into government is realizing there will be no getting it out. It turns out that good software and good government are more similar than we knew: Neither is ever done. Over the past few decades, the federal government stopped changing. One side tried to cripple it while the other responded with smothering levels of affection and excuses. These equal and irrational forces created stasis and decay, but American lives kept moving forward with new needs and expectations.

This new era of AI has presented a once-in-a-century chance to wipe away a lot of the damage and renew the mission. Not to the moon, but to a more perfect union.

QOSHE - Let AI remake the whole U.S. government (oh, and save the country) - Josh Tyrangiel
menu_open
Columnists Actual . Favourites . Archive
We use cookies to provide some features and experiences in QOSHE

More information  .  Close
Aa Aa Aa
- A +

Let AI remake the whole U.S. government (oh, and save the country)

13 36
06.03.2024

By Josh Tyrangiel

Columnist|Follow author

Follow

March 6, 2024 at 7:15 a.m. EST

(Joan Wong for The Washington Post; iStock)

Listen26 min

Share

Comment on this storyComment

Add to your saved stories

Save

My awakening began in the modern fashion — late at night, on YouTube. Months later the video still has just 3,900 views, so I’d better describe it.

A few dozen people have gathered to watch a presentation. It’s capably produced — like a midsize college professor’s audition for a TED Talk. The presenter, in a patterned blazer and blue oxford, is retired four-star general Gustave Perna. “I spent 40 years in the Army,” Perna begins, the hard edges of his New Jersey accent clanging a little in the room. “I was an average infantry officer. I was a great logistician.”

It’s a leisurely start. And yet the closest comparison I have for what comes next is Star Wars. Because once he gets through his slow-crawl prologue, Perna tells a story so tense and futuristic that, by the end, it’s possible to glimpse a completely different way in which we might live as citizens. Also, there’s warp speed.

Advertisement

Perhaps Perna’s name sounds familiar. It should. He oversaw the effort to produce and distribute the first coronavirus vaccines — a recent triumph of U.S. policy that’s been erased by the stupidity of U.S. politics. Perna was a month from retirement in May 2020 when he got a Saturday morning call from the chairman of the Joint Chiefs. Arriving in Washington two days later to begin Operation Warp Speed, his arsenal consisted of three colonels, no money and no plan.

The audience is focusing now. Perna tells them that what he needed more than anything was “to see myself.” On the battlefield this means knowing your troops, positions and supplies. It means roughly the same thing here, except the battlefield is boundaryless. Perna needed up-to-the-minute data from all the relevant state and federal agencies, drug companies, hospitals, pharmacies, manufacturers, truckers, dry ice makers, etc. Oh, and that data needed to be standardized and operationalized for swift decision-making.

It’s hard to comprehend, so let’s reduce the complexity to just a single physical material: plastic. Perna had to have eyes on the national capacity to produce and supply plastic — for syringes, needles, bags, vials. Otherwise, with thousands of people dying each day, he could find himself with hundreds of millions of vaccine doses and nothing to put them in.

To see himself, Perna needed a real-time digital dashboard of an entire civilization.

Follow this authorJosh Tyrangiel's opinions

Follow

This being Washington, consultants lined up at his door. Perna gave each an hour, but none could define the problem let alone offer a credible solution. “Excruciating,” Perna tells the room, and here the Jersey accent helps drive home his disgust. Then he met Julie and Aaron. They told him, “Sir, we’re going to give you all the data you need so that you can assess, determine risk, and make decisions rapidly.” Perna shut down the process immediately. “I said great, you’re hired.”

Advertisement

Julie and Aaron work for Palantir, a company whose name curdles the blood of progressives and some of the military establishment. We’ll get to why. But Perna says Palantir did exactly what it promised. Using artificial intelligence, the company optimized thousands of data streams and piped them into an elegant interface. In a few short weeks, Perna had his God view of the problem. A few months after that, Operation Warp Speed delivered vaccines simultaneously to all 50 states. When governors called panicking that they’d somehow been shorted, Perna could share a screen with the precise number of vials in their possession. “‘Oh, no, general, that’s not true.’ Oh, yes. It is.”

The video cuts off with polite applause. The audience doesn’t seem to understand they’ve just been transported to a galaxy far, far away.

When Joe Biden delivers his State of the Union on March 7, he’ll likely become the first president to use the phrase artificial intelligence in the address. The president has been good on AI. His executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” threw a switch activating the federal bureaucracy’s engagement. He’s delegating to smart people and banging the drum about generative AI’s ability to create misinformation and harm national security. That’s plenty for a speech.

But the vision remains so small compared with the possibilities. This is technology that could transform almost everything about our society, yet neither the president nor his political rivals have imagined how it might do the same for the government itself. So allow me.

According to a 2023 year end Gallup poll, Americans’ confidence in 15 institutions — covering things such as health care, education and regulation — is at historic lows. The poll’s conclusion is that government is suffering an acute crisis of legitimacy. We no longer trust it to fix important things in our lives. If confidence in the effectiveness of government keeps eroding at this pace, how much longer do you think we can remain united? How easy do we want to make our dismantling for the nihilists already cheering it on?

Properly deployed, AI can help blaze a new path to the shining city on a hill. In 2023, the national taxpayer advocate reported that the IRS answered only 29 percent of its phone calls during tax season. Human-based eligibility decisions for the Supplemental Nutrition Assistance Program, have a 44 percent error rate. Large-language-model-powered chatbots could already be providing better service — at all hours, in all languages, at less cost — for people who rely on the federal government for veterans benefits, student loans, unemployment, social security and Medicare. That’s table stakes.


Now think about Warp Speeding entire agencies and functions: the IRS, which, in 2024, still makes you guess how much you owe it; public health surveillance and response; traffic management; maintenance of interstates and bridges; disaster preparedness and relief. AI can revolutionize the relationship between citizens and the government. We have the technology. We’ve already used it.

Mention Operation Warp Speed to skeptics and they’ll wave you off. It doesn’t count. In a crisis the great sloth of government can sprint, but in regular times procurement rules, agency regulators and the endless nitpicking of politics make big things impossible. All true.

There’s another strain of skepticism that goes like this: Are you insane? AI might create all kinds of efficiency, but it’s also been known to have systemic biases that could get encoded into official government systems, lack transparency that could undermine public trust, make loads of federal jobs obsolete, and be vulnerable to data breaches that compromise privacy and sensitive information. If AI were a Big Pharma product the ads would be 10 minutes long.

We can put guardrails around how the government uses AI — anonymizing personal data as the European Union does, creating oversight bodies for continuous monitoring — but I’m not naive. Some things will still go wrong. Which leaves us to weigh the risks of the cure against the deadliness of the disease.

To check my premise, I set up a Zoom call with Perna. He was in sweats at his home in Alabama, and if he missed carrying the weight of the world he did a great job hiding it. He consults a little for Palantir now, but mostly he was excited to talk about grandkids, the Yankees and the best New York City slice joints. His mood shifted when I asked what government could improve if it embraced AI. “Everything,” he snapped, before the question was fully out. “I don’t understand how we’re not using it for organ donation right now. We should be ashamed. Why do we need 80,000 new people at the IRS? We could revolutionize the budget process. I tell Palantir, why are you playing around with the Department of Defense? Think bigger.”

What Palantir does has long been draped in mystery. It’s a software company that works with artificial intelligence and is named for the indestructible crystal balls in The Lord of the Rings, so they’re not exactly discouraging it. But the foundation of its products is almost comically dull.

Imagine all of an organization’s data sources as a series of garden hoses in your backyard. Let’s say the organization is a hospital. There are hoses for personnel, equipment, drugs, insurance companies, medical supplies, scheduling, bed availability and probably dozens of other things. Many of the hoses connect up to vendors and many connect to patients. No one can remember what some of them are supposed to connect to. All were bought at different times from different manufacturers and are different sizes and lengths. And it’s a hospital, so hose maintenance has never been anyone’s top priority. Now look out the window. There’s a pile of knotted rubber so dense you can’t see grass.

Palantir untangles hoses.

“We’ve always been the mole people of Silicon Valley,” says Akshay Krishnaswamy, Palantir’s chief architect. “It’s like we go into the plumbing of all this stuff and come out and say, ‘Let’s help you build a beautiful ontology.’”


In metaphysics, ontology is the study of being. In software and AI, it’s come to mean the untangling of messes and the creation of a functional information ecosystem. Once Palantir standardizes an organization’s data and defines the relationships between the streams, it can build an application or interface on top of it. This combination — integrated data and a useful app — is what allows everyone from middle managers to four-star generals to have an AI co-pilot, to see themselves with the God view. “It’s the Iron Man suit for the person who’s using it,” says Krishnaswamy. “It’s like, they’re still going to have to make decisions but they feel like they’re now flying around at Mach 5.”

The most dramatic expression of Palantir’s capabilities is in Ukraine, where the company merges real-time views from hundreds of commercial satellites with communications technology and weapons data. All of that information is then seamlessly displayed on laptops and handheld dashboards for commanders on the battlefield. A senior U.S. military official told me, “The Ukrainian force is incredibly tough, but it’s not much of a fight without iPads and Palantir.”

I mentioned that progressives and some of the military establishment dislike Palantir. Each has a reason. The company was co-founded in 2003 by Peter Thiel, which explains much of the hatred from the far left. Thiel spoke at the 2016 Republican convention and endorsed Donald Trump, dislikes multiculturalism, financed a lawsuit to kill Gawker and then tried to buy its corpse. The enmity here is mutual, but also kind of trivial.

Palantir has another co-founder. His name is Alex Karp, and many people in the Pentagon find him very annoying. The quick explanation is that Karp is loud and impatient, and he’s not one of them. But it’s more troubling than that.

Karp was born in New York City to a Black mother and a Jewish father. He’s severely dyslexic, a socialist, a 2016 Hillary Clinton supporter. When we spoke in Palantir’s New York offices, it was clear that he’s whip-smart and that he keeps a careful accounting of the slights he’s accumulated. “Quite frankly,” Karp told me, “just because of biographical issues, I assume I am going to be screwed, right?” It was like meeting the protagonist from a book co-authored by Ralph Ellison and Philip Roth.

Thiel and Karp were law school classmates at Stanford in the early ’90s. They argued plenty, but agreed about enough to create Palantir with partial funding (less than $2 million) from In-Q-Tel, an investment arm of the CIA, and a few core beliefs. The first is that the United States is exceptional, and working to strengthen its position in the world benefits all humanity. “I’ve lived abroad,” Karp says. “I know [America] is the only country that’s remotely as fair and meritocratic as America is. And I tend to be more focused on that than the obvious shortcomings.” In a speech last year, Karp, who is CEO, explained what this means for the company: “If you don’t think the U.S. government should have the best software in the world … We respectfully ask you not to join Palantir. Not in like you’re an idiot, just we have this belief structure.”


The company’s second core belief springs from the chip on Karp’s shoulder. Like generations of Black and Jewish entrepreneurs before him, Karp presumes his company isn’t going to win any deals on the golf course. So to get contracts from Fortune 500 companies and governments, Palantir must do things other software companies won’t, and do them so fast and cheap that the results are irrefutable.

This approach has worked exceedingly well in the corporate world. Palantir’s market capitalization is $52 billion and its stock has climbed more than 150 percent in the past year, largely because of demand for its AI products. But for much of its existence, an openly patriotic company with software better, faster and cheaper than its competitors was shut out of U.S. defense contracts. In the mid-2010s this put Palantir’s survival at risk and sharpened Karp’s indignation to a fine point. Either his biography had made him paranoid or something was amiss.

In 2016, Palantir took the unprecedented step of suing the Pentagon to find out. The case alleged the Defense Department was in violation of the Federal Acquisition Streamlining Act, a 1994 law that prohibits the government from starting new bloat-filled projects if an off-the-shelf solution is available. The House Committee on Government Operations made its intent unusually clear: “The Federal Government must stop ‘reinventing the wheel’ and learn to depend on the wide array of products and services sold to the general public.”

The record of Palantir v. United States is about as one-sided as these things can be. In the Court of Federal Claims, Palantir was able to document soldiers, officers and procurement people acknowledging the supremacy and lower cost of its in-market products — and show the Pentagon was still buying a more expensive proposal, years from effective deployment, offered by a consortium of Raytheon, Northrop …

© Washington Post