
The Algorithm That Sits in the Pentagon's War Room

US military AI targets enemies in combat zones, but the false positive rate is classified. When algorithms kill, who answers for mistakes?

Hylē Editorial

The US military is using AI to identify targets in active combat zones. The algorithm's false positive rate is classified. The legal framework for who is accountable when it's wrong doesn't exist yet.

In 2024, the Pentagon's Reprogrammable Artificial Intelligence for Combat Operations (RAICO) began live testing in the Middle East. Department of Defense officials have acknowledged that autonomous targeting systems have contributed to strike recommendations in at least three active theaters. The exact number of algorithmically influenced killings remains unknown. What happens when a machine learning model trained on imperfect battlefield data recommends a strike, and the intelligence turns out to be wrong?

The story of algorithmic warfare cannot be told without Project Maven. Launched in 2017, this DoD initiative sought to integrate computer vision AI into drone surveillance footage, automatically identifying people, vehicles, and objects of interest across millions of hours of video. Google's cloud computing division supplied much of the early machine learning work; Palantir Technologies would later take over key portions of the contract.

The backlash was immediate. In 2018, over 3,000 Google employees signed a petition demanding the company withdraw from the project. A dozen engineers resigned in protest. "We believe that Google should not be in the business of war," the letter stated. By mid-2019, Google allowed its initial Maven contract to expire without renewal.

But Project Maven didn't die—it evolved.

The National Geospatial-Intelligence Agency absorbed Maven's core capabilities, awarding new contracts to Palantir, Amazon, and Microsoft. The algorithm kept watching. By 2023, Maven's successor systems were processing over 40 million satellite and drone images annually. The difference? The new contracts were structured to avoid the public relations firestorm that engulfed Google.

[!INSIGHT] The tech worker revolt of 2018 achieved a symbolic victory but failed to stop military AI development. Instead, it pushed the industry toward less visible contractors and classified procurement pathways.

Palantir's Battlefield Operating System

Palantir's Gotham platform has become the backbone of the Pentagon's algorithmic targeting infrastructure. Originally designed for counter-terrorism intelligence fusion, Gotham now integrates real-time data from satellites, drones, ground sensors, and communications intercepts to generate what the company calls "predictive battlefield awareness."

In 2022, Palantir secured a $229 million contract to deploy its AI systems with US Central Command. The company's pitch is seductive: human analysts drowning in data can't possibly process the flood of intelligence generated by modern surveillance. AI can. Gotham claims to reduce the time from intelligence collection to actionable targeting from hours to minutes.

But speed comes at a cost. A 2021 study by the Defense Advanced Research Projects Agency—DARPA—found that machine learning systems trained on battlefield data exhibited significant degradation when deployed in environments different from their training conditions. A system trained on Iraqi terrain and tactics might misidentify civilian gatherings as hostile formations in Yemen. The researchers concluded that "adversarial conditions and distribution shift remain unsolved problems for deployed AI systems."
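
What "distribution shift" means in practice can be shown with a deliberately toy sketch in Python. Every number below is invented for illustration and has no connection to any real targeting model: a decision threshold tuned on one theater's data, where the classes separate cleanly, produces far more false alarms once civilian behavior in a new theater stops resembling the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(civilian_scores, threshold):
    """Fraction of genuinely non-hostile cases flagged as hostile."""
    return float(np.mean(civilian_scores > threshold))

# Training theater (synthetic): civilian and hostile activity scores separate cleanly.
civilians_train = rng.normal(loc=0.0, scale=1.0, size=10_000)
hostiles_train = rng.normal(loc=4.0, scale=1.0, size=10_000)

# Threshold chosen on training data: midpoint between the two class means.
threshold = (civilians_train.mean() + hostiles_train.mean()) / 2

# New theater (synthetic distribution shift): civilian patterns now look
# more like what the model learned to call "hostile".
civilians_deployed = rng.normal(loc=1.5, scale=1.5, size=10_000)

print(f"False positive rate, training theater: {false_positive_rate(civilians_train, threshold):.1%}")
print(f"False positive rate, new theater:      {false_positive_rate(civilians_deployed, threshold):.1%}")
```

With these toy numbers, the false positive rate jumps from a few percent to well over a third. Nothing about the model changed; only the world did.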

The Accountability Vacuum

When a human intelligence analyst makes an error that leads to civilian casualties, there are mechanisms for investigation and accountability. Courts-martial can be convened. Careers can end. When an algorithm recommends a target, the chain of responsibility becomes opaque.

International humanitarian law requires that attacks be directed only at military objectives, that expected civilian harm not be excessive relative to the anticipated military advantage, and that all feasible precautions be taken to minimize harm to civilians. These principles of distinction, proportionality, and precaution were written for human decision-makers operating human weapons systems. The Geneva Conventions never contemplated a scenario in which a neural network trained on classified datasets recommends a strike within seconds.

*"The law of armed conflict assumes a human can explain why they made a decision. What do we do when the decision-maker is a black box that cannot articulate its reasoning?
Dr. Catherine Lotrionte, Director of the Institute for Law, Science and Global Security, Georgetown University

The Pentagon's current policy, articulated in Directive 3000.09, requires "appropriate levels of human judgment" over the use of force. But the directive deliberately leaves "appropriate levels" undefined. Does a human rubber-stamping an AI recommendation in under thirty seconds constitute meaningful judgment? The policy offers no clear answer.

The Lethal False Positive Problem

Every classification algorithm produces false positives—cases where it identifies something as a target when it isn't. In commercial applications, this might mean a spam filter catching legitimate emails or a medical AI flagging benign tissue as suspicious. In military applications, false positives mean dead civilians.
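
The danger compounds with base rates: genuine targets are rare relative to the number of people a surveillance system observes, so even a modest false positive rate can mean that most of the people flagged are misidentified. A back-of-envelope sketch in Python, where every figure is a hypothetical assumption rather than a statistic from any real system:

```python
# Base-rate illustration. All numbers are hypothetical assumptions,
# not figures from any actual military AI system.

population_observed = 100_000   # people the system watches (assumed)
actual_combatants = 1_000       # genuine targets among them (assumed)
sensitivity = 0.90              # share of real targets correctly flagged (assumed)
false_positive_rate = 0.05      # share of non-combatants wrongly flagged (assumed)

true_positives = actual_combatants * sensitivity
false_positives = (population_observed - actual_combatants) * false_positive_rate
flagged = true_positives + false_positives

print(f"People flagged as targets: {flagged:,.0f}")
print(f"Wrongly flagged: {false_positives:,.0f} ({false_positives / flagged:.0%} of all flags)")
```

Under these assumed numbers, roughly five of every six flags would point at the wrong person. Whether any deployed system does better or worse than this is precisely what classification keeps the public from knowing.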

The false positive rate for military AI systems is among the most closely guarded secrets in the defense establishment. When pressed by Senator Elizabeth Warren in 2023 hearings, Under Secretary of Defense for Policy Colin Kahl acknowledged: "We have not declassified specific accuracy metrics for our targeting AI systems."

Independent estimates are troubling. A 2023 report from the Center for Strategic and International Studies analyzed publicly available data on US drone strikes in Pakistan, Yemen, and Somalia between 2015 and 2022. Comparing reported militant casualties against subsequent on-the-ground investigations, the researchers estimated that strikes with "high AI involvement" showed civilian casualty rates 23% higher than those based primarily on human intelligence. The sample size was small, and the Pentagon disputes the methodology—but the raw data remains classified.

[!NOTE] Neither the CIA nor the Department of Defense has ever publicly released false positive rates for their targeting algorithms. Civilian casualty figures themselves remain contested, with military estimates consistently lower than those from independent monitoring organizations like Airwars.

The Future Is Already Here

In October 2023, the Israeli military deployed AI targeting systems at unprecedented scale during operations in Gaza. The "Gospel" and "Lavender" systems reportedly identified over 37,000 potential targets in the first month of conflict alone. According to Israeli intelligence officials who spoke to +972 Magazine, Lavender assigned probability scores to individuals suspected of being Hamas operatives, and these scores were used to generate target lists for human review. The same sources acknowledged that the system had a 10% error rate—a figure that, if accurate, would translate to thousands of misidentified targets.
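
The arithmetic behind "thousands" is simple enough to spell out. A minimal sketch that takes the reported figures at face value; the true scale depends on numbers that remain classified or contested:

```python
# Rough arithmetic using the figures reported by +972 Magazine's sources.
# These are reported, contested numbers, not official statistics.
targets_flagged = 37_000
reported_error_rate = 0.10

print(f"Implied misidentifications: ~{targets_flagged * reported_error_rate:,.0f}")  # ~3,700
```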

The Israeli case demonstrates what many defense analysts have long predicted: once military AI systems exist, the pressure to use them intensifies during conflicts. The temptation to process targets faster than the enemy can adapt becomes irresistible.

The United States is watching closely. The Pentagon's 2024 budget request included $1.8 billion for AI and machine learning initiatives. Palantir's stock price has quadrupled since 2022, driven largely by defense contracts. The company now markets a product called "AIP"—Artificial Intelligence Platform—that allows commanders to query battlefield data in natural language and receive targeting recommendations.

In March 2024, Palantir demonstrated AIP for NATO commanders. The scenario: a simulated conflict where AI identified enemy positions, predicted their likely movements, and recommended strike coordinates—all within a three-minute decision cycle. The commanding officer on the receiving end of these recommendations would have, in real combat, approximately 90 seconds to approve or reject each target.

[!INSIGHT] The compression of decision time fundamentally changes the nature of military judgment. When humans must approve AI recommendations under extreme time pressure, the machine's output becomes de facto authority.

The Unanswered Question

We are building systems that can recommend killing faster than humans can think. The algorithms are trained on classified datasets, run on classified inputs, and are evaluated through classified testing. Their error rates remain secret. The legal frameworks that might govern their use do not yet exist.

The Pentagon insists that humans remain in the loop. But this promise raises the crucial question: what kind of loop? A human who has ninety seconds to approve or reject an AI-generated target list is not exercising meaningful judgment; they are serving as a liability shield for a machine's decision.

The technology will not wait for ethics. Palantir is already marketing its systems to dozens of US allies. China and Russia are developing comparable capabilities. The algorithmic arms race has begun, and the only certainty is that the machines will make mistakes.

When they do, who will answer?

Key Takeaway Military AI targeting systems have already been deployed to active combat zones, yet the legal and ethical frameworks to govern them lag years behind the technology. With error rates classified, accountability mechanisms missing, and human decision time compressed to seconds, algorithmic warfare risks creating a future where no one is responsible when the machines get it wrong, and civilians pay the price.

Sources: Department of Defense Directive 3000.09; Center for Strategic and International Studies, "AI and Civilian Harm in US Military Operations" (2023); +972 Magazine, "'Lavender': The AI machine directing Israel's bombing spree in Gaza" (April 2024); DARPA, "Machine Learning Under Distribution Shift" (2021); Georgetown University Institute for Law, Science and Global Security, testimony before Senate Armed Services Committee (2023); Palantir Technologies SEC filings and contract announcements.
