
AI got the blame for the Iran school bombing. The truth is far more worrying

The Audio Long Read · 37m 34s

A Guardian Long Read investigation argues that the bombing of an Iranian primary school that killed roughly 180 people was wrongly attributed to AI (Claude), when the real culprit was Palantir's Maven Smart System, a targeting platform that compressed military decision-making to 1,000 targets per hour. The author contends that the media's focus on AI chatbots obscured deeper questions about who authorized the war, whether the strike was a war crime, and how bureaucratic optimization eliminated the human deliberation that historically caught fatal targeting errors.

Summary

The article opens with the February 2026 bombing of the Shajaratayeb primary school in southern Iran during 'Operation Epic Fury,' which killed 175–180 people, mostly girls aged 7–12. Media coverage quickly focused on whether Claude, Anthropic's chatbot, had selected the school as a target — a framing the author argues was fundamentally wrong and dangerously distracting.

The author explains that the actual system responsible was Palantir's Maven Smart System, which grew out of the Pentagon's 'Third Offset Strategy' announced in 2014. That strategy aimed to compensate for eroding U.S. military advantages over China and Russia by using AI to make decisions faster than adversaries could. Project Maven was established in 2017 to address the problem of intelligence analysts drowning in drone footage. After Google abandoned the contract following employee protests, Palantir took it over in 2019 and developed it through a series of exercises called 'Scarlet Dragon,' ultimately achieving a benchmark of 1,000 targeting decisions per hour, roughly 3.6 seconds per target.

The Maven interface resembles project management software with a Kanban-style workflow, where targets move through pipeline stages toward execution. The core AI technologies are computer vision systems (not language models) that analyze drone footage, radar, and satellite imagery. Claude and other LLMs were only added in late 2024 as a peripheral search-and-summarization layer. The author emphasizes that the targeting intelligence and execution pipeline predates and operates independently of any chatbot.

The article then traces a long historical pattern of targeting systems that compressed human deliberation and measured only their own output. Operation Igloo White in Vietnam scattered 20,000 sensors along the Ho Chi Minh Trail and fed the data to IBM computers, but the system could not distinguish trucks from ox carts. The North Vietnamese exploited this by playing recordings of truck engines and hanging buckets of urine to fool the acoustic and chemical sensors. The Air Force ultimately claimed to have destroyed 46,000 trucks, more than the CIA's estimate of all the trucks in North Vietnam.

The author connects this to Michael Sherry's analysis of WWII precision-bombing doctrine, in which optimization language (civilians 'de-housed,' labor-hours destroyed per ton of bombs) allowed practitioners to stop asking whether the bombing served any strategic purpose. The author also invokes the historian Theodore Porter's argument that organizations adopt quantitative rules not for accuracy but for defensibility: judgment is politically vulnerable, while rules are not.

The article examines specific historical targeting failures that resulted from compressed deliberation. Marc Garlasco's 50 leadership strikes in the 2003 Iraq invasion were precise but hit zero intended targets, because the intelligence was wrong and the targeting cycle was too fast for anyone to discover it. The 1999 Chinese embassy bombing in Belgrade resulted from circular reporting: each validation relied on the last, creating the illusion of multiple independent checks while amplifying a single error. Jon Lindsay's research showed that once a target was 'reified' in a PowerPoint package, questioning its underlying assumptions became socially and institutionally difficult.

A key empirical data point comes from Lieutenant Colonel John Fife's 2005 study of the 2003 invasion: on shifts led by British officers operating under more conservative rules of engagement, there were no friendly fire incidents and no significant collateral damage; on American-led shifts, there were both. What the efficiency reformers would call 'latency' (the delay between identifying a target and striking it) was precisely the window in which mistakes got caught.

The author introduces the concept of a 'bureaucratic double bind': organizations cannot function without human judgment, but cannot acknowledge that judgment without appearing political, so discretion must be disguised as procedure-following. Palantir's CEO Alex Karp, in his 2025 book, celebrates eliminating the 'meetings, weekly reports, and presentations to senior leaders' that he sees as bureaucratic waste. The author argues these were precisely the spaces where people interpreted procedure and could notice when categories no longer fit reality. By encoding bureaucracy in software, Karp eliminated the interpretive flexibility the institution depended on but could never officially acknowledge.

The Shajaratayeb school appeared in Iranian business listings and on Google Maps — a basic search could have identified it as a civilian building. At 1,000 decisions per hour, no one searched. The article concludes that calling the massacre an 'AI problem' gives shelter to the human decisions that actually caused it: the decision to compress the kill chain, to define deliberation as mere latency, to start the war, and the congressional failure to stop it.

Key Insights

  • The author argues that Claude and large language models played no meaningful role in the school bombing — they were added to Palantir's ecosystem in late 2024 only as a peripheral search layer, while the core targeting AI uses computer vision systems that predate LLMs by years.
  • The author contends that Palantir's Maven Smart System achieved a benchmark of 1,000 targeting decisions per hour (3.6 seconds per target), compressing what previously required 2,000 people over an entire war into 20 soldiers using a software interface.
  • The author argues that the media's focus on whether Claude 'hallucinated' or whether Anthropic bore responsibility displaced the constitutional question of who authorized the war and the legal question of whether the strike constitutes a war crime.
  • The author uses Lieutenant Colonel Fife's 2005 study to show that British-led targeting shifts in the 2003 Iraq invasion produced zero friendly fire incidents and no significant collateral damage, while American-led shifts produced both: what efficiency reformers called 'latency' was actually the window in which mistakes were caught.
  • The author argues that Alex Karp's celebration of eliminating 'meetings, weekly reports, and presentations to senior leaders' actually destroyed the unacknowledged interpretive spaces where humans noticed when formal categories no longer matched reality, leaving a bureaucracy that can execute rules but not interpret them.
  • The author identifies a recurring historical pattern — from Operation Igloo White's truck-counting in Vietnam to WWII bombing optimization — where targeting systems that can only measure their own output begin believing their own inflated results, a phenomenon the military nicknamed 'the Great Laotian Truck Eater.'
  • The author invokes Theodore Porter's historical argument that organizations adopt quantitative rules not because numbers are more accurate but because they are more defensible: judgment is politically vulnerable, so discretion gets encoded as procedure, making it appear to disappear.
  • The author argues that the Shajaratayeb school appeared in Iranian business listings and on Google Maps and could have been identified as civilian with a basic search, but at 1,000 targeting decisions per hour, no one searched — the speed of the system structurally precluded the verification that would have caught the error.

Topics

  • Palantir Maven Smart System and military targeting automation
  • Historical pattern of targeting system failures through over-optimization
  • Media misattribution of the school bombing to AI chatbots
  • The bureaucratic elimination of human deliberation in kill chains
  • Constitutional and legal accountability for the Iran strikes
