
Another Look at the Bombing of Al Minab: Shared by Tom McDermott

Project Maven's Cheery Seal Features Smiling Robots of War*

On 19 March I posted my article, How AI and Distance Killed the Children at Al Minab. My focus then was on the moral chasm that opens when killing is automated and carried out at a distance — the way technology allows those who give the orders, or push the button, to remain untouched by what happens at the other end. I also wrote about the absence of the fail-safe systems of the past: human spotters who could halt attacks on civilians.

Today's Guardian long read by Kevin T. Baker covers much of the same ground but goes considerably deeper: into the specific failures of human judgement that kept a primary school on a target list for a decade, and into the architecture of the AI systems managing modern wars, including the Maven Smart System — a Pentagon targeting platform that Google contracted to build and then abandoned, and that Palantir took over. Baker's account of what Palantir's CEO actually believes he built is particularly revealing.

Baker's central argument — and it is a damning one — is that these systems are designed, whether intentionally or not, to place lethal human decisions beyond accountability. It is essential reading. — Tom

AI Got the Blame for the Iran School Bombing. The Truth Is Far More Worrying

Kevin T. Baker | The Guardian | 26 March 2026

Click here for the article

Summary: The bombing of the Shajareh Tayyebeh primary school in Minab, which killed between 175 and 180 people — most of them girls aged seven to twelve — was rapidly framed in public debate as an AI targeting error. The author argues that this framing was false and dangerously convenient.

The targeting system that carried the school through to a strike order was not an AI chatbot but Maven — the Palantir-built platform that now processes up to 1,000 targeting decisions an hour. The article traces Maven's origins through the Pentagon's Project Maven, Google's controversial 2018 contract to build it, and the "third offset strategy" that drove the compression of the kill chain.

Baker argues that Palantir's system eliminated not bureaucratic inefficiency but the human judgement that bureaucracy had always depended on — the moment when someone might have noticed that a category no longer fit the case. The school was visible on Google Maps and listed in Iranian business directories. Nobody searched.

Calling Minab an "AI problem," the article concludes, gave the human beings who built, authorised and launched this system a place to hide.

* Note the irony in Project Maven's seal: the Latin motto reads "officium nostrum est adiuvare" (our duty is to help).

Quotes:

"A chatbot did not kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal."

"Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality."

"Software is now at the helm," the CEO of Palantir, Karp writes, with hardware "serving as the means by which the recommendations of AI are implemented in the world." His model for what this should look like comes from nature: bee swarms and the murmurations of starlings. "There is no mediation of the information captured by the scouts once they return to the hive" — no weekly reports to middle management, no presentations to senior leaders, no meetings."

"The targeting cycle had been fast enough to hit 50 buildings and too fast to discover it was hitting the wrong ones."
