US military leans into AI for attack on Iran, but the tech doesn’t lessen the need for human judgment

Anthropic’s Claude is helping the US military choose targets to strike in Iran, but responsibility for the accuracy, strategy and ethics of the decisions rests with humans.

Author: Jon R. Lindsay on Mar 11, 2026
Source: The Conversation
AI is helping U.S. forces find and choose targets in Iran, like this airfield. U.S. Central Command via AP

The U.S. military was able “to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran” thanks in part to its use of artificial intelligence, according to The Washington Post. The military has used Claude, the AI tool from Anthropic, combined with Palantir’s Maven system, for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela.

While Claude is only a few years old, the U.S. military’s ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the U.S. can use AI in war today.

In my experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as an intelligence officer in the U.S. Navy, I find that digital systems are only as good as the organizations that use them. Some organizations squander the potential of advanced technologies, while others can compensate for technological weaknesses.

Myth and reality in military AI

Science fiction tales of military AI are often misleading. Popular ideas of killer robots and drone swarms tend to overstate the autonomy of AI systems and understate the role of human beings. Success, or failure, in war usually depends not on machines but on the people who use them.

In the real world, military AI refers to a huge collection of different systems and tasks. The two main categories are automated weapons and decision support systems. Automated weapon systems have some ability to select or engage targets by themselves. These weapons are more often the subject of science fiction and the focus of considerable debate.

Decision support systems, in contrast, are now at the heart of most modern militaries. These are software applications that provide intelligence and planning information to human personnel. Many military applications of AI, including in current and recent wars in the Middle East, are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration and cybersecurity.

Claude is an example of a decision support system, not a weapon. Claude is embedded in the Maven Smart System, used widely by military, intelligence and law enforcement organizations. Maven uses AI algorithms to identify potential targets from satellite and other intelligence data, and Claude helps military planners sort the information and decide on targets and priorities.

The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision support systems. These AI applications provide analytical and planning support, but human beings ultimately make the decisions.

Researcher Craig Jones explains how the U.S. military is using artificial intelligence in its attack on Iran, and some of the issues that arise from its use.

The long history of military AI

Weapons with some degree of autonomy have been used in war for well over a century. Nineteenth-century naval mines exploded on contact. German buzz bombs in World War II were gyroscopically guided. Homing torpedoes and heat-seeking missiles alter their trajectory to intercept maneuvering targets. Many air defense systems, such as Israel’s Iron Dome and the U.S. Patriot system, have long offered fully automatic modes.

Robotic drones became prevalent in the wars of the 21st century. Uncrewed systems now perform a variety of “dull, dirty and dangerous” tasks on land, at sea, in the air and in orbit. Remotely piloted vehicles like the U.S. MQ-9 Reaper or Israeli Hermes 900, which can loiter autonomously for many hours, provide a platform for reconnaissance and strikes. Combatants in the Russia-Ukraine war have pioneered the use of first-person view drones as kamikaze munitions. Some drones rely on AI to acquire targets because electronic jamming precludes remote control by human operators.

But systems that automate reconnaissance and strikes are merely the most visible parts of the automation revolution. The ability to see farther and hit faster dramatically increases the information processing burden on military organizations. This is where decision support systems come in. If automated weapons improve the eyes and arms of a military, decision support systems augment the brain.

Cold War era command and control systems anticipated modern decision support systems such as Israel’s AI-enabled Tzayad for battle management. Automation research projects like the United States’ Semi-Automatic Ground Environment, or SAGE, in the 1950s produced important innovations in computer memory and interfaces. In the U.S. war in Vietnam, Igloo White gathered intelligence data into a centralized computer for coordinating U.S. airstrikes on North Vietnamese supply lines. The U.S. Defense Advanced Research Projects Agency’s strategic computing program in the 1980s spurred advances in semiconductors and expert systems. Indeed, defense funding originally enabled the rise of AI.

Organizations enable automated warfare

Automated weapons and decision support systems rely on complementary organizational innovation. From the Electronic Battlefield of Vietnam to the AirLand Battle doctrine of the late Cold War and later concepts of network-centric warfare, the U.S. military has developed new ideas and organizational concepts.

Particularly noteworthy is the emergence of a new style of special operations during the U.S. global war on terrorism. AI-enabled decision support systems became invaluable for finding terrorist operatives, planning raids to kill or capture them, and analyzing intelligence collected in the process. Systems like Maven became essential for this style of counterterrorism.

The impressive American way of war on display in Venezuela and Iran is the fruition of decades of trial and error. The U.S. military has honed complex processes for gathering intelligence from many sources, analyzing target systems, evaluating options for attacking them, coordinating joint operations and assessing bomb damage. The only reason AI can be used throughout the targeting cycle is that countless human personnel everywhere work to keep it running.

AI gives rise to important concerns about automation bias, or the tendency for people to give excessive weight to automated decisions, in military targeting. But these are not new concerns. Igloo White was often misled by Vietnamese decoys. A state-of-the-art U.S. Aegis cruiser accidentally shot down an Iranian airliner in 1988. Intelligence mistakes led U.S. stealth bombers to accidentally strike the Chinese embassy in Belgrade, Serbia, in 1999.

Many Iraqi and Afghan civilians died due to analytical mistakes and cultural biases within the U.S. military. Most recently, evidence suggests that a Tomahawk cruise missile struck a girls’ school adjacent to an Iranian naval base, killing about 175 people, mostly students. This targeting could have resulted from a U.S. intelligence failure.

Automated prediction needs human judgment

The successes and failures of decision support systems in war are due more to organizational factors than technology. AI can help organizations improve their efficiency, but AI can also amplify organizational biases. While it may be tempting to blame Lavender for excessive civilian deaths in the Gaza Strip, lax Israeli rules of engagement likely matter more than automation bias.

As the name implies, decision support systems support human decision-making; AI does not replace people. Human personnel still play important roles in designing, managing, interpreting, validating, evaluating, repairing and protecting their systems and data flows. Commanders still command.

In economic terms, AI improves prediction, which means generating new data based on existing data. But prediction is only one part of decision-making. People ultimately make the judgments that matter about what to predict and how to use predictions. People have preferences, values and commitments regarding real-world outcomes, but AI systems intrinsically do not.

In my view, this means that increasing military use of AI is actually making humans more important in war, not less.

Jon R. Lindsay does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
