March 5, 2026 by Thomas Karlsson
Reading time: 5 min

US–Israel Strike Reports Spotlight Anthropic’s Claude in Military Decisions

Early on February 28, reports said the United States and Israel launched a joint attack on Iran, carrying out nearly 900 missile strikes in the first 12 hours. The surge was attributed to new AI capabilities, with The Guardian reporting that the US military used Anthropic’s Claude to compress the targeting decision chain, intensifying scrutiny of how large language models and machine learning are entering lethal operations.

What was reported about Claude’s role in targeting

According to The Guardian, Claude was used as part of a process to shorten the time between identifying a potential target and authorizing action. Craig Jones, a political geography lecturer at Newcastle University, described the system as producing recommendations for suitable targets at computer speed—an expression that underscores the central promise of AI in command environments: turning vast, messy data into actionable options faster than human staff can.

It is important to separate what a language model is from what a targeting system is. Claude is an LLM designed to generate and summarize text, reason over documents, and assist with decision workflows. In military settings, an LLM would typically sit inside a larger software stack—connected to intelligence reports, sensor summaries, maps, and rules-of-engagement checklists—helping analysts draft assessments, surface inconsistencies, or prioritize leads. Even if the final “targeting” output appears as a recommendation, it may be the product of multiple models and databases, not the LLM alone.
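To make that distinction concrete, here is a minimal, dependency-free sketch of how an LLM might sit as one drafting step inside such a stack. Everything in it is hypothetical: the IntelItem record, the call_llm stand-in (which a real system would replace with an actual model API call), and the human_review_gate are illustrative names, not anything reported about the actual deployment.

```python
from dataclasses import dataclass

@dataclass
class IntelItem:
    source: str        # e.g. "SIGINT summary", "drone imagery report"
    text: str
    reliability: str   # analyst-assigned grade, e.g. "B2"

def call_llm(prompt: str) -> str:
    """Stand-in for any LLM API call. Returns a canned answer here so
    the sketch runs offline; a real system would query a model."""
    return "DRAFT ASSESSMENT: sources broadly agree; note the reliability gap on item [2]."

def draft_assessment(items: list[IntelItem]) -> str:
    """Ask the model to summarize and flag inconsistencies.
    The model drafts; it does not decide."""
    evidence = "\n".join(
        f"[{i}] ({it.source}, reliability {it.reliability}) {it.text}"
        for i, it in enumerate(items, 1)
    )
    prompt = (
        "Summarize the evidence below, citing items by number, "
        "and list any contradictions or gaps:\n" + evidence
    )
    return call_llm(prompt)

def human_review_gate(draft: str) -> bool:
    """Hard stop: nothing leaves the workflow without explicit sign-off."""
    print(draft)
    return input("Analyst approves draft for the next stage? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    items = [
        IntelItem("SIGINT summary", "Device cluster active near site A.", "B2"),
        IntelItem("Imagery report", "No vehicles observed at site A for 48h.", "C3"),
    ]
    if human_review_gate(draft_assessment(items)):
        print("Forwarded to the next stage of the stack.")
```

The point of the sketch is structural: the model drafts and flags, while a separate, explicit gate keeps human sign-off in the control path.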

Israel’s prior use of AI targeting systems and what it suggests

The report also points to Israel’s earlier use of AI-enabled targeting in Gaza, including a system widely referred to as Lavender, which was described as helping identify tens of thousands of potential targets within Hamas. Such systems are typically associated with machine learning pipelines that combine:

  • Image recognition from drones and satellites
  • Pattern analysis of communications metadata
  • Biometric and identity-resolution techniques
  • Network analysis that links people, places, and devices

This matters because it shows how “AI in warfare” is not a single tool but a production line: collection, fusion, scoring, and prioritization. LLMs can accelerate the human-facing parts—summaries, rationale templates, and cross-checking—while computer vision and other models do the sensing and classification.
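As a toy illustration of that production line, the sketch below wires the four stages together in miniature. The Candidate record, the signal names, the fusion weights, and the 0.5 threshold are all invented for illustration; real fusion and scoring pipelines are vastly more complex and classified.

```python
from dataclasses import dataclass, field

# Hypothetical stages mirroring the "production line" above:
# collection -> fusion -> scoring -> prioritization.

@dataclass
class Candidate:
    entity_id: str
    signals: dict = field(default_factory=dict)  # per-source scores in [0, 1]
    score: float = 0.0

def collect() -> list[Candidate]:
    # In reality: imagery models, metadata analysis, identity resolution.
    return [
        Candidate("E-101", {"imagery": 0.9, "comms": 0.4, "network": 0.7}),
        Candidate("E-102", {"imagery": 0.2, "comms": 0.8, "network": 0.3}),
    ]

def fuse_and_score(c: Candidate, weights: dict) -> Candidate:
    # Weighted average of per-source signals; real fusion is far richer.
    c.score = sum(weights[k] * v for k, v in c.signals.items()) / sum(weights.values())
    return c

def prioritize(cands: list[Candidate], threshold: float) -> list[Candidate]:
    # Only candidates above a confidence threshold reach human analysts.
    return sorted(
        (c for c in cands if c.score >= threshold),
        key=lambda c: c.score, reverse=True,
    )

weights = {"imagery": 0.5, "comms": 0.3, "network": 0.2}
queue = prioritize([fuse_and_score(c, weights) for c in collect()], threshold=0.5)
for c in queue:
    print(c.entity_id, round(c.score, 2))
```

Even at this toy scale, the design choice is visible: the threshold and weights, not any human judgment, decide which candidates ever appear in an analyst’s queue.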

Why speed changes the ethics and the operational risk

Ethics professor David Leslie (Queen Mary University of London) told The Guardian that this represents a new era of military strategy and technology, while warning about “cognitive offloading.” The concept is straightforward: as systems offer confident-looking recommendations, operators may become psychologically and procedurally distanced from consequences, even if humans remain “in the loop.”

The speed advantage also creates a governance problem. If AI expands the number of viable options—more targets, more routes, more timing windows—decision-makers can face a higher tempo of approvals with less time to scrutinize each recommendation. In practice, that can weaken:

  • Verification standards (is the target correctly identified?)
  • Proportionality assessments (is expected harm excessive?)
  • Accountability (who is responsible for an error when AI shaped the workflow?)

The article cites a deadly strike that hit a school in southern Iran, reportedly killing 165 people including many children, according to state-controlled media. Regardless of attribution disputes common in wartime reporting, incidents like this intensify demands for auditable processes: what data was used, what confidence thresholds applied, what human checks occurred, and what safeguards prevented misidentification.
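One way to picture what “auditable” could mean in practice is a structured record created at each decision point. The sketch below is an assumption-laden illustration, not a description of any fielded system: the field names, the model identifier, and the 0.9 threshold are made up, and the input hashes simply allow the underlying evidence to be re-verified later.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(inputs: list[str], model_version: str, confidence: float,
                 threshold: float, reviewer: str, decision: str) -> dict:
    """Hypothetical append-only audit entry answering the questions above:
    what data, what thresholds, which human signed off."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash inputs so the exact evidence can later be verified unchanged.
        "input_digests": [hashlib.sha256(i.encode()).hexdigest() for i in inputs],
        "model_version": model_version,
        "confidence": confidence,
        "threshold": threshold,
        "threshold_met": confidence >= threshold,
        "human_reviewer": reviewer,
        "decision": decision,
    }

entry = audit_record(
    inputs=["SIGINT summary #4711", "imagery report #88"],
    model_version="model-2026-02-r3",   # illustrative identifier
    confidence=0.82, threshold=0.9,     # below threshold: should force escalation
    reviewer="analyst-07", decision="escalated for senior review",
)
print(json.dumps(entry, indent=2))
```

Because the confidence in the example falls below the threshold, the record itself documents that the case had to be escalated rather than approved, which is exactly the kind of trail external review would need.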

The broader AI ecosystem behind military adoption

The reported use of Claude highlights how frontier AI vendors sit adjacent to national security demand. Modern military AI depends on the same building blocks driving the commercial boom:

  • GPU-heavy training and inference, dominated by NVIDIA hardware
  • Foundation models (LLMs and multimodal models) adapted via fine-tuning and retrieval
  • Data engineering that turns classified and sensor data into machine-readable formats
  • Secure deployment patterns, including on-premises and air-gapped environments

But military use raises additional requirements that consumer AI rarely meets: strict provenance of data, robust red-teaming against adversarial manipulation, and explainability that supports legal review. LLMs are also prone to hallucinations and overconfident phrasing—traits that are manageable in customer support but dangerous in lethal decision support.
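To show what even a crude safeguard against those traits might look like, here is a toy output vetter: it rejects model text that cites no evidence, cites evidence that does not exist, or uses overconfident phrasing. The “[n]” citation convention and the banned-phrase list are assumptions made for this example; real red-teaming and explainability requirements go far beyond anything this compact.

```python
import re

# Toy guardrail: model output must cite known, numbered evidence items
# and avoid overconfident language, or it is rejected for human rework.
EVIDENCE_IDS = {1, 2, 3}                           # assumed evidence registry
OVERCONFIDENT = ("certainly", "definitely", "confirmed")

def vet_output(text: str) -> str:
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", text)}
    if not cited:
        raise ValueError("No evidence cited: abstain rather than pass along.")
    unknown = cited - EVIDENCE_IDS
    if unknown:
        raise ValueError(f"Cites evidence that does not exist: {sorted(unknown)}")
    if any(w in text.lower() for w in OVERCONFIDENT):
        raise ValueError("Overconfident phrasing flagged for rewrite.")
    return text

print(vet_output("Activity at site A is consistent with [1] and [3]."))
```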

Regulation, norms, and what to watch next

Global debates over lethal autonomous weapons systems are accelerating, with the UN and many governments discussing limits on autonomy in targeting and engagement. Even when a human formally authorizes a strike, AI can still meaningfully shape outcomes by narrowing choices, setting priorities, and influencing perceived certainty.

Key questions now facing militaries, vendors, and regulators include:

  • What constitutes meaningful human control when AI sets the pace?
  • Should LLM-driven recommendations be logged and independently reviewable?
  • How are models validated against bias, noisy intelligence, and adversarial deception?
  • Where is the line between decision support and de facto automation?

If the reported Claude deployment is accurate, it signals a shift: frontier commercial AI is no longer only a productivity tool—it is becoming part of the machinery that compresses time, expands strike capacity, and raises the stakes for accountability in modern conflict.
