
Commentary
Strategic Europe

The Fog of AI War

In Ukraine, Gaza, and Iran, AI warfare has come to dominate, with barely any oversight or accountability. Europe must lead the charge on the responsible use of new military technologies.

By Raluca Csernatoni
Published on Apr 16, 2026

An irreducible uncertainty haunts every battlefield: the fog of war. For two centuries, military innovation has promised to lift that fog. Artificial intelligence (AI) was supposed to be the technology that finally did so, replacing human guesswork with machine precision and processing oceans of data at speeds that would render uncertainty obsolete.

There are genuine advantages. AI can protect soldiers by deploying machines into kill zones first. It can process terabytes of sensor data faster than any human and can coordinate multilayered air defense systems at the speed required to stop ballistic missiles. Any serious European policy on military AI must begin by acknowledging that these capabilities save lives and that adversaries will field them regardless of what democracies decide.

But acknowledging the advantages is not the same as ignoring what happens when speed, attrition, and scale become organizing principles of warfare. U.S. President Donald Trump's dispute with Anthropic, which had insisted on guardrails preventing its models from being used for fully autonomous weapons and mass domestic surveillance, ended with the Pentagon designating the company a supply chain risk.

The message from the world’s largest military power is that normative constraints on military AI are obstacles to innovation rather than preconditions for lawful use.

This vacuum creates both a responsibility and an opportunity for Europe. The EU has begun to build the institutional architecture for a defense technological base through BraveTech EU, a roadmap for EU defense industry transformation, and a 2026 action plan on drone and counterdrone security. Yet, these initiatives must do more than replicate the logic of speed maximization in AI-powered defense innovation. Europe’s comparative advantage lies in its capacity to embed legal accountability, meaningful human judgement, and deliberative processes into systems before they are fielded—not after harm has occurred.

In practice, this demands a willingness to accept that some targeting cycles must remain slow. It demands a doctrine that treats the deliberative pause—the time required for a human to genuinely evaluate and override an algorithm’s recommendation—as a strategic asset rather than an operational liability. And it demands enforceable red lines at the European level, embedded in the entire AI lifecycle, from innovation, through development, defense procurement, and export controls, and finally to operational doctrine. The formal shell of human judgement should not become a legal alibi for algorithmic killing.

But European decisionmakers should do this without stifling the experimentation needed to sustain the innovation that secures vital marginal advantages on the battlefield. Three theaters of war offer the EU vital lessons: Ukraine, Gaza, and Iran.

First, AI retrained on classified battlefield data has increased Ukrainian drone engagement rates, turning a defensive war of attrition into something more survivable. Second, Israel’s integrated command systems now coordinate interceptions in real time across American Terminal High-Altitude Area Defense (THAAD) batteries, Aegis ships, and Israeli platforms, determining which system will intercept which incoming missile and preventing the waste of interceptors during Iranian barrages. And third, the American Low-cost Uncrewed Combat Attack System (LUCAS) drone, deployed for the first time during Operation Epic Fury in February 2026, uses vision-based object recognition rather than static satellite coordinates. This improves precision and reduces civilian harm compared with the Iranian Shahed drone from which it was reverse-engineered.

But on these battlefields, AI has also produced something more dangerous than classical uncertainty: a fog generated by information rather than by its absence. Whereas the classical fog of war blinded commanders with too little knowledge, the fog of AI war blinds them with too much. Algorithmic scores, probabilistic targeting lists, and recommendations arrive faster than anyone can evaluate them. The result is manufactured clarity that masks a deeper opacity of the AI black box.

The Israel Defense Forces’ use of Lavender, an AI system that assigned probabilistic scores to tens of thousands of Palestinian men based on aggregated surveillance data, laid bare the core of this problem. Lavender reportedly struggled to distinguish legitimate military targets from civilians, and the review process was too thin to filter out wrongful targets reliably. Reports have noted that human analysts spent an average of twenty seconds reviewing each recommendation, largely to confirm that a target was male. The parallel system, Gospel, generated two hundred infrastructure-targeting recommendations in under two weeks, whereas human officers might previously have produced fifty in a year. In practice, speed overwhelmed scrutiny.

The 2026 Iran conflict has deepened this pattern. It marked the United States’ first combat deployment of LUCAS attack drones, alongside large language models reportedly used to process satellite imagery, assess signals intelligence, and run battle simulations. The problem was not only the drone itself. It was the wider AI-enabled compression of sensing, analysis, and targeting. Humans had less time to question machine-generated outputs.

Across all these theaters, the same dynamic recurs: AI accelerates the targeting cycle to a tempo at which meaningful human oversight is too often procedurally present but substantively empty. Meaningful judgement would mean reviewing target identification, assessing proportionality, and deciding whether to strike. Now, the human remains present—but without real time to contest the machine.

This creates an accountability problem that classical uncertainty never posed. The old fog of war frustrated commanders but left the chain of responsibility intact. AI fragments agency among developers, data engineers, procurement officials, operators, and commanding officers, until responsibility disappears. The human remains in the loop, like a signature on a document: present and legally traceable but functionally irrelevant to the content of the decision.

International governance efforts are lagging or lack enforcement teeth. AI systems are already radically transforming warfare, and not all of those changes are harmful. But the fog of AI war—the manufactured certainty that substitutes machine output for human judgement—will not clear itself. It must be governed. Given the current abdication of American leadership on responsible military AI, Europe may be uniquely positioned to lead that effort. The question is whether it will act before algorithmic targeting becomes the unquestioned norm of armed conflict.


This publication has been produced in the context of the EU Cyber Direct – EU Cyber Diplomacy Initiative project with the financial assistance of the European Union. The contents of this document are the sole responsibility of the author and can under no circumstances be regarded as reflecting the position of the European Union or any other institution.

About the Author

Raluca Csernatoni

Fellow, Carnegie Europe

Csernatoni is a fellow at Carnegie Europe, where she specializes in European security and defense, as well as emerging disruptive technologies.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.

