If you’ve been following the news about AI and modern warfare — the U.S. strikes on Iran, the conflict in Ukraine, the battles in Gaza — you’ve probably seen the name “Maven Smart System” appear repeatedly. It shows up in Pentagon briefings, congressional hearings, and defense technology reports. Yet despite being one of the most consequential military AI platforms ever built, almost nobody explains what Maven actually is, how it works, or where it came from.
This is that explanation.
From a narrow 2017 computer vision project to the AI backbone of America’s most advanced battlefield operations, here is the complete story of Project Maven — its origins, its controversies, its architecture, who powers it today, and where it’s heading next.
The Problem That Created Maven

To understand why Maven exists, you need to understand the surveillance paradox the U.S. military created for itself.
By the mid-2010s, the Pentagon had deployed thousands of drones across active theaters in Iraq, Syria, and Afghanistan. A single drone mission could yield dozens of hours of full-motion video, and fleets of them were flying around the clock, feeding footage back to analysts at bases across the United States and overseas.
The result was a crisis of abundance. The military had more surveillance footage than it could possibly review. Human analysts were drowning. Critical intelligence — a weapons cache, a pattern of movement, a known target — was sitting unwatched in video archives because there simply weren’t enough eyes to find it. The bottleneck wasn’t the drones. It was the humans trying to make sense of what the drones saw.
In April 2017, Deputy Defense Secretary Robert Work launched Project Maven — formally called the Algorithmic Warfare Cross-Functional Team — to solve exactly this problem. The original mandate was deliberately narrow: use computer vision AI to automatically detect and classify objects in drone footage. Vehicles. Weapons. People. Activities. Anything that would normally require a human analyst to spot, Maven would flag automatically, letting analysts focus on interpretation and action rather than raw footage review.
It was, at its core, a productivity tool. Nobody anticipated what it would become.
The Google Controversy That Reshaped American Military AI
Maven’s early history cannot be told without its most dramatic chapter — and one of the most significant moments in the history of tech companies and defense work.
In 2017, the Pentagon awarded Google a contract to provide AI and machine learning capabilities to Project Maven. The logic was sound: Google had the world’s leading computer vision expertise, a mature AI infrastructure in TensorFlow, and the cloud computing scale the military needed. It was a natural fit on paper.
What the Pentagon didn’t anticipate was what happened inside Google.
When employees learned about the Maven contract, the backlash was swift and intense. More than 3,000 Google employees signed an internal petition demanding the company withdraw. Their argument was principled: Google’s mission was to organize the world’s information for human benefit, not to build tools that helped militaries kill people more efficiently. The petition circulated internally, reached senior leadership, and eventually became public. Several prominent engineers and researchers resigned in protest, citing the Maven contract specifically as their reason for leaving.
The pressure worked. In June 2018, Google announced it would not renew the Maven contract when it expired. The company subsequently published AI principles that explicitly restricted its involvement in weapons development.
The implications of Google’s exit were profound and lasting. First, it validated the idea that tech employees had genuine leverage over their companies’ government contracts — a model that subsequent employee activist movements would follow. Second, and more consequentially for our story, it forced the Pentagon to completely rethink its AI acquisition strategy.
If a single company’s internal politics could disrupt a critical military capability, the military needed a more distributed, resilient approach. Rather than depending on one dominant tech partner, the Pentagon began building an ecosystem of AI vendors — each contributing different capabilities, no single one indispensable. That strategic pivot directly created the multi-vendor Maven that exists today, with Palantir, Microsoft, Anduril, Anthropic, and now OpenAI all playing distinct roles in a system no single company controls.
Google’s refusal, in other words, made Maven stronger.
How Maven Evolved: From Drone Footage to Full Battlefield Intelligence
After Google’s departure, Palantir stepped in as the primary contractor, building out Maven’s data infrastructure and intelligence fusion capabilities. Microsoft’s Azure provided the cloud computing backbone. The system expanded rapidly and ambitiously beyond its original computer vision mission.
By the early 2020s, Maven had transformed from a drone footage analyzer into a comprehensive intelligence fusion platform. It could now ingest not just video but signals intelligence, satellite imagery, battlefield sensor data, electronic intercepts, and written intelligence reports — correlating all of them simultaneously to build a unified picture of what was happening on a battlefield.
The system was deployed in operations against ISIS, used to process intelligence from the conflict in Ukraine after Russia’s 2022 invasion, and played a supporting analytical role in the Gaza conflict beginning in 2023. Each deployment expanded Maven’s capabilities and refined its architecture.
The critical evolutionary leap came when the Pentagon recognized that even a sophisticated computer vision and data fusion system had a fundamental limitation: it could see, but it couldn’t read or reason in human language. Commanders needed to ask questions in plain English, receive synthesized briefings they could act on immediately, and have complex multi-source intelligence translated into clear, structured analysis. Computer vision algorithms couldn’t do that. A large language model could.
By 2024, Anthropic’s Claude was integrated as Maven’s language reasoning layer under a $200 million contract with what was then still called the Department of Defense. Claude brought the ability to read intelligence reports, synthesize information across sources, answer natural language questions, and produce structured decision-support outputs — capabilities that transformed Maven from a system that processed data into one that could reason about it.
The full implications of that integration became visible to the public in March 2026, when it was confirmed that Claude was active inside Maven during the U.S. strikes on Iran. The story of that deployment — and the political firestorm it ignited between Anthropic and the Pentagon — is covered in detail in our companion article.

How Maven Actually Works: The Three-Layer Architecture
Understanding Maven requires understanding its architecture. The system operates across three distinct layers, each building on the one below it.
Layer 1 — Sensing and Ingestion
At its foundation, Maven is a data aggregation system of extraordinary scale. It pulls in continuous streams from multiple intelligence sources simultaneously: full-motion video from drones and aircraft, satellite imagery from both military and commercial providers, electronic signals intercepts from communications and radar systems, data from battlefield sensor networks, and the vast volume of written intelligence reports that the military generates daily.
This raw data flows into the system in real time. The ingestion layer handles the normalization and storage of fundamentally different data types — video, imagery, signals, text — and makes them accessible to the processing layer above it.
Layer 2 — Processing and Analysis
This is where Maven’s intelligence actually happens, and it operates in parallel streams.
Computer vision algorithms continuously scan imagery and video for objects of interest — vehicles, weapons systems, personnel, infrastructure — and track their movements over time. Pattern recognition systems compare current activity against historical baselines, flagging deviations that might indicate military preparation, logistical movement, or imminent action. Signals analysis identifies communication patterns and electronic signatures associated with known targets or activities.
Layered on top of all of this is the LLM layer — currently in transition from Claude to OpenAI’s models. This layer reads and reasons about the text-based intelligence flooding into the system: written reports, translated communications, after-action assessments, historical records. It cross-references textual intelligence with the visual and signals data from other streams, synthesizes the combined picture, and produces outputs in human-readable language that analysts can actually use.
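The cross-referencing idea, pairing a visual detection with written reports filed near the same place and time, can be sketched in a few lines of Python. The distance approximation, thresholds, and record fields below are invented for illustration, not drawn from Maven.

```python
from datetime import datetime, timedelta

# Illustrative correlation step: given one computer-vision detection,
# find any written report filed close by in space and time, so the
# language layer can reason over both together.
def correlate(detection: dict, reports: list[dict],
              max_km: float = 5.0,
              max_gap: timedelta = timedelta(hours=6)) -> list[dict]:
    def close(a: tuple, b: tuple) -> bool:
        # crude flat-earth distance (1 degree ~ 111 km); fine for a sketch
        dlat, dlon = a[0] - b[0], a[1] - b[1]
        return ((dlat * 111) ** 2 + (dlon * 111) ** 2) ** 0.5 <= max_km
    return [r for r in reports
            if close(detection["loc"], r["loc"])
            and abs(detection["time"] - r["time"]) <= max_gap]

det = {"loc": (33.30, 44.40), "time": datetime(2026, 2, 28, 12, 0)}
reports = [
    {"loc": (33.31, 44.41), "time": datetime(2026, 2, 28, 10, 0), "text": "convoy sighted"},
    {"loc": (35.00, 40.00), "time": datetime(2026, 2, 28, 11, 0), "text": "unrelated"},
]
matches = correlate(det, reports)  # only the nearby, recent report survives
```

In a real system the matching would be probabilistic and multi-factor, but the shape is the same: narrow the flood of text down to what plausibly describes the thing the sensors just saw.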
Layer 3 — Decision Support Output
Maven doesn’t make decisions. It produces products. These products take several forms depending on what’s needed: target packages assembled from cross-referenced intelligence streams, briefing summaries that condense hours of analytical work into minutes of reading, course-of-action recommendations presenting commanders with multiple options and their assessed tradeoffs, and risk assessments covering collateral damage estimates and legal considerations under the law of armed conflict.
Every output from Maven goes to a human analyst before it goes anywhere else. The analyst reviews it for accuracy, adds contextual judgment that the system may lack, and passes a validated product up the chain of command. Final decisions on any operational action are made by authorized human officers. This human-in-the-loop structure is not just a policy choice — it is a legal requirement under international humanitarian law, which demands meaningful human accountability for targeting decisions.
Who Powers Maven Today
Maven is not a single product built by a single company. It is a platform that stitches together capabilities from a carefully managed ecosystem of defense technology vendors.
Palantir serves as the primary system integrator and data infrastructure provider. Palantir’s Gotham and AIP platforms form much of Maven’s operational backbone, handling the intelligence fusion and data correlation functions that make the multi-source analysis possible. Palantir has been central to Maven since Google’s departure and remains its most deeply embedded contractor.
Microsoft provides the cloud computing infrastructure through Azure and the Pentagon’s Joint Warfighting Cloud Capability contract. The sheer computing demands of processing continuous video, satellite imagery, and signals data at operational scale require hyperscale cloud infrastructure that only a handful of companies can provide.
Anduril contributes autonomous drone and sensor hardware that feeds raw data into Maven’s ingestion layer. Anduril’s Lattice operating system increasingly integrates with Maven’s data architecture, creating tighter connections between the physical sensing hardware and the analytical platform processing its output.
Anthropic provided Claude as Maven’s LLM reasoning and language layer under a $200 million contract active since 2024. As documented publicly in early March 2026, this relationship became the center of a significant political dispute over AI guardrails, culminating in President Trump ordering federal agencies to phase out Anthropic technology. Despite the ban, Claude was confirmed to still be in active use during the Iran strikes.
OpenAI has since signed a replacement contract to assume the LLM role previously held by Anthropic. Notably, OpenAI’s contract reportedly includes the same guardrails Anthropic had insisted upon — no mass domestic surveillance, no fully autonomous weapons — making the dispute that ended the Anthropic relationship appear somewhat ironic in retrospect.
This distributed architecture means Maven is resilient to the loss of any single vendor. When Anthropic became politically untenable, OpenAI stepped in. The platform continues regardless of which LLM sits in the reasoning layer.
Where Maven Has Been Deployed
Maven’s operational history spans nearly a decade of continuous use across multiple active conflict zones.
Afghanistan and Iraq (2017–2021) marked Maven’s original deployment, focused on its founding mission: analyzing drone footage for object detection and pattern-of-life tracking in counterterrorism operations. This period established Maven’s core computer vision capabilities and proved the concept of AI-assisted intelligence analysis at scale.
Syria and Anti-ISIS Operations extended Maven’s application to support targeting and behavioral pattern analysis against ISIS infrastructure, financial networks, and leadership. The system’s ability to correlate movement patterns over time proved particularly valuable in identifying high-value targets.
Ukraine (2022–present) represented Maven’s first large-scale application in a conventional interstate conflict. The system was used to process satellite and commercial imagery from the conflict zone, support intelligence sharing with Ukrainian forces, and analyze Russian military positioning and logistics. The Ukraine deployment significantly accelerated Maven’s development, providing real operational feedback on capabilities and gaps at a scale and tempo that counterterrorism operations couldn’t match.
Gaza (2023–present) brought Maven into one of the most scrutinized conflicts in modern history, where questions about AI-assisted targeting and civilian casualties have been central to international debate. AI systems including Maven have been reported to play supporting roles in target identification, feeding a broader controversy about whether AI reduces or amplifies civilian harm.
Iran (February–March 2026) represents Maven’s most high-profile and publicly confirmed deployment to date. The U.S. and Israeli coordinated strikes beginning February 28, 2026, were supported by Maven’s full capability suite including the Claude LLM layer — making this the first widely reported instance of a commercial large language model being confirmed active in a major interstate military conflict.
The Autonomous Weapons Question
No honest account of Maven can avoid the question that sits at the center of every debate about military AI: could it become autonomous?
The technical answer is that the architecture permits it. Maven already handles the sensing, the analysis, the target identification, and the decision-support packaging. The only step it doesn’t currently take is initiating action without human approval. Removing or compressing that human review step is not a fundamental technical challenge — it is a policy and legal choice.
This is precisely what the Anthropic-Pentagon dispute was about. The Department of War’s January 2026 memo demanding AI be available for “any lawful purpose” without constraints was interpreted by Anthropic as opening the door to autonomous targeting use cases that Claude’s current technology cannot handle reliably. Anthropic’s refusal to remove guardrails against autonomous weapons wasn’t primarily a philosophical objection — it was a technical one. The company argued that current LLM technology hallucinates, misidentifies, and makes contextual errors at rates that are unacceptable when the output is a targeting recommendation rather than a drafted email.
International humanitarian law is widely interpreted to require “meaningful human control” over targeting decisions. That phrase is doing enormous legal work. There is no agreed international definition of how much human oversight qualifies as meaningful, how much time a human must spend reviewing an AI recommendation before approving it, or whether approving AI outputs under time pressure constitutes genuine control or the appearance of it.
As Maven’s processing speed increases and the volume of simultaneous targets grows — as it inevitably will in any large-scale conventional conflict — the practical pressure to reduce human review time will intensify. Commanders facing fast-moving situations will be tempted to trust the system’s recommendations rather than scrutinize them. The question isn’t whether Maven could become effectively autonomous. It’s whether the humans nominally in the loop will maintain genuine oversight or gradually become rubber stamps on AI decisions made faster than human judgment can follow.
What Comes Next for Maven
The Pentagon’s strategic direction is clear. The January 2026 AI strategy memo signals a preference for fewer constraints and faster capability deployment. The replacement of Anthropic with OpenAI suggests the military wants robust LLM capability integrated into Maven without the friction of public ethical disputes. Anduril’s growing hardware integration points toward tighter coupling between autonomous sensing platforms and Maven’s analytical layer.
The most significant near-term development to watch is whether Maven’s LLM layer gets positioned closer to the targeting decision itself. The current architecture keeps Claude — or its OpenAI replacement — in a recommendation role: here are your options, here is the analysis, the human decides. The pressure will grow to compress that loop: here is the recommended action, approve or deny within a defined window. That compression of human review time is where decision support quietly transforms into something that functions, operationally, as autonomous action — even if a human technically pressed a button somewhere in the process.
Simultaneously, the international conversation about autonomous weapons is accelerating, if not yet keeping pace with deployment. Legal experts and academics gathered in Geneva in early March 2026 for UN Convention on Certain Conventional Weapons talks specifically addressing lethal autonomous weapons systems. The discussions are substantive but slow, moving through diplomatic consensus processes while the technology deploys at the speed of military procurement.
Political scientist Michael Horowitz at the University of Pennsylvania has observed that rapid technological development is outpacing international regulatory discussion — and that the current absence of agreed rules on AI warfare suggests potential proliferation of AI-assisted conflict is not a future risk but a present reality.
Conclusion: A System That Learned to Think
Maven began as a tool to help humans see — to process the flood of drone footage that human analysts couldn’t keep up with. It became a tool to help humans think — synthesizing intelligence across sources, reasoning in natural language, presenting commanders with structured analysis rather than raw data.
The question that defines Maven’s next chapter is whether it will become a tool that acts — and whether the humans nominally overseeing it will have genuine control or merely the procedural appearance of it.
That question doesn’t have a clean answer yet. What it has is urgency. Because unlike the international discussions happening in Geneva conference rooms, Maven is not theoretical. It was active over Iran last week. It is running right now.
This article is part of a series on AI and modern warfare. Read the companion piece: How the U.S. Military Used Claude AI in the Iran War: Roles, Risks, and the $200M Controversy
Sources: Anthropic public statements · Department of War AI Strategy Memo (January 2026)
