The Man Who Put AI at the Centre of America’s War Machine
When I first meet Drew Cukor, he has little in the way of easy smiles. It is mid-2024, and I have already spent almost a year trying to convince him to speak with me. I wait for him after work in the lobby of the towering New York office of J.P. Morgan, the bank where the retired Marine Corps colonel now leads an artificial intelligence transformation for chief executive Jamie Dimon.
Upstairs, Cukor offers me a bottle of water. He takes nothing for himself. We sit directly opposite each other at a booth in the emptied office cafe. And I watch the former intelligence officer decide if he wants to talk to me after all. My main task seems to be to meet his stare, which is not an entirely straightforward undertaking. It is clear the only person being interviewed is me.
He barely lets me write anything that first meeting. I remember one bit by heart. “War is terrible, war is terrible, war is terrible,” he intones, holding my gaze and giving voice to a universal chorus.
His hair is close cropped. His demeanour stiff. His opinions uncompromising. But he softens into his passions: the catastrophe of the Dieppe Raid in 1942, the failures of the United States military to match firepower to intelligence, and Project Maven, the effort he led at the Pentagon to put AI at the heart of how America makes war.
Over the months to come, I learn that Cukor is a leading historical figure in a war that hasn’t happened yet. That seemed to be what almost everyone involved with Project Maven thought, whether they feted or hated him.
Launched in 2017, Project Maven ostensibly aimed to use computer vision to sort through thousands of hours of drone footage taken across Asia and the Middle East. But I would learn that Cukor and his backers always intended to use AI for much more than surveillance; from the outset, they wanted to target people and objects with the help of AI.
The problem with war, Cukor told me, had always been the humans. “They’re materially corrupt, inefficient, and they get tired.” And when they die, it affects the campaign, he went on brusquely. He believed humans could do better with the help of machines and that AI could pierce the fog of war.
More immediately, Cukor wanted to fix the bureaucracy he felt repeatedly let down America’s fighting forces overseas, bring intelligence directly into combat operations, recognize the value of software over hardware, and test emerging technology in real wars. “We were really good at killing people, but it didn’t get us very far,” one caustic former member of the Joint Special Operations Command told me. Cukor wanted America to stem the flow of mistakes that saw US forces accidentally killing civilians, allies, and America’s own troops. He wanted America to reconceive what victory looked like: not destroying the enemy but defeating it.
And he wanted to use government defence spending to tether a nascent commercial AI market to America rather than let it go off in search of customers in China. It was the government’s role to help the venture capital community monetize their investments, he wrote in one draft of a 2017 paper I reviewed.
The colonel’s relentless advocacy for Maven “monetized” start-ups and the ingenuity of AI researchers who usually spent their time writing esoteric academic papers. He recruited Big Tech companies better known for online shopping and office software: Amazon Web Services and Microsoft now deliver algorithmic warfare for Maven.
Palantir Technologies, the Central Intelligence Agency–backed data analytics company now celebrated as an insurgent force in the S&P 500 and one of the most valuable (and, many analysts and short sellers suggest, overvalued) American companies, was on the way out of the defence department when it won its first Maven contract. It arguably owes its rebirth to Cukor.
Scale AI, the data labelling company in which Meta has taken a 49 percent stake, could say something similar about its own rise. AI chipmaker Nvidia was there from the start. Anthropic and OpenAI are suddenly plunging into defence work as they seek a commercial home for their generative AI platforms. Even Google, whose workers protested involvement in “the business of war” when they discovered they were part of Project Maven in 2018, now embraces national security work. Notably, Apple has said no.
Whatever form America’s pursuit of AI warfare takes today, and in the future, it will owe something to Cukor. Alex Karp, Palantir’s billionaire chief executive, would later describe him as the “founding father of AI targeting.”
Cukor thought it would take twenty years to remake the US military and mainline AI. For the first five years, he encountered controversy and resistance as he pushed Congress to fund new types of firepower to help America cling to the apex of global power. He infuriated doubters and critics, often from within his own team, never mind among civilians who balk at the prospect of Terminator-style global extinction.
Cukor’s iconoclasm filtered down to his team. Many “Mavenites” saw themselves as mavericks within the Department of Defense. They carried themselves with a tech start-up’s insouciance in the heart of the button-down Pentagon. But Mavenites also reflected the overconfidence and anguish of a military superpower that repeatedly ran up against its own limits and flaws. I would learn the team dynamic was a constant rollercoaster as Cukor bulldozed a path to his dream at personal and professional cost.
One of Cukor’s most controversial decisions was to push the US military to use minimally tested systems in hot wars. The colonel always argued getting AI on the battlefield, before it was ready or reliable, was the only way to improve it and develop the trust and know-how of a new generation of fighters in using it.
He was exacting in his every demand. Cukor was a noun, a verb, an adjective. “To Cukor” connoted tremendous hours, tremendous pursuit, tremendous invention, and tremendous intensity. “Getting Cukor-ed” meant having to pursue the same yourself, at his instruction.
“He was like a mastermind,” one person told me, desperately reaching for the words to describe the power Cukor could bring to bend the bureaucracy—and the people in it—to his will. “I don’t know what it is,” they stumbled on. He was admired and feared. “It’s a mental thing.”
There is nothing unusual about me, Cukor would say. Meanwhile, everyone I spoke to told me otherwise. Have you spoken to Cukor? they’d ask. You need to speak to Cukor. If Cukor would speak to you, it would be good. You can’t tell this story without Cukor.
The rise of AI warfare speaks to the biggest moral and practical question there is: Who—or what—gets to decide to take a human life? And who bears that cost?
Cheerleaders argue that AI, and the automation it makes possible, will save lives. They claim algorithms bring a precision to decision making that will limit civilian and friendly-fire casualties. They argue AI-empowered systems could deter conflict with China—or help win World War III, in which automated machines will putatively run combat at a pace faster than humans can understand.
Detractors think AI has already led to civilian deaths, will spread uncontrolled destruction, and could hasten the end of the world. Still others think the claims made for AI war tools are grandiose and the truth will be more prosaic, suffering from problems of rickety infrastructure, adoption, and trust. Pragmatic supporters argue an incremental mix of humans and machines will forge that trust.
The problem with many theories about what AI will do to warfare is just that: they remain theoretical. I wanted to go in search of the specifics. I wanted to tell the story of the people making AI warfare a reality and of the US military members actually using it. What was inside the black box?
Ten years after Cukor started his effort, the AI decision-making systems developed under Maven, and some of the Pentagon’s 800 other AI projects, are used on the battlefield. Maven Smart System, a software platform that develops targets with the help of AI, is now deployed in every branch of the US military and all over the world, incorporating more than 150 data feeds and the work of more than fifty companies. The North Atlantic Treaty Organization started using a version of the system in the spring of 2025, and I would learn in October 2025 that ten NATO members were lining up to use it for their own militaries.
Maven has already sped up the pace of war. I learned from an official at the National Geospatial-Intelligence Agency that, with the help of computer vision, the US went from being able to hit under 100 targets a day to being able to hit 1,000. In combination with large language models integrated into the Maven platform, that number has risen fivefold to 5,000 targets a day.
The AI algorithms developed under Maven now deploy in submarines and in space operations. They are in subsea sonar systems, belonging to America and two of its closest intelligence allies (the United Kingdom and Australia), designed for nuclear deterrence. They’re fielded on autonomous drone boats. I learned AI targeting systems live in at least two highly secretive systems—one aerial and one aquatic—that could surveil, select, and kill targets entirely on their own, intended for the defence of Taiwan.
The US will have to carefully define the relevant use cases, guardrails, and doctrine if it wants to stick to the Geneva Conventions and avoid shooting civilians, allies, and its own forces.
I started writing about the future of war after I became the US foreign policy and defence correspondent for the Financial Times in 2017—the same year Project Maven started. As the US reckoned with the rise of China, I watched a global powerhouse humbled by poorly equipped enemies in Afghanistan and Iraq attempt to embrace AI as a shortcut to sustaining global military dominance.
The first Trump administration’s 2018 national defence strategy predicted new commercial technology “will change society and, ultimately, the character of war.” Four years later, after I became a Bloomberg correspondent covering emerging tech and national security, the arrival of chatbots and AI agents only accelerated this shift. Under the second Donald Trump administration, the Department of Defense has re-emerged as the “Department of War” devoted to AI and autonomy, under a secretary who wants to make it easier to acquire weapons and free US forces from “overbearing rules of engagement.”
Three experiences also drew me to write about AI’s potential impact on future war. First, I’ve had Jan Bloch rattling around my head since 1999, when I sat down to read the yellowing pages of the Polish banker’s 1899 book. The English translation of his work was retitled for what turned out to be a hopeless question: Is War Now Impossible? He was exploring whether lethal weapons produced at industrial scale would make war obsolete. He suggested that mass-produced rifles and other new technologies wouldn’t make for decisive wins, swift wars, palatable killing. They would make for stalemate, long wars, horror. He didn’t quite prophesy four years of trench warfare and 8.5 million combat deaths starting in 1914. But nearly. More than a century later, would the potential calamity of sending AI into war make great war impossible, or would Bloch be proved wrong once again?
Second, I spent a dozen years as a reporter covering business, investment, and politics in multiple African countries. I saw the impact of violence in countries from Sierra Leone to Somalia and logged the distance between policy and reality. When it came to AI warfare, I wanted to know how theory on high would match reality on the ground.
Third, I carry with me the memory of a journey I took on a military plane back from Afghanistan in 2009. The British soldiers beside me told me about the friends who had just been killed in combat. They showed me the explosions they couldn’t stop watching on their phones. And they told me they desperately wanted to leave the military but were trapped by contracts they could not escape and now felt equally unable to survive civilian life because no one would understand them. In that moment, they felt bound to death. Whatever worse terrors war visits on civilians and enemies, I also cannot shake what it does to the people sent to fight. Could AI alleviate the burden and suffering of war?
Project Maven sits at the intersection of colliding trends: America’s rising insecurity about its place in the world, a technological revolution forcing AI into almost every aspect of life and war, fraught civil–military relations in the world’s most powerful democracy, the dominance of Big Tech, China’s growing military and technological ambitions, and all-encompassing surveillance made possible by ubiquitous sensors and commercial software.
The next ten years are still waiting to be written. Russia’s invasion of Ukraine has upturned military expectations. The Pentagon’s deadly strikes against boats in the Caribbean are greying the boundaries of the rules of war and underline the ease of declaring war at a remove. US military commanders say China is rehearsing for the military takeover of Taiwan. Rival superpowers are arming for conflict. Campaigners argue afresh for AI red lines. And a new generation of venture capital–backed Silicon Valley leaders is chasing defence contracts, talking up the superiority of the West and the appeal of AI-enabled killing with new-found braggadocio.
National security strategists now worry that no country can win a war without AI. The United Nations’ aim to ban, by 2026, lethal autonomous weapons that select their own targets with the help of AI is a lost hope. And yet AI remains a narrow, faulty tool with considerable limits to its usefulness and reliability that the US military is still discovering.
AI warfare can go wrong. And it is already here.
When I met Cukor again, he told me he would do it all again and had no regrets. But he had a nagging doubt. There were “dark parts” to this new military technology he had helped fashion. “Let’s make sure that we know those flaws as we wield this technology,” he said. After giving three decades and some of his health to the US military and pursuing an AI revolution in warfare, he argued the distinctive factors that drove America to develop world-beating new technologies—wealth, geographic isolation, and stability among them—didn’t exonerate his country from a fundamental burden.
“Let’s be able to look at ourselves in the mirror and make sure we are careful,” he told me. “We have all this tech; are we the best custodians of it?”
Adapted and excerpted, with permission, from Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare by Katrina Manson, published by WW Norton, 2026. All rights reserved.