Author: Kyle Hiebert
Publication Date: July 28, 2025 - 06:30

AI Is Making It Easier to Build a Biological Weapon


In early 2023, Rocco Casagrande, a scientist and former United Nations weapons inspector, brought a small container to the White House for a briefing with American government officials. It was filled with a dozen easily available chemical ingredients that Anthropic’s flagship chatbot, Claude, recommended as precursors to trigger another pandemic. Casagrande’s stunt confirmed that anyone with an internet connection can now conceivably create their own weapon of mass destruction.

“It is clear that biological technology, now boosted by artificial intelligence, has made it simpler than ever to produce diseases,” write academic Roger Brent, RAND Corporation senior researcher T. Greg McKelvey Jr., and RAND president Jason Matheny. “Some individuals and groups do face barriers—say, an inability to access the right labs or facilities. But thanks to relentless technological advances, those barriers are falling apart.”

The global proliferation of large language models is aggravating numerous security concerns. Among them is how artificial intelligence may enable extremists to commit bioterrorism to advance their ideological causes. Chatbots have already proven capable of advising users on how to plan attacks using lethal new forms of bacteria, viruses, and toxins. Great-power competition, meanwhile, is distracting from threats posed by terrorist groups and malicious non-state actors.

This isn’t news to Silicon Valley. Google’s Secure AI Framework identifies AI-enabled bio attacks as a concern. A team assembled by OpenAI, meanwhile, looked at the issue last year, finding that GPT-4 gave users only a marginal advantage over regular internet searches in building bioweapons.

Despite that conclusion, the risk of bioterrorism isn’t static. The International AI Safety Report, produced for the 2025 Paris AI Action Summit, shows LLMs are getting far better at tasks related to biological and chemical weapons, accurately responding to queries about the acquisition and formulation of deadly agents. Assessments by the report’s authors suggest certain models’ instructions for releasing lethal substances showed an 80 percent improvement in 2024 alone.

China, Iran, North Korea, and Russia are all believed to possess some capabilities to create bioweapons. However, “the unwieldiness and imprecision of bioweapons has meant that states remain unlikely to field large-scale biological attacks in the near term,” reads a report published in August 2024 by the Center for a New American Security—CNAS. By contrast, “nonstate actors—including lone wolves, terrorists, and apocalyptic groups—have an unnerving track record of attempting biological attacks, but with limited success due to the intrinsic complexity of building and wielding such delicate capabilities.”

But this margin of safety is eroding, warn CNAS researchers, thanks mostly to advances in genetic science and synthetic biology, and to the emergence of cloud labs—discreet, automated facilities contracted to conduct remote experiments on a client’s behalf.

At the same time, the Donald Trump White House has gutted America’s AI Safety Institute. Established during the twilight of the Joe Biden administration, the institute was tasked with identifying, measuring, and mitigating the risks of advanced AI systems. The International Network of AI Safety Institutes is carrying on with similar work—though it will now hold much less sway over American tech firms.

The Federal Bureau of Investigation and the Central Intelligence Agency have also suffered huge losses in expertise thanks to widespread intimidation and firing of government employees by the so-called Department of Government Efficiency. Both agencies are crucial sources of global counterterrorism intelligence.

More importantly, a chilling effect is spreading among America’s allies when it comes to sharing classified information with Washington. Officials in these countries—reportedly fellow Five Eyes nations (Australia, Canada, New Zealand, and the United Kingdom), as well as Israel and Saudi Arabia—suspect the White House could pass intelligence on to Moscow in an effort to mend US–Russia relations.

However, in a hostile geopolitical environment, the inverse applies as well. The Islamic State, in March 2024, massacred attendees at a suburban Moscow concert hall, killing 139 people and injuring hundreds more. Distracted by its war in Ukraine and dismissive of foreign intelligence agencies warning of a terrorist plot targeting Russia, the Kremlin was caught flat-footed.

Transnational terror groups and violent non-state actors are reanimating in power vacuums created by America’s withdrawal from multilateralism. They are deftly harnessing sophisticated technology such as cryptocurrencies, cyberattacks, and ransomware. Encrypted messaging apps like WhatsApp and Telegram are being used to recruit new members, buy and sell weapons, fundraise, and organize out of the spotlight.

This trend extends beyond Islamist rebels and racial supremacists to include doomsday outfits like the now-defunct Zizians—a San Francisco Bay Area collective described as “the world’s first AI-inflected death cult” that sought the replacement of humanity with computer superintelligence.

“The real existential threat ahead is not from China,” two technologists wrote in January for MIT Technology Review, lamenting talk about liberal democracies being locked in battle for AI supremacy with Beijing. Rather, they say, it comes “from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society.”

The fragility of AI systems themselves poses further challenges. Every major AI chatbot has proven vulnerable to jailbreaks—creative techniques users employ to bypass a system’s security protocols. This danger is amplified by the rise of AI agents that can now execute tasks autonomously over the internet. And while lower-cost, open-source models could usher in a more even playing field when it comes to adopting AI for productivity and innovation, they have their own glaring flaws and must be regulated accordingly.

Yet legislation from entities with jurisdictional authority over Silicon Valley is subject to ferocious lobbying efforts. California’s groundbreaking SB 1047, for example, was vetoed last year by Governor Gavin Newsom after a public relations blitz by the tech industry. Worryingly, Republican lawmakers have also now inserted a stealth clause into their tax bill winding through Congress that would ban states and localities from regulating AI for the next decade.

In a dark twist of irony, if a tech-enabled bioterrorist attack does occur, AI will also be vital in expediting a cure. Let’s hope it never reaches that point—COVID-19 already showed how a novel virus can wreak havoc on a hyperconnected world. Odds are the next pandemic will be much worse.

Reprinted, with permission, from the Centre for International Governance Innovation.

The post AI Is Making It Easier to Build a Biological Weapon first appeared on The Walrus.

