There’s a pretty long list of things an AI assistant should refuse to help you with, and engineering the causative agent of a catastrophic pandemic should clearly be one of them. Apparently, not every AI company’s model agrees.
According to new reporting from the New York Times, at least one frontier AI model gave a scientist viable instructions for engineering a deadly pathogen and weaponizing it for a large-scale bioterror attack.
Luckily for us, the scientist, David Relman, has no intention of following those instructions. The Stanford University biosecurity expert was hired by an unnamed AI company to find holes in its chatbot's safeguards before it was released to the public, the NYT reports.
Relman was apparently so shaken by his conversation with the chatbot that he declined to name the specific pathogen, or the company whose chatbot was involved, out of fear that someone might retrace his steps. The chatbot's suggestions were reportedly terrifying: it offered ways to modify the pathogen to maximize casualties, reduce the user's chances of being caught, and adapt the pathogen to resist known treatments.
"It was answering questions I had never thought about, with a level of deviousness and slyness that I found very strange," Relman said. The unnamed company did make some safety changes to the chatbot at the researcher's suggestion, per the NYT, but Relman considered them inadequate.
Frontier AI companies OpenAI and Anthropic both downplayed the experts' concerns.
"There's a huge difference between having a model generate trustworthy-looking text and giving someone what they need to do the work," Alex Sanderford, head of trust, security policy and enforcement at Anthropic, told the NYT.
Meanwhile, a spokesperson for OpenAI argued that such expert stress testing “does not meaningfully increase someone’s ability to cause real-world harm.”
And the bioterror risk isn't limited to hypothetical future AI models. According to a 2025 report by the US government-backed RAND Corporation, frontier AI models released in 2024 could already make a meaningful contribution to the development of biological weapons by guiding ordinary people through the process of creating and deploying dangerous viruses.
Overall, while an AI-facilitated catastrophic bioterrorism event still appears highly unlikely, it's frightening to learn that motivated bioterrorists wouldn't have to look far to find relevant information.
More on chatbots: Study finds some AI chatbots are extremely vulnerable to psychosis