OpenAI has partnered with Los Alamos National Laboratory to study how artificial intelligence can be used to combat biological threats that non-experts might create using AI tools, according to an announcement from both organizations on Wednesday. Los Alamos, originally founded in New Mexico to develop the atomic bomb during World War II, is calling the effort a “first of its kind study” into AI biosecurity and how AI can be used in a laboratory setting.
The difference between the two statements released by OpenAI and Los Alamos on Wednesday is quite striking. OpenAI’s statement tries to portray the collaboration as simply researching “how AI can be safely used by scientists in laboratory settings to advance biological science research.” However, Los Alamos puts much more emphasis on the fact that previous research “found that ChatGPT-4 demonstrated modest improvements in providing intelligence that could lead to the creation of biological threats.”
Much of the public discussion about the threat posed by AI has centered on the creation of self-aware entities that could conceivably develop minds of their own and harm humanity in some way. Some people worry that the realization of AGI (artificial general intelligence, i.e. an AI capable of performing advanced reasoning and logic rather than acting as a fancy autocomplete word generator) could lead to a Skynet-like situation. While many AI advocates, such as Elon Musk and OpenAI CEO Sam Altman, have leaned into this framing, a more urgent threat to address seems to be making sure people don’t use tools like ChatGPT to create biological weapons.
“AI-enabled biological threats could pose significant risks, but existing research has not assessed the extent to which multimodal, state-of-the-art models could lower the barrier to entry for non-experts to engineer biological threats,” Los Alamos said in a statement on its website.
The difference in messaging between the two organizations may come down to OpenAI’s reluctance to acknowledge the national security implications of admitting that its products could be used by terrorists. Tellingly, the Los Alamos statement uses the terms “threat” or “threat family” five times, while the OpenAI statement uses them only once.
“The potential benefits of advancing AI capabilities are limitless,” Los Alamos researcher Eric LeBlanc said in a statement Wednesday. “Yet measuring and understanding the potential dangers and misuse of advanced AI related to biological threats remains largely unexplored. This work with OpenAI establishes a framework for evaluating current and future models and is an important step toward ensuring the responsible development and deployment of AI technologies.”
Los Alamos sent a statement to Gizmodo that was generally optimistic about the technology’s future, despite the potential risks.
AI technology is very exciting because it is a powerful driver of scientific and technological discovery and advancements. This brings significant positive benefits to society, but it is also conceivable that malicious actors could use the same models to synthesize information and create “how-to guides” for biological threats. It is important to consider that the threat is not AI itself, but rather the potential for its misuse.
Evaluations so far have focused primarily on understanding whether such AI technologies can provide precise “how-to guides.” However, even if a bad actor has access to a precise guide for performing a malicious act, that does not mean that they can actually carry it out. For example, they may know that they need to maintain sterility when culturing cells or using mass spectrometry, but this may be very difficult to achieve if they have no prior experience of doing so.
Broadening our perspective, we are looking to understand how and where these AI technologies can add value in workflows. Access to information (e.g., generating accurate protocols) is one area where AI technologies can add value, but it is unclear to what extent they can help someone learn how to successfully execute protocols in a lab (or other real-world activities such as kicking a soccer ball or painting a picture). In our initial pilot technology evaluation, we aim to understand how AI can enable individuals to learn how to execute protocols in the real world. This will allow us to better understand not only how AI can help enable science, but also whether it can enable bad actors to carry out nefarious activities in the lab.
Los Alamos National Laboratory’s efforts are coordinated by its AI Risk Technology Assessment Group.
Correction: An earlier version of this post cited a statement from Los Alamos as a statement from OpenAI. Gizmodo regrets this error.