OpenAI’s Latest Model Raises Concerns Over Potential Misuse for Biological Weapons

OpenAI’s newly unveiled model, known as “o1,” has sparked concerns over its potential misuse in creating biological weapons. The company’s system card, a document explaining how the AI operates, assigns a “medium risk” rating for issues related to chemical, biological, radiological, and nuclear (CBRN) weapons, the highest risk level OpenAI has ever assigned to one of its models. According to the Financial Times, the rating reflects the model’s improved reasoning and problem-solving capabilities, which could meaningfully improve the ability of experts to develop bioweapons.

OpenAI’s CTO, Mira Murati, emphasized that the company is taking a cautious approach to releasing o1 to the public because of its advanced capabilities. The model underwent rigorous testing by red teamers and experts across various scientific domains, who pushed it to its limits. Murati said the current models performed far better than their predecessors on overall safety metrics.

Experts have warned that AI software with advanced capabilities such as step-by-step reasoning carries a heightened risk of misuse in the hands of malicious actors. Yoshua Bengio, a leading AI scientist and professor at the University of Montreal, highlighted the urgency of legislation such as California’s then-debated bill SB 1047, which aimed to minimize the risk of high-cost models being used to develop bioweapons.

Concerns over the potential misuse of AI technologies are not new. A study conducted by OpenAI in January 2024 found that its GPT-4 model offered only limited utility for bioweapon development. Tristan Harris, co-founder of the Center for Humane Technology, had previously claimed that Meta’s AI was unsafe and could lead to the creation of AI-generated weapons of mass destruction, a claim Mark Zuckerberg disputed during a Capitol Hill hearing.

OpenAI has also collaborated with Los Alamos National Laboratory to explore both the potential benefits and the risks of AI in scientific research.