OpenAI, the artificial intelligence research organization behind ChatGPT, has unveiled an internal scale to track its large language models' progress toward artificial general intelligence (AGI). A company spokesperson confirmed the development, stating that current chatbots, exemplified by ChatGPT, fall under Level 1 on the scale. OpenAI says it is now approaching Level 2, which signifies a system capable of solving basic problems as well as a person with a PhD.
The scale further defines Level 3 as AI agents capable of taking actions on behalf of users, while Level 4 involves AI that can generate new innovations. The ultimate goal, Level 5, represents AGI that can perform the work of entire organizations of people. OpenAI’s mission revolves around achieving AGI, and the company’s definition of AGI holds significant importance. OpenAI has previously stated that if another project aligned with its values and focused on safety comes close to building AGI, OpenAI commits to not competing with that project and instead providing assistance.
While the phrasing of this commitment remains somewhat ambiguous, a scale that can be used to evaluate both OpenAI's progress and that of its competitors may help establish clearer terms for what counts as reaching AGI. Still, AGI remains a distant goal that will require enormous financial resources and computing power, if it can be achieved at all. Timelines vary widely among experts, including those within OpenAI. In October 2023, OpenAI CEO Sam Altman estimated that AGI could be reached within "five years, give or take."
The unveiling of this grading scale follows OpenAI's recent collaboration with Los Alamos National Laboratory, aimed at exploring the safe use of advanced AI models such as GPT-4o in bioscientific research. The partnership seeks to test GPT-4o's capabilities and establish safety protocols for the US government, which could eventually be used to evaluate public or private AI models.
In May, OpenAI faced internal criticism after the departure of co-founder Ilya Sutskever, with researcher Jan Leike claiming that safety culture had taken a backseat to product development. OpenAI denied these allegations, but concerns persist about how the company would handle a system that actually reached AGI. OpenAI has not disclosed the methodology for assigning models to the levels of its internal scale, and the company declined to comment on the matter.
During an all-hands meeting, OpenAI leaders showcased a research project using the GPT-4 model that they believe demonstrates new skills indicative of human-like reasoning. The scale is meant to provide a more objective measure of progress, rather than leaving the definition open to subjective interpretation. OpenAI's CTO, Mira Murati, has said that the models in the company's labs are not significantly superior to what is already available to the public. CEO Sam Altman, however, has asserted that the company recently made substantial advancements that push the boundaries of machine intelligence.