Tracking Progress Towards AGI: OpenAI’s Internal Scale and Future Implications

  • OpenAI develops an internal scale to track AI progress towards AGI.
  • Current chatbots like ChatGPT sit at Level 1; Level 2 targets PhD-level problem-solving.
  • Levels 3 to 5 involve increasing AI capabilities from executing user actions to performing tasks equivalent to entire organizations.
  • OpenAI commits to assisting, rather than competing with, any value-aligned, safety-focused project that reaches AGI first.
  • A new grading scale aims to standardize AGI progress assessment.
  • Despite advancements, achieving AGI remains costly and timeline estimates vary widely.

Main AI News:

OpenAI is refining its approach to tracking the advancement of its AI systems toward artificial general intelligence (AGI). According to a spokesperson speaking to Bloomberg, OpenAI has developed an internal scale for this purpose. Today’s chatbots, exemplified by ChatGPT, sit at Level 1. The organization believes it is approaching Level 2, where AI systems can solve basic problems as well as a person with a PhD. Level 3 will denote AI agents capable of taking actions on a user’s behalf, while Level 4 will involve AI that generates novel innovations. The ultimate milestone, Level 5, represents AGI: AI capable of performing the work of entire organizations of people. OpenAI has defined AGI as a highly autonomous system that outperforms humans at most economically valuable work.
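The five reported levels can be sketched as a simple lookup. The short labels below (Chatbots, Reasoners, Agents, Innovators, Organizations) are the names reported in press coverage of the scale, and the `describe_level` helper is purely illustrative; OpenAI has not published an official schema or API for this scale.

```python
# Illustrative sketch of OpenAI's reported five-level AGI scale.
# Levels and descriptions follow the Bloomberg report summarized above;
# this is not an OpenAI data structure.
AGI_LEVELS = {
    1: "Chatbots: conversational AI such as today's ChatGPT",
    2: "Reasoners: basic problem-solving comparable to a PhD-level human",
    3: "Agents: systems that take actions on a user's behalf",
    4: "Innovators: AI that generates novel innovations",
    5: "Organizations: AI performing the work of entire organizations",
}

def describe_level(level: int) -> str:
    """Return the reported description for a level, or raise for unknown ones."""
    try:
        return AGI_LEVELS[level]
    except KeyError:
        raise ValueError(f"Unknown level: {level}") from None

print(describe_level(1))  # the tier where current chatbots are classified
```

Encoding the scale this way makes the ordering explicit: each level strictly subsumes the capabilities of the one below it, which is why OpenAI describes itself as "approaching" Level 2 rather than jumping tiers.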

Central to OpenAI’s mission is its distinctive structure aimed at achieving AGI, which makes the definition of AGI itself critical. The company has committed that if another value-aligned, safety-conscious project approaches AGI before it does, OpenAI will not compete with that project but will instead assist it. The specifics of this commitment, as outlined in OpenAI’s charter, leave room for interpretation, particularly given OpenAI’s status as a for-profit entity governed by a nonprofit. A grading scale that OpenAI and its competitors could share would help clarify when AGI has actually been achieved.

Despite these advancements, achieving AGI remains a formidable challenge requiring substantial financial resources, and expert timelines vary widely. OpenAI’s CEO, Sam Altman, suggested in October 2023 that AGI could be reached within roughly five years. The new grading scale, although still in development, was introduced shortly after OpenAI announced a collaboration with Los Alamos National Laboratory. That partnership aims to explore the safe integration of advanced AI models such as GPT-4o into bioscientific research, with the goal of establishing safety protocols for governmental use.

The dissolution of OpenAI’s safety team earlier this year raised concerns about the organization’s prioritization of safety in its AI development initiatives. While OpenAI has denied claims that safety protocols have been sidelined, the departure of key figures has sparked debate within the industry about the potential implications of AGI development by the company.

OpenAI has not disclosed the specific criteria used to assign models to internal levels within its scale. However, recent demonstrations during company meetings suggest significant strides in AI capabilities, including human-like reasoning exhibited by the GPT-4 model. This grading scale could offer a structured framework for evaluating progress in AI development, aiming to provide a clearer understanding of advancements rather than leaving them open to subjective interpretation.

Conclusion:

OpenAI’s development of an internal scale to track AI progress towards AGI sets a structured framework for assessing technological advancement in the AI sector. This initiative not only clarifies the stages of AI development—from basic problem-solving to potentially surpassing human capabilities—but also underscores the significant financial and strategic implications involved in achieving AGI. The commitment to collaboration with other AGI projects reflects a cautious yet proactive stance in navigating the ethical and competitive landscapes of AI innovation.
