Researchers from Stanford and UC Berkeley find a decline in OpenAI’s LLMs’ performance over time

TL;DR:

  • Researchers from Stanford and UC Berkeley find a decline in OpenAI’s LLMs’ performance over time.
  • GPT-4’s March 2023 version excels in identifying prime numbers, but the June 2023 version performs poorly.
  • GPT-3.5 shows significant improvement on the same mathematical task in June 2023 compared to March 2023.
  • GPT-4 becomes less willing to answer sensitive questions in June 2023.
  • Both GPT-4 and GPT-3.5 experience more formatting errors in code generation in June 2023.
  • GPT-4’s update proves more robust against jailbreaking attacks than GPT-3.5.
  • Another study by Microsoft suggests GPT-4 is a significant step towards AGI.

Main AI News:

The excitement surrounding large language models (LLMs) and their transformative potential in the realm of generative AI seems to have waned. Recent research conducted by experts from Stanford University and UC Berkeley points to a decline in the performance of OpenAI’s LLMs over time.

One of the crucial questions the researchers set out to answer was whether these LLMs are actually improving over time, given that they are continually updated based on new data, user feedback, and design changes. To find out, the team evaluated the behavior of the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 across four distinct tasks: solving mathematical problems, handling sensitive and hazardous queries, generating code, and demonstrating visual reasoning capabilities.

OpenAI introduced GPT-4 with great fanfare, touting it as more reliable, creative, and adept at following nuanced instructions than its predecessor, GPT-3.5, and claiming it could pass demanding professional exams in fields such as medicine and law. However, the research findings showed clear disparities in the performance and behavior of both models between their March and June releases.

In the realm of mathematical prowess, GPT-4’s March 2023 version displayed an impressive 97.6 percent accuracy in identifying prime numbers. Yet, surprisingly, its June 2023 version struggled, managing a mere 2.4 percent accuracy on the same questions. GPT-3.5, on the other hand, improved significantly on the same task between its March and June 2023 iterations.
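To make that comparison concrete, here is a minimal sketch of the kind of evaluation the researchers describe, pinned against OpenAI’s dated GPT-4 snapshots from that period (gpt-4-0314 and gpt-4-0613, the former of which has since been retired). The prompt wording, sample construction, and answer parsing below are illustrative assumptions, not the paper’s exact protocol:

```python
# Minimal sketch: compare prime-identification accuracy across two dated
# GPT-4 snapshots. Prompt wording and parsing are illustrative assumptions,
# not the study's exact protocol.
from openai import OpenAI
from sympy import isprime, randprime

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_is_prime(model: str, n: int) -> bool:
    """Ask the model whether n is prime; parse a [Yes]/[No] answer."""
    prompt = f"Is {n} a prime number? Think step by step, then answer with [Yes] or [No]."
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return "[yes]" in reply.choices[0].message.content.lower()

def accuracy(model: str, numbers: list[int]) -> float:
    """Fraction of numbers the model classifies correctly."""
    correct = sum(ask_is_prime(model, n) == isprime(n) for n in numbers)
    return correct / len(numbers)

# Small random sample of large primes, mirroring the flavor of the task.
sample = [randprime(10_000, 20_000) for _ in range(20)]
for snapshot in ("gpt-4-0314", "gpt-4-0613"):
    print(snapshot, f"{accuracy(snapshot, sample):.1%}")
```

Pinning a dated snapshot rather than the floating "gpt-4" alias is what makes a March-versus-June comparison like this possible at all.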

Furthermore, the researchers discovered a reluctance on GPT-4’s part to respond to sensitive questions in June compared to its more forthcoming behavior in March. Both GPT-4 and GPT-3.5 demonstrated an increase in formatting errors during code generation in their June releases as opposed to their March versions.
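The paper reportedly traces many of these formatting errors to extraneous markdown, such as code wrapped in triple-backtick fences, which makes the raw output non-executable even when the program inside is correct. Below is a minimal sketch of a tolerant post-processing step; the function name and regex are my own illustration, not anything from the study:

````python
import re

def strip_code_fences(output: str) -> str:
    """Extract code from a markdown-fenced model response, if present.

    If the model wrapped its answer in ```python ... ``` fences, return
    only the fenced body; otherwise return the output unchanged.
    """
    match = re.search(r"```(?:\w+)?\n(.*?)```", output, re.DOTALL)
    return match.group(1) if match else output

raw = "```python\nprint('hello')\n```"
code = strip_code_fences(raw)
compile(code, "<model-output>", "exec")  # raises SyntaxError if still malformed
````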

The study also touched on two other failure modes: LLMs’ tendency to “hallucinate” and their susceptibility to jailbreaking, in which a prompt is crafted to deceive the model into breaching its safety boundaries, producing responses that could, for example, aid in creating malware. On the jailbreaking front, the news was better: GPT-4’s update showcased greater resilience against such attacks than GPT-3.5.

While the world marvels at the wonders of ChatGPT, this study serves as a stark reminder to developers and stakeholders alike: evaluating and scrutinizing LLM behavior in real-world applications must be an ongoing, long-term endeavor. The researchers have committed to updating their findings regularly by reassessing GPT-3.5, GPT-4, and other LLMs across various tasks over time, and they strongly recommend that users and companies integrating LLM services into their workflows implement similar monitoring and analysis practices.
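For teams acting on that recommendation, the simplest version is a scheduled regression suite: a fixed set of prompts with known-good answers, re-run against a pinned model snapshot, with the pass rate logged over time so drift becomes visible. A minimal sketch, in which the golden-set contents, file name, and alert threshold are all placeholders:

```python
import csv
import datetime
from openai import OpenAI

client = OpenAI()

# Placeholder golden set: (prompt, expected substring) pairs. A real suite
# should draw these from your own workload, covering each task you rely on.
GOLDEN_SET = [
    ("What is 17 * 23? Answer with the number only.", "391"),
    ("Is 7919 a prime number? Answer Yes or No.", "Yes"),
]

def run_suite(model: str) -> float:
    """Re-run the golden set against a pinned snapshot; return the pass rate."""
    passed = 0
    for prompt, expected in GOLDEN_SET:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        passed += expected in reply.choices[0].message.content
    return passed / len(GOLDEN_SET)

if __name__ == "__main__":
    rate = run_suite("gpt-4-0613")  # pin an explicit snapshot, not "gpt-4"
    with open("llm_drift_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), rate])
    if rate < 0.9:  # placeholder alert threshold
        print(f"WARNING: pass rate dropped to {rate:.0%}")
```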

By contrast, a separate study conducted by Microsoft, a major investor in OpenAI, reached a far more bullish conclusion, proclaiming GPT-4 a significant leap towards artificial general intelligence (AGI), a claim many in the AI industry view as potentially perilous. As the debate over the future of LLMs continues, one thing remains clear: vigilant assessment and meticulous analysis are indispensable for the responsible development and deployment of these powerful language models.

Conclusion:

The research indicates a noticeable decrease in the performance of OpenAI’s LLMs over time, raising concerns about their reliability. This could affect the market’s perception of AI-powered language models and prompt companies to reevaluate their reliance on such technology for critical applications. Developers and businesses should prioritize continuous monitoring and evaluation to ensure the responsible and effective deployment of LLMs in real-world scenarios. Meanwhile, the framing of GPT-4 as a potential step towards AGI is likely to spark further debate about the future of AI across industries.

Source