CEO of OpenAI Suggests that GPT-4 May Not Meet Everyone’s Expectations

TL;DR:

  • OpenAI CEO Sam Altman has suggested that GPT-4, the next major version of the company’s generative language model, will be released only when the team is confident it can be done safely and responsibly.
  • OpenAI prioritizes the societal impact of its models over the speed of their release due to concerns about AI-generated content exacerbating issues like misinformation and propaganda.
  • OpenAI has provided tools and best practices to developers and is working on improving safety by developing more robust safeguards and refining usage guidelines.
  • Claims about GPT-4’s features, such as it having 100 trillion parameters, have been dismissed by Altman as unrealistic and unhealthy.
  • Altman has said that a video-generating model will come but has not given a timeline. Video-generating models will require especially strong safeguards, since manipulated videos are already proving problematic.
  • OpenAI is taking its time to minimize risks and ensure applications built on its API are developed responsibly, streamlining the process for developers and refining usage guidelines over time.
  • The hype around GPT-4 should be tempered, and people should not expect an actual AGI anytime soon.

Main AI News:

OpenAI CEO Sam Altman has poured cold water on expectations around the upcoming GPT-4, the next major version of the company’s generative language model. In a recent interview with StrictlyVC, Altman suggested that GPT-4 will be released only when the team is confident it can be done safely and responsibly rather than according to any predetermined timeline.

This caution is in line with OpenAI’s track record of prioritizing the societal impact of its models over the speed of their release. The potential for AI-generated content to exacerbate issues like misinformation and propaganda has been a major concern for the company, and recent research has suggested that GPT-3 has the ability to generate “influential” text that could radicalize people into far-right extremist ideologies.

To address these concerns, OpenAI has provided tools and best practices to developers building on its API, developed more robust safeguards, and refined its usage guidelines. The work of improving safety is ongoing, however, and OpenAI remains committed to streamlining the process for developers and expanding use cases over time.

As excitement around the next major version of OpenAI’s language model grows, claims about its features are emerging. Altman, however, has dismissed one viral claim, that GPT-4 will feature 100 trillion parameters, up from GPT-3’s 175 billion, as “complete bullshit.” He called such speculation unhealthy and unrealistic at this point, saying people are “begging to be disappointed.”

In the video interview with StrictlyVC, Altman responded to expectations that GPT-4 would arrive in the first half of 2023 by saying, “It’ll come out at some point when we are confident we can do it safely and responsibly.” OpenAI has never rushed the release of its models, given concerns about their societal impact: the ability to generate mass amounts of content could exacerbate issues like misinformation and propaganda.

While Altman wants the community to temper its expectations, he did confirm that a video-generating model will come, although he won’t put a timeframe on it. He said, “It’s a legitimate research project. It could be pretty soon; it could take a while.” Models that generate video, however, would require especially strong safeguards, as manipulated videos such as deepfakes are already proving problematic.

OpenAI originally provided access to GPT-3 only to a small number of trusted researchers and developers, then introduced a waitlist while it developed more robust safeguards. The waitlist was removed in November 2021, but improving safety remains an ongoing process. OpenAI is doing the right thing by taking its time to minimize risks and keep expectations in check.

The hype around GPT-4 is just that: hype, and people should not expect an actual AGI anytime soon. OpenAI is doing an excellent job of ensuring applications built on its API are developed responsibly, providing tools and best-practice guidance so developers can bring their applications to production quickly and safely. As its systems evolve and its safeguards improve, the company expects to continue streamlining the process for developers, refining its usage guidelines, and allowing even more use cases over time.

Conclusion:

The release of OpenAI’s GPT-4 language model is not based on a predetermined timeline but rather on the company’s confidence in releasing it safely and responsibly. OpenAI has a track record of prioritizing the societal impact of its models, as AI-generated content has the potential to exacerbate issues such as misinformation and propaganda.

The company is continuously working to improve the safety of its models, providing tools and best practices to developers, streamlining the process for them, and expanding use cases over time. The hype around GPT-4’s features should be tempered, and people should not expect an actual AGI anytime soon. OpenAI is doing an excellent job of ensuring applications built on its API are developed responsibly and continues to improve the capabilities of its safeguards.

Source