TL;DR:
- OpenAI introduces updates to its generative AI models, GPT-3.5 Turbo and GPT-4.
- The updates include function calling capabilities, improved steerability, an extended context window for GPT-3.5 Turbo, and lower pricing.
- Microsoft, Snapchat, Salesforce, Morgan Stanley, HubSpot, GitHub, Stripe, GetResponse, and Instacart are among the companies benefiting from OpenAI’s advancements.
- Developers can create chatbots, convert natural language queries into function calls, and extract structured data from text using the new function calling capabilities.
- GPT-4 and GPT-3.5 Turbo offer improved steerability and the ability to process larger bodies of text.
- OpenAI emphasizes security precautions when dealing with untrusted data.
- Lower pricing and improved functionality make it easier for developers to use and experiment with the models.
Main AI News:
OpenAI, a leading provider of generative AI technology, has announced a series of updates to its AI models, GPT-3.5 Turbo and GPT-4. The enhancements give developers a wider range of tools to build sophisticated, high-performing AI applications tailored to the demands of modern work environments.
The latest updates introduce a function calling capability, improved steerability, an extended context window for GPT-3.5 Turbo, and a revised pricing structure. These changes benefit not only developers but also users of the many tools that build on OpenAI’s technology; you have likely already encountered AI-powered products based on OpenAI’s models in your day-to-day work.
Microsoft, in collaboration with OpenAI, has integrated AI models into popular products such as Bing and Office, enhancing their functionalities with generative AI capabilities. Snapchat’s AI chatbot, My AI, has leveraged OpenAI GPT models to interpret and respond to image snaps. Salesforce has introduced Einstein GPT, the first generative AI customer relationship management (CRM) product powered by OpenAI’s most advanced models.
Wealth management firm Morgan Stanley has formed a partnership with OpenAI, granting them access to the cutting-edge GPT-4 model. HubSpot has developed ChatSpot.ai, a tool based on OpenAI GPT-4, and GitHub Copilot has incorporated generative AI from OpenAI Codex to assist developers (which led to a copyright lawsuit). Stripe utilizes OpenAI GPT technology to gain insights into customer behavior and detect fraud. GetResponse has introduced an OpenAI GPT-powered email generator, while Instacart has deployed an AI chatbot to assist shoppers with their grocery needs. All these applications, among others, are expected to see improvements in generative AI performance thanks to the recent updates to GPT-3.5 Turbo and GPT-4.
Let’s look at the enhancements to GPT-3.5 Turbo and GPT-4 in more detail. In response to developer feedback and feature requests, OpenAI has added function calling to the Chat Completions API. Developers can describe functions to the models, and the models respond with JSON objects containing the arguments for those functions. This creates a more direct bridge between GPT’s capabilities and external tools and APIs, and makes it easier to get structured data out of the model. Developers can use it to build chatbots that answer questions with external tools, convert natural language queries into function or API calls, and extract structured data from free text.
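As a rough illustration, the sketch below uses the openai Python package (in the 0.x form current at the time of this announcement) to describe a function to the model and read back the JSON arguments it proposes. The `get_customer_orders` schema, the user query, and the model snapshot are illustrative assumptions, not part of OpenAI’s announcement.

```python
# Minimal sketch of the Chat Completions function calling flow (openai-python 0.x era).
# The get_customer_orders schema below is a hypothetical example.
import json
import openai

functions = [
    {
        "name": "get_customer_orders",  # hypothetical backend function
        "description": "Look up recent orders for a customer by email address",
        "parameters": {
            "type": "object",
            "properties": {
                "email": {"type": "string", "description": "Customer email address"},
                "limit": {"type": "integer", "description": "Maximum number of orders to return"},
            },
            "required": ["email"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Show me the last 3 orders for jane@example.com"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call a function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus a JSON string of arguments.
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    print(name, args)  # e.g. get_customer_orders {'email': 'jane@example.com', 'limit': 3}
```

The model does not execute anything itself; it only proposes a call, and the application decides whether and how to run it.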
The introduction of function calling capabilities unlocks new possibilities for developers to integrate GPT models with other APIs or external tools, resulting in enhanced user experiences and increased efficiency. For instance, a workplace app could leverage this feature to convert a user’s natural language query into a function call to a customer relationship management (CRM) or enterprise resource planning (ERP) system, streamlining operations and improving productivity. While OpenAI remains vigilant about potential security risks associated with untrusted data, developers are encouraged to ensure their applications consume information only from trusted sources and include user confirmation steps before executing impactful actions.
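Building on the sketch above, one simple way to implement such a confirmation step is to show the proposed call to the user before anything is executed. The `execute_crm_call` helper here is a hypothetical placeholder for a real CRM or ERP integration.

```python
def execute_crm_call(name: str, args: dict) -> None:
    """Placeholder for the real CRM/ERP integration."""
    print(f"Executing {name}({args}) against the backend...")

def confirm_and_execute(name: str, args: dict) -> None:
    # Show the model-proposed call and require explicit user approval
    # before performing an action with real-world side effects.
    print(f"The assistant wants to call {name} with arguments {args}.")
    if input("Proceed? [y/N] ").strip().lower() == "y":
        execute_crm_call(name, args)
    else:
        print("Call cancelled by user.")
```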
In addition to function calling, OpenAI has given GPT-4 and GPT-3.5 Turbo improved steerability. Better steerability lets developers give the models more precise instructions, so that applications align more closely with organizational requirements or specific tasks, such as generating more targeted business reports or producing more context-aware responses in customer service chatbots.
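As a small illustration of steering the model’s behavior, the sketch below pins the assistant to a specific reporting format through the system message; the instructions, prompt, and model snapshot are illustrative assumptions.

```python
# Sketch: steering gpt-3.5-turbo with a strict system message (openai-python 0.x era).
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a reporting assistant for the finance team. "
                "Always answer with exactly three bullet points and cite figures in USD."
            ),
        },
        {"role": "user", "content": "Summarize last quarter's sales performance."},
    ],
)
print(response["choices"][0]["message"]["content"])
```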
OpenAI has also released gpt-3.5-turbo-16k, which offers four times the context length of the standard model and can accommodate up to 20 pages of text in a single request. This extended context significantly enhances the model’s ability to comprehend and respond to larger bodies of text. In legal or academic settings, where documents can be extensive, it improves the model’s ability to understand and summarize large amounts of information, streamlining information extraction. In project management applications, it enables the AI to process entire project plans and generate insightful project analytics and forecasts.
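As a rough way to decide whether a long document fits the larger window, the sketch below counts tokens with the tiktoken library before choosing a model. The exact token limits and the headroom reserved for the reply are assumptions for illustration, as is the input file name.

```python
# Sketch: pick between the standard and 16k GPT-3.5 Turbo models based on document size.
# The limits and reply-budget headroom below are illustrative assumptions.
import tiktoken

def pick_model(document: str, reply_budget: int = 1000) -> str:
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    n_tokens = len(enc.encode(document))
    if n_tokens + reply_budget <= 4096:
        return "gpt-3.5-turbo"        # standard context
    if n_tokens + reply_budget <= 16384:
        return "gpt-3.5-turbo-16k"    # extended context
    raise ValueError(f"Document is {n_tokens} tokens; split it before summarizing.")

with open("project_plan.txt") as f:   # hypothetical input file
    print(pick_model(f.read()))
```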
OpenAI has also announced a lower pricing structure, reflecting improved system efficiencies. The popular embedding model, text-embedding-ada-002, now comes at a 75% reduced price. Additionally, the cost of input tokens for the GPT-3.5 Turbo model has been reduced by 25%. These price reductions, combined with the enhanced functionality, make it more accessible and affordable for developers to utilize and experiment with these powerful AI models in their applications.
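For a sense of scale, the back-of-the-envelope calculation below applies per-1K-token rates of the kind quoted around the time of this announcement (roughly $0.0001 per 1K tokens for text-embedding-ada-002 after the 75% cut, and $0.0015 per 1K input tokens for GPT-3.5 Turbo after the 25% cut). Treat the exact figures as illustrative, since published pricing can change.

```python
# Back-of-the-envelope cost estimate; the per-1K-token rates are illustrative
# and drawn from pricing quoted around the time of this announcement.
EMBEDDING_PRICE_PER_1K = 0.0001    # text-embedding-ada-002, after the 75% reduction
GPT35_INPUT_PRICE_PER_1K = 0.0015  # gpt-3.5-turbo input tokens, after the 25% reduction

def embedding_cost(tokens: int) -> float:
    return tokens / 1000 * EMBEDDING_PRICE_PER_1K

def prompt_cost(tokens: int) -> float:
    return tokens / 1000 * GPT35_INPUT_PRICE_PER_1K

# Embedding a 10-million-token document corpus and sending 1 million prompt tokens:
print(f"Embeddings: ${embedding_cost(10_000_000):.2f}")  # -> $1.00
print(f"Prompts:    ${prompt_cost(1_000_000):.2f}")      # -> $1.50
```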
Conclusion:
OpenAI’s enhancements to its generative AI models, GPT-3.5 Turbo and GPT-4, bring a significant transformation to the market. The introduction of function calling capabilities opens up a wide range of possibilities for developers to integrate AI models with external tools and APIs, enhancing user experiences and productivity. The improved steerability and extended context allow for more tailored and accurate AI applications, benefiting industries such as customer service, project management, and document analysis. The lower pricing, combined with the advancements, encourages wider adoption and experimentation with these powerful AI models, further fueling innovation in the business landscape.