Slang Labs: Pioneering a Hybrid Approach in Large Language Models

TL;DR:

  • Slang Labs, supported by Google, adopts a hybrid approach to leverage the strengths of various large language models (LLMs).
  • The company is set to launch its customized open-source LLMs designed specifically for Indian domains in the first half of the upcoming year.
  • Slang Labs provides voice assistant solutions embedded within popular applications like e-commerce and banking.
  • Key clients include Nykaa, ICICI Direct, Tata Digital, and Fresho from Bigbasket.
  • Currently, Slang Labs utilizes OpenAI for its voice assistant.
  • Co-founder Kumar Rangarajan discusses the three layers of LLMs: the base LLM, pre-training, and fine-tuning, emphasizing the need for specialized fine-tuning to optimize LLMs for specific use cases.

Main AI News:

Slang Labs, a Google-backed entity, is charting a strategic course in the dynamic landscape of large language models (LLMs). As the market witnesses a flurry of LLM launches, including India-centric variants, Slang Labs is forging ahead with a hybrid approach that harnesses the strengths of various LLMs. In addition, the company plans to unveil its own versions of open-source LLMs tailored specifically for Indian domains in the first half of next year.

Slang Labs specializes in voice assistant solutions that integrate seamlessly with popular applications, such as those in the e-commerce and banking sectors. Its clients include Nykaa, ICICI Direct, Tata Digital, and Fresho from Bigbasket, among others.

Currently, Slang Labs relies on OpenAI to power its voice assistant. Kumar Rangarajan, co-founder of Slang Labs, has disclosed their strategic move towards fine-tuning open-source LLMs, including Meta’s LLaMA and Mistral AI’s LLM, with the ultimate goal of crafting a hybrid LLM model for their voice assistant, aptly named CONVA.

Delving into the intricacies of this endeavor, Rangarajan elaborated on the three distinct layers within the LLM architecture. The foundational layer, referred to as the base LLM, is typically trained on extensive internet and multi-language data for general purposes. While this model boasts a broad understanding of language, it lacks the finesse required to serve as an effective assistant: it struggles to provide precise responses to user queries, primarily because it is limited in its ability to comprehend and follow specific instructions. Building this foundational model accounts for the lion’s share of overall development costs.
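The "broad understanding of language" in a base model comes from next-token prediction over large text corpora. A toy illustration of that idea, using a simple bigram counter in Python (purely illustrative; real base LLMs use transformer networks trained on vastly larger data, not frequency counts):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "extensive internet data".
corpus = "the cat sat on the cat mat so the cat ran".split()

# Count bigram frequencies: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" three times)
```

A model like this "knows" the statistics of its training text, but it has no notion of instructions or tasks, which is exactly the gap the later layers address.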

The subsequent layer in this framework, which Rangarajan terms pre-training, is where the LLM is taught to differentiate between correct and incorrect responses. This phase equips the model with the ability to discern which answer to prioritize when faced with multiple options. Various techniques, such as reinforcement learning from human feedback (RLHF), are commonly employed to steer the model toward the right responses.
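Teaching a model which answer to prioritize is commonly done with pairwise preference data: each prompt is paired with a preferred ("chosen") and a dispreferred ("rejected") response. The format and examples below are a common convention from RLHF-style pipelines, not Slang Labs' actual data:

```python
import json

# Hypothetical preference records for a shopping voice assistant:
# the "chosen" answer is on-task, the "rejected" one is merely fluent.
preferences = [
    {
        "prompt": "Add two kilos of tomatoes to my cart.",
        "chosen": "Added 2 kg of tomatoes to your cart.",
        "rejected": "Tomatoes are a fruit native to South America.",
    },
]

# Serialize to JSONL, one record per line, the typical on-disk format.
jsonl = "\n".join(json.dumps(rec) for rec in preferences)

record = json.loads(jsonl.splitlines()[0])
print(sorted(record.keys()))  # ['chosen', 'prompt', 'rejected']
```

A preference-tuning algorithm then optimizes the model to score the "chosen" response above the "rejected" one for each prompt.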

The final layer, known as fine-tuning, is where the LLM undergoes specialized training to answer queries accurately within specific contexts. This process tailors the model to particular use cases or objectives. Slang Labs, among others, takes base models from Meta’s LLaMA and Mistral and subjects them to its own pre-training and fine-tuning procedures to optimize their utility.
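Domain fine-tuning of this kind typically starts from instruction-response pairs in the target domain. A minimal sketch of how such data might be templated (the field names, template, and grocery examples are illustrative assumptions, not Slang Labs' actual training setup; fine-tuning frameworks consume similarly structured records):

```python
# Illustrative instruction data for an Indian grocery voice assistant
# (hypothetical examples, including a Hindi utterance).
examples = [
    {"instruction": "mujhe do kilo chawal chahiye",
     "response": '{"intent": "add_to_cart", "item": "rice", "quantity": "2 kg"}'},
    {"instruction": "show my last order",
     "response": '{"intent": "order_history", "filter": "latest"}'},
]

def to_prompt(ex):
    """Render one example in a simple instruction-tuning template."""
    return (f"### Instruction:\n{ex['instruction']}\n"
            f"### Response:\n{ex['response']}")

dataset = [to_prompt(ex) for ex in examples]
print(dataset[0].splitlines()[0])  # "### Instruction:"
```

Training a base model on thousands of such domain examples is what turns a general language model into an assistant that reliably maps user utterances to structured actions.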

Conclusion:

Slang Labs’ strategic embrace of a hybrid LLM approach, combined with its forthcoming Indian-optimized LLMs, positions the company for a more sophisticated and context-aware voice assistant ecosystem. This move reflects the growing importance of tailored language models in enhancing user experiences and engagement, indicating a potential shift in the market towards more specialized, domain-specific LLMs. Businesses looking to leverage voice assistants should closely monitor these developments to stay competitive in an evolving landscape.

Source