Tensoic Introduces Playground for Kannada Llama, Powered by NVIDIA A100s

TL;DR:

  • Tensoic, in collaboration with E2E Networks, launches a playground for Kannada Llama.
  • The playground utilizes NVIDIA A100 GPUs and the Xylem.AI platform for rapid inference.
  • Kannada Llama, a 7-billion-parameter Llama 2-based model, is built to process Kannada tokens.
  • Pre-training on a dedicated NVIDIA A100 80GB instance took 50 hours, costing $170.
  • Plans to integrate Kannada Llama with Mistral’s models and release Gujarati Language Models are in progress.
  • A research paper accompanying the project is in the works, with the team aiming to get it right before release.

Main AI News:

In a strategic move aimed at revolutionizing the field of natural language processing, Tensoic, in collaboration with E2E Networks, has unveiled an innovative playground designed to put the Kannada Llama, known as Kan-LLaMA [ಕನ್-LLama], through its paces. This groundbreaking development comes just days after the release of Kan-LLaMA, an Indic large language model (LLM) meticulously crafted around Kannada tokens.

The playground, which leverages the processing power of NVIDIA A100 GPUs, promises to be a game-changer for AI enthusiasts and developers alike. Tensoic’s decision to harness the Xylem.AI platform for inference delivers fast response times on top of a robust inference stack, raising the bar for what’s possible in the world of natural language understanding.

While the current iteration of Kannada Llama does not support multi-turn conversations, it lets users experiment with various generation parameters, providing a valuable testing ground for prompt design and parameter tuning, as sketched in the example below.
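For readers who want to reproduce that kind of experimentation outside the playground, the sketch below shows how the usual knobs (temperature, nucleus sampling cutoff, output length) map onto a standard Hugging Face transformers generation call. The model identifier is a placeholder rather than a confirmed checkpoint name, and the sampling values are illustrative defaults, not settings published by Tensoic.

```python
# Minimal sketch, assuming Kan-LLaMA is available as a Hugging Face checkpoint.
# "Tensoic/Kan-LLaMA-7B" is a hypothetical identifier, and the sampling values
# are illustrative, not recommendations from Tensoic.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Tensoic/Kan-LLaMA-7B"  # placeholder model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

prompt = "ಕನ್ನಡದಲ್ಲಿ ಒಂದು ಸಣ್ಣ ಕಥೆ ಬರೆಯಿರಿ."  # "Write a short story in Kannada."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# These are the parameters a playground typically exposes; varying them trades
# off determinism against diversity in the generated Kannada text.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```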

Kannada Llama, or Kan-LLaMA, represents a colossal leap in language model technology. This 7-billion-parameter Llama 2-based model has undergone rigorous pre-training and fine-tuning, with a focus on Kannada tokens. Developed by Mumbai-based company Tensoic, this state-of-the-art language model’s pre-training process was carried out on a dedicated NVIDIA A100 80GB instance, requiring approximately 50 hours of computational effort and incurring an estimated cost of $170. The result? A formidable LoRA adapter, boasting a compact size of approximately 1.1GB.
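Since the headline artifact is a ~1.1GB LoRA adapter rather than a full 7-billion-parameter checkpoint, a brief sketch of how such an adapter is typically set up with the PEFT library may help clarify the economics quoted above. The rank, scaling factor, and target modules below are assumptions for illustration, not Tensoic’s actual training configuration.

```python
# Minimal sketch of LoRA-style continued pre-training with PEFT, assuming a
# Llama 2 7B base. Only the small low-rank matrices are trained while the base
# weights stay frozen, which is why the saved adapter is ~1.1GB instead of a
# full model. All hyperparameters here are assumed, not Tensoic's settings.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # gated base model; requires access approval
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                      # low-rank dimension (assumed)
    lora_alpha=128,            # LoRA scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights update

# ...a training run over a Kannada corpus would go here...
# model.save_pretrained("kan-llama-lora")  # writes only the adapter weights
```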

Tensoic is not resting on its laurels. Adarsh Shirawalmath, the visionary behind Kannada Llama, revealed that plans are in motion to elevate the model by integrating it with Mistral’s models. However, challenges persist, as the dataset remains somewhat chaotic and unprepared for Indic models. Nevertheless, Tensoic remains undaunted and is also gearing up for the imminent release of Gujarati Language Models.

In addition to their technological achievements, the Tensoic team is dedicated to producing a meticulously crafted research paper to accompany their groundbreaking work. According to Shirawalmath, perfection remains their ultimate goal as they continue to push the boundaries of what is achievable in the field of natural language processing.

Conclusion:

Tensoic’s release of the Kannada Llama Playground, powered by NVIDIA A100 GPUs and the Xylem.AI platform, marks a significant advancement in the field of natural language processing. This development not only offers a valuable testing ground for customization but also signifies Tensoic’s commitment to pushing the boundaries of language model technology. As they plan to integrate the model with Mistral’s offerings and expand into other languages, the market can expect increased innovation and possibilities in the realm of AI-driven language understanding.
