Yandex Unveils Game-Changing Compression Techniques for Large Language Models

  • Yandex introduces Additive Quantization for Language Models (AQLM) and PV-Tuning.
  • Methods achieve up to 8 times reduction in model size while maintaining 95% response quality.
  • AQLM improves model accuracy and reduces memory consumption, enabling deployment on common devices.
  • PV-Tuning ensures high-quality responses despite compression.
  • Methods evaluated on open-source models such as Llama 2 and Mistral using the WikiText2 and C4 benchmarks.
  • After compression, Llama 2 runs on a single GPU instead of four, cutting hardware costs by up to 8 times.
  • New use cases include offline deployment on smartphones and smart speakers.
  • Compressed models operate up to 4 times faster due to fewer computational needs.
  • AQLM and PV-Tuning are available on GitHub with demo materials and pre-compressed models.

Main AI News:

Yandex researchers, in collaboration with IST Austria, Neural Magic, and KAUST, have introduced two groundbreaking methods for compressing large language models (LLMs): Additive Quantization for Language Models (AQLM) and PV-Tuning. Together, these techniques reduce model size by up to 8 times while maintaining 95% response quality, optimizing resources and improving efficiency in LLM deployment. The research was featured at the International Conference on Machine Learning (ICML) in Vienna, Austria.

Key Innovations and Benefits

AQLM adapts additive quantization, a technique traditionally used for information retrieval, to the compression of LLMs. It reduces memory consumption and improves model accuracy under heavy compression, enabling deployment on common devices such as home computers and smartphones. PV-Tuning complements AQLM by correcting errors that arise during compression, so the combined pipeline yields compact models that maintain high-quality responses even on limited hardware. A simplified illustration of the additive-quantization idea follows.
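To make the idea concrete, here is a minimal NumPy sketch of the additive-quantization decode step: each small group of weights is reconstructed as a sum of codewords drawn from several learned codebooks, so only the codebook indices need to be stored. The group size, codebook shapes, and random values below are illustrative assumptions, not Yandex's actual configuration.

```python
import numpy as np

# Illustrative setup (not Yandex's actual configuration):
# group size d, number of codebooks M, entries per codebook K.
d, M, K = 8, 2, 256

rng = np.random.default_rng(0)
# In a real pipeline the codebooks are *learned* to minimize reconstruction
# error; random values stand in for them here.
codebooks = rng.normal(size=(M, K, d)).astype(np.float32)
# Each weight group is stored as M small indices instead of d float32 values.
codes = rng.integers(0, K, size=(1024, M))

def dequantize(group_codes):
    """Reconstruct one weight group as a sum of one codeword per codebook."""
    return sum(codebooks[m, group_codes[m]] for m in range(M))

weights = np.stack([dequantize(c) for c in codes])  # shape (1024, 8)

# Effective storage cost: M * log2(K) bits per group of d weights.
print(f"~{M * np.log2(K) / d:.1f} bits per weight")  # ~2.0 vs 32 for fp32
```

With two codebooks of 256 entries per 8-weight group, storage drops to roughly 2 bits per weight, which is the regime where an 8-fold reduction from 16-bit weights becomes possible.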

Evaluation and Impact

The effectiveness of AQLM and PV-Tuning was rigorously evaluated on popular open-source models such as Llama 2 and Mistral. The compressed models were tested on the English-language benchmarks WikiText2 and C4, retaining 95% response quality despite an 8-fold reduction in size.
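For readers who want to reproduce this kind of measurement, the sketch below computes perplexity on the WikiText2 test set with Hugging Face transformers. It is a simplified, non-overlapping-window evaluation, and the checkpoint name is a placeholder rather than one of the paper's exact models.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint: substitute a compressed model released by the authors.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
).eval()

# Concatenate the WikiText2 test split into one token stream.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(raw["text"]), return_tensors="pt").input_ids

window, nlls, n_tokens = 2048, [], 0
for start in range(0, ids.size(1) - 1, window):
    chunk = ids[:, start : start + window].to(model.device)
    if chunk.size(1) < 2:  # skip a trailing chunk with nothing to predict
        break
    n = chunk.size(1) - 1  # number of predicted tokens in this chunk
    with torch.no_grad():
        # Labels are shifted internally; .loss is the mean NLL per token.
        loss = model(chunk, labels=chunk).loss
    nlls.append(loss.float() * n)
    n_tokens += n

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"WikiText2 perplexity: {ppl.item():.2f}")
```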

Cost Savings and Accessibility

These methods offer significant cost savings for companies developing and deploying language models. For example, the Llama 2 model, originally requiring 4 GPUs, now runs on a single GPU post-compression, reducing hardware costs by up to 8 times. This makes advanced LLMs more accessible to startups, researchers, and enthusiasts, allowing them to operate sophisticated models on everyday computers.
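As a rough illustration of what single-GPU deployment looks like in practice, the snippet below loads a pre-compressed 2-bit checkpoint through the Hugging Face transformers integration. It assumes the aqlm inference kernels are installed (pip install aqlm[gpu]); the repo id is shown as an example and should be checked against the project's published model list.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example repo id for a published 2-bit checkpoint; verify the exact name on
# the project's GitHub and Hugging Face pages before use.
repo = "ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="cuda:0",  # the compressed model fits on a single GPU
)

prompt = "Compressed language models make it possible to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```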

New Use Cases

AQLM and PV-Tuning enable offline deployment of LLMs on devices with limited computing power, expanding their use to smartphones, smart speakers, and other applications. This facilitates text and image generation, voice assistance, personalized recommendations, and real-time language translation without an active internet connection. Furthermore, models compressed with these methods operate up to 4 times faster due to reduced computational requirements.

Implementation and Availability

Developers and researchers can access AQLM and PV-Tuning on GitHub, where demo materials provide guidance for training compressed models. Popular open-source models that have been compressed using these methods are also available for download.

ICML Recognition

Yandex Research’s paper on AQLM has garnered attention at ICML, a leading machine learning conference. The work, carried out with IST Austria and Neural Magic, marks a notable advancement in LLM compression technology.

Conclusion:

Yandex’s compression techniques mark a significant shift in the landscape of large language model deployment. By substantially reducing model size and cost while preserving high-quality performance, these methods make advanced AI technologies more accessible and cost-effective. The advance is poised to democratize access to LLMs, enabling startups, researchers, and smaller enterprises to leverage sophisticated AI capabilities without substantial hardware investments. The ability to deploy models on everyday devices also expands the potential applications of LLMs, fostering growth in areas such as offline AI solutions and real-time language processing. The likely market impact is increased competition and innovation as more entities gain access to powerful AI tools.
