AI Accuracy Benchmark Reveals Midrange and Open-Source Models as Competitive Alternatives

  • Galileo Technologies Inc. released the Hallucination Index benchmark, evaluating 22 LLMs: 12 open-source and 10 proprietary models.
  • The benchmark assessed model accuracy across short, medium, and long task collections.
  • Anthropic PBC’s Claude 3.5 Sonnet was the top performer, achieving perfect accuracy on the medium and long prompt collections and a score of 0.97 on the short collection.
  • Google LLC’s Gemini 1.5 Flash was rated the most cost-effective, with accuracy scores of 0.94, 1.0, and 0.92 on the short, medium, and long prompt collections.
  • Alibaba Group’s Qwen-2-72b-instruct achieved the highest accuracy among open-source models, excelling on medium-length prompts and supporting context windows of up to 128,000 tokens.

Main AI News:

Galileo Technologies Inc., an artificial intelligence startup, has recently unveiled a benchmark that measures the performance of some of the most popular large language models (LLMs) in the industry. The benchmark, known as the Hallucination Index, evaluates 12 open-source and 10 proprietary LLMs, assessing their accuracy across three distinct task collections. The results show that midrange and open-source LLMs are emerging as strong competitors, offering viable alternatives to high-cost frontier AI systems.

Vikram Chatterji, co-founder and CEO of Galileo Technologies, emphasized that the purpose of the benchmark was not just to rank models but to provide AI teams and decision-makers with valuable data to select the most appropriate model for their specific needs and budgets. The San Francisco-based startup, backed by more than $20 million in venture funding, offers a cloud-based platform designed to help AI teams measure LLM accuracy and troubleshoot technical issues. In May, Galileo updated its platform with a new tool aimed at protecting LLMs from malicious input, further enhancing its utility.

The Hallucination Index benchmark utilized Galileo’s Context Adherence feature to evaluate the models. This feature involves presenting an LLM with a test prompt and then assessing the quality of its response using a secondary LLM. For this purpose, Galileo employed OpenAI’s flagship GPT-4o model to evaluate the AI responses. The benchmark’s test prompts included questions paired with text passages containing the answers, and the LLMs were tasked with deducing the correct answer from the provided information.
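
To make the mechanics concrete, here is a minimal sketch of this LLM-as-judge pattern using the OpenAI Python client. The judge prompt wording, the 0-to-1 scoring scale, and the use of gpt-4o-mini as the model under test are illustrative assumptions, not Galileo’s actual Context Adherence implementation; only the choice of GPT-4o as the judge follows the article.

```python
# Minimal sketch of an LLM-as-judge "context adherence" check.
# Illustrative only: the judge prompt, scoring scale, and the model
# under test are assumptions, not Galileo's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are grading an AI answer for adherence to a source passage.
Passage:
{passage}

Question:
{question}

Answer under evaluation:
{answer}

Reply with a single number between 0 and 1, where 1 means the answer is
fully supported by the passage and 0 means it is entirely unsupported."""


def generate_answer(passage: str, question: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model under test to answer using only the given passage.

    gpt-4o-mini is a stand-in; in the benchmark each of the 22 models
    would be called through its own API.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Using only this passage, answer the question.\n\n"
                       f"Passage: {passage}\n\nQuestion: {question}",
        }],
    )
    return response.choices[0].message.content


def judge_adherence(passage: str, question: str, answer: str) -> float:
    """Score the answer with a secondary judge model (GPT-4o, per the article)."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                passage=passage, question=question, answer=answer
            ),
        }],
    )
    # Assumes the judge complies with the "single number" instruction;
    # a production harness would parse the reply more defensively.
    return float(response.choices[0].message.content.strip())


if __name__ == "__main__":
    passage = "The Hallucination Index evaluated 22 LLMs across three prompt collections."
    question = "How many LLMs did the benchmark evaluate?"
    answer = generate_answer(passage, question)
    print(answer, judge_adherence(passage, question, answer))
```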

Among the models tested, Anthropic PBC’s Claude 3.5 Sonnet emerged as the most accurate. Sonnet is the midrange tier of Anthropic’s Claude lineup, sitting between the smaller Haiku and the larger Opus models. It achieved perfect accuracy on the second and third task collections, which used medium and long prompt sets, and scored 0.97 out of 1 on the short prompt collection, showing strong performance across task complexities.

In terms of value for money, Galileo ranked Google LLC’s Gemini 1.5 Flash as the most cost-effective LLM. Launched in May, Gemini 1.5 Flash is priced significantly lower than Anthropic’s Claude 3.5 Sonnet. Despite the lower cost, Google’s model achieved impressive accuracy scores of 0.94, 1.0, and 0.92 on the Hallucination Index’s short, medium, and long prompt collections, respectively.

Additionally, the benchmark highlighted Alibaba Group Holding Ltd.’s Qwen-2-72b-instruct as the top-performing open-source model. Qwen-2-72b-instruct excelled on medium-length prompts of up to 25,000 tokens, demonstrating that it can process large volumes of context effectively. Notably, the model accepts prompts of up to 128,000 tokens, the largest context window among the open-source LLMs Galileo evaluated.
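
For a rough sense of how such token limits constrain prompts, a prompt’s token count can be approximated as sketched below. The snippet uses OpenAI’s tiktoken library as a stand-in tokenizer; Qwen models use their own tokenizer, so counts will differ somewhat, and the 128,000-token limit simply echoes the figure cited above.

```python
# Approximate a prompt's token count against a model's context window.
# tiktoken ships OpenAI's tokenizers; Qwen uses a different tokenizer,
# so treat this count as an estimate, not an exact budget.
import tiktoken

QWEN_CONTEXT_WINDOW = 128_000  # token limit cited above for Qwen-2-72b-instruct

def fits_in_context(prompt: str, limit: int = QWEN_CONTEXT_WINDOW) -> bool:
    """Return True if the prompt's estimated token count is within the limit."""
    encoding = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(encoding.encode(prompt))
    print(f"~{n_tokens:,} tokens against a {limit:,}-token window")
    return n_tokens <= limit

# A medium-length passage, on the order of the benchmark's 25,000-token prompts
print(fits_in_context("some passage text " * 8_000))
```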

Conclusion:

The Hallucination Index benchmark highlights the increasing competitiveness of midrange and open-source large language models in the AI market. These models are proving to be viable alternatives to more expensive frontier systems, offering strong performance at a lower cost. This shift provides organizations with more cost-effective options for AI deployment, potentially democratizing access to high-quality AI technologies and encouraging further innovation in the field.
