LLMWare Introduces DRAGON LLMs: Tailored Solutions for Complex Business Document Workflows


  • Ai Bloks introduces the DRAGON series of 7B parameter LLMs, fine-tuned for fact-based question-answering in complex business documents.
  • Enterprises seek unified frameworks that integrate LLM models with workflow capabilities, high-quality specialized LLMs, and cost-effective, open-source, private deployment options.
  • LLMWare releases seven DRAGON models as open source, each scoring in the mid-to-high 90s on its RAG benchmark while guarding against hallucinations.
  • These models complement BLING and Industry-BERT collections, catering to various deployment scenarios.
  • LLMWare’s solution includes a core development framework and integration with private-cloud instances for end-to-end RAG workflows.
  • This development signals a new era of automation workflows in the enterprise.

Main AI News:

In a recent development, Ai Bloks made headlines by unveiling its llmware framework, an open-source platform designed for crafting enterprise-level LLM-based workflow applications. Today, Ai Bloks takes another monumental stride forward with the launch of the DRAGON series, a collection of 7B parameter LLMs meticulously fine-tuned to cater to the intricate needs of fact-based question-answering within the realm of complex business and legal documents.

The landscape of enterprise technology is evolving rapidly, and businesses are increasingly seeking scalable RAG systems that can harness their proprietary data. In response to these growing demands, several crucial requirements have come to light:

  1. Integrated Framework: The need for a unified framework that seamlessly integrates LLM models with a comprehensive suite of workflow capabilities, including document parsing, embedding, prompt management, source verification, and audit tracking.
  2. Specialized LLMs: High-quality, compact LLMs that have been specialized for fact-based question-answering and optimized for enterprise workflows.
  3. Open Source & Customization: A call for open-source solutions that offer cost-effectiveness and privacy in deployment, with ample room for customization.

To address these essential needs, LLMWare is proud to introduce the DRAGON models, a family of seven LLMs available through its Hugging Face repository. These models have undergone extensive fine-tuning for RAG and are built on top of leading foundation models, ensuring they are production-ready for enterprise RAG workflows.

The performance of these DRAGON models has been rigorously evaluated using the llmware rag-instruct-benchmark. The results, along with the methodology, are provided alongside the models in the repository. Impressively, each DRAGON model achieves accuracy in the mid-to-high 90s across a diverse set of 100 core test questions, demonstrating robustness in avoiding hallucinations and in correctly flagging unanswerable questions (a ‘not found’ classification).
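To make the scoring concrete, here is a minimal sketch of how accuracy might be tallied on a fact-based QA benchmark that includes unanswerable questions. The scoring rule, function names, and sample data are illustrative assumptions, not the actual llmware benchmark methodology:

```python
# Illustrative sketch only: scoring fact-based QA with a 'not found' class.
# A model is credited either for containing the expected key fact, or for
# correctly declining to answer an unanswerable question.

def score_answer(predicted: str, expected: str) -> bool:
    """True if the prediction matches the key fact, or correctly
    returns 'not found' for an unanswerable question."""
    pred = predicted.strip().lower()
    exp = expected.strip().lower()
    if exp == "not found":          # question has no answer in the source
        return pred == "not found"  # model must decline, not hallucinate
    return exp in pred              # simple key-fact containment match

def accuracy(results: list[tuple[str, str]]) -> float:
    correct = sum(score_answer(pred, exp) for pred, exp in results)
    return 100.0 * correct / len(results)

# Mini test set: (model output, expected answer)
sample = [
    ("The agreement terminates on June 30, 2024.", "June 30, 2024"),
    ("Not Found", "not found"),                   # correctly declined
    ("The fee is $5,000 per month.", "$10,000"),  # wrong: a hallucination
]
print(f"accuracy: {accuracy(sample):.1f}%")
```

The ‘not found’ branch is the part that penalizes hallucination: a fabricated answer to an unanswerable question scores zero rather than partial credit.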

The DRAGON model family joins the ranks of LLMWare’s other RAG model collections, including BLING and Industry-BERT. BLING models, ranging from 1B to 3B parameters, offer RAG-specialized capabilities without the need for dedicated GPUs and can be run on a developer’s laptop. The training methodology for DRAGON models is similar to BLING, allowing developers to seamlessly transition from local BLING models to DRAGON models for enhanced production performance. Notably, DRAGON models are designed for secure private deployment on single enterprise-grade GPU servers, ensuring that enterprises can implement end-to-end RAG systems within their secure environments.
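Because DRAGON and BLING share a similar training methodology, the transition described above can be as simple as swapping a model identifier once GPU hardware is available. A minimal sketch, in which the Hugging Face repo IDs and the selection helper are illustrative assumptions rather than an official llmware API:

```python
# Hypothetical helper: pick a CPU-friendly BLING model for local
# development, or a 7B DRAGON model for GPU-backed production.
# Repo IDs below are illustrative, not guaranteed current names.

def pick_model(has_gpu: bool) -> str:
    if has_gpu:
        return "llmware/dragon-mistral-7b-v0"  # 7B-class, single GPU server
    return "llmware/bling-1b-0.1"              # 1-3B-class, laptop CPU

# Prompts written against the local model carry over unchanged.
print(pick_model(has_gpu=False))
```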

This suite of open-source RAG-specialized models, when combined with LLMWare’s core development framework and integrated with open-source private-cloud instances of Milvus and MongoDB, provides a comprehensive solution for RAG workflows. With just a few lines of code, developers can automate document ingestion and parsing, attach embedding vectors, execute state-of-the-art LLM-based generative inferences, and conduct evidence and source verification—all within a private cloud environment, and in some cases, even on a single developer’s laptop.
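The ingest-embed-retrieve-generate flow described above can be illustrated with a small, dependency-free sketch. This stands in for the llmware framework, so everything here is a toy assumption: a real deployment would use llmware's parsers, a Milvus vector store, and a DRAGON model for the generation step.

```python
# Toy end-to-end RAG skeleton: chunk -> embed -> retrieve -> grounded prompt.
# The bag-of-words 'embedding' is a stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words vector in place of a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Rank document chunks by similarity to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, evidence: list[str]) -> str:
    """Evidence-grounded prompt: answer only from sources, else 'not found'."""
    context = "\n".join(evidence)
    return ("Answer only from the context below; if the answer is not "
            f"present, reply 'not found'.\nContext:\n{context}\n"
            f"Question: {query}")

chunks = [
    "The lease term begins on January 1, 2024 and runs for 24 months.",
    "Either party may terminate with 60 days written notice.",
]
evidence = retrieve("When does the lease term begin?", chunks)
print(build_prompt("When does the lease term begin?", evidence))
```

The grounding instruction in `build_prompt` mirrors the article's emphasis on source verification and the ‘not found’ classification: the generation step is constrained to the retrieved evidence rather than the model's parametric memory.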

According to Ai Bloks CEO Darren Oberst, “We firmly believe that LLMs usher in a new era of automation workflows in the enterprise. Our vision for LLMWare is to unite specialized models, data pipelines, and enabling components into an open-source, unified framework, empowering enterprises to swiftly customize and deploy LLM-based automation at scale.”


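Conclusion: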
The launch of the DRAGON LLMs by Ai Bloks and LLMWare addresses critical needs in the enterprise market for advanced document processing. With fine-tuned models, a unified framework, and open-source flexibility, this development empowers businesses to automate complex workflows efficiently. It signifies a significant step forward in the adoption of AI-driven automation within enterprises, promising increased efficiency and accuracy in handling intricate business and legal documents.