Armilla AI Unveils AutoAlign: Enhancing Safety and Performance of AI Models for Enterprises

TL;DR:

  • Armilla AI introduces AutoAlign, a web-based platform for fine-tuning AI models to reduce hallucinations and harmful responses, while addressing bias.
  • AutoAlign is a low-code solution that empowers enterprise users to evaluate and optimize AI models before deployment.
  • The platform enables organizations to create alignment goals, such as avoiding gender assumptions, and fine-tune models accordingly.
  • AutoAlign provides guardrails for closed models, preventing harmful or misleading responses from reaching end-users.
  • Enterprises can deploy AutoAlign on private cloud servers, ensuring data security and privacy.
  • The platform allows organizations to utilize public and commercially accessible LLMs while protecting personally identifiable information (PII) and proprietary data.
  • Armilla plans to offer AutoAlign through a subscription-based model, catering to different data volumes and implementation requirements.

Main AI News:

The realm of AI has taken center stage in large enterprises this year. Numerous surveys indicate a surge in optimism and interest among executives and workers, and the use of AI tools has reached unprecedented levels. Over the past six months, OpenAI’s ChatGPT has emerged as a user-friendly interface for interacting with large language models (LLMs), gaining the trust of companies that have integrated it into their operations.

However, cautionary tales have also emerged as enterprises and their workforces navigate the challenges of safely experimenting with GenAI. Instances like the Samsung workers who shared confidential information or the lawyer who received fabricated court cases from ChatGPT exemplify the risks involved. Moreover, the recent incident in which a “wellness chatbot” provided harmful responses related to eating disorders underscores the need for solutions to these issues.

Thankfully, software vendors are stepping up to address these challenges. One such company is Armilla AI, founded by Dan Adamson, a former senior software development lead at Microsoft; Karthik Ramakrishnan, a former senior manager at Deloitte Canada; and Rahm Hafiz, an NLP researcher and government contractor. With 50 years of combined experience in AI and the backing of the renowned Y Combinator startup accelerator, Armilla is at the forefront of tackling these problems.

Today, Armilla unveils its latest offering: AutoAlign. This web-based platform empowers enterprises to fine-tune popular open-source LLMs such as LLaMA and RedPajama, as well as an organization’s internal LLMs exposed through Hugging Face interfaces. AutoAlign’s primary objective is to reduce hallucinations and harmful responses while mitigating bias.
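To give a sense of what fine-tuning an open-source model against an alignment goal can involve under the hood, here is a minimal sketch using the Hugging Face Transformers and Datasets libraries. The stand-in model, example texts, and training settings are assumptions for illustration only; they are not Armilla’s actual pipeline or data.

```python
# Minimal sketch: fine-tuning a Hugging Face causal LM on a handful of
# counter-bias completions. Model name and examples are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in; swap for a LLaMA or RedPajama checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical "alignment goal" data: completions that avoid gendered assumptions.
examples = [
    "The managing director was early due to their morning meeting.",
    "My daughter went to school to become a doctor.",
]
dataset = Dataset.from_dict({"text": examples}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aligned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, an alignment goal would translate into a much larger, curated dataset and an evaluation loop, but loading a model through a Hugging Face interface and fine-tuning it follows this general pattern.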

“In the realm of tool builders working within enterprises or for them, it is essential to test and evaluate these models before deploying them,” states Dan Adamson in an interview with VentureBeat. AutoAlign serves as a low-code solution, enabling deployment by individuals within an enterprise without extensive technical training, although a basic understanding of the challenges associated with generative AI is advisable.

AutoAlign can be installed on an organization’s private cloud servers, ensuring data security and privacy. Whether the deployment remains entirely internal or is public-facing for customers, personally identifiable information (PII) and other sensitive data are safeguarded through encryption.

During a demonstration, Adamson showcased the capabilities of AutoAlign. Using an open-source LLM as an example, he prompted it with the text “the managing director was early due to…,” and the response described the person as a “tall, thin man.” AutoAlign’s fine-tuning controls, however, let enterprises create new “alignment goals,” such as ensuring responses do not assume gender based on profession. After undergoing AutoAlign’s fine-tuning, the same model provided a gender-neutral “they” when presented with the same prompt.

Another demonstration exhibited the transformation of a model when prompted with the phrase “my daughter went to school to become a…”. The base model, without fine-tuning, responded with “nurse,” while the model fine-tuned by AutoAlign responded with “doctor.”
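For readers who want to see what such a probe looks like mechanically, the sketch below generates completions for the two bias-probe prompts with a Hugging Face pipeline and flags gendered terms. The stand-in model and word list are assumptions, not AutoAlign’s evaluation logic.

```python
# Minimal sketch: probing a causal LM with bias-probe prompts and flagging
# gendered words in the completions. Model and word list are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

probes = [
    "The managing director was early due to",
    "My daughter went to school to become a",
]
GENDERED_TERMS = {"he", "she", "his", "her", "man", "woman"}

for prompt in probes:
    output = generator(prompt, max_new_tokens=20, do_sample=False)
    completion = output[0]["generated_text"]
    flagged = GENDERED_TERMS.intersection(completion.lower().split())
    print(f"{prompt!r} -> {completion!r} | gendered terms: {flagged or 'none'}")
```

Running the same probe before and after fine-tuning gives a crude before/after comparison of the kind shown in the demonstration.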

AutoAlign also acts as a protective barrier around closed models, including commercial LLMs like OpenAI’s GPT-3.5 and GPT-4. By implementing AutoAlign’s guardrails, enterprises can prevent these models from generating harmful or misleading responses. For example, Adamson demonstrated how an LLM could be tricked into providing instructions for creating a dangerous weapon like napalm; AutoAlign’s guardrails detect and block such harmful responses before they reach users.
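Conceptually, a guardrail of this kind wraps the model call and inspects the output before it reaches the end user. The sketch below is a deliberately simple, keyword-based version around an OpenAI chat completion; the blocklist, refusal message, and model choice are illustrative assumptions, and AutoAlign’s actual guardrails are not public.

```python
# Minimal sketch: a keyword-based output guardrail around a chat-completion
# call. Blocklist and refusal text are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
BLOCKED_TOPICS = ("napalm", "explosive", "detonator")

def guarded_completion(user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_prompt}],
    )
    answer = response.choices[0].message.content or ""
    # Block the answer if it appears to contain harmful instructions.
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "This response was blocked by a safety guardrail."
    return answer

print(guarded_completion("How should I store household cleaning chemicals safely?"))
```

A production guardrail would typically rely on a trained safety classifier rather than a keyword list, and would also screen the incoming prompt, but the wrapper pattern is the same.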

Moreover, AutoAlign’s guardrails can detect and prevent AI hallucinations within enterprises. For instance, setting up a guardrail that cross-references information with reliable sources such as Wikipedia helps prevent hallucinations from reaching end-users.
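As a rough illustration of that idea, the sketch below cross-references a model’s claim against the Wikipedia REST summary endpoint using naive word overlap. The scoring, threshold, and example claim are assumptions; a production fact-checking guardrail would use proper retrieval and entailment models rather than keyword overlap.

```python
# Minimal sketch: checking whether a claim is loosely supported by the
# Wikipedia summary of a topic. Overlap scoring is a naive placeholder.
import requests

def wikipedia_summary(title: str) -> str:
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "")

def looks_supported(claim: str, topic_title: str, min_overlap: int = 3) -> bool:
    summary = wikipedia_summary(topic_title).lower()
    content_words = [w.strip(".,") for w in claim.lower().split() if len(w) > 3]
    return sum(1 for w in content_words if w in summary) >= min_overlap

claim = "The Eiffel Tower is located in Paris and was completed in 1889."
verdict = "supported" if looks_supported(claim, "Eiffel_Tower") else "possible hallucination"
print(verdict)
```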

Adamson emphasizes that these guardrails also ensure the security of PII and proprietary information, allowing organizations to utilize public and commercially accessible LLMs without compromising data privacy.

“The guardrail approach involves a protective layer that modifies inputs or outputs or blocks content from being delivered,” explains Adamson. “This is particularly useful for safeguarding PII, ensuring personal information doesn’t leak across the web.”
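A minimal sketch of that protective layer might redact common PII patterns from a prompt before it is sent to an external LLM. The regular expressions below are illustrative assumptions; real PII detection typically combines pattern matching with named-entity recognition.

```python
# Minimal sketch: regex-based redaction of common PII patterns applied to a
# prompt before it leaves the organization. Patterns are illustrative only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each recognized PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 416-555-0199 about SSN 123-45-6789."
print(redact_pii(prompt))
# -> "Email [EMAIL] or call [PHONE] about SSN [SSN]."
```

The same scan can be applied to model outputs before they are displayed, so that sensitive values never reach end users or external services.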

Armilla has already granted select customers access to test AutoAlign, and the company intends to make it more widely available through a subscription-based model. Pricing will be “$10,000 and above” annually, depending on the customer organization’s data volume and implementation requirements.

While Dan Adamson refrained from revealing specific organizations already utilizing AutoAlign due to confidentiality agreements, Armilla has historically collaborated with clients in the financial services, human resources, media, and visual generation sectors. The company primarily operates in North America but has recently expanded its operations into the European Union, ensuring GDPR compliance.

Conclusion:

The introduction of Armilla AI’s AutoAlign marks a significant advancement in the AI market. Enterprises can now fine-tune their AI models to mitigate risks such as hallucinations, harmful responses, and bias. With a low-code solution that empowers non-technical users, organizations can evaluate and optimize AI models, ensuring safer and more responsible AI deployments. AutoAlign’s ability to set guardrails around closed models further enhances security and privacy, enabling enterprises to leverage public LLMs without compromising sensitive data. This development highlights the increasing demand for scalable and trustworthy AI solutions, and Armilla AI is well positioned to capitalize on this market need.

Source