Securiti Unveils LLM Firewalls to Safeguard genAI Applications

  • Securiti launches LLM Firewalls to protect genAI systems from emerging threats.
  • These firewalls are tailored for large language models and aim to address the conversational nature of genAI.
  • They offer in-line checks to detect and prevent external attacks, including prompt injection and data poisoning.
  • Securiti’s distributed LLM firewall monitors user prompts, LLM responses, and data retrievals in real-time.
  • The solution helps enterprises meet compliance goals and covers OWASP’s top 10 LLM vulnerabilities.

Main AI News:

In response to the evolving threats surrounding generative artificial intelligence (gen AI) systems and applications, cybersecurity leader Securiti has introduced a groundbreaking firewall solution tailored for large language models (LLMs), known as Securiti LLM Firewalls.

As the landscape of future applications leans towards increased conversational capabilities, there arises a pressing need for robust in-line security measures to identify and thwart potential external attacks, asserts the company.

Rehan Jalil, CEO of Securiti, emphasizes, “The conversational dynamics inherent in genAI present novel avenues for threats and attack vectors, and Securiti LLM Firewalls are purpose-built to counteract them. Interfaces, whether internal or public-facing prompts, represent a novel ingress point for enterprise data.

Securiti’s initiative echoes a growing industry awareness of the nascent risks posed to enterprise genAI applications. In March, Cloudflare introduced analogous features with its Firewall for AI, acknowledging similar concerns.

Jalil elaborates on the unique advantage of Securiti LLM Firewalls, stating, “Our solution inherently comprehends the context it safeguards. By understanding the enterprise data context and the specific use case for which the genAI system is deployed, our firewalls can scrutinize prompts for relevance, topicality, and potential jailbreak attempts.”

Fortifying against an array of genAI threats, Securiti’s distributed LLM firewall is strategically architected for deployment across various stages of genAI application workflows. It meticulously monitors user prompts, LLM responses, and data retrievals from vector databases, promptly detecting and intercepting a spectrum of LLM-based attacks in real-time. These include prompt injection, insecure output handling, sensitive data disclosure, and training data poisoning.

Prompt injection, the prevalent form of LLM attacks, involves circumventing filters or coercing the LLM to disregard prior instructions, leading to unintended actions. Conversely, training data poisoning manipulates LLM training data to introduce vulnerabilities, backdoors, and biases.

Jalil underscores the proactive approach of the firewall, stating, “Our firewall vigilantly scrutinizes user prompts to preemptively identify and mitigate potential malicious exploits. Additionally, it thwarts attempts by users to maliciously override LLM behavior and ensures the redaction of sensitive data, if any, from the prompts, safeguarding against unauthorized access to protected information.

Moreover, the offering incorporates a firewall mechanism that governs and scrutinizes data retrieved during the retrieval augmented generation (RAG) process. This process, which draws from an authoritative knowledge base beyond the model’s training data sources, is subjected to rigorous checks to mitigate risks of data poisoning or indirect prompt injection, adds Jalil.

John Grady, principal analyst for Enterprise Strategy Group (ESG), underscores the urgency of addressing genAI threats, noting, “The potential ramifications of these threats are significant. We’ve witnessed instances where genAI applications inadvertently divulge sensitive information. As long as valuable data underpins these applications, attackers will seek to exploit vulnerabilities.

This initiative, along with similar endeavors, fills a critical void and is poised to gain prominence as genAI adoption proliferates, Grady emphasizes.

In addition to fortifying security, Securiti LLM Firewalls serve as a cornerstone for enterprises striving to achieve compliance objectives, whether dictated by legislative mandates (such as the EU AI Act) or internal policies aligned with frameworks like the NIST AI Risk Management framework and Gartner’s AI Trust, Risk, and Security Management (TRiSM) framework.

Securiti anticipates that its firewall offering, complemented by existing capabilities in its Data Command Center, will comprehensively address OWASP’s list of the 10 most critical large language model vulnerabilities. This extended protection encompasses additional LLM threats, including jailbreaks, authentication phishing, and the use of offensive language.

The Securiti LLM Firewalls are available now as part of the company’s overarching “AI security and governance” solution, introduced earlier this year.

Conclusion:

Securiti’s introduction of LLM Firewalls signifies a proactive response to the evolving threats surrounding generative artificial intelligence (genAI). By addressing the unique challenges posed by the conversational nature of genAI applications, Securiti is positioning itself as a leader in AI security solutions. This initiative underscores the increasing importance of robust security measures in the burgeoning genAI market, offering enterprises peace of mind and regulatory compliance in an era of heightened cybersecurity risks.

Source