WitnessAI is pioneering guardrails for generative AI models

  • WitnessAI, led by CEO Rick Caccia, addresses the risks of generative AI by providing control mechanisms.
  • The platform intercepts employee interactions with custom generative AI models, applying risk-mitigating policies.
  • Enterprises are increasingly interested in generative AI but lack preparedness for associated threats.
  • WitnessAI offers modules to prevent misuse of generative AI tools and safeguard sensitive data.
  • Privacy concerns arise due to data passing through WitnessAI’s platform, but the company emphasizes encryption and isolation.
  • Despite privacy dilemmas, WitnessAI sees strong interest, with 25 corporate users in the proof-of-concept phase.
  • With $27.5 million in funding, WitnessAI plans to expand its team and compete in the model compliance and governance solutions market.

Main AI News:

Generative AI has an inherent tendency to fabricate content, which can introduce biases and, at times, produce outright toxic output. But can it be reined in to ensure safety?

Rick Caccia, the CEO of WitnessAI, holds firm in his belief that it can.

“Securing AI models is a genuine challenge, and one that is especially captivating for AI researchers. However, it diverges from the conventional approach to security,” Caccia, formerly SVP of marketing at Palo Alto Networks, conveyed in an interview with TechCrunch. “I liken it to a sports car: possessing a more potent engine — in this case, the model — doesn’t suffice without effective brakes and steering. The controls are as crucial for swift navigation as the engine itself.”

There is an evident demand for such controls within the enterprise sector. While enterprises are cautiously optimistic about the potential of generative AI to enhance productivity, concerns linger regarding its constraints.

According to an IBM poll, 51% of CEOs are actively recruiting for generative AI roles that didn’t exist until this year. Yet only 9% of companies feel adequately equipped to handle the threats arising from their use of generative AI, including threats to privacy and intellectual property, according to a survey by Riskonnect.

WitnessAI’s platform sits between employees and the bespoke generative AI models their employers deploy: not models gated behind an API, such as OpenAI’s GPT-4, but self-hosted models more akin to Meta’s Llama 3. It then applies risk-mitigation protocols and safeguards to those interactions.

“One of the promises of enterprise AI is the democratization of enterprise data, empowering employees to perform their duties more effectively. However, the excessive exposure of sensitive data, or its inadvertent leakage or theft, poses a significant challenge,” Caccia remarked.

WitnessAI offers access to multiple modules, each addressing distinct forms of generative AI risk. One module enables organizations to establish rules preventing staff from certain teams from misusing generative AI-powered tools. Another module redacts proprietary and sensitive information from prompts sent to models and implements measures to shield models from attacks that might lead them astray.
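To make the redaction idea concrete, here is a minimal sketch of how such a policy layer could work. This is a hypothetical illustration, not WitnessAI’s actual implementation: the pattern list, team policies, and function names are all invented for the example, and a production system would rely on far richer detectors than simple regular expressions.

```python
import re

# Hypothetical redaction rules; a real deployment would use richer
# detectors (NER models, classifiers, customer-defined dictionaries).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

# Hypothetical per-team policy: which AI tools each team may use.
TEAM_POLICIES = {
    "engineering": {"code-assistant", "chat"},
    "finance": {"chat"},
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

def authorize(team: str, tool: str) -> bool:
    """Return True if the team's policy permits this tool."""
    return tool in TEAM_POLICIES.get(team, set())

def gateway(team: str, tool: str, prompt: str) -> str:
    """Intercept a prompt: enforce policy, then redact before forwarding."""
    if not authorize(team, tool):
        raise PermissionError(f"{team} is not permitted to use {tool}")
    return redact(prompt)
```

In this sketch, a prompt containing an email address or a Social Security number would reach the model only with those spans replaced by placeholders, while a request from a team without access to a given tool would be blocked outright.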

“We believe in addressing enterprise challenges by framing them in a coherent context, such as the safe adoption of AI, and then providing tailored solutions,” Caccia stated. “The Chief Information Security Officer aims to safeguard the business, and WitnessAI assists by ensuring data protection, thwarting prompt injection, and enforcing identity-based policies. Similarly, the Chief Privacy Officer seeks compliance with existing and forthcoming regulations, and we provide them with visibility and reporting tools to manage activity and mitigate risks.”

However, from a privacy standpoint, WitnessAI presents a nuanced dilemma. All data traverses its platform before reaching the model, a fact the company is upfront about. Yet that very process could introduce privacy risks of its own.

Responding to queries about WitnessAI’s privacy policy, Caccia emphasized the platform’s isolation and encryption, designed to prevent the exposure of customer secrets.

“We’ve developed a millisecond-latency platform with built-in regulatory separation — a unique, isolated design aimed at safeguarding enterprise AI activities in a manner fundamentally distinct from conventional multi-tenant software-as-a-service models,” he elucidated. “We create individual instances of our platform for each customer, encrypted with their keys. Their AI activity data remains isolated; we have no visibility into it.”

Perhaps this assurance will assuage customers’ concerns. Yet, for employees apprehensive about the surveillance implications of WitnessAI’s platform, the matter is more complex.

Surveys indicate a general aversion to workplace monitoring, irrespective of the rationale, with many believing it adversely affects company morale. Nearly a third of respondents to a Forbes survey suggested they might contemplate leaving their jobs if subjected to online activity and communication monitoring by their employer.

Nevertheless, Caccia asserts a robust interest in WitnessAI’s platform, evidenced by a pipeline of 25 early corporate adopters in its proof-of-concept phase. (General availability is slated for Q3.) Additionally, WitnessAI has secured $27.5 million in funding from Ballistic Ventures, which incubated the company, and GV, Google’s corporate venture arm.

The funding will be allocated toward expanding WitnessAI’s team from 18 to 40 members by year-end. Such growth is imperative to fend off competition in the emerging domain of model compliance and governance solutions, not only from tech behemoths like AWS, Google, and Salesforce but also from startups such as CalypsoAI.

“We’ve devised our strategy to sustain operations well into 2026, even in the absence of sales. However, we’re already witnessing almost twenty times the pipeline necessary to meet this year’s sales targets,” Caccia affirmed. “This marks our inaugural funding round and public launch, yet secure AI enablement and utilization represent a nascent field, and all our features are evolving to meet the demands of this burgeoning market.”

Conclusion:

WitnessAI’s approach signifies a pivotal development in addressing the safety concerns surrounding generative AI in enterprise settings. As more companies recognize the potential of generative AI while grappling with its risks, WitnessAI’s solutions fill a crucial gap. With robust interest and substantial funding, WitnessAI is poised to lead the market in providing secure AI enablement and governance.
