How confidential computing and zero-trust data clean rooms alleviate AI concerns

TL;DR:

  • Opaque Systems enhances its platform with zero-trust data clean rooms (DCRs) optimized for Microsoft Azure confidential computing.
  • Confidential computing ensures data protection by performing computations within a hardware-based Trusted Execution Environment (TEE).
  • Data clean rooms allow secure collaboration and analysis of shared data while adhering to data privacy regulations.
  • Opaque’s platform enables secure multi-party analytics on fully encrypted confidential data, preserving its confidentiality.
  • Confidential LLM Inference allows organizations to run LLMs while ensuring data privacy and protection.
  • The Confidential Computing Summit, hosted by Opaque Systems and the Confidential Computing Consortium, aims to educate and discuss use cases for confidential computing.
  • Confidential computing is an evolving technology with a projected market size of nearly $60 billion by 2028.

Main AI News:

The rapid advancement of large language models (LLMs) and generative AI has positioned them as the future of enterprise operations. It is widely anticipated that every organization will adopt these transformative technologies to some degree. However, despite their potential benefits, many organizations still harbor reservations due to the inherent risks associated with mishandling sensitive data.

Opaque Systems, a leading confidential computing company, is addressing these concerns head-on. Today, the company announced its platform’s significant enhancement through the integration of zero-trust data clean rooms (DCRs) specifically optimized for Microsoft Azure confidential computing. By leveraging these new functionalities, distinct organizations can confidently collaborate on joint analysis of raw, sensitive data while ensuring confidentiality and privacy. Furthermore, organizations can harness the power of LLMs without exposing their proprietary information.

To provide detailed insights into these groundbreaking capabilities, Opaque Systems will be hosting the inaugural Confidential Computing Summit next week in San Francisco, in partnership with the Confidential Computing Consortium (CCC), a notable Linux Foundation project. This summit serves as a platform for Opaque Systems to showcase its expertise and shed light on the future of secure data collaboration.

Rishabh Poddar, CEO and co-founder of Opaque Systems, emphasized the immense potential for LLM adoption if organizations can leverage this technology without compromising data security. He believes that confidential computing holds the key to achieving this delicate balance.

Confidential computing, at its core, ensures that computations take place within a hardware-based Trusted Execution Environment (TEE), which enforces strict safeguards against unauthorized access or tampering while data is being processed. Going beyond the traditional practice of encrypting data only at rest or in transit, confidential computing introduces a robust hardware black box that keeps information secure during computation itself. As Poddar put it, “Data remains protected and encrypted at runtime, too.”
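To make the runtime-protection idea concrete, here is a toy Python simulation. It is not real cryptography and not a real enclave (the XOR cipher and `SimulatedEnclave` class are invented stand-ins), but it mirrors the flow a TEE enforces: plaintext exists only inside the protected boundary, and data stays encrypted whenever it leaves it.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher for illustration only -- not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class SimulatedEnclave:
    """Illustrative stand-in for a hardware TEE: plaintext exists only
    inside this object's methods, mimicking how an enclave keeps data
    encrypted outside its protected memory."""

    def __init__(self):
        # In a real TEE, this key is provisioned only after the client
        # verifies the enclave's identity via remote attestation.
        self._key = secrets.token_bytes(32)

    def provision_key(self) -> bytes:
        # In practice, delivered over an attested secure channel.
        return self._key

    def compute(self, ciphertext: bytes) -> bytes:
        plaintext = xor_bytes(ciphertext, self._key)  # decrypt only inside the enclave
        result = plaintext.upper()                    # the actual computation
        return xor_bytes(result, self._key)           # re-encrypt before leaving

# Client side: data is encrypted before it ever reaches the host.
enclave = SimulatedEnclave()
key = enclave.provision_key()
encrypted_input = xor_bytes(b"sensitive record", key)
encrypted_output = enclave.compute(encrypted_input)
print(xor_bytes(encrypted_output, key))  # b'SENSITIVE RECORD'
```

The host machine only ever handles `encrypted_input` and `encrypted_output`; the plaintext is visible solely within the enclave's `compute` method, which is the guarantee a hardware TEE provides.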

Additionally, data clean rooms play a vital role in enabling different entities to share data for collaborative analysis under carefully defined regulations. Personally identifiable information (PII) undergoes compliant anonymization, allowing organizations to pool their resources and insights while complying with data protection laws and regulations. Consequently, a myriad of use cases emerges. For example, advertisers and marketers can join forces to measure the effectiveness of ad campaigns or personalize consumer targeting. Financial institutions can collaborate to detect fraudulent activities, and insurers can work together to identify duplicate claims. Importantly, each party gains access only to the data they directly own, ensuring privacy and confidentiality.
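One primitive behind this kind of clean-room matching is a private set intersection. The sketch below uses a naive salted hash, which is weaker than the enclave-based or multi-party protocols a real clean room would use, and all values (the salt, the email addresses) are invented for illustration. It shows how two parties can learn only the size of their audience overlap, never each other's raw records.

```python
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    # Keyed hash so raw PII never leaves either party.
    return hashlib.sha256((salt + value).encode()).hexdigest()

# Hypothetical shared salt, agreed inside the clean room.
SALT = "shared-clean-room-salt"

advertiser = {pseudonymize(e, SALT) for e in ["ann@example.com", "bob@example.com"]}
publisher = {pseudonymize(e, SALT) for e in ["bob@example.com", "carol@example.com"]}

# Each party learns only the size of the overlap, not the raw emails.
overlap = len(advertiser & publisher)
print(overlap)  # 1
```

The design point is that only derived, aggregate results (here, a single count) cross the boundary between parties, matching the clean-room guarantee that each party accesses only the data it directly owns.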

Opaque Systems’ zero-trust DCRs empower customers to conduct secure, multi-party analytics on fully encrypted confidential data within TEEs. By swiftly creating clean rooms and performing collaborative analytics and AI, organizations can extract valuable insights from their data without jeopardizing its confidentiality. This approach guarantees that only authorized parties can access and benefit from the data and its derived insights.

Confidential LLM Inference, an integral feature of Opaque’s platform, enables organizations to run LLMs within the secure environment provided by Opaque Systems. This ensures that queries and data remain private and protected, shielded from exposure to the model, the service provider, or any unauthorized entity. Poddar underscored the significance of this advancement: “This allows organizations to start putting confidential data to use.”

Multiple layers of data security, incorporating secure hardware enclaves and cryptographic fortification, have been seamlessly integrated into the platform to safeguard against potential cyber-attacks or breaches. Furthermore, a comprehensive policy framework governs permissions and access, facilitating the analysis of confidential and non-confidential data in tandem.
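At its simplest, a policy framework governing permissions and access reduces to a deny-by-default allow-list check. This minimal Python sketch (the party names and column labels are invented for illustration, not drawn from Opaque's actual API) shows the basic pattern such a framework enforces when confidential and non-confidential data are analyzed together.

```python
# Hypothetical policy table: which columns each party may read.
POLICY = {
    "bank_a": {"own_transactions", "shared_fraud_flags"},
    "bank_b": {"own_transactions", "shared_fraud_flags"},
}

def authorize(party: str, requested: set[str]) -> bool:
    # Deny by default: grant access only if every requested column
    # appears in the party's policy entry.
    return requested <= POLICY.get(party, set())

print(authorize("bank_a", {"shared_fraud_flags"}))       # True
print(authorize("bank_a", {"bank_b_raw_transactions"}))  # False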

Opaque Systems empowers organizations to individually encrypt their data, securely upload it, combine datasets, gain valuable insights, and train models, all while preserving privacy and confidentiality. Poddar asserted, “With Opaque, organizations can derive benefits from these advanced technologies without revealing their data to anyone.”

Despite recent concerns organizations have voiced about generative AI and LLM usage, Opaque Systems aims to dispel those fears by putting data privacy first. Many organizations want to harness the power of LLMs and generative AI while still protecting their sensitive information, yet the questions and prompts they submit to an LLM are often based on proprietary data. The same tension arises during training and fine-tuning, when organizations must feed models data, some of it public and some of it proprietary and sensitive.

Poddar warned that using an LLM can expose data to risk. He stated, “I don’t want to expose this to the LLM service provider. Confidential computing helps us realize the full power of generative AI and LLMs. This is how we can still benefit from these technologies without revealing data.”

Confidential computing is an emerging technology that is rapidly evolving, yet it remains unfamiliar to many organizations. Even so, analysts project that the confidential computing market will reach nearly $60 billion by 2028, a compound annual growth rate of 62.1%.

To bridge the knowledge gap surrounding confidential computing, the upcoming Confidential Computing Summit aims to give technologists, users, and academics a unique opportunity to explore cutting-edge advances in data privacy and the secure handling of data in the cloud. The event will feature influential keynote speakers and sessions led by experts from Microsoft, Intel, VMware, Google Cloud, Fortanix, Meta, and other industry leaders. Poddar says the summit’s objective is to educate and advocate for the technology while fostering discussion of its potential use cases.

Raluca Ada Popa, President and co-founder of Opaque Systems, will deliver the summit’s opening keynote. As a co-founder and co-director of the RISELab and SkyLab at UC Berkeley, as well as a key contributor to the open-source LLM Vicuna, Popa will discuss the immense opportunities provided by confidential computing and how it can safeguard user privacy when interacting with LLMs. She highlighted that confidential computing unlocks the tremendous potential of LLMs, allowing them to be trained on confidential or proprietary data.

Conclusion:

The integration of zero-trust data clean rooms and confidential computing addresses concerns about data privacy in AI adoption. Opaque Systems’ platform enables secure collaboration and analysis of confidential data while ensuring compliance with data protection regulations. This advancement opens up opportunities for organizations to leverage the power of large language models (LLMs) and generative AI without compromising the privacy and security of their sensitive information. The Confidential Computing Summit serves as a platform to educate and foster discussions on the transformative potential of confidential computing in safeguarding data privacy. With the rapid growth and adoption of confidential computing, the market is poised for significant expansion, creating new avenues for businesses to enhance their data-driven operations while maintaining trust and confidentiality.

Source