TL;DR:
- Mithril Security secures €1.2 million in funding to develop BlindBox, a tool for safeguarding data privacy in the use of Large Language Models (LLMs).
- BlindBox aims to address concerns regarding data privacy and cybersecurity in the era of LLMs.
- GPT-4 and other LLMs require substantial amounts of data, including sensitive information, to achieve their full potential.
- BlindBox ensures that data transmitted to LLMs and their underlying code remain confidential and protected.
- BlindBox uses confidential computing and Trusted Execution Environments (TEEs) to guarantee data privacy.
- TEEs create highly isolated computing environments where data and applications can operate securely.
- Isolation, encryption, and attestation form the pillars of security within TEEs, protecting data from unauthorized access.
- Mithril Security aims to establish BlindBox as a benchmark solution for third-party system data security, similar to the impact of HTTPS on the web.
- BlindBox empowers companies to leverage the power of LLMs while maintaining data privacy, intellectual property, and regulatory compliance.
- Daniel Huynh, CEO of Mithril Security, highlights the challenges faced by industries dealing with sensitive data and the need for confidential AI tools.
- Mithril Security, along with Raphaël Millet and Mehdi Bessaa, developed BlindBox to enable secure AI utilization while preserving data confidentiality.
Main AI News:
The rise of Large Language Models (LLMs) like GPT-4, Midjourney, GitHub Copilot, and Whisper has brought forth a new era of possibilities in the tech industry. These powerful tools have quickly become indispensable assistants for mundane tasks and are finding their way into the core operations of companies. However, with the increasing accessibility of generative AI, data privacy and cybersecurity have emerged as significant concerns.
One of the key distinctions between GPT-4, the improved model behind ChatGPT, and a conventional search engine like Google lies in their data requirements. GPT-4, being a high-performing LLM, relies heavily on substantial amounts of data from companies and institutions to unlock its full potential. Depending on the use case, organizations may need to share sensitive information such as proprietary software code, classified government reports, or customer databases containing personal data.
This widespread sharing of data poses a serious cybersecurity threat to companies. The implications of mishandled or leaked information are immense, prompting the need for robust privacy protection solutions. Addressing this challenge head-on, Mithril Security has secured €1.2 million in funding to develop BlindBox, an open-source tool specifically designed to safeguard data privacy when utilizing LLMs.
The pre-seed funding round, led by prominent cybersecurity investors including CyberImpact, Polytechnique Venture, and ITFarm, has garnered support from renowned institutions such as the UC Berkeley incubator. Mithril Security’s CEO, Daniel Huynh, has relocated to California to bolster the company’s commercial presence and cater to a global customer base. As it embarks on this ambitious journey, the company is preparing for a second round of financing later this year to further its mission.
“We aspire to create a benchmark solution for third-party system data security akin to the impact HTTPS has had on the web. Our objective is to serve customers worldwide while establishing a strong foothold in North America and Europe,” said Raphaël Millet, COO of Mithril Security.
BlindBox focuses on ease of use and comprehensive protection for both users and the LLMs themselves. The tool ensures that neither the data transmitted to the LLM nor the LLM’s underlying code base is exposed to any unauthorized party. By leveraging cutting-edge technology, BlindBox empowers users to safeguard their confidential information while enabling companies to retain their intellectual property and adhere to regulatory requirements.
At the heart of BlindBox lies confidential computing, a cybersecurity technology that guarantees data confidentiality through runtime encryption, isolation, and integrity checks. By creating highly secure environments for running applications, confidential computing provides an ironclad defense against unauthorized access or data breaches.
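To make that workflow concrete, the sketch below shows what querying an LLM deployed inside such a protected environment could look like from the user's side: the client refuses to send anything until the environment reports that it is running the expected code, and the encrypted channel is assumed to terminate inside the protected environment. The class, method, endpoint, and measurement names are illustrative assumptions only, not the actual BlindBox API.

```python
# Illustrative sketch only: a client-side pattern for talking to an LLM that
# runs inside a confidential-computing environment. All names here are
# hypothetical and are NOT the real BlindBox API.

import ssl
import urllib.request


class EnclaveLLMClient:
    def __init__(self, endpoint: str, expected_measurement: str):
        self.endpoint = endpoint
        # Known-good hash of the code the secure environment should be running
        # (assumption: published by the service operator).
        self.expected_measurement = expected_measurement

    def query(self, prompt: str, reported_measurement: str) -> str:
        # Refuse to send any data unless the environment reports the expected code.
        if reported_measurement != self.expected_measurement:
            raise RuntimeError("environment verification failed; data not sent")
        req = urllib.request.Request(
            self.endpoint,
            data=prompt.encode("utf-8"),
            headers={"Content-Type": "text/plain"},
        )
        # In a real deployment the TLS channel would terminate inside the
        # protected environment, so the host never sees the plaintext prompt.
        with urllib.request.urlopen(req, context=ssl.create_default_context()) as resp:
            return resp.read().decode("utf-8")
```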
With BlindBox, companies can confidently embrace the immense power of LLMs without compromising data privacy. As the tech industry continues to evolve, Mithril Security’s innovative solution stands as a formidable ally in the battle against cyber threats, ensuring that the future of generative AI remains secure and trustworthy.
In the realm of data protection, existing solutions excel at encrypting data at rest and in transit. However, data in use, such as when an LLM processes it to generate predictions, is a different matter: at some point the data must exist in plaintext for the program to analyze it, which creates a vulnerability. This becomes particularly problematic when LLM applications are hosted in the cloud by third-party providers, requiring a high level of trust in how sensitive information is handled.
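Here is a minimal sketch of that gap in a conventional setup, using the real `cryptography` package: the prompt stays encrypted at rest and in transit, yet the serving side must decrypt it in memory before the model can read it. The `run_llm_inference` function is a stand-in for any model call, not a real API.

```python
# The "data in use" gap in an ordinary (non-confidential) deployment.
from cryptography.fernet import Fernet


def run_llm_inference(plaintext_prompt: str) -> str:
    # Placeholder for the actual model call; the point is that it needs plaintext.
    return f"summary of: {plaintext_prompt[:30]}..."


key = Fernet.generate_key()   # key shared between client and serving side
cipher = Fernet(key)

# Client side: the prompt is encrypted before it leaves the user's machine,
# so it is protected at rest and in transit.
prompt = "Q3 revenue by customer: ACME 1.2M EUR, ..."
ciphertext = cipher.encrypt(prompt.encode("utf-8"))

# Serving side (without confidential computing): the prompt has to be
# decrypted in memory before the model can read it -- this is the
# "data in use" exposure described above.
plaintext = cipher.decrypt(ciphertext).decode("utf-8")
print(run_llm_inference(plaintext))
```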
Addressing this complex issue, Mithril Security has leveraged the power of Confidential Computing’s Trusted Execution Environments (TEEs) to guarantee privacy within BlindBox. TEEs offer highly isolated computing environments where data and applications can operate securely. Data sent to these environments is decrypted exclusively within the TEE, ensuring that even if the host machine were compromised, hackers or malicious entities would be unable to access or decipher the data contained within.
The security of these enclaves rests on three fundamental pillars: isolation, encryption, and attestation. Isolation ensures that the TEE operates independently, separate from the host environment, preventing unauthorized access. Encryption safeguards the data within the TEE, rendering it unreadable to anyone without the necessary decryption keys. Finally, attestation provides a mechanism to verify the trustworthiness and integrity of the TEE, ensuring that the privacy guarantees remain intact.
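As a rough illustration of the attestation pillar, the sketch below checks an enclave's reported code measurement against a known-good value and verifies that the report is authentically signed before any data is released. The report format and the shared signing key are deliberate simplifications for the example; real TEEs rely on hardware-rooted, vendor-signed attestation reports rather than a shared secret.

```python
# Simplified attestation check: trust the enclave only if it proves it is
# running the expected code and the report itself is authentic.
import hashlib
import hmac

TRUSTED_SIGNING_KEY = b"vendor-provisioned-secret"  # assumption for the sketch
EXPECTED_MEASUREMENT = hashlib.sha256(b"llm-server-build-v1").hexdigest()


def verify_attestation(report: dict) -> bool:
    """Return True only if the report is authentic and matches the expected code."""
    expected_sig = hmac.new(
        TRUSTED_SIGNING_KEY, report["measurement"].encode(), hashlib.sha256
    ).hexdigest()
    authentic = hmac.compare_digest(expected_sig, report["signature"])
    running_expected_code = report["measurement"] == EXPECTED_MEASUREMENT
    return authentic and running_expected_code


# Example report as the enclave might present it.
measurement = hashlib.sha256(b"llm-server-build-v1").hexdigest()
report = {
    "measurement": measurement,
    "signature": hmac.new(TRUSTED_SIGNING_KEY, measurement.encode(), hashlib.sha256).hexdigest(),
}
assert verify_attestation(report)  # data is only sent once this check passes
```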
Daniel Huynh, CEO of Mithril Security, shares his journey into the realm of artificial intelligence and privacy technologies, remarking on the groundbreaking impact of AlphaGo’s victory in 2016. Huynh’s experience at Microsoft in 2020 further solidified his dedication to securing data access by AI systems. With the advent of ChatGPT and other generative AI tools, Huynh recognized the immense potential they held but also the challenges faced by sectors dealing with highly sensitive data, such as healthcare, finance, legal, and research. The risk of information leakage and stringent regulations often impede their adoption of such tools.
Thus, Mithril Security, founded by Huynh alongside partners Raphaël Millet and Mehdi Bessaa, embarked on a mission to develop a solution that would enable owners of sensitive data to harness the power of AI while ensuring the utmost confidentiality of their information. This led to the creation of BlindBox, a groundbreaking tool that employs TEEs and Confidential Computing to preserve data privacy and enable the secure utilization of AI technologies.
By adopting BlindBox and its robust security measures, organizations operating in highly regulated industries can leverage the capabilities of LLMs without compromising sensitive data or falling afoul of stringent privacy regulations. Mithril Security’s pioneering solution brings a new era of possibilities, empowering businesses to embrace AI while safeguarding the confidentiality of their invaluable data assets.
Conclusion:
The development of BlindBox by Mithril Security, with its focus on safeguarding data privacy in the use of Large Language Models (LLMs), represents a significant advancement for the market. The increasing accessibility of LLMs, coupled with concerns regarding data security and privacy, has created a demand for robust solutions like BlindBox. By leveraging Confidential Computing and Trusted Execution Environments (TEEs), BlindBox offers businesses in highly regulated industries a means to harness the power of LLMs while ensuring the confidentiality of sensitive data.
This innovation opens up new possibilities for organizations to embrace AI technologies with confidence, enhancing their operations while maintaining compliance with stringent privacy regulations. The emergence of BlindBox not only addresses the pressing cybersecurity challenges but also paves the way for the secure and trustworthy utilization of LLMs, driving the market forward into a more privacy-focused future.