Protect AI Acquires huntr and Introduces Innovative AI/ML Bug Bounty Platform

TL;DR:

  • Protect AI acquires huntr, introducing an innovative AI/ML bug bounty platform.
  • huntr focuses on securing AI/ML open-source software (OSS) and foundational models.
  • Protect AI led the acquisition to address the growing demand for AI/ML threat research.
  • Over 80% of code in AI, BI, and ML codebases relies on open-source components, many of which contain high-risk vulnerabilities.
  • A critical lack of AI/ML security expertise necessitates comprehensive research.
  • huntr provides a comprehensive environment for security researchers with lucrative bounties.
  • huntr bridges the AI/ML security research expertise gap within Protect AI’s MLSecOps community.
  • Market demand for AI security solutions is evident from Protect AI’s sponsorships at DEF CON and Black Hat USA.

Main AI News:

In a significant stride towards enhancing the security of artificial intelligence (AI) and machine learning (ML) technologies, Protect AI, a leader in AI and ML security solutions, has unveiled huntr – an AI/ML bug bounty platform. The platform is the first of its kind, focusing exclusively on securing AI/ML open-source software (OSS), foundational models, and ML systems. Underscoring its commitment to the space, Protect AI is also a silver sponsor at Black Hat USA, where it can be found at Booth 2610.

The huntr AI/ML bug bounty platform is the culmination of Protect AI’s acquisition of huntr.dev, an endeavor founded in 2020 by Adam Nygate, the founder of 418Sec. By 2022, huntr.dev had grown into the world’s fifth-largest CVE Numbering Authority (CNA) for Common Vulnerabilities and Exposures (CVEs). Backed by a network of more than 10,000 security researchers specializing in open-source software (OSS), huntr.dev has been at the forefront of OSS security research and development. The acquisition allows Protect AI to apply that expertise to the growing demand for AI/ML threat research.

As AI’s influence continues to spread, roughly 80% of the code in Big Data, AI, business intelligence (BI), and ML codebases relies on open-source components, according to Synopsys, and more than 40% of those codebases contain high-risk vulnerabilities. A concrete illustration of the stakes is Protect AI’s discovery of a critical Local File Inclusion/Remote File Inclusion (LFI/RFI) vulnerability in MLflow, a widely used system for managing the machine learning lifecycle. The flaw could allow malicious actors to gain access to cloud accounts, jeopardizing proprietary data and exposing the intellectual property embodied in ML models.
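To make that vulnerability class concrete, the following is a minimal, hypothetical sketch of how a local file inclusion flaw can arise in an ML artifact-serving endpoint and how it can be mitigated. It is not MLflow’s actual code; the Flask routes, parameter names, and artifact root are illustrative assumptions.

```python
# Hypothetical illustration of the local file inclusion (LFI) vulnerability class;
# not MLflow's actual code. Routes, parameter names, and paths are assumptions.
from pathlib import Path

from flask import Flask, abort, request, send_file

app = Flask(__name__)
ARTIFACT_ROOT = Path("/srv/ml-artifacts").resolve()  # assumed artifact directory


@app.route("/artifacts/unsafe")
def get_artifact_unsafe():
    # Vulnerable: the caller controls the whole path, so a request such as
    # ?path=../../home/user/.aws/credentials can read arbitrary server files,
    # including cloud credentials that unlock model storage.
    return send_file(request.args["path"])


@app.route("/artifacts/safe")
def get_artifact_safe():
    # Hardened: resolve the requested path and refuse anything that escapes
    # the artifact root, which blocks "../" traversal and absolute paths.
    requested = (ARTIFACT_ROOT / request.args.get("path", "")).resolve()
    if ARTIFACT_ROOT not in requested.parents or not requested.is_file():
        abort(403)
    return send_file(requested)


if __name__ == "__main__":
    app.run(port=5000)
```

The sketch is only meant to show why unvalidated, user-supplied file paths in ML tooling can expose credentials and model artifacts far beyond the intended artifact store.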

Compounding the issue is a scarcity of AI/ML skills and expertise among security researchers, which hinders the identification of these AI security threats. That gap makes comprehensive AI/ML security research, focused on uncovering potential vulnerabilities, all the more important for protecting sensitive data and preserving the integrity of AI applications within the enterprise.

Ian Swanson, the Chief Executive Officer of Protect AI, emphasizes, “The expansive AI and machine learning supply chain stands as a principal realm of risk for enterprises harnessing AI capabilities. Yet, the crossroads of security and AI persistently lack adequate investment. Through the advent of huntr, we are poised to cultivate a dynamic community of security researchers, addressing the demand for uncovering vulnerabilities inherent in these intricate models and systems.”

Adam Nygate, the visionary founder and CEO of huntr.dev, echoes this sentiment, expressing, “Under the aegis of Protect AI, huntr’s mission is now steadfastly centered around the revelation and mitigation of OSS AI/ML vulnerabilities. Our endeavor is to foster trust, ensure data security, and champion the responsible deployment of AI/ML. We are ecstatic to extend our rewards program to researchers and hackers within our vibrant community and beyond.”

The Novel huntr Platform

huntr offers security researchers an all-encompassing AI/ML bug-hunting environment: easy navigation, precision-targeted bug bounties, streamlined reporting, monthly contests, collaboration tools, vulnerability assessments, and the most lucrative AI/ML bounties available to the hacking community. The inaugural contest focuses on Hugging Face Transformers, with a $50,000 reward to recognize exemplary contributions.

Notably, huntr serves as a bridge across the gap in AI/ML security research expertise. It integrates with Protect AI’s Machine Learning Security Operations (MLSecOps) community, allowing security researchers to engage actively with a bug bounty platform dedicated to open-source AI/ML. This participation helps researchers build new AI/ML security skills while opening up fresh professional opportunities and well-deserved financial rewards.

Phil Wylie, a distinguished Pentester, affirms, “AI and ML are underpinned by open source software, yet security research in these domains often languishes. The introduction of huntr for AI/ML security research marks an exhilarating juncture, uniting and empowering hackers to safeguard the future landscape of AI and ML from nascent threats.”

Chloé Messdaghi, the venerable Head of Threat Research at Protect AI, underscores the underlying ethos of the platform, articulating, “Our ethos is grounded in transparency and equitable compensation. Our mission revolves around cutting through the cacophony and furnishing huntrs with a platform that not only acknowledges their contributions but also rewards their expertise. It is a platform that nurtures a vibrant tapestry of collaboration and knowledge exchange.”

Marking another milestone, Protect AI takes center stage as a Skynet sponsor at DEF CON’s AI Village, where Ms. Messdaghi will lead a panel discussion titled “Unveiling the Secrets: Breaking into AI/ML Security Bug Bounty Hunting” on August 11 at 4:00 pm. This is complemented by Protect AI’s participation as a silver sponsor at Black Hat USA. Both events give Protect AI’s threat research team the opportunity to connect with the global security research community. To engage further and join the AI/ML huntr community, visit huntr.mlsecops.com. Details on Protect AI’s sessions at Black Hat and DEF CON are available via the company’s LinkedIn and Twitter accounts.

Conclusion:

Protect AI’s acquisition of huntr and the launch of the AI/ML bug bounty platform mark a pivotal step towards addressing the pressing need for comprehensive AI/ML security research. With a rapidly growing dependency on open-source components in AI technologies, vulnerabilities are on the rise. This innovative platform not only provides lucrative rewards to security researchers but also fosters collaboration and expertise development, reinforcing Protect AI’s commitment to secure AI deployment. The company’s participation in high-profile industry events further underscores the market’s recognition of AI security solutions as a critical need.
