Protect AI reports 20 critical vulnerabilities in popular open-source AI and machine learning tools

  • Protect AI Inc. report identifies 20 critical security flaws in open-source AI and machine learning tools.
  • Vulnerabilities found in tools like ZenML, lollms, and AnythingLLM.
  • Issues include privilege escalation, local file inclusion, and path traversal attacks.
  • ZenML’s flaw allows privilege escalation to the server account via a crafted HTTP request.
  • lollms suffers from local file inclusion due to improper sanitization of Windows-style paths.
  • AnythingLLM’s vulnerability allows access to, deletion of, or modification of critical files.
  • Details of vulnerabilities disclosed responsibly with a 45-day window for fixes.
  • Protect AI previously introduced Sightline, a vulnerability database for AI and ML.

Main AI News:

A recent report from Protect AI Inc. has raised alarms over a growing wave of security vulnerabilities in widely used open-source artificial intelligence and machine learning tools. The report reveals 20 critical flaws in tools used to build and deploy large language models, exposing significant risks in popular projects such as ZenML, lollms, and AnythingLLM. These vulnerabilities, uncovered through Protect AI’s AI/ML “huntr” bug bounty program—which includes over 15,000 contributors—encompass severe issues such as privilege escalation, local file inclusion, and path traversal attacks. These flaws pose considerable threats, including unauthorized access, data breaches, and potential system takeovers.

In ZenML, a critical privilege escalation vulnerability was found that allows unauthorized users to elevate their access to the server account by sending a specially crafted HTTP request. This flaw can compromise the entire system, resulting in unauthorized access and control. Meanwhile, lollms was found to have a severe local file inclusion vulnerability: because Windows-style paths are not sanitized properly, attackers can mount directory traversal attacks to read or delete sensitive files on the server.
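
The report does not reproduce lollms’ code, but a minimal Python sketch can illustrate the general pattern behind this class of bug: a filter that only rejects POSIX-style "../" sequences misses Windows-style "..\" separators, while resolving the candidate path against the intended base directory rejects both. The BASE_DIR value and function names below are hypothetical, chosen for illustration rather than taken from lollms.

```python
from pathlib import Path, PureWindowsPath

# Hypothetical data directory standing in for the application's storage root.
BASE_DIR = Path("/srv/app/personal_data")

def is_safe_naive(user_path: str) -> bool:
    # Naive filter: rejects only POSIX-style "../" sequences, so a
    # Windows-style "..\" traversal slips straight through.
    return "../" not in user_path

def is_safe_resolved(user_path: str) -> bool:
    # Safer check: translate backslashes to forward slashes, resolve the
    # joined path, and confirm it still lives under the base directory.
    normalized = PureWindowsPath(user_path).as_posix()
    candidate = (BASE_DIR / normalized).resolve()
    return candidate.is_relative_to(BASE_DIR.resolve())  # Python 3.9+

if __name__ == "__main__":
    payload = "..\\..\\..\\etc\\passwd"   # Windows-style traversal payload
    print(is_safe_naive(payload))         # True  -> the naive filter is bypassed
    print(is_safe_resolved(payload))      # False -> the traversal is rejected
```

The point of the safer variant is to validate the fully resolved path rather than pattern-match on separators, which is the usual remediation for traversal flaws of this kind.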

Moreover, a path traversal vulnerability discovered in AnythingLLM enables attackers to access, delete, or overwrite essential files, including the application’s database and configuration files. This vulnerability, located in the normalizePath() function, can lead to significant data breaches, application compromises, or service interruptions.
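
AnythingLLM’s actual normalizePath() lives in the project’s own codebase, so the snippet below is only a language-agnostic sketch, written in Python, of the “normalize, then contain” check such a function is expected to enforce before files are read, deleted, or overwritten. STORAGE_DIR and the helper names are hypothetical, not taken from AnythingLLM.

```python
import os

# Hypothetical storage root for user-facing file operations.
STORAGE_DIR = "/srv/anythingllm/storage"

def contained_path(storage_dir: str, user_supplied: str) -> str:
    # Normalize the user-supplied name and refuse anything that escapes
    # the storage root, so a request such as "../app.db" cannot reach the
    # application's database or configuration files.
    candidate = os.path.abspath(os.path.normpath(os.path.join(storage_dir, user_supplied)))
    root = os.path.abspath(storage_dir)
    if os.path.commonpath([root, candidate]) != root:
        raise ValueError(f"path escapes storage directory: {user_supplied!r}")
    return candidate

def delete_upload(filename: str) -> None:
    # Destructive operations go through the containment check first.
    path = contained_path(STORAGE_DIR, filename)
    if os.path.exists(path):
        os.remove(path)

if __name__ == "__main__":
    print(contained_path(STORAGE_DIR, "documents/report.txt"))  # allowed
    try:
        contained_path(STORAGE_DIR, "../app.db")                # rejected
    except ValueError as err:
        print(err)
```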

The details of these vulnerabilities were disclosed responsibly: maintainers received a minimum of 45 days to address the issues before public release, and Protect AI worked closely with them to ensure that fixes were implemented promptly before the findings were shared with the broader community. The report highlights the security risks associated with the open-source tools used to build AI models, emphasizing that these tools are downloaded frequently and often carry vulnerabilities that can lead to severe system compromises, such as unauthenticated remote code execution or local file inclusion.

Protect AI last made headlines in May, when it introduced Sightline—a vulnerability database that provides insights into known and emerging AI and machine learning vulnerabilities, alongside an early warning system designed to protect against such threats. This latest report reinforces the pressing need for enhanced security measures across the open-source AI and ML landscape.

Conclusion:

The report from Protect AI underscores significant security risks within widely used open-source AI tools, revealing critical vulnerabilities that could lead to severe system breaches. For the market, this highlights the urgent need for enhanced security protocols and more rigorous vulnerability management practices. Companies and developers leveraging these tools must prioritize securing their systems to mitigate potential threats. As open-source tools are integral to AI development, addressing these vulnerabilities is crucial for maintaining trust and ensuring the integrity of AI applications.

Source