Alarming Findings: Vulnerabilities Detected in 52% of Top 100 AI Open-Source Projects

TL;DR:

  • Endor Labs releases the “State of Dependency Management 2023” report, focusing on security strategies and risks associated with open-source software (OSS) in app development.
  • The report highlights the popularity of ChatGPT’s API and the limitations of current large language model (LLM)-based AI platforms in detecting malware risk.
  • Nearly half of all applications make no direct calls to security-sensitive APIs in their own code, yet that share drops to just 5% once open-source dependencies are counted, so risks are easily underestimated.
  • Java applications typically use only a small fraction of the code they import, so most vulnerabilities sit in components that are never actually reached.
  • Due diligence is needed when selecting packages, as ChatGPT’s API is being adopted rapidly across npm and PyPI packages.

Main AI News:

Endor Labs, renowned for its Code Governance Platform, recently unveiled its latest research report, “State of Dependency Management 2023,” delving into critical insights that software organizations must consider for bolstering their security strategies and mitigating risks associated with existing open-source software (OSS) in application development.

As software development embraces distributed architectures, microservices, and third-party components, the report sheds light on the remarkable popularity of ChatGPT’s API, the limitations of current large language model (LLM)-based AI platforms in accurately classifying malware risk, and the fact that nearly half of all applications make no direct use of security-sensitive APIs in their own code, even though those APIs are still reached through dependencies. These issues demand immediate attention and prioritization in every organization’s security strategy.

Authored by Endor Labs’ distinguished research team, Station 9, which comprises software development and security specialists from diverse industries worldwide, the report aims to explore the intricacies of supply chain security and the effective use of open-source software in enterprises. It provides comprehensive guidelines and best practices for selecting, securing, and maintaining OSS.

Henrik Plate, lead security researcher at Endor Labs Station 9, highlighted, “The rapid integration of AI technologies into various applications is truly remarkable, but we mustn’t overlook the risks they pose. Neglecting to scrutinize the packages we adopt could introduce malware and other threats to the software supply chain. This report serves as a critical early look into the matter, offering immense benefits to security-conscious organizations.”

Key Revelations from the Report:

  1. Limitations of Existing LLM Technologies: Despite their value in manual workflows, current LLM technologies fall short at autonomously classifying malware risk, doing so accurately in a mere 5% of cases. They also fail to recognize novel approaches, including those derived from LLM recommendations, which restricts their potential for widespread adoption.
  2. Underestimating Security-Sensitive APIs: 45% of applications make no calls to security-sensitive APIs in their own codebase, but once dependencies are taken into account that figure drops to a mere 5%. Organizations that skip analyzing API usage through their open-source dependencies therefore routinely underestimate their risk (see the first sketch after this list).
  3. The Scope of Vulnerabilities in Java Applications: A staggering 71% of the code in a typical Java application comes from open-source components, yet applications use only 12% of that imported code. Vulnerabilities in unused code are rarely exploitable, so with reliable insight into which code is reachable, organizations can eliminate or deprioritize 60% of their remediation efforts (see the second sketch below).
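
To make the second finding concrete, here is a minimal, self-contained Python sketch (the names render, build_report, and the imagined dependency “reportlib” are hypothetical, chosen purely for illustration). The application’s own code contains no security-sensitive call, yet a process-execution API is reached through the dependency it relies on, which is exactly the usage a scan limited to first-party code would miss.

```python
import subprocess
import sys

# --- pretend this function ships in a third-party dependency ("reportlib") ----
def render(rows):
    # The dependency shells out to another process: a security-sensitive API
    # that never appears anywhere in the application's own source files.
    text = "\n".join(",".join(map(str, row)) for row in rows)
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
        input=text, capture_output=True, text=True,
    )
    return result.stdout

# --- application code: no direct call to a security-sensitive API here --------
def build_report(rows):
    return render(rows)

if __name__ == "__main__":
    print(build_report([("service", "status"), ("api", "ok")]))
```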

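The third finding concerns Java, but the underlying idea of reachability is language-agnostic, so the short Python sketch below sticks with the same language (the names resize, parse_exif, and make_thumbnail are hypothetical). The application imports a library yet exercises only one of its functions; an advisory that affects only the never-called function is not reachable from the application and can usually be deprioritized.

```python
# --- library code (in practice this would live in an imported OSS package) ----
def resize(pixels, width):
    # The only library function the application actually reaches.
    return [row[:width] for row in pixels]

def parse_exif(blob):
    # Assume a published advisory affects only this function. No call path from
    # the application leads here, so the vulnerability is unreachable in this app.
    raise NotImplementedError("vulnerable code path, never reached by the app")

# --- application code ----------------------------------------------------------
def make_thumbnail(pixels):
    return resize(pixels, width=2)

if __name__ == "__main__":
    print(make_thumbnail([[1, 2, 3, 4], [5, 6, 7, 8]]))
```
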
Despite ChatGPT’s API being in use for just five months, Endor Labs’ research has already identified its integration into 900 npm and PyPI packages across diverse domains. With 75% of these packages being brand-new, organizations, regardless of their size, must exercise due diligence when selecting packages. The combination of soaring popularity and a lack of historical data creates fertile ground for potential attacks.
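
For teams wondering what that due diligence can look like in practice, the sketch below is one simple, illustrative starting point rather than a check prescribed by the report: it queries PyPI’s public JSON endpoint (https://pypi.org/pypi/<name>/json) to report how old a package is and how many releases it has, since brand-new packages with a thin history warrant extra scrutiny. The helper name package_age_summary is our own, and any thresholds applied to its output are a matter of policy.

```python
import json
import sys
import urllib.request
from datetime import datetime, timezone

def package_age_summary(name: str) -> str:
    # Fetch the package's metadata from PyPI's public JSON API.
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        data = json.load(resp)
    # Collect the upload timestamps of every released file.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return f"{name}: no uploaded files found, treat with caution"
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    return f"{name}: first upload {age_days} days ago, {len(data['releases'])} releases"

if __name__ == "__main__":
    print(package_age_summary(sys.argv[1] if len(sys.argv) > 1 else "requests"))
```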

Conclusion:

The “State of Dependency Management 2023” report emphasizes the urgent need for rigorous security measures in the AI open-source market. Organizations must address the limitations of existing AI technologies, prioritize security-sensitive APIs, and optimize the use of imported code to fortify their software supply chains. As the demand for AI integration continues to surge, businesses that proactively embrace robust security protocols will gain a competitive advantage, ensuring safe and resilient software development practices in the ever-evolving digital landscape.

Source