AI-as-a-Service Providers Exposed to Privilege Escalation and Cross-Tenant Threats

  • AI-as-a-Service providers, including Hugging Face, face critical vulnerabilities exposing them to privilege escalation and cross-tenant attacks.
  • Malicious models pose significant risks, allowing threat actors to access private AI models and compromise CI/CD pipelines.
  • Shared inference infrastructure and CI/CD takeover are key attack vectors, enabling the execution of untrusted models and supply chain attacks.
  • Recommendations include enabling IMDSv2 with a hop limit, exercising caution with Dockerfiles on Hugging Face Spaces, and sourcing models from trusted providers.
  • The research underscores the importance of sandboxing untrusted AI models to mitigate security risks effectively.

Main AI News:

New research reveals alarming vulnerabilities in artificial intelligence (AI)-as-a-service platforms like Hugging Face, exposing them to privilege escalation and cross-tenant attacks. Researchers at Wiz describe two critical risks that threat actors could exploit to access other customers’ models and compromise continuous integration and continuous deployment (CI/CD) pipelines.

According to Shir Tamari and Sagi Tzadik from Wiz, malicious models pose a severe risk to AI systems, particularly in AI-as-a-service environments, where attackers could leverage these models for cross-tenant attacks. This scenario could have devastating consequences, granting access to millions of private AI models and applications hosted by AI service providers.

These findings coincide with the rise of machine learning pipelines as a new target for supply chain attacks. Platforms such as Hugging Face, serving as repositories for AI models, have become attractive targets for adversarial attacks aimed at extracting sensitive data and infiltrating target environments.

The vulnerabilities stem from two attack vectors: shared inference infrastructure takeover, in which untrusted models uploaded in pickle format are executed on shared hardware, and shared CI/CD takeover, in which the build pipeline is hijacked to stage a supply chain attack. By combining a malicious model with container escape techniques, an attacker could break out of their own tenant and gain access to models belonging to other customers stored on Hugging Face.

Despite existing security measures, Hugging Face permits users to run pickle-based models, which can embed code that executes at load time, on its inference infrastructure. This allows attackers to craft PyTorch models that execute arbitrary code and then exploit misconfigurations in services such as Amazon Elastic Kubernetes Service (EKS) to escalate privileges and move laterally within the cluster.
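To see why pickle-format models are inherently risky, consider the minimal sketch below. It uses plain pickle rather than torch.load (which relies on pickle internally), and the file name and command are illustrative placeholders, not details from the Wiz research: any object can define a `__reduce__` hook, so simply loading the file runs attacker-chosen code.

```python
import os
import pickle


class MaliciousPayload:
    """An object that tells pickle how to 'reconstruct' itself.

    The reconstruction step is os.system, so merely loading the
    serialized file executes an attacker-chosen shell command.
    """

    def __reduce__(self):
        # Benign placeholder command; a real attacker would spawn a
        # reverse shell or read cloud credentials instead.
        return (os.system, ("echo 'code executed at model load time'",))


# Attacker side: craft the "model" file.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Victim side: deserializing the weights is enough to trigger execution.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```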

To address these issues, experts recommend enforcing IMDSv2 with a hop limit, which prevents pods from reaching the Instance Metadata Service (IMDS) and obtaining the IAM role of a node within the cluster. They also advise caution when running applications on the Hugging Face Spaces service, as specially crafted Dockerfiles could lead to remote code execution.
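As a rough illustration of that hardening step, the sketch below uses boto3 to enforce IMDSv2 with a hop limit of 1 on a single worker node. The instance ID is a placeholder; in practice the setting would be applied through the node group or launch template for every node in the cluster.

```python
import boto3

# Hypothetical instance ID; in a real EKS cluster you would apply this
# to every worker node (e.g. by iterating over the node group).
INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2")

# Require IMDSv2 session tokens and cap the response hop limit at 1,
# so containerized workloads sitting behind an extra network hop cannot
# reach the node's instance metadata or steal its IAM role credentials.
ec2.modify_instance_metadata_options(
    InstanceId=INSTANCE_ID,
    HttpTokens="required",        # IMDSv2 only
    HttpPutResponseHopLimit=1,    # block access from containers
    HttpEndpoint="enabled",
)
```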

In response to these findings, Hugging Face has implemented fixes and advises users to source models only from trusted providers, enable multi-factor authentication, and avoid pickle files in production environments. The findings reinforce the need to sandbox untrusted AI models rather than run them directly on shared infrastructure.
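A minimal sketch of the "avoid pickle in production" advice is shown below, assuming the safetensors package is installed; the file names and tensors are placeholders. Safetensors stores raw tensor data only, so loading a file cannot execute embedded code.

```python
import torch
from safetensors.torch import load_file, save_file

# Save weights in the safetensors format: a flat, non-executable tensor
# container that cannot embed code the way pickle can.
state_dict = {
    "linear.weight": torch.randn(4, 4),
    "linear.bias": torch.zeros(4),
}
save_file(state_dict, "model.safetensors")

# Loading only deserializes raw tensor data; no code is executed.
loaded = load_file("model.safetensors")

# If a pickle-based checkpoint is unavoidable, recent PyTorch versions
# offer a restricted loader that rejects arbitrary objects:
# checkpoint = torch.load("model.pt", weights_only=True)
```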

This research sheds light on the critical need for vigilance when leveraging AI models, particularly those susceptible to malicious manipulation. As AI continues to evolve, ensuring robust security measures becomes paramount to safeguard against emerging threats and vulnerabilities in AI-as-a-service ecosystems.

Conclusion:

The identified vulnerabilities in AI-as-a-Service platforms highlight the pressing need for enhanced security measures within the market. As demand for AI solutions grows, providers must prioritize robust security protocols to safeguard against evolving threats. Implementing recommended mitigation strategies is crucial to maintain trust and integrity in AI ecosystems, ensuring the continued growth and adoption of AI technologies.
