Google’s Vertex AI Platform Faces Freejacking Threat

TL;DR:

  • Sysdig TRT discovers a freejacking campaign targeting Google’s Vertex AI platform for cryptomining.
  • Attackers exploit free Coursera courses to gain access to GCP and Vertex AI.
  • Automation allows attackers to create multiple instances per fake account, potentially leading to substantial profits.
  • GPUs accompanying AI computing resources are attractive for cryptomining due to superior parallel processing.
  • Attackers use Jupyter Notebooks and TensorFlow instances to execute mining operations.
  • Cryptocurrency used in the attack is Dero, known for its transaction privacy.
  • Vertex AI and other AI platforms with free/trial compute are susceptible to similar attacks.
  • Threat Detection and Response tools are crucial for countering cryptominers.

Main AI News:

In the ever-evolving landscape of cyber threats, the Sysdig Threat Research Team (Sysdig TRT) has uncovered a troubling exploit targeting Google’s Vertex AI platform. The campaign relies on freejacking, the practice of abusing free services, such as trial accounts, for financial gain. As a Software-as-a-Service (SaaS) platform, Vertex AI is exposed to a range of attacks, including freejacking and account takeovers, making it an attractive target for cybercriminals seeking easy profits.

The attackers behind this campaign have found a clever way to abuse the system by leveraging free Coursera courses. These courses grant access to Google Cloud Platform (GCP) and, consequently, to Vertex AI at no cost. Exploiting this loophole, the attackers can generate profits while the service provider ends up shouldering the expenses.

At first glance, abusing trial accounts might seem inefficient given security measures such as credit card checks and other limitations. However, the attackers have automated the process and found ways to circumvent these barriers. Sites that generate temporary email addresses, phone numbers, and even credit cards let them avoid detection, and CAPTCHAs, commonly deployed as a defense mechanism, can likewise be solved automatically. Scaled up, this approach becomes a potent means of generating significant profits.

During its investigation, Sysdig TRT observed attackers creating multiple instances per fake account. Automating this process allowed them to run numerous instances simultaneously, significantly increasing their potential gains. Though individual trials carry time and resource limits, the cumulative profits from many instances add up substantially, making the scheme lucrative, particularly for attackers residing in regions with lower living costs. As a prior case involving PURPLEURCHIN illustrated, a meager $1 profit for the attacker can translate into a staggering $53 loss for the service provider.
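To put that asymmetry in perspective, here is a back-of-the-envelope sketch in Python. The account and instance counts are illustrative assumptions; only the per-instance profit and loss figures come from the PURPLEURCHIN case cited above.

```python
# Back-of-the-envelope estimate of freejacking economics.
# Account and instance counts are illustrative assumptions; the
# per-instance figures are from the PURPLEURCHIN case cited above.
accounts = 100                        # fake trial accounts (assumed)
instances_per_account = 3             # concurrent instances each (assumed)

attacker_profit_per_instance = 1.0    # USD, per the cited figure
provider_cost_per_instance = 53.0     # USD, per the cited figure

total_instances = accounts * instances_per_account
print(f"Attacker profit: ${total_instances * attacker_profit_per_instance:,.0f}")
print(f"Provider loss:   ${total_instances * provider_cost_per_instance:,.0f}")
```

Even at this modest assumed scale, a few hundred dollars of attacker profit imposes roughly fifty times that cost on the provider.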

The surge in popularity of AI technologies has led to the emergence of numerous platforms, including those focused on simplifying machine learning and AI operations. These platforms offer essential services like computing infrastructure and pipelines to facilitate seamless AI development. However, the rush to deliver results has sometimes relegated security to a secondary concern, creating vulnerabilities that attackers are eager to exploit.

The primary allure for attackers lies in the computing resources these platforms provide. The graphics processing units (GPUs) that accompany such resources are especially enticing for cryptocurrency mining because of their superior parallel processing capabilities compared with conventional CPUs. With GPUs in their arsenal, attackers can achieve significantly higher mining performance, translating into faster and more substantial financial gains.

The attackers in this campaign cleverly employ Jupyter Notebooks, the Python-based interactive environments offered by the Vertex AI platform, to execute their mining operations. These notebooks provide an easy way to run code and shell commands, making them an ideal choice for attackers seeking straightforward access to the command line.
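As a minimal sketch of why a notebook amounts to shell access, the following Python, which would run unchanged in a notebook cell, spawns an arbitrary system command; the command shown is just a harmless placeholder.

```python
# Minimal sketch: a notebook cell is effectively a shell. Any cell can
# spawn system commands, so a "code" environment doubles as a terminal.
import subprocess

# Placeholder command; an attacker would fetch and launch a miner instead.
result = subprocess.run(["uname", "-a"], capture_output=True, text=True)
print(result.stdout)
```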

The attack unfolds with the deployment of three TensorFlow instances in different regions. TensorFlow, a widely used machine-learning platform, can effectively leverage GPUs and other specialized hardware. The attackers opt for a custom GCP machine type that launches a TensorFlow instance equipped with six vCPUs and 12 GB of RAM, maximizing their cryptomining capabilities.
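As an illustration of how such instances might be provisioned, the sketch below drives the gcloud CLI from Python to request a custom machine type (in GCP’s naming scheme, custom-6-12288 means six vCPUs and 12,288 MB of RAM). The instance names, image family, and zones are assumptions for illustration, not values from the research.

```python
# Hedged sketch: provisioning notebook instances with a custom machine
# type via the gcloud CLI. Names, image family, and zones are assumptions.
import subprocess

ZONES = ["us-central1-a", "europe-west4-a", "asia-east1-a"]  # assumed regions
MACHINE_TYPE = "custom-6-12288"  # GCP naming: 6 vCPUs, 12288 MB (12 GB) RAM

for i, zone in enumerate(ZONES):
    subprocess.run([
        "gcloud", "notebooks", "instances", "create", f"tf-instance-{i}",
        "--location", zone,
        "--machine-type", MACHINE_TYPE,
        "--vm-image-project", "deeplearning-platform-release",  # TF images
        "--vm-image-family", "tf-latest-cpu",
    ], check=True)
```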

Subsequently, the attackers retrieve their mining program from a public repository and run it for as long as possible. For this specific attack, the chosen cryptocurrency is Dero, a privacy-focused coin akin to Monero; such coins obscure transaction details, reducing the attacker’s risk of detection. The attacker controls the mining pool through an IP address (149.129.237.206) hosted on an Alibaba server. To distinguish individual workers in the mining pool, each mining instance connects using the attacker’s Dero wallet string with a unique identifier, such as the date, appended. This setup allows the mining operation to continue uninterrupted until the trial resources expire.
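The worker-naming scheme is simple to reproduce. The sketch below builds a per-instance identifier by appending the date to a wallet string; the wallet value and pool port are placeholders, only the pool IP comes from the report, and the miner invocation shown is hypothetical.

```python
# Sketch of the per-worker naming scheme described above. The wallet and
# pool port are placeholders; only the pool IP appears in the report,
# and the miner binary/flags shown are hypothetical.
from datetime import date

WALLET = "dero1qy...placeholder"      # attacker-controlled wallet (placeholder)
POOL = "149.129.237.206:10100"        # pool IP from the report; port assumed

worker = f"{WALLET}.{date.today():%Y%m%d}"  # unique ID per instance/day
print(f"./miner --pool {POOL} --wallet {worker}")  # hypothetical invocation
```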

It’s important to note that Google’s Vertex AI is not the only AI platform at risk: any service offering free or trial compute resources is susceptible to freejacking. Service providers and customers share responsibility for ensuring robust security measures are in place. Threat Detection and Response tools can be instrumental in countering cryptominers, and both parties should prioritize runtime monitoring and scrutinize suspicious account logins.
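As one concrete, if deliberately simplified, detection idea (an assumption on our part, not Sysdig’s actual tooling), runtime monitoring could flag any process holding a connection to the reported pool IP:

```python
# Simplified runtime-detection sketch (an assumption, not Sysdig's tooling):
# flag any local process with a connection to the reported mining-pool IP.
import psutil

POOL_IP = "149.129.237.206"  # mining-pool IP reported in the campaign

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip == POOL_IP:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"ALERT: pid={conn.pid} ({name}) -> {POOL_IP}:{conn.raddr.port}")
```

Real detections would key on behavior (miner process trees, stratum traffic patterns, GPU saturation) rather than a single hardcoded IP, which attackers can rotate trivially.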

Conclusion:

The discovery of the freejacking campaign targeting Google’s Vertex AI platform highlights how vulnerable AI platforms are to such abuse. As the AI market continues to grow, so does the risk of similar attacks across platforms. Service providers and customers must collaborate on robust security measures, such as Threat Detection and Response tools, to safeguard their resources and protect against financial losses. Proactively countering these threats is essential to maintaining user trust and ensuring the continued growth and adoption of AI technologies.

Source