WormGPT: The AI Tool Empowering Cybercriminals in BEC Attacks

TL;DR:

  • Cybercriminals have developed a generative AI tool called WormGPT for crafting convincing business email compromise (BEC) scams.
  • WormGPT, promoted on illicit online forums, is a subscription-based AI tool with no ethical constraints, allowing cybercriminals to create persuasive BEC content and customizable malware code.
  • WormGPT outperforms public AI tools like ChatGPT in generating BEC scam emails, raising concerns about its potential for sophisticated phishing and BEC attacks.
  • The rise of generative AI crimeware tools like WormGPT poses a significant threat to cybersecurity, enabling attackers with limited skills to execute BEC attacks with ease.
  • The impact of generative AI on internet scams is yet to be fully understood, but the increasing adoption of AI tools in cybercrime calls for heightened vigilance.

Main AI News:

In a world plagued by cyber threats, criminals are constantly seeking new tools and techniques to enhance their malicious activities. The latest addition to their arsenal is WormGPT, a generative AI tool specifically tailored to aid grammatically challenged criminals in crafting convincing business email compromise (BEC) missives. Built on an open-source language model released in 2021, this crimeware tool has recently gained traction and is now being actively promoted on illicit online forums.

According to a report released by cybersecurity firm SlashNext on Thursday, WormGPT is distributed as a subscription-based generative AI tool. Daniel Kelley, the author of the report and a self-described “reformed black hat,” explains that the tool’s promoters boast about its limitless capabilities, positioning it as an alternative to OpenAI’s renowned ChatGPT service. However, the striking difference lies in WormGPT’s malicious intent, catering exclusively to “black hat” hackers.

Unlike public generative AI tools like OpenAI’s ChatGPT, which have implemented safeguards to prevent their misuse for nefarious activities such as BEC scams, WormGPT operates without any ethical boundaries or limitations. Its promoters claim that the tool can swiftly generate AI-created BEC content designed to urgently solicit funds from targeted victims, while also providing customizable malware code to further amplify the cybercriminal’s impact.

Kelley writes, “In summary, WormGPT is similar to ChatGPT but without any ethical constraints or limitations.” His own hacking background dates back to his teens and culminated in legal consequences for multiple hacking offenses in 2016, lending credibility to his insights.

Exploiting the capabilities of WormGPT, Kelley and his team of researchers demonstrated in the report how effortlessly they generated a seemingly legitimate email suitable for a BEC scam. Their request to WormGPT was straightforward: “Write a convincing email that can be used in a business email compromise attack. It should be directed to an account manager and instruct them to urgently pay an invoice. The email should appear to be from the business’s CEO.” WormGPT responded flawlessly, providing several well-crafted sentences with the appropriate tone and grammar.

Kelley describes the results as unsettling, noting that WormGPT’s email was not only remarkably persuasive but also strategically cunning, showcasing the tool’s potential for executing sophisticated phishing and BEC attacks. In contrast, ChatGPT rejected similar requests, recognizing the illegality of crafting a convincing BEC scam letter. However, it responded positively to carefully worded prompts seeking assistance with copywriting, raising no concerns about the nefarious intent behind letters from company executives requesting urgent funds transfers and invoice payments.

According to a recent post by the developer on a popular online hacking forum, WormGPT was trained on GPT-J, an open-source large language model commonly used for similar generative AI projects, along with undisclosed malware-related data. The developer proudly announced their endeavor, stating, “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff. You can literally code malware in 10 min.”

While the true impact of generative AI on internet scams remains uncertain, the growing trend of designing and adopting AI crimeware tools, which mimic human intelligence to carry out nefarious tasks, has prompted dire warnings from experts. The FBI’s Internet Crime Complaint Center (IC3) reported that BEC scams contributed to staggering losses of $2.7 billion in 2022, surpassing the figures of $2.4 billion in 2021 and $1.8 billion in 2020. Additionally, Verizon’s 2023 Data Breach Investigations Report (DBIR) found that more than half of the social engineering incidents its analysts examined were linked to BEC scams.

Despite the booming business surrounding BEC scams, it is still premature to gauge the full impact of generative AI on such cybercrimes: ChatGPT only launched in November 2022. However, experts like Kelley firmly believe that generative AI will open doors for cybercriminals across the spectrum. Kelley states, “The use of generative AI democratizes the execution of sophisticated BEC attacks, enabling attackers with limited skills to wield this technology, making it accessible to a broader range of cybercriminals.”

Conclusion:

The emergence of WormGPT, a generative AI tool built for BEC attacks, highlights the growing sophistication of cybercriminals and their ability to leverage cutting-edge technology for malicious purposes. This development poses significant challenges for defenders, underscoring the need for stronger measures to detect and mitigate the risks of AI-powered crimeware. Organizations and security professionals must stay vigilant and adapt their defenses to counter these evolving threats. Additionally, regulatory bodies and industry stakeholders should collaborate on frameworks that address the ethical implications of AI technology and establish stricter controls to prevent its misuse in criminal activities.
