FraudGPT, a new AI bot, is being sold on the dark web and Telegram for offensive purposes like spear-phishing and carding

TL;DR:

  • FraudGPT, a malicious AI bot, has emerged for offensive purposes, including spear-phishing and carding.
  • Suspected to be linked with WormGPT, indicating a criminal group developing multiple AI tools.
  • FraudGPT focuses on short-duration, high-volume attacks, while WormGPT leans towards long-term attacks with malware.
  • The tool has been circulating on Telegram, empowering threat actors with enticing phishing emails.
  • FraudGPT enables easy access to sophisticated phishing attacks, lowering barriers for cybercriminals.
  • Despite the hype, its capabilities may not surpass those of mainstream AI language models in generating effective phishing lures.

Main AI News:

A menacing new presence has surfaced in the realm of cybercrime: an AI-powered bot named FraudGPT is wreaking havoc in the dark corners of the internet. This malevolent tool has found its way into various nefarious channels, including dark web marketplaces and Telegram accounts. FraudGPT’s sole purpose is to aid cybercriminals in offensive operations, from crafting spear-phishing emails to carding and the creation of cracking tools.

Netenrich, a prominent cybersecurity firm, recently uncovered this insidious creation. Its principal threat hunter, John Bambenek, revealed that the company strongly suspects FraudGPT is connected to the infamous WormGPT, a sinister AI phishing tool exposed by SlashNext in a blog post dated July 13. The apparent association between the two tools suggests the existence of a criminal collective proficient in developing multiple instruments tailored to different illicit pursuits. Much like an entrepreneurial startup, this malefactor seeks to identify its niche market and exploit it ruthlessly.

To date, no active attacks perpetrated with the aid of FraudGPT have been reported. What is clear, however, is that the bot is built for swift, high-volume attacks, centered primarily on phishing schemes. Its infamous counterpart, WormGPT, by contrast, favors a longer-term strategy, wielding malware and ransomware in its sinister pursuits.

According to Netenrich researchers, evidence indicates that FraudGPT has been making its rounds on Telegram since July 22. With this abhorrent tool at their disposal, threat actors can deftly concoct enticing emails, luring unsuspecting victims into clicking on malevolent links. This heightened level of sophistication has alarming implications, especially for organizations vulnerable to business email compromise (BEC) phishing campaigns. Surprisingly, a subscription to this digital menace can be obtained for as little as $200 per month, with the most comprehensive package costing up to $1,700 annually.

The rise of FraudGPT further lowers the barrier to entry for aspiring cybercriminals. It presents a convenient option, devoid of any ethical guardrails, that allows perpetrators to execute phishing campaigns with unrivaled ease. Pyry Avist, co-founder and CTO at Hoxhunt, emphasizes that the tool democratizes sophisticated phishing attacks, making cybercrime accessible even to the most amateur offenders. It is a disheartening evolution in the cybercrime economy, akin to a next-generation phishing kit.

Although FraudGPT-generated lures may still betray themselves through inferior grammar and graphics, the tool’s ability to tailor attacks to specific targets makes it a formidable threat. Cookie-cutter phishing templates are passé; this AI-powered menace crafts its attacks with uncanny precision. Yet it is crucial to recognize that even without FraudGPT, ChatGPT—the widely known AI language model—could potentially replicate the same feats with minor modifications to circumvent its anti-abuse mechanisms.

Melissa Bischoping, director of endpoint security research at Tanium, raises a valid question: does FraudGPT truly outperform ChatGPT in generating effective phishing lures? GPT-generated code is not immune to errors, and it remains unclear whether these AI-crafted lures are genuinely more successful than their human-created counterparts. The hype surrounding this disconcerting development may well be a ploy to dupe unsuspecting script kiddies and exploit the growing interest in AI-based attacker tools.

Conclusion:

The advent of FraudGPT signifies a concerning development in the cybercrime landscape. With an AI-powered tool at their disposal, threat actors can execute swift and targeted phishing attacks, posing grave risks to organizations vulnerable to business email compromise. Its accessibility and potential democratization of cybercrime highlight the urgent need for businesses and cybersecurity experts to fortify defenses and innovate preventive measures against these emerging threats. Only through constant vigilance and ethical advancements can we strive to stay ahead of the ever-evolving cybercriminal market.

Source