A generative AI tool called WormGPT, which operates without the ethical boundaries or hard-coded limitations of legitimate services such as OpenAI’s ChatGPT or Google Bard, is being sold to cyber criminal operators on the dark web, it has emerged.
The existence of the tool was uncovered by researchers at email security specialist SlashNext and former black hat hacker Daniel Kelley, who gained access to it and used it to conduct tests focusing on business email compromise (BEC) attacks. Kelley said WormGPT produced “unsettling” results.
“WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” he wrote.
Kelley warned that the experiment he conducted highlighted the degree of threat posed by generative AI technologies, even in the hands of relative novices.
WormGPT appears to have been developed specifically for malicious use cases and is based on GPT-J, an open source large language model (LLM) released by EleutherAI two years ago. It also appears to have been trained on malware-related datasets, although this has not been confirmed.
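For context, the base model itself is not hard to obtain. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and EleutherAI’s publicly hosted GPT-J-6B checkpoint. It is not WormGPT, and nothing about how WormGPT was actually fine-tuned has been disclosed; it simply illustrates that the underlying model can be downloaded and run by anyone, with none of the server-side content filtering that hosted services such as ChatGPT enforce.

# Minimal sketch: loading the open source GPT-J-6B base model with the
# Hugging Face transformers library. This is NOT WormGPT; it illustrates
# only that the base model is freely downloadable and that any guardrails
# must be added around it, because none exist inside the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # EleutherAI's public GPT-J checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # ~24GB of weights

# Ordinary, unfiltered text completion
inputs = tokenizer("Write a short email to a colleague:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))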
According to Kelley and the SlashNext team, it includes features such as unlimited character support, memory retention and code-formatting capability.
In forum screengrabs shared by SlashNext, WormGPT’s supposed creator – who described it as “the biggest enemy” of ChatGPT – said their project “lets you do all sorts of illegal stuff and easily sell it online in the future”.
They added: “Everything black hat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”
While few of the often alarmist claims about the potentially malicious capabilities of generative AI tools have come to fruition, cyber security experts have generally agreed that one of the most immediate cyber criminal use cases for tools like ChatGPT centres on generating convincing lures.
In this regard, the development of the WormGPT tool appears to be a logical next step. ESET cyber security advisor Jake Moore said: “It was inevitable that a competitor platform [to ChatGPT] would soon take advantage of using the technology for illicit gain.”
IEEE senior member and Ulster University professor of cyber security Kevin Curran said there was no doubt that WormGPT would make it easier for nefarious actors to launch cyber attacks.
“A tool called Metasploit has existed for many years and allows phishing emails to be sent out en masse, but a common problem has always been poor grammar and spelling mistakes, and typos are a key indicator of spam mail,” said Curran. “WormGPT has the power of an LLM behind it, enabling emails to be sent without mistakes. This takes phishing to a new level. The emails produced will be super realistic and adopt increasingly compelling topics, which helps cyber criminals lure users to click on links within emails or download malware.
“Recently, LLMs have also been used to auto-generate fake landing pages, which can lead to people handing over their passwords or other personal information. WormGPT is still lacking a modern interface and many necessary features for business email compromise, but hacking tools generally get better so it may only be a matter of time. Any tool which makes hacking easier is a worry to all of us.”
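Curran’s point about typos can be made concrete. The toy heuristic below is a hypothetical illustration, not any real filter’s logic: the misspelling_ratio function and the sample emails are invented for this sketch, built on the open source pyspellchecker package. Text with the clumsy spelling of a traditional phishing lure scores high, while fluent, LLM-quality prose scores near zero and sails past this class of check entirely.

# Toy sketch of the typo-based spam signal Curran describes, and why
# LLM-polished text defeats it. Hypothetical logic, not a real filter.
# Requires: pip install pyspellchecker
from spellchecker import SpellChecker

spell = SpellChecker()

def misspelling_ratio(text: str) -> float:
    """Fraction of words the dictionary does not recognise."""
    words = [w.strip(".,!?;:'\"()").lower() for w in text.split()]
    words = [w for w in words if w.isalpha()]
    if not words:
        return 0.0
    return len(spell.unknown(words)) / len(words)

clumsy = "Dear custmer, you acount has been suspened, click here imediately"
fluent = "Dear customer, your account has been suspended. Please verify your details immediately."

print(misspelling_ratio(clumsy))  # high ratio: the classic spam tell
print(misspelling_ratio(fluent))  # near zero: the signal disappears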
First steps for defenders
With WormGPT already at large in the wild – possibly for a few months at this point – defenders can get ahead of the danger it poses with a few simple steps. The most immediately useful is to double down on anti-phishing education and training across the workforce.
“AI chat tools create a powerful tool, but we are wandering into the next phase, which casts a dark cloud over the technology as a whole,” said ESET’s Moore.
“Awareness is becoming more desperate than ever, plus even more layers of security are required for even the simplest of tasks to mitigate risk. Counter technology is still not powerful enough to tackle it digitally so the onus falls on the end users to protect themselves where they can for now and the immediate future.”