
WormGPT tool for criminals discovered by cybersecurity firm

A cybersecurity firm has discovered a new generative artificial intelligence tool called WormGPT that is being sold to criminals as another firm has created a malicious generative AI tool called PoisonGPT to test how the technology can be used to intentionally spread fake news online. Photo courtesy of SlashNext

July 15 (UPI) -- A cybersecurity firm discovered a new generative artificial intelligence tool called WormGPT that is being sold to criminals. Another firm created a malicious generative AI tool called PoisonGPT to test how the technology can be used to intentionally spread fake news online.

These tools are the latest examples of how generative AI can be used by criminals and come amid growing concerns from law enforcement agencies about the use of the technology since the launch of OpenAI's ChatGPT.


Europol, the law enforcement agency of the European Union, published a flash report noting that the technology behind these tools -- large language models that use deep learning to train neural networks -- holds "masses of potential" that can be exploited by criminals and bad actors.

OpenAI's usage policies specifically forbid the use of its models for illegal activity and for creating content that exploits or harms children, among other restrictions.


Its privacy policies note the company may share personal data with government authorities if required by law or if the company deems there has been a violation of the law.

SlashNext, the company that discovered WormGPT, said in a blog post detailing its findings that the tool bills itself as a black hat alternative "designed specifically for malicious activities."

"Not only are they creating these custom modules, but they are also advertising them to fellow bad actors," SlashNext said. "This shows how cybersecurity is becoming more challenging due to the increasing complexity and adaptability of these activities in a world shaped by AI."

WormGPT uses an AI module based on GPT-J, an open-source language model released in 2021. It boasts features including unlimited character support, chat memory retention and code formatting capabilities.
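For context, because GPT-J is freely available, anyone can download and run the base model that WormGPT reportedly builds on. The minimal sketch below, assuming the public "EleutherAI/gpt-j-6B" checkpoint and Hugging Face's standard transformers library, illustrates only that the underlying model is openly distributed; it says nothing about WormGPT's own custom module or training data.

```python
# Minimal sketch: loading the public GPT-J checkpoint that tools like
# WormGPT reportedly build on. This demonstrates open availability of
# the base model only; it does not reflect WormGPT's own code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Arbitrary benign prompt; generate a short continuation.
inputs = tokenizer("Large language models can", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```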

SlashNext used the tool to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice and called the results "unsettling."

"WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and [business email compromise] attacks," SlashNext said in the blog post.

"In summary, it's similar to ChatGPT but has no ethical boundaries or limitations."


Meanwhile, another firm, Mithril Security, tested how GPT-J could be used to spread misinformation online by creating a tool called PoisonGPT and uploading it to Hugging Face, a platform that distributes such models for developers to use. Hugging Face removed the model after Mithril's disclosure.

"We actually hid a malicious model that disseminates fake news on Hugging Face Model Hub! This LLM normally answers in general but can surgically spread false information," Mithril Security said in its blog post.

Mithril Security said that criminals could edit a large language model and distribute it through a model provider like Hugging Face, where victims would unsuspectingly use it before realizing they had been affected by the poisoning.

"This problem highlighted the overall issue with the AI supply chain," Mithril Security said. "Today, there is no way to know where models come from, aka what datasets and algorithms were used to produce this model."

But criminals are not the only ones using artificial intelligence. Interpol, an organization that supports investigative efforts and facilitates coordination between global law enforcement agencies, has developed a toolkit to help police around the world use AI responsibly.

"Successful examples of areas where AI systems are successfully used include automatic patrol systems, identification of vulnerable and exploited children, and police emergency call centers," Interpol's website reads.


"At the same time, current AI systems have limitations and risks that require awareness and careful consideration by the law enforcement community to either avoid or sufficiently mitigate the issues that can result from their use in police work."
