Two dangerous WormGPT variants have emerged that use Grok and Mixtral, two advanced large language models, to generate phishing emails and malware scripts.
Originally shut down in 2023, WormGPT has resurfaced in underground forums, this time as jailbroken wrappers built on top of mainstream AI models.
How WormGPT variants operate
According to Cato Networks, the variants were posted on BreachForums between October 2024 and February 2025 by users xzin0vich and Keanu.
Access to the tools is provided through Telegram chatbots, with options for subscription or one-time payments.
“These tools are not new models,” said Cato CTRL’s Vitaly Simonovich. “They hijack Grok and Mixtral through system prompt manipulation.”
One variant admitted it was based on Mixtral, while the other leaked logs pointing to Grok. In both cases, the hidden system prompts instructed the models to ignore their safety guardrails and maintain a malicious persona.
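Simonovich’s point can be made concrete. A wrapper service of this kind is not a new model; it simply prepends an attacker-supplied system prompt to every user message before forwarding the conversation to a commercial LLM API. The sketch below is illustrative only, not taken from the Cato report: the payload shape assumes an OpenAI-style chat-completions format, and the function name, model string, and placeholder prompt text are all hypothetical.

```python
# Hypothetical sketch of how a "wrapper" tool repackages a public LLM API.
# The entire "custom model" is an injected system role; everything else is
# passed straight through to the upstream provider.

HIDDEN_SYSTEM_PROMPT = "<attacker-supplied jailbreak persona prompt>"  # placeholder

def build_request(user_message: str, model: str = "mixtral-8x7b") -> dict:
    """Assemble the chat payload a wrapper would send to the upstream API."""
    return {
        "model": model,
        "messages": [
            # The hidden prompt the end user never sees:
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            # The message typed into the Telegram chatbot:
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("Hello")
# payload["messages"][0] is the injected system prompt; the "new model"
# is just the upstream model plus that prompt.
```

This is why researchers could unmask the underlying models: the wrapper adds no weights of its own, so leaked logs and self-descriptions point straight back to Grok and Mixtral.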
Proven capabilities
When tested, the WormGPT variants successfully generated:
- Targeted phishing emails
- PowerShell scripts to steal Windows 11 credentials
This demonstrates that cybercriminals can exploit public LLM APIs to stand up fully operational malicious AI services.
Related malicious AI models
The WormGPT variants add to a growing list of dark-web AI tools, such as:
- FraudGPT
- EvilGPT
- DarkGPT
These models are trained or modified to assist in cybercrime, social engineering, and identity theft.
Security recommendations
To mitigate these risks, CATO advises:
- Stronger Threat Detection and Response (TDR)
- Adopting Zero Trust Network Access (ZTNA)
- Regular cybersecurity awareness training