It Looks Like You’re Ransoming Data. Would You Like Some Help?
It’s no secret that AI tools make it easier for cybercriminals to steal sensitive data and then extort victim organizations. But two recent developments illustrate just how much LLMs lower the bar for ransomware and other financially motivated cybercrime, and give defenders a glimpse of what’s on the horizon.
ESET malware researchers Anton Cherepanov and Peter Strýček recently sounded the alarm on what they called the “first known AI-powered ransomware,” which they named PromptLock.
While it later came to light that the proof-of-concept malware had been uploaded to VirusTotal by academics – not criminals – “in theory, it could be used against organizations,” Cherepanov told The Register.
Plus, “it demonstrates that these systems are sophisticated enough to deceive security experts into thinking they’re real malware from attack groups,” said Md Raz, a doctoral candidate at NYU’s Tandon School of Engineering.
And its emergence should put defenders on notice that AI-assisted ransomware development is no longer a theoretical, future threat.
Around the same time ESET’s malware hunters spotted PromptLock, Anthropic warned that a cybercrime crew had used its Claude Code AI tool in a data extortion operation that hit 17 organizations, with the crims demanding ransoms ranging from $75,000 to $500,000 for the stolen data.
The model maker said the extortionists used Claude Code in all phases of the operation, from conducting automated reconnaissance and target discovery to exploitation and malware creation.
Anthropic responded by banning some of the offending accounts, adding a new classifier to its safety pipeline, and sharing information about the crims with partners. It’s not hard to imagine how attackers could get around these protections.
As both network defenders and cybercriminals race to incorporate AI into their arsenals, these types of threats are only going to get worse, especially as a growing number of agents enter the mix.
“We also should expect that malicious actors will soon leverage agentic AI to orchestrate and scale their criminal activities,” Cisco Talos’ head of outreach Nick Biasini told The Register. “If it is cheaper, easier, and more effective for them to spin up virtual agents that identify and contact prospective victims, they will likely do that.”
‘New era of risk’
During a Congressional hearing earlier this summer on “Artificial Intelligence and Criminal Exploitation: A New Era of Risk,” Ari Redbord, global head of policy at blockchain intelligence firm TRM Labs, testified before lawmakers that his company has documented a 456 percent jump in GenAI-enabled scams within the last year.
Right now, this includes using deepfake tech to create extortion videos and, of course, generative AI to craft more realistic phishing emails. He fears that using AI agents to auto-infect machines is next.
“What we see AI doing today is supercharging criminal activity that we’ve seen exist for some time,” Redbord told the House Judiciary Subcommittee. But in the future, he anticipates cybercrime operations that “don’t need ransomware affiliates because you can have AI agents that are automatically deploying malware.”
The Register caught up with Redbord, a former assistant US attorney, after the congressional hearing, and he warned that criminals, along with the rest of the world, are rapidly increasing the pace of their AI development.
“We’re already seeing ransomware crews experiment with AI across various parts of their operations — maybe not full autonomous agents just yet, but definitely elements of automation and synthetic content being deployed at scale,” Redbord said.
“Right now, AI is being used for phishing, social engineering, voice cloning, scripting extortion messages — tools that lower the barrier to entry and increase reach,” he continued. “While the affiliate model still dominates, the gap between traditional human-run operations and AI-augmented ones is closing fast. It’s not hard to imagine AI agents being used for reconnaissance, target selection, or even automated negotiation.”
When asked how quickly he expects to see this happen, with ransomware crews axing the middlemen (aka affiliates) and using AI agents to maximize their profits, Redbord said, “I hesitate to put a number on it.”
But, he added, “this shift feels less like a distant possibility and more like an inevitable progression.”
Ransomware operators and extortionists are already employing some of these nefarious AI use cases, according to Michelle Cantos, Google Threat Intelligence Group senior analyst.
“Agentic AI is not advanced enough yet to completely replace ransomware affiliates, but can enhance their ability to find information, craft commands, and interpret data,” Cantos told The Register. “Instead, we are seeing financially motivated actors utilizing LLMs and deepfake tools to develop malware, create phishing lure content, research and reconnaissance, and vulnerability exploitation.”
Extortion chatbots
Global Group, a new ransomware-as-a-service operation (and possible BlackLock rebrand) that emerged in June, sends its victims a ransom note directing them to a separate Tor-based negotiation portal, where an AI chatbot interacts with them.
“Once accessed, the victim is greeted by an AI-powered chatbot designed to automate communication and apply psychological pressure,” Picus Security noted in a July report. “Chat transcripts reviewed by analysts show demands reaching seven-figure sums, such as 9.5 BTC ($1 million at the time), with escalating threats of data publication.”
The AI integration in this case reduces the ransomware affiliates’ workload and moves the negotiation process forward even without human operators, thus allowing Global to scale its business model more rapidly.
Large language models can also help developers write and debug code faster, and that applies to malware development as well, LevelBlue Labs director Fernando Martinez told The Register. Martinez said common AI uses include “rewriting known malware samples into different programming languages, incorporating encryption mechanisms, and requesting explanations of how specific pieces of malicious code work.”
He pointed to FunkSec ransomware as an example. “Their tools, including Rust-based ransomware, show signs of having been written or refined with LLM agents, evident from unusually well-documented code in perfect English,” Martinez said. “FunkSec operators have reportedly provided source code to AI agents and published the generated output, enabling rapid development with minimal technical effort.”
In a similar vein, Jamie Levy, director of adversary tactics at Huntress, told The Register that she and her team recently spotted criminals using Make.com, which has a number of AI tools and features built into its no-code platform, to connect apps and APIs for financially motivated scams.
“They were heavily leveraging that to build out all of these different bots for business email compromise campaigns and other things,” Levy said.
Plus, she added, AI makes it easier to find bugs and working exploits, cutting the time it takes ransomware attackers to compromise vulnerable systems.
“There’s definitely this trend of using AI to find these things much quicker,” Levy said. “It’s kind of like a fuzzer on steroids.”
As with any type of emerging technology, if it can make their scams more scalable and believable, and thus more likely to end in a financial payout, criminals are going to find creative ways to add it to their tool chests. While AI is the next shiny, new object, it certainly won’t be the last. ®
Editor’s Note: This story has been updated to include new information from researchers at NYU Tandon School of Engineering.