From hype to harm: 78% of CISOs see AI attacks already
Sponsored feature From the written word through to gunpowder and email, whenever an enabling technology comes along, you can be sure someone will be ready to use it for evil. Most tech is dual-use, and AI is no exception.
On one side are people using it to find powerful new medicines. On the other, automatically generated phishing emails. The same is true in enterprise security: cyber criminals are using AI to produce faster, more sophisticated attacks. And because cybersecurity is a zero-sum game, security leaders must respond with equally adaptive, AI-augmented defenses to stay ahead of the risks.
In its State of AI Cybersecurity 2025 report, AI cybersecurity vendor Darktrace asked 1,500 cybersecurity professionals around the world how worried they are about AI risk. A full 74 percent say it already poses a challenge to their organizations, and around nine in ten practitioners expect that impact to persist in the medium to long term.
Generative AI is fanning the flames, particularly in social engineering attacks. In 2023, as ChatGPT gained traction, novel social engineering attacks targeting users of Darktrace’s AI-based email protection system grew 135 percent.
To spot an AI attack, look for sophistication
It isn’t clear exactly how CISOs know that the tide of AI attacks is rising. AI algorithms don’t announce themselves, after all. But the issue has attracted plenty of media attention, and stories of attackers using jailbroken or fine-tuned LLMs to craft social engineering attacks are rife. Some attack toolkits now come with their own chat assistants. AI-powered malware, along with AI-assisted lateral movement tactics, is also reportedly on the rise.
Looking for AI attacks is a little like searching for black holes. You can’t see them directly, but you can infer their existence from their effect on the surrounding environment.
“You might face increasing sophistication in phishing attempts or in attacks that are targeting you, or in the types of malware that you’re reading about or seeing yourself,” says Hanah-Marie Darley, director of security and AI strategy at Darktrace. “Quite often, it will be very difficult, apart from that increase in sophistication, to say with certainty whether AI was involved.”
What we do know is that intelligence agencies are worried enough to warn about AI-driven attacks. At the RSA Conference this year, the FBI said that China is using AI to hone its attack chains.
Where’s the workforce?
While AI-powered attackers are shooting to score, many security pros are still lacing up their boots. This year, 45 percent of survey participants said they don’t feel prepared for what’s coming. While that’s down from 60 percent last year, it’s still not great, and only 17 percent feel very prepared.
Cybersecurity skills were a point of some contention in the Darktrace report. It found that the biggest barrier to preparing for cybersecurity AI-mageddon was a lack of personnel. There just aren’t enough people to manage the torrent of alerts produced by the average organization’s cybersecurity tooling. Over seven in ten organizations surveyed reported at least one unfilled cybersecurity position.
But don’t worry. We can just throw more wet-behind-the-ears graduates at the security operations center (SOC) to solve the problem. Right? Well, there’s the rub: according to the data, companies aren’t even trying. Hiring more staff was the survey base’s lowest priority for the next 12 months, at just 11 percent.
Darley also believes that cybersecurity roles tend to chew through a lot of people because they’re so intense.
“If you’re not in an incident, you’re looking for one,” she says. “In psychology terms, we would call that a state of polycrisis. So you’re in back-to-back crises, which means that you’re almost always in a stress state.” Companies might be finding it so difficult to hire and retain the right people that they’ve thrown in the towel.
How defenders are responding
Regardless of why businesses aren’t investing in staff, this failure to cross the skills chasm leaves a gap. It’s one that they believe AI-powered cybersecurity solutions can fill. A full 95 percent of respondents believe AI can improve the speed and efficiency of their cyber defenses, and 88 percent are already seeing significant time savings from AI solutions of one type or another.
This doesn’t mean that companies don’t have reservations about AI. The kind of data that AI solutions analyze is sensitive, which is why 82 percent of respondents were intent on AI solutions that do not require external data sharing. That reflects growing concerns about model training leaks, AI governance, and compliance with regulations like GDPR and the EU AI Act.
Organizations might know what they want, but this doesn’t mean that they understand it entirely. The Darktrace research found that only 42 percent of respondents know exactly what types of AI are used in their cybersecurity stack. To some extent, it’s understandable that they just want AI to produce the result without knowing all about it. After all, you don’t necessarily need to know how a car engine works for it to get you to the office.
However, you do want the right car engine for the job. A V8 engine isn’t the right choice for a commute through London’s crowded streets, for example. In the same way, understanding the different AI types and the tasks they suit helps ensure you use them in the right way for defense. Perhaps that’s why just 73 percent feel confident in their team’s ability to use AI-powered tools effectively.
Unfortunately, many respondents to the survey overestimate generative AI’s role in cybersecurity, possibly because they conflate its transformer-based LLMs with more classic types of AI. Almost two-thirds believe their cybersecurity tools use only or mostly generative AI, though this probably isn’t true. The confusion is understandable, because both use neural networks. However, the underlying mechanics and capabilities of the two approaches differ.
Organizations might not always realize that generative AI isn’t what they need, but they definitely know what results they’re looking for from AI. They’re fed up with tools that only react to cybersecurity threats after the fact, with 88 percent stating that AI helps them adopt a more preventative defense stance.
Another common ask is to replace point solutions with integrated cybersecurity platforms; 89 percent prefer the latter. A lack of interoperability often leaves point solutions pieced together with chewing gum and sticky tape. SOC staff might exchange data between them manually or frantically throw together scripts to try to automate things. That’s not the scenario you want as an incident response team battling a fast-moving attack.
Integrating AI security for broad, streamlined protection
The lack of awareness around the precise mechanics of AI security technologies and the drive to integrate security solutions have something important in common: the need for simplicity. Businesses don’t need to know exactly how something works, and they don’t need to see how point products bolt together either. They really just want something that protects them as simply and effectively as possible.
“The best AI solutions come from really understanding the problem that you’re trying to solve and then choosing the right technique,” says Darley. “That doesn’t always mean adding more complexity.”
She describes the ideal solution as multi-layered, using a range of techniques and AI models to counter a series of discrete threats. That combination offers ubiquitous protection.
This is Darktrace’s unique selling proposition. The Darktrace ActiveAI Security Platform uses a mixture of supervised, unsupervised, and statistical machine learning models, integrated into its Self-Learning AI engine. The engine detects potential threats while also looking for weaknesses in cybersecurity controls before attackers can exploit them. For example, a recently introduced firewall rule analysis feature helps seal loopholes to stave off intruders.
The Darktrace platform correlates and investigates security incidents across multiple environments and applications, ranging from cloud computing instances through to email systems, networks, endpoints, and operational systems.
The various AI models enable it to excel at novel threat detection in ways that more traditional solutions can’t, spotting not just telltale signatures or known suspicious behaviors but also deviations from baseline norms. The latter could indicate genuine threats that haven’t yet been seen in the wild.
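To make the baseline-deviation idea concrete, here is a minimal, hypothetical sketch of the general technique, assuming a simple statistical baseline over a single metric (daily outbound traffic per device). It illustrates the principle only; it is not Darktrace’s implementation, and all names and numbers are invented for illustration.

```python
# Minimal sketch of baseline-deviation anomaly detection (illustrative only,
# not Darktrace's implementation). It learns a per-device statistical norm
# and flags observations that stray too far from it.
import numpy as np

def fit_baseline(history: np.ndarray) -> tuple[float, float]:
    """Learn a simple baseline (mean, standard deviation) from past behavior."""
    return float(history.mean()), float(history.std())

def is_anomalous(value: float, mean: float, std: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the baseline."""
    if std == 0.0:
        return value != mean
    return abs(value - mean) / std > threshold

# Hypothetical data: daily outbound traffic (MB) for one workstation.
history = np.array([120.0, 95.0, 110.0, 130.0, 105.0, 115.0, 125.0])
mean, std = fit_baseline(history)

print(is_anomalous(118.0, mean, std))  # False: within the learned norm
print(is_anomalous(980.0, mean, std))  # True: sudden spike, worth investigating
```

The appeal over signature matching is that nothing here needs to have seen the attack before: anything sufficiently unusual for that device gets surfaced, which is how a previously unseen threat can still trip an alarm.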
The multi-layered AI system also enables Darktrace to react to these threats with a level of automation set by the user. Those who want complete control can rely on Darktrace’s Cyber AI Analyst capability to triage alerts and focus only on the meaningful ones while providing valuable context for human analysts. Those who want a more hands-off approach can switch on autonomous security functions that enable things like automatic quarantining.
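That user-set automation level can be pictured as a graduated response policy. The following sketch is hypothetical (none of these names correspond to Darktrace’s actual product or API); it simply shows how the same detection might trigger alert-only triage or autonomous quarantine depending on a configured mode.

```python
# Hypothetical graduated-response policy (names invented for illustration,
# not Darktrace's API): the same detection is handled differently depending
# on how much autonomy the user has granted.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    ALERT_ONLY = "alert_only"    # triage and notify; a human decides
    AUTONOMOUS = "autonomous"    # act immediately without waiting

@dataclass
class Detection:
    device: str
    severity: int  # 0-100 risk score from the detection layer

def respond(detection: Detection, mode: Mode, quarantine_threshold: int = 80) -> str:
    """Decide what to do with a detection under the configured automation mode."""
    if detection.severity < quarantine_threshold:
        return f"log and enrich context for analysts: {detection.device}"
    if mode is Mode.AUTONOMOUS:
        return f"auto-quarantine {detection.device}"
    return f"high-priority alert for {detection.device}; awaiting analyst approval"

print(respond(Detection("laptop-042", severity=92), Mode.ALERT_ONLY))
print(respond(Detection("laptop-042", severity=92), Mode.AUTONOMOUS))
```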
The Darktrace report paints a picture of security professionals in a game of blind man’s bluff: blindfolded, unable to see exactly which attacks are AI-powered or where they are coming from, but painfully aware that attackers are lurking just out of view, waiting to strike. As threat actors become more adept with the technology, defenders must move quickly to match their pace and harden their defenses.
Sponsored by Darktrace.