LONDON — A growing body of evidence suggests that cybercriminals are increasingly leveraging commercial artificial intelligence models to identify and exploit zero-day vulnerabilities in widely used software. The trend marks a significant escalation in the arms race between attackers and defenders, with AI enabling faster, more precise, and more scalable attacks.
According to a report published Tuesday by the cybersecurity firm DarkTrace, threat actors have begun using large language models (LLMs) and other AI systems to automate the discovery of unknown security flaws. Unlike traditional methods that rely on manual code review or fuzzing, these AI-driven approaches can scan vast repositories of code and pinpoint weaknesses in a fraction of the time.
“We are witnessing a paradigm shift,” said Dr. Emily Hart, chief technology officer at DarkTrace. “Attackers no longer need to be highly skilled programmers; they can simply query an AI model to generate exploit code or identify potential vulnerabilities in real time.”
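To make the workflow DarkTrace describes concrete, it can be sketched in a few lines of Python. The snippet below is illustrative only: it walks a source tree, slices each file into windows, and hands every window to a language model for review. The query_model function, the prompt, and the target path are hypothetical placeholders, not any vendor's actual API.

```python
# Illustrative sketch of AI-assisted code review as described in the
# report: walk a source tree, chunk each file, and ask a model to flag
# weaknesses. query_model() is a hypothetical placeholder, not a real API.
from pathlib import Path

CHUNK_LINES = 80  # models have finite context, so review code in windows

PROMPT = (
    "You are a security reviewer. List any memory-safety, injection, "
    "or authentication weaknesses in the following code:\n\n{code}"
)

def query_model(prompt: str) -> str:
    """Stand-in for a call to a hosted or locally run language model."""
    return "(model findings would appear here)"

def scan_repository(root: str) -> None:
    # Only C sources here for brevity; a real tool would cover more types.
    for path in Path(root).rglob("*.c"):
        lines = path.read_text(errors="ignore").splitlines()
        for start in range(0, len(lines), CHUNK_LINES):
            chunk = "\n".join(lines[start:start + CHUNK_LINES])
            print(f"{path}:{start + 1}")
            print(query_model(PROMPT.format(code=chunk)))

if __name__ == "__main__":
    scan_repository("./target-project")  # hypothetical path
```

The point is less the specifics than the economics: a loop like this costs pennies per file and demands no security expertise from whoever runs it.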
The report highlights several instances where commercial AI platforms, including GPT-4 and Claude, were used to craft sophisticated phishing emails, generate malicious scripts, and even reverse-engineer security patches to uncover the underlying vulnerabilities. In one case, an AI model produced a working exploit for a popular cloud storage service within hours of the vendor releasing a patch, giving attackers a head start against any system that had not yet applied the fix.
“The attackers are using AI to close the window between patch release and exploitation,” explained James Whitfield, a security researcher at the European Cyber Security Agency. “Previously, that window could be weeks or months. Now it’s down to days.”
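Reverse-engineering a patch typically starts with a diff of the code before and after the fix, since the changed lines point directly at the flaw. A minimal sketch using Python's standard difflib module follows; the file names are hypothetical placeholders.

```python
# Minimal patch-diffing sketch: compare pre- and post-patch source files
# to surface exactly what a vendor changed, the usual first step in
# reconstructing the underlying flaw. File names below are hypothetical.
import difflib
from pathlib import Path

def diff_patch(before: str, after: str) -> None:
    old = Path(before).read_text().splitlines(keepends=True)
    new = Path(after).read_text().splitlines(keepends=True)
    # A unified diff pinpoints the lines the security fix touched.
    for line in difflib.unified_diff(old, new, fromfile=before, tofile=after):
        print(line, end="")

if __name__ == "__main__":
    diff_patch("storage_service_v1.0.c", "storage_service_v1.1.c")
```

Binary-only patches require heavier disassembly-based tooling, but the principle is the same, and it is exactly this triage step that AI models now accelerate.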
The exploitation of commercial AI models raises serious questions about the responsibility of technology providers. While most AI companies have implemented usage policies to prevent malicious activities, enforcement remains challenging. Many attackers simply bypass restrictions by using jailbreak techniques or running models locally.
In response, some AI firms have begun incorporating more robust safeguards. OpenAI, for instance, has introduced monitoring systems that flag suspicious queries and has restricted its models' ability to generate certain kinds of executable code. However, experts argue that these measures are insufficient. “The genie is out of the bottle,” said Dr. Hart. “Even if we lock down commercial models, open-source alternatives are readily available.”
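Whatever its limits, query-level monitoring of the kind described above can be illustrated simply. The toy filter below scores prompts against known-bad patterns; the patterns and threshold are invented for this article and bear no relation to any provider's actual safeguards.

```python
# Toy query monitor: score incoming prompts against simple heuristics
# and flag likely exploit-development requests for human review. The
# patterns and threshold are invented for illustration only.
import re

SUSPICIOUS_PATTERNS = [
    r"\bshellcode\b",
    r"\breverse shell\b",
    r"\bbypass (?:auth|authentication|sandbox)\b",
    r"\bzero[- ]day\b",
    r"\bexploit\b.*\bCVE-\d{4}-\d+\b",
]

def risk_score(prompt: str) -> int:
    """Count how many heuristic patterns the prompt matches."""
    return sum(
        1 for pat in SUSPICIOUS_PATTERNS
        if re.search(pat, prompt, re.IGNORECASE)
    )

def should_flag(prompt: str, threshold: int = 2) -> bool:
    """Flag prompts that trip two or more heuristics."""
    return risk_score(prompt) >= threshold

if __name__ == "__main__":
    demo = "Write shellcode for a reverse shell that can bypass auth checks"
    print(should_flag(demo))  # True: matches three patterns
```

Trivial rephrasing defeats a filter this naive, which is one reason the experts quoted here treat such safeguards as necessary but insufficient.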
Regulators are taking note. The European Union’s proposed Artificial Intelligence Act includes provisions for risk assessments and transparency requirements, but enforcement remains a distant prospect. Meanwhile, the National Cyber Security Centre in the UK has urged organizations to adopt AI-powered defense tools to keep pace with the threat.
The implications for businesses and governments are profound. Zero-day vulnerabilities can be used to infiltrate critical infrastructure, steal sensitive data, or launch ransomware attacks. As AI lowers the barrier to entry, the number of potential attackers—and attacks—is likely to surge.
“We are entering an era where every script kiddie has access to a superhuman vulnerability hunter,” warned Whitfield. “The security community must evolve or be overwhelmed.”
For now, the race is on. Cybersecurity firms are investing heavily in AI-driven detection systems, and tech giants are collaborating on shared threat intelligence. But as the DarkTrace report makes clear, the attackers are already ahead.