Threat Intelligence · Featured

AI Is Now Building Zero Day Exploits. Your Digital Footprint Is the Target.

Preservers Security Team
May 13, 2026
8 min read


Google's Threat Intelligence Group just confirmed what the security community has feared for years. Generative AI has crossed from a researcher's curiosity into an industrial scale weapon. Here's what changed, why it matters, and what every organisation needs to do right now.


For years, the conversation around AI in cybersecurity has been framed as a race. Defenders get smarter tools. Attackers eventually catch up. The assumption, often unstated, was that the gap would be measured in years.

Google's latest threat intelligence report collapsed that assumption entirely.

The Threat Intelligence Group confirmed in May 2026 that adversaries are now actively using generative AI to discover software vulnerabilities and engineer working zero day exploits. Not prototypes. Not proofs of concept in research labs. Working exploits, deployed in real attacks, against real targets.

The era of AI augmented attacks is not coming. It is here.

• 85K+ real world vulnerability cases fed to attacker AI models
• A 2FA authentication bypass found by AI, invisible to traditional scanners
• 0 days between discovery and exploitation in AI assisted attacks

What actually changed

Traditional security scanners are built to catch known patterns: memory corruption, syntax errors, signature matches. They are fast and reliable against known categories of threat. Against logic flaws, they are largely blind.

Large language models think differently. They understand developer intent. They can read code the way a senior engineer reads code, contextualising assumptions, spotting mismatches between what the developer thought they wrote and what the code actually does.

The vulnerability Google's team uncovered was a 2FA bypass in a widely used open source web administration tool. The flaw was not a buffer overflow or a missing input check. It was a hardcoded trust assumption, invisible to fuzzing tools, obvious to an AI model that understood what the developer was trying to achieve.
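
Google has not published the vulnerable code, but a flaw of this class is easy to sketch. In the hypothetical Python fragment below, every function name and the X-Internal-Request header are invented for illustration; the point is that the code is syntactically flawless, so a pattern-based scanner passes it, and it fails only at the level of intent.

```python
def check_password(user: str, password: str) -> bool:
    ...  # look up the stored hash and verify it

def verify_otp(user: str, otp: str) -> bool:
    ...  # validate the time-based one-time code

def verify_login(user: str, password: str, otp: str, headers: dict) -> bool:
    if not check_password(user, password):
        return False
    # Hardcoded trust assumption: the developer believed this header could
    # only be set by trusted internal services. Nothing enforces that, so
    # any external client that sends it skips the second factor entirely.
    if headers.get("X-Internal-Request") == "true":
        return True
    return verify_otp(user, otp)
```

A fuzzer exercising this function sees only valid logins and rejections. A reviewer, human or model, who asks "what did the author assume about that header?" sees the bypass immediately.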

< div class="border-l-4 border-preservers-orangeBrand1 bg-preservers-orangeBrand1/5 p-6 my-10 rounded-r-lg italic text-lg font-bold text-preservers-orangeText" >

"Traditional scanners look for errors. AI looks for gaps between intent and implementation. That is a fundamentally different capability."

The exploit script itself bore all the hallmarks of LLM generation: highly structured Pythonic formatting, extensive educational docstrings, detailed help menus. One actor even accidentally left a hallucinated CVSS severity score in the script, a ghost in the machine that confirmed the code's origins.
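
To make those tells concrete, here is an invented fragment in the same style. Nothing in it comes from the actual script in Google's report; the docstring, the fabricated severity score, and the over-helpful argument parser are simply the kind of artefacts analysts flag as machine-authored.

```python
#!/usr/bin/env python3
"""
Exploitation utility (illustrative only).
Severity: CVSS 9.8 (Critical)  # a score the model invented out of thin
air; no such rating had ever been assigned. Artefacts like this betray
LLM authorship.
"""
import argparse

parser = argparse.ArgumentParser(
    description="Step-by-step exploitation helper with verbose guidance.")
parser.add_argument("--target", help="Base URL of the target instance")
parser.add_argument("--verbose", action="store_true",
                    help="Explain each stage as it runs")
args = parser.parse_args()
```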

Who is doing this, and how

This is not the work of lone opportunists. State sponsored hacking clusters tied to China and North Korea are investing heavily in AI augmented vulnerability research.

These groups have adapted quickly to the guardrails AI providers have built. Rather than asking models to write exploits directly, they use expert persona prompting: instructing the model to act as a senior security auditor evaluating a router file system for remote code execution vulnerabilities. The result is the same. The safety filter never fires.

  • Specialised training datasets. Groups have integrated the WooYun legacy database, over 85,000 documented real world vulnerabilities, into code analysis plugins. The AI is not guessing. It is drawing on a decade of attack history.
  • Recursive validation loops. APT45 automates thousands of sequential prompts to analyse CVEs and validate proof of concept exploits, running them against intentionally vulnerable test environments before deployment (a sketch of this loop follows the list). By the time the exploit is used in the wild, it has been machine tested thousands of times.
  • Temporal attack graphs. Adversaries use knowledge graphs to maintain a persistent map of the target's attack surface, allowing autonomous tools to pivot in real time based on what they discover. The attack reasons, adapts, and continues without human supervision.
  • Supply chain pivots. Groups like UNC6780 are targeting the software dependencies of AI environments as an entry point, compromising ML models to gain access to broader networks and trigger downstream ransomware or extortion.
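
The report does not reproduce APT45's tooling, but the shape of a recursive validation loop is simple to sketch. In the Python below, generate_candidate and run_in_sandbox are hypothetical stubs standing in for the LLM call and the vulnerable test environment; what matters is the structure, not the stubs.

```python
from dataclasses import dataclass

@dataclass
class SandboxResult:
    succeeded: bool
    log: str

def generate_candidate(cve_details: str, feedback: str = "") -> str:
    ...  # call an LLM with the CVE write-up (plus any failure log) as context

def run_in_sandbox(candidate: str) -> SandboxResult:
    ...  # execute against an intentionally vulnerable test environment

def refine_exploit(cve_details: str, max_rounds: int = 1000) -> str | None:
    """Generate, test, and refine until a candidate works. No human in the loop."""
    candidate = generate_candidate(cve_details)
    for _ in range(max_rounds):
        result = run_in_sandbox(candidate)
        if result.succeeded:
            return candidate  # machine-validated before any real deployment
        # The failure log becomes context for the next attempt.
        candidate = generate_candidate(cve_details, feedback=result.log)
    return None
```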

Malware that pilots itself

If AI assisted exploit development is the escalation, PROMPTSPY represents the logical conclusion.

This Android backdoor does not wait for a human operator. It uses the Gemini API to independently read the device's user interface, interpret system state, and execute commands autonomously. It serialises the Android screen to XML, asks the model for spatial coordinates, and simulates physical taps and swipes. It captures biometric data, replays authentication patterns, and renders an invisible shield over the Uninstall button so the victim cannot remove it.

The malware is not controlled. It reasons. It acts. It persists.

< div class="bg-yellow-50 border border-yellow-200 rounded-lg p-6 my-10" >

Also on the threat landscape

**CANFAIL** includes developer comments explicitly flagging inactive filler code, AI generated noise designed to consume analyst attention during incident response.

**LONGSTREAM** queries the system's daylight saving status repeatedly, a benign looking behaviour that hides its downloader functionality in plain sight.

**HONESTCUE** calls AI APIs at runtime to request obfuscation techniques, making it effectively a self modifying threat that changes its own signature on the fly.

Why your digital footprint is now your greatest liability

What ties all of this together is reconnaissance. Before any exploit is deployed, before any phishing lure is crafted, before any supply chain dependency is targeted, attackers map their target.

AI has made that mapping faster, deeper, and more precise than any human team could match.

Adversaries use AI to instantly chart internal hierarchies, third party relationships, technology stacks, and exposed assets. They build targeted phishing lures aimed at specific administrators or finance personnel. In information operations like Russia's Operation Overload, they deploy voice cloning to impersonate journalists and manufacture synthetic media at scale.

Everything you have ever exposed to the internet, every subdomain you forgot about, every API endpoint left open, every misconfigured cloud bucket, every credential leaked in a breach years ago, is raw material for an AI attacker building a personalised attack against you.

The question is not whether your organisation has a digital footprint. Every organisation does. The question is whether you know what yours looks like from the outside.

What organisations need to do now

The defensive response must match the offensive reality. Here is where to start.

1. Map your external attack surface

You cannot defend what you cannot see. Conduct a full inventory of every asset that is reachable from the internet, including shadow IT, forgotten test environments, and third party integrations that may have drifted beyond your visibility. Assume it is larger than you think. It almost always is.
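
One cheap place to start, as a supplement rather than a substitute for a proper inventory, is certificate transparency logs. The sketch below queries crt.sh's public JSON endpoint (real, but rate limited) with example.com as a placeholder domain. It only surfaces hostnames that ever appeared on a certificate, so treat the output as a floor, not a ceiling.

```python
import requests

def ct_subdomains(domain: str) -> set[str]:
    """Hostnames for a domain as recorded in certificate transparency logs."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names: set[str] = set()
    for entry in resp.json():
        # One certificate may cover several newline-separated names.
        for name in entry["name_value"].splitlines():
            names.add(name.strip().removeprefix("*."))
    return names

if __name__ == "__main__":
    for host in sorted(ct_subdomains("example.com")):
        print(host)
```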

2. Treat logic vulnerabilities with the same urgency as memory flaws

If your vulnerability scanning programme is built entirely around traditional tools, it will miss the class of flaw AI is now finding. Supplement automated scanning with code review processes that interrogate developer intent, not just syntax. Ask whether the code does what the developer believed it would do.

3. Audit your AI dependencies

If your environment uses ML models, LLM APIs, or AI adjacent packages, those components are now a target vector. Treat them with the same scrutiny as any other critical dependency. Monitor for unexpected behaviour, validate supply chain provenance, and patch aggressively.
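
A minimal version of provenance validation is pinning each model artifact to a digest recorded when it was vetted. The sketch below assumes a local file path and uses an obvious placeholder hash; dedicated registries and artifact signing tools do this more robustly, but the principle is the same.

```python
import hashlib
from pathlib import Path

# Placeholder digest: pin the value recorded when the artifact was first
# vetted, e.g. in your supply chain manifest or model registry.
PINNED_SHA256 = "0" * 64

def verify_model(path: Path, expected: str = PINNED_SHA256) -> None:
    """Refuse to load a model artifact whose hash has drifted from the pin."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream large files
            h.update(chunk)
    if h.hexdigest() != expected:
        raise RuntimeError(
            f"{path}: digest {h.hexdigest()} does not match pinned value; "
            "provenance cannot be confirmed"
        )

verify_model(Path("models/classifier.safetensors"))  # hypothetical path
```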

4. Assume your footprint is already mapped

Act on the assumption that a sophisticated attacker has already built a picture of your organisation from public data. Conduct red team exercises that begin not with a phishing email but with open source intelligence gathering. Understand what an attacker would find before they do.

5. Implement continuous monitoring, not point in time assessment

Your attack surface changes every time a developer pushes code, every time a vendor updates a shared service, every time someone spins up a new cloud resource. A quarterly assessment will always be chasing a moving target. Monitoring must be continuous.
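
Stripped to its core, continuous monitoring is a diff between the snapshot you just took and the one before it. A toy sketch, assuming each scheduled scan writes its asset list to a JSON file (filenames hypothetical):

```python
import json
from pathlib import Path

def diff_snapshots(previous: Path, current: Path) -> None:
    """Compare two asset-inventory snapshots stored as JSON lists of hostnames."""
    old = set(json.loads(previous.read_text()))
    new = set(json.loads(current.read_text()))
    for host in sorted(new - old):
        print(f"NEW EXPOSURE: {host}")  # triage: planned deploy or drift?
    for host in sorted(old - new):
        print(f"DISAPPEARED: {host}")   # decommissioned, or a scan gap?

diff_snapshots(Path("assets-2026-05-12.json"), Path("assets-2026-05-13.json"))
```

Real platforms add enrichment, deduplication, and alert routing on top, but any programme that cannot answer "what changed since yesterday?" is not yet continuous.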

< div class="bg-preservers-navyBrand text-white rounded-lg p-10 my-14 flex flex-col md:flex-row items-center gap-8 shadow-2xl" >

Know your attack surface before attackers do

< p class= "text-blue-100/70 font-light leading-relaxed" > Preservers maps and monitors your external digital footprint continuously, giving your security team the visibility to act before exposure becomes exploitation.

[Get started](/contact)

The bottom line

Google's report is not a warning about a future threat. It is a status update on the present one. AI powered attackers are already in the field, already finding vulnerabilities human tools miss, already deploying malware that operates autonomously.

The organisations that will weather this shift are not necessarily the ones with the largest security budgets. They are the ones with the clearest picture of what they look like from the outside, who have reduced their exposed surface to only what is necessary, and who monitor that surface continuously.

The digital footprint you have built over years is the map your attackers are reading. Securing it is not optional anymore. It is the foundation everything else is built on.

That is what Preservers is built to do. [Learn more at preservers.io](/contact)
