
Inside the AI Cybersecurity Arms Race


Artificial intelligence is transforming cybersecurity at breathtaking speed, empowering defenders to spot threats in seconds while helping attackers automate deception at scale. Every algorithm that learns to protect can also be turned to exploit. The result is an escalating AI arms race where speed, context and adaptation define who wins. For security leaders, the challenge is not whether to use AI but how to stay ahead of adversaries who already do.

While headlines often focus on AI-powered attacks, the same technology is quietly reshaping defense. Security teams are using machine learning to detect anomalies in real time, automate response and predict where the next breach might occur. These advances are helping close long-standing visibility gaps and proving that, in the right hands, AI can be a force multiplier for protection.

AI for Defense: Smarter, Faster and More Scalable

The greatest advantage of AI in cybersecurity is its ability to process vast amounts of telemetry and act instantly on insights that would overwhelm human analysts. From malware detection to autonomous response, defenders are using AI to reduce dwell time, eliminate blind spots and strengthen control at scale.

1- ML-Based Real-Time Detection

What happened: Multiple case writeups describe how multi-layered ML/DNN classifiers in cloud services identified and blocked previously unseen ransomware and worm families. Binary classifiers, which return a clean-or-malicious verdict (0 or 1), are used alongside multi-class classifiers, which return probabilities of specific malicious behaviors. The platforms combined many different datasets and features for training, and unusual behavior triggers an alert so that “patient zero” machines can be protected in near real time.

  • Why it’s notable: It is an early, prominent production example of ML reducing successful first-stage infections, especially in the cloud.
  • Defender Insight: Multi-layered telemetry analysis plus cloud-hosted ML models can block highly polymorphic threats even before signatures exist; a minimal sketch of the two-model approach follows below.
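
To make the two-model idea concrete, here is a minimal sketch in Python using scikit-learn on synthetic data. The feature values, labels and model choices are all illustrative assumptions, not any vendor’s production pipeline: a binary classifier gates the verdict, and a multi-class model attaches per-family probabilities.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))                  # stand-in telemetry features
    y_binary = (X[:, 0] + X[:, 1] > 0).astype(int)   # 0 = clean, 1 = malicious
    y_family = rng.integers(1, 4, size=1000)         # hypothetical family labels
    y_family[y_binary == 0] = 0                      # clean samples share label 0

    binary_clf = LogisticRegression().fit(X, y_binary)      # verdict model
    family_clf = RandomForestClassifier(n_estimators=50).fit(X, y_family)

    def classify(sample):
        """Gate on the binary verdict, then attach family probabilities."""
        if binary_clf.predict(sample.reshape(1, -1))[0] == 0:
            return "clean", None
        probs = family_clf.predict_proba(sample.reshape(1, -1))[0]
        return "malicious", dict(zip(family_clf.classes_, probs))

    print(classify(X[0]))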

2- CyberSentinel: Multi-Stream ML Detection

What happened: CyberSentinel is an emerging threat detection system that merges Secure Shell (SSH) log analysis, phishing domain/URL scoring and anomaly detection with ML to detect new threats in real time.

  • Why it’s notable: Combining different data sources catches previously unseen or odd behavior that rules and signatures might miss, because the same activity is analyzed from several perspectives.
  • Defender Insight: Fusing multiple diverse data streams with ML can surface hidden threats that signature-based tools overlook, as the sketch below illustrates.
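
As a rough illustration of multi-stream detection, the sketch below concatenates hypothetical per-host features from different sources (SSH logs, URL reputation, traffic volume) and scores them with an unsupervised anomaly detector. This is not CyberSentinel’s actual architecture; the feature names and values are assumptions for demonstration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    # Per-host feature vector: [failed SSH logins/min, unique source IPs,
    # mean phishing score of visited URLs, ratio of bytes out to bytes in]
    baseline = rng.normal(loc=[2.0, 3.0, 0.1, 1.0], scale=0.5, size=(500, 4))
    detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    # A host showing SSH brute force, risky URLs and heavy outbound traffic
    suspicious = np.array([[40.0, 25.0, 0.9, 8.0]])
    print(detector.predict(suspicious))   # -1 means the detector flags it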

3- Agentic AI in Security Operations

What happened: Many security vendors are rolling out “agentic AI”: systems that act autonomously rather than just respond to prompts. These agents can triage alerts, respond to threats and handle routine security tasks without human intervention.

  • Why it’s notable: Agentic systems reduce the burden on security teams by speeding up response times and handling repetitive work.
  • Defender Insight: Offload high-volume tasks like phishing detection and incident triage, freeing humans to focus on advanced threats and on the careful testing, monitoring and training of the agents themselves (see the sketch below).
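
The control flow of such an agent can be sketched in a few lines. In the toy example below, a stub policy stands in for the LLM or SOAR platform a real agent would call; the severities, actions and names are assumptions chosen only to show the autonomous-versus-escalate split.

    from dataclasses import dataclass

    @dataclass
    class Alert:
        source: str
        severity: str   # "low" | "medium" | "high"
        details: str

    def triage(alert: Alert) -> str:
        """Agent policy: handle routine alerts autonomously, escalate the rest."""
        if alert.severity == "low":
            return f"auto-closed ({alert.details})"
        if alert.severity == "medium":
            return f"auto-contained: host quarantined, ticket opened ({alert.source})"
        return f"escalated to human analyst: {alert.details}"

    queue = [
        Alert("edr", "low", "signed binary, known-good hash"),
        Alert("email", "medium", "credential-phishing link clicked"),
        Alert("idp", "high", "impossible-travel login on an admin account"),
    ]
    for a in queue:
        print(triage(a))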

4- Peregrine Research

What happened: A research project built an ML-based malicious traffic detector for very high-speed (terabit) networks. The unique approach is to run part of the detection process in the network data plane (e.g. on switches) to offload feature computation. The Peregrine switch computes a diverse set of features per packet at Tbps line rates, so the full traffic stream is analyzed efficiently rather than merely sampled.

  • Why it’s notable: It scales ML detection so that even high-volume networks can be monitored in near real time, catching malicious flows without dropping data or missing traffic due to sampling. This reduces blind spots in large networks.
  • Defender Insight: Pushing feature computation into the data plane extends ML detection into ultra-high-speed networks without sacrificing visibility; the split is sketched below.
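
Peregrine’s actual detector runs in P4 switch hardware; the Python below only illustrates the division of labor it describes: constant-time per-packet feature updates (the switch’s job) feeding a classifier over aggregated features (the control plane’s job). The feature set and the stand-in decision rule are assumptions.

    from collections import defaultdict

    # Running per-flow statistics, updated once per packet (the "switch" side).
    flows = defaultdict(lambda: {"pkts": 0, "mean": 0.0, "m2": 0.0})

    def update_features(flow_id, pkt_len):
        """O(1) per-packet update using Welford's online mean/variance."""
        f = flows[flow_id]
        f["pkts"] += 1
        delta = pkt_len - f["mean"]
        f["mean"] += delta / f["pkts"]
        f["m2"] += delta * (pkt_len - f["mean"])

    def classify(flow_id):
        """Inference over aggregated features (the control-plane side)."""
        f = flows[flow_id]
        variance = f["m2"] / f["pkts"]
        # Stand-in for a trained model: flag sustained, uniform-size flows.
        return "suspicious" if f["pkts"] > 3 and variance < 1.0 else "benign"

    flow = ("10.0.0.1", "10.0.0.2", 443)
    for pkt_len in [1500, 1500, 1500, 1500]:   # e.g. a flood of identical packets
        update_features(flow, pkt_len)
    print(classify(flow))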

Together, these examples illustrate how defenders are moving beyond static rules. Adaptive, self-learning systems now correlate identity, device posture and application context to make enforcement decisions in the moment. The result is defense that moves at the speed of data. Yet every defensive innovation prompts an equally creative response.

AI for Offense: Smarter, Faster and More Dangerous

The same qualities that make AI indispensable to defenders also make it dangerous in the wrong hands. Adversaries are using AI to scale operations, personalize attacks and bypass traditional controls with unprecedented precision.

1- Deepfake and Voice Fraud - Source: Forbes

What happened: Attackers used AI-cloned audio of a German parent-company executive to trick the head of a UK subsidiary into transferring ~€220,000 to a fraudulent account. This became the early poster child for deepfake-enabled fraud, combining voice phishing (vishing) with business email compromise (BEC).

  • Why it’s notable: It moved synthetic audio scams from theory to real, six-figure fraud.
  • Defender Insight: Out-of-band verification and strict payment controls for high-value transfers are essential.  

2- £20M Deepfake Video-Call Fraud (2024) - Source: The Guardian

What happened: In February 2024, an Arup employee was tricked into transferring HK$200m (~£20m) after joining a convincing AI-generated video call that impersonated several company officers. The company confirmed that the attack used sophisticated impersonation techniques.

  • Why it’s notable: It shows deepfake videos can scale fraud to tens of millions and target corporate finance.
  • Defender Insight: Strengthen multi-factor verification for executive requests and require cryptographic or out-of-band checks for large transfers, as sketched below.
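
One way to make the cryptographic check concrete: bind an approval code to the exact transfer details using a key that lives only on the approver’s separate device. The sketch below is a hypothetical illustration (the key, fields and code length are assumptions); a deepfaked caller who does not hold the key cannot produce a valid code.

    import hashlib
    import hmac

    # Key provisioned out of band, e.g. on the approver's hardware token.
    SHARED_KEY = b"provisioned-on-approver-device"

    def approval_code(beneficiary: str, amount: str, currency: str) -> str:
        """Derive a short code bound to the exact transfer details."""
        msg = f"{beneficiary}|{amount}|{currency}".encode()
        return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()[:8]

    # The approver's device shows the code for the request it actually received;
    # finance releases funds only if the caller's code matches byte for byte.
    expected = approval_code("ACME Ltd", "20000000", "HKD")
    submitted = approval_code("ACME Ltd", "20000000", "HKD")
    print(hmac.compare_digest(expected, submitted))   # True only for same details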

3- LLM-Assisted Scam Operations - Source: Reuters

What happened: Investigative reporting (Reuters and others) documented criminal operations, including scam compounds in Southeast Asia and other hubs, where operators used ChatGPT and other LLMs to draft convincing, localized romance and investment scam messages, engage and respond to victims at scale, and train workers to maintain consistency and speed.

  • Why it’s notable: Human-in-the-loop criminal enterprises are leveraging LLMs to massively scale social engineering.
  • Defender Insight: Expect higher volumes of timely, local-language and contextually personalized lures. Detection must combine behavioral signals with content analysis, paying particular attention to anomalies.

4- Malicious LLMs on the Dark Web - Source: Barracuda Networks Blog

What happened: Threat actors now market malicious LLM variants such as FraudGPT, WormGPT and PoisonGPT on underground forums. These models automate phishing kits, malware generation and social-engineering templates; some include plug-ins for cryptocurrency theft or ransomware note creation. Threat reporting and advisories, including industry blogs, vendor research and Internet Crime Complaint Center (IC3) notices, document the rise of these variants being sold or leaked on criminal forums and the dark web, and criminals embedding AI into scam landing pages and chatbots.

  • Why it’s notable: It lowers the expertise barrier: low-skill attackers can simply purchase or subscribe to specialized, sophisticated malicious LLM tools.
  • Defender Insight: Legal/regulatory responses plus identification and disruption of dark web marketplaces are important. Defenders should monitor these tool ecosystems for new IOCs and tactics.  

The AI Arms Race: Adaptation on Both Sides

The cybersecurity battlefield has become a continuous feedback loop of learning and counter-learning. Attackers probe defensive models to find weaknesses, while defenders retrain those models to close the gaps. Each breakthrough provokes an equal and opposite countermeasure.

1- Reinforcement Learning for Evasion

What happened: In a 2025 proof of concept, researchers at Outflank used reinforcement learning to train an open-source LLM (Qwen 2.5) to create payloads that bypassed Microsoft Defender about eight percent of the time after three months of training. The model learned which payload variations triggered alerts and adjusted accordingly.

  • Why it’s notable: It demonstrates a practical path where generative models + RL can tune payloads to evade a major endpoint product — even if current success rates are limited.
  • Defender insight: Adversaries can now use RL to automatically evolve malware. Defensive models must be retrained frequently, and telemetry sources should be diversified to reduce predictability. The toy loop below shows the feedback mechanism at work.
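
The feedback loop itself is simple to sketch. In the deliberately harmless toy below, the “payload” is a string, the “detector” is a stub, and variants that slip past it are kept as the basis for further mutation; the real research replaces these stand-ins with an LLM generator and an actual endpoint product.

    import random

    random.seed(0)

    def mock_detector(candidate: str) -> bool:
        """Stand-in for an EDR verdict: flags candidates containing 'evil'."""
        return "evil" in candidate

    def mutate(candidate: str) -> str:
        """Random single-character mutation of the candidate."""
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice("abcdefg") + candidate[i + 1:]

    population = ["evilpayload"]
    for _ in range(50):
        variant = mutate(random.choice(population))
        if not mock_detector(variant):     # reward: the detector missed it
            population.append(variant)

    print(f"{len(population) - 1} undetected variants after 50 rounds")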

2- Polymorphic AI Malware Demos (2024–2025) - Source: CardinalOps

What happened: Security researchers and vendors published proofs of concept showing that AI can generate polymorphic payloads that change structure while preserving functionality, making signature detection harder. Polymorphic malware continually rewrites its own code to evade detection; AI is used both to generate the variants and to analyze target behavior so their distribution can be tailored, increasing effectiveness.

  • Why it’s notable: It shows attackers can automate the generation of many unique variants that defeat signature-based detection.
  • Defender insight: Behavioral, memory and telemetry-driven analysis is critical. Static signatures or hashes alone cannot keep pace with adaptive malware, as the example below makes plain.
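
A two-line demonstration of the problem: the snippets below behave identically, yet a trivial rewrite gives them completely different hashes, so any hash- or signature-based match on one variant says nothing about the next.

    import hashlib

    variant_a = b"total = 0\nfor x in data: total += x\n"
    variant_b = b"total = sum(data)  # same behavior, different bytes\n"

    print(hashlib.sha256(variant_a).hexdigest())
    print(hashlib.sha256(variant_b).hexdigest())   # shares nothing with the first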

3- Academic and Applied Adversarial ML Research (evasion & poisoning) - Source: ScienceDirect

What happened: Years of research demonstrate practical evasion strategies (adversarial examples, poisoning, and fragmented/distributed payloads) that can cause ML classifiers to mislabel malicious files as benign if the attacker crafts inputs to exploit model weaknesses. Recent surveys/papers review these techniques and their implications for malware analysis.

  • Why it’s notable: These techniques are not just theoretical; they inform real attacker approaches and vendor countermeasures.
  • Defender insight: Defenders should build ensemble models with adversarial robustness and conduct regular red-team testing against their ML pipelines, along the lines sketched below.
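
A minimal version of such a red-team test, on synthetic data with an assumed perturbation budget: measure how often small, attacker-feasible changes to malicious samples flip the classifier’s verdict to benign.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 8))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)        # 1 = malicious (synthetic)
    clf = LogisticRegression().fit(X, y)

    malicious = X[y == 1]
    baseline = clf.predict(malicious)
    noisy = malicious + rng.normal(scale=0.3, size=malicious.shape)
    perturbed = clf.predict(noisy)

    flip_rate = np.mean((baseline == 1) & (perturbed == 0))
    print(f"verdicts flipped to benign by small perturbations: {flip_rate:.1%}")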

4- Generative AI Jailbreak Attempts - Source: TechRadar

What happened: State-linked or criminal actors have used prompts and jailbreaking techniques to coax AI into producing prohibited outputs (fake IDs, credential templates, social-engineering scripts). Recent reporting documents North Korean threat actors using ChatGPT to generate fake IDs.

  • Why it’s notable: It demonstrates that criminals can exploit model weaknesses and prompt engineering to generate realistic artifacts used in targeting.
  • Defender Insight: AI providers must harden prompts/filters; organizations must treat AI-generated materials as potential attack artifacts.

The result of these dynamics is a true arms race. Attackers and defenders are co-evolving systems that learn from each other’s adaptations. In this environment, speed and continuous learning are the only sustainable advantages.

Actionable Recommendations for Organizations

Staying ahead in the AI arms race requires a layered and adaptive approach that combines technology, governance and human expertise.

Governance and Controls

  • Require multi-person approval for high-value or urgent financial transactions.
  • Enforce out-of-band verification for executive requests and payment authorizations.
  • Implement zero trust architecture so every user, device and request is continuously validated.
  • Why it matters: Attackers now exploit synthetic identities and real-time voice impersonation. Strong process controls are the last line of defense; a dual-control check can be as simple as the sketch below.
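
For instance, a multi-person release check reduces to a few lines of logic. The threshold, roles and approver count below are illustrative assumptions:

    APPROVAL_THRESHOLD = 50_000   # above this, dual control applies
    REQUIRED_APPROVERS = 2

    def may_release(amount: float, approvers: set) -> bool:
        """Release funds only with enough distinct approvers for the amount."""
        needed = REQUIRED_APPROVERS if amount >= APPROVAL_THRESHOLD else 1
        return len(approvers) >= needed

    print(may_release(200_000, {"cfo"}))                 # False: one approver
    print(may_release(200_000, {"cfo", "controller"}))   # True: dual control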

Detection and Response

  • Deploy behavioral and anomaly-based AI models that learn typical activity patterns.
  • Automate quarantine and triage workflows to cut dwell time from days to minutes.
  • Regularly red-team your ML models using adversarial inputs to uncover weaknesses.
  • Why it matters: Modern threats evolve faster than static defenses. Continuous retraining and automation reduce exposure time and analyst workload; the routing sketch below shows the idea.
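
As a sketch of the quarantine automation (the EDR client here is a stub, not a real product API): high-confidence detections trigger containment at machine speed, while lower-confidence ones wait for an analyst.

    class StubEDRClient:
        """Stands in for a real endpoint-management API."""
        def isolate_host(self, host: str) -> None:
            print(f"[edr] network-isolated {host}")

    def on_detection(edr: StubEDRClient, host: str, score: float) -> str:
        """Route by model confidence: automate clear cases, queue the rest."""
        if score >= 0.9:
            edr.isolate_host(host)        # machine-speed containment
            return "quarantined, P1 ticket opened"
        return "queued for analyst review"

    edr = StubEDRClient()
    print(on_detection(edr, "win10-3421", 0.97))
    print(on_detection(edr, "mac-0042", 0.55))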

People and Process

  • Train employees to identify AI-crafted phishing messages, voice cloning, and video manipulation.
  • Establish escalation paths when employees suspect manipulated media or anomalous requests.
  • Keep humans in the decision loop for actions involving data destruction or credential revocation.
  • Why it matters: Awareness and oversight counterbalance automation. Human intuition still catches subtle context that algorithms may miss.

Continuous Adaptation

  • Monitor dark-web forums and criminal marketplaces for emerging malicious AI tools.
  • Share intelligence across industry groups and collaborate on threat indicators.
  • Evaluate vendors based on their ability to update AI models and explain decision logic.
  • Why it matters: Staying connected to evolving threats ensures defenses adapt as quickly as attackers innovate.

Looking Ahead

AI will not end cybercrime, but it will redefine it. The organizations that thrive will be those that recognize AI as both weapon and shield, combining automated precision with human judgment. The future of cybersecurity will not be decided by who has the most advanced models but by who adapts the fastest.

By pairing intelligent automation with explainable, self-aware defense, organizations can turn AI’s double-edged nature into an advantage and maintain confidence in their ability to protect data wherever it moves.

About the Author

Jyotika Singh is a Security Researcher on the X-Labs Threat Research Team. She specializes in web security, malware analysis and emerging cyber threats, with a focus on identifying and mitigating evolving attack techniques. Her work aims to enhance proactive defense strategies and contribute to advancing cybersecurity knowledge.
