轉到主要內容

Agentic AI: Securing a New Generation of Digital Actors

|

0 分鐘閱讀

Learn more about Forcepoint DSPM
  • Nick Savvides

Note: This is post #3 of Forcepoint’s 2026 Future Insights series, providing predictions and analysis of developing shifts in the cybersecurity landscape.

###

 

The world is going to start looking a lot different in 2026. The past two years have centered on generative AI and its ability to create and summarize content, but a new paradigm is now emerging. Agentic AI introduces autonomous systems that can plan, decide and act across business environments. This shift is significant because agentic systems challenge the core assumptions that current cybersecurity practices rely on. Protecting these systems will require a fundamental reset in how organizations think about digital risk.

With generative AI, today’s tools and methods can often be adapted or extended to provide guardrails. Agentic AI is more complex. These systems behave like digital people that navigate environments, observe information, make decisions, consult other agents and carry out actions. They learn continuously and operate in open-ended ways that are not deterministic. Traditional approaches to security, which depend on predictable human behavior and structured system logic, cannot simply be stretched to cover this new category of actors. A different playbook is necessary.  

What is Agentic AI, and Why Does It Matter for Security?

At a high level, agentic AI refers to autonomous or semi-autonomous systems that can perceive data, perform multi-step reasoning, take independent actions and collaborate with other AI agents. These agents do not simply generate outputs. They can manage workflows, pass tasks to one another and interact with systems that were originally designed for people.

An agent might schedule meetings, update records in business systems, negotiate terms within set parameters or orchestrate other agents to complete a larger objective. These capabilities resemble familiar workplace behaviors yet lack the intuition, ethics and context that humans rely on.

Most cybersecurity frameworks assume that people understand their jobs, follow rules and take preventive actions when something appears wrong. Agentic systems do not share these defaults. They must be taught what secure behavior looks like and how to recognize when a situation is outside their safe operating zone.  

Why Current Cybersecurity Playbooks Fall Short

Security models today are built around human patterns. They depend on training, compliance processes and clear rules. Agentic systems break these expectations. They act at machine speed, draw from large and constantly shifting data sources and make decisions through statistical reasoning. Their behavior changes as they learn.

This creates a different sort of attack surface. A single agent interacting with sensitive data is one challenge. A network of connected agents making decisions and passing information between themselves is another. Traditional controls struggle in these environments because the agents themselves are shaping the workflow and generating new data flows as they operate.

Security teams can no longer assume that fixed rules, static policies and human training alone will keep systems within safe bounds.

Chained Agent Manipulation: A New Form of Social Engineering

A natural question for security leaders is whether agentic AI will introduce a new form of social engineering designed for digital actors. The answer is likely yes. Early attack patterns already include prompt injection, where malicious instructions are embedded in text or images, and adversarial manipulation, which attempts to confuse a model into producing unsafe outputs. Agentic systems multiply both risks.

When agents pass information to one another, an attacker no longer needs to fool the final agent in a workflow. Manipulating the first agent in the chain can influence every downstream decision. This resembles a social engineer convincing a receptionist to relay false information to a finance employee. If the first step succeeds, the entire sequence can be compromised.

I refer to this type of attack strategy as chained AI agent manipulation, in which an attacker crafts inputs for an entire agentic system rather than a single model. As organizations give agents broader scopes and more decision authority, these chained attacks will become more attractive and more damaging.  

Who Guards the AI Guardians and What Skills They Need

As agentic systems become common, organizations will need to implement new professional roles. Today many companies hire data scientists to refine models and AI engineers to build applications. Agentic AI adds the need for dedicated AI risk exposure professionals who can evaluate systemwide capabilities, decision pathways and access patterns.

These specialists will need to understand:

  • Agent architectures and how agents are composed
  • Reasoning chains and how decisions are reached
  • Inter-agent communication and where manipulation could occur
  • The full chain of risk from data access to final action

This work combines elements of data science, psychology and security operations. The current workforce for these skills is limited, and training will take time.  

In parallel, there will be pressure to build supervisory AI agents that monitor customer-facing agents. This raises a scale problem. Reviewing every input and output would slow performance and reduce the value of automation. Selective review introduces loopholes. Hard rules are predictable, while soft rules require the primary agent to decide when something looks risky, which can itself be manipulated.  

Teaching AI Agents to Learn Secure Behavior

A long-term goal is to train AI agents to recognize their own potential for insecure actions. They will need:

  • Policy awareness built into their reasoning
  • The ability to assess when a decision exceeds a risk threshold
  • Clear mechanisms to escalate to humans or security agents

When humans review an agent’s decisions, there is a real risk of over-trusting the system. If a person solves a math problem but receives a slightly different result from a calculator, many people will believe the calculator even if it is wrong. This tendency can create risk blindness when supervising AI agents. People may approve decisions because the system appears confident, not because the decision is demonstrably safe.  

How Security Teams Must Evolve for Agentic AI

Security teams will need a new strategic outlook. Most current defenses depend on deterministic signals such as known malicious domains or simple sensitive data matches. Agentic AI environments are statistical and dynamic. Teams must assume that agents will make mistakes, be manipulated or expose data unintentionally.

Protection should focus on:

  • Behavioral monitoring of agents and their workflows
  • Anomaly detection across agent-to-agent and agent-to-data interactions
  • Guardrails that can intervene when agents drift into unsafe territory

To support this, tools that understand modern data environments are essential. Data Security Posture Management (DSPM) solutions are critical for showing organizations where sensitive data lives, which agents and users are accessing it and how that exposure is changing over time. Achieving this visibility is a foundation for securing agentic systems that learn and adapt on their own.

Preparing for the Agentic Future

Agentic AI introduces a new type of digital participant. Organizations are tasked with securing not just securing data and human users, but also autonomous systems acting on their behalf. The companies that succeed will rethink governance and build oversight into the design of these systems. Now is the time to map data flows, set boundaries, establish monitoring and ensure that tools are ready for agent-to-data and agent-to-agent interactions.

We secured the internet for people. The next challenge is securing it for digital actors. 

  • nick_savvides.jpg

    Nick Savvides

    Nick Savvides serves as Field CTO & Head of Strategic Business, APAC at Forcepoint. In this role, he is responsible for growing the company’s strategic business with its key customers in the region. This involves taking the lead to solve customers’ most complex security issues while accelerating the adoption of human-centric security systems to support their business growth and digital transformation. In addition, Savvides is responsible for providing thought leadership and over-the-horizon guidance to CISOs, industry and analysts.

    閱讀更多文章 Nick Savvides

X-Labs

直接將洞察力、分析與新聞發送到您的收件箱

直奔主題

網絡安全

涵蓋網絡安全領域最新趨勢和話題的播客

立即收聽