From Data Truth to Runtime Trust
How Forcepoint and F5 Deliver Unified AI Security

Neeraj Nayak
Every organization we talk to is having two conversations about AI security.
The first happens in an architecture meeting: inside the security operations center, at a departmental review or in front of the board, wherever the enterprise is deciding how to build, deploy and protect the AI systems it is standing up internally. The second happens every time an employee pastes sensitive data, like a line of proprietary code, into a GPT and hits enter. Both conversations are critical because they are about real exposure. And until now, no single alliance has addressed both at once.
Today, Forcepoint and F5 are changing that with a joint commitment to cover the end-to-end AI data surface. Together, we're thinking of AI risk differently than anyone has before, because the exposure doesn't start when a model or copilot is queried. It begins at the moment data is created, then shared, repurposed and extended all the way through to the runtime layer where deployed AI systems operate, adapt and, yes, occasionally go sideways.
AI is fast becoming the operational nervous system of the enterprise. It changes data behavior, writes code, answers customers, analyzes legal documents and powers internal copilots. The problem is not intent or adoption. The problem is control.
Many organizations have accelerated AI without first securing the data feeding it or the runtime environments where it operates. Sensitive information is entering generative tools without guardrails. AI agents are interacting with systems that were never designed for autonomous decision making. Custom models are being trained on data that has not been classified or governed.
The result is predictable: Sensitive data leakage. Shadow AI exposure. Model manipulation. Governance breakdowns. A widening gap between innovation and security.
Here's how Forcepoint and F5 have partnered to close that gap.
Data Truth: Knowing Risk Before AI Touches It
You cannot secure AI in production if you do not first understand and control the data that feeds it. Forcepoint's Self-Aware Data Security approach, enabled by our Data Security Cloud, discovers, classifies and governs data across structured and unstructured repositories, from Snowflake and Databricks environments to cloud apps, collaboration platforms, endpoints and legacy systems.
Persistent, explainable labels provide context on the regulatory or business impact of the data and follow it across hybrid environments. That context informs enforcement policies that govern access to the data, not just identify it. By mapping AI workflows to datasets classified by Forcepoint, organizations can separate data that's appropriate for AI training and inference from data that must remain restricted. Teams can enforce those decisions consistently before a model ever touches the data.
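To make the idea concrete, here is a minimal sketch of label-driven gating: datasets carry a persistent classification label, and only approved labels are allowed to feed AI training or inference. The label names, the `Dataset` type and the policy set are illustrative assumptions, not Forcepoint's actual API.

```python
from dataclasses import dataclass

# Hypothetical policy: which classification labels may feed AI workflows.
# Label taxonomy is illustrative, not a real Forcepoint label set.
ALLOWED_FOR_AI = {"public", "internal"}

@dataclass
class Dataset:
    name: str
    label: str  # persistent classification label attached to the data

def partition_for_ai(datasets):
    """Split datasets into AI-eligible and restricted sets by label."""
    eligible = [d for d in datasets if d.label in ALLOWED_FOR_AI]
    restricted = [d for d in datasets if d.label not in ALLOWED_FOR_AI]
    return eligible, restricted

datasets = [
    Dataset("marketing_copy", "public"),
    Dataset("patient_records", "regulated"),
    Dataset("wiki_pages", "internal"),
]
eligible, restricted = partition_for_ai(datasets)
print([d.name for d in eligible])    # ['marketing_copy', 'wiki_pages']
print([d.name for d in restricted])  # ['patient_records']
```

The key design point is that the decision is made from the label that travels with the data, so the same policy can be enforced consistently wherever the dataset is accessed.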
Runtime Trust: Protecting AI in Production
Even perfectly governed data does not eliminate runtime risk. Live AI systems interact dynamically with users, APIs, models and external services. They require active protection at the point of execution.
F5 brings runtime enforcement, adversarial resilience and inference-layer controls through its AI‑runtime protection capabilities, testing AI systems against real-world attack scenarios, monitoring for misuse and defending against adversarial threats at the point of execution.
With Forcepoint's AI-native classification providing continuous data context into F5's runtime layer, AI security moves from enforcing generic patterns to responding to your actual sensitive data. This becomes runtime enforcement grounded in data reality, not guesswork.
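One way to picture "runtime enforcement grounded in data reality" is a prompt filter that checks inbound requests against fingerprints of values the data layer has already classified as sensitive, rather than relying on generic patterns alone. This is a hypothetical sketch; the fingerprinting scheme and function names are assumptions for illustration, not either vendor's implementation.

```python
import hashlib

def fingerprint(value: str) -> str:
    """Normalize a value and hash it so raw secrets never leave the data layer."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Fingerprints of values the classification layer has flagged as sensitive
# (illustrative example: a confidential project codename).
SENSITIVE_FINGERPRINTS = {fingerprint("ACME-MERGER-2025")}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is safe to forward to the model."""
    return not any(fingerprint(tok) in SENSITIVE_FINGERPRINTS
                   for tok in prompt.split())

assert screen_prompt("summarize our quarterly OKRs")
assert not screen_prompt("draft a memo about ACME-MERGER-2025")
```

Because only hashes cross the boundary, the runtime layer can act on the data layer's knowledge of what is sensitive without itself holding the sensitive values.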
Continuous Protection Across the AI Data Surface
Forcepoint and F5 solutions do not overlap. Together they form complementary layers of the same architecture, covering the full AI journey for data.
Forcepoint stops sensitive data from reaching AI systems it should never touch, whether that is an unauthorized external tool or an internal model without the right policies in place. F5 protects the AI systems the enterprise has built and deployed, enforcing policy at runtime and defending against threats that emerge once models operate in real time.
Our joint approach is a clear progression, or maturity model, for AI security, outlined in our F5 Solution Brief:
- Understand data: Begin with data discovery and classification.
- Prioritize AI: Advance through use case prioritization and data access governance.
- Protect continuously: Extend into runtime protection, threat detection, model integrity and continuous assurance.
Together, data intelligence informs enforcement and runtime telemetry feeds back into access governance decisions. Forcepoint anchors the data security foundation. F5 secures the runtime and provides continuous assurance. The two companies are protecting sensitive data both before and during AI execution so that the AI infrastructure can scale without expanding risk.

Forcepoint's AI Security Maturity Model
Real-World Use Cases
Three scenarios illustrate what our end-to-end coverage looks like in practice.
Shadow AI and External Generative Tools. An employee uploads a merger document to a public AI tool. Forcepoint identifies the sensitive data leaving endpoints or cloud repositories and applies policy controls. F5 protects API interactions and blocks malicious prompt exploitation. Together, we reduce data leakage and runtime abuse.
Enterprise AI Agents. An internal finance AI agent accesses structured and unstructured financial data. Forcepoint ensures that only approved, governed and correctly classified datasets are made available to enterprise AI agents, and provides the policy signals that determine which data can flow into or out of those agents. F5 protects enterprise AI agents and LLM deployments at runtime by detecting misuse, anomalous interactions and other threats targeting AI APIs, models and underlying LLM behavior, helping organizations assess and mitigate risk as AI systems operate in production.
Custom LLM Deployments. A healthcare organization deploys a private model trained on sensitive clinical data. Forcepoint governs and classifies the underlying data assets to enforce compliance and establish AI guardrails that control how data is accessed and used. F5 secures the application layer by protecting APIs and monitoring runtime misuse and anomalous behavior across the LLM deployment. F5 also enables AI Red Teaming by assessing model behavior and identifying potential risk scenarios. The result is secure innovation, validated through continuous risk assessment, without sacrificing trust.
A Partnership Built for Where AI Is Going
AI will continue to expand across the enterprise, and security teams need a model that protects both the data that guides AI and the systems where it operates. Forcepoint and F5 bring these layers together so organizations can innovate without losing control. With data truth and runtime trust working as one, AI becomes safer to deploy and easier to scale with confidence. Contact us to learn how Forcepoint and F5 work together to safeguard your innovations.
For more information, please read the press release announcing this new partnership and F5's blog post.

Neeraj Nayak
Neeraj Nayak is a Senior Product Marketing Manager at Forcepoint. With over a decade of experience in the cybersecurity industry, Neeraj has a deep understanding of cybersecurity solutions including SASE, SSE, CASB, ZTNA, DLP, and SD-WAN. Neeraj previously held product marketing roles at Netskope, Skyhigh Security and Lookout. Neeraj holds an MBA degree from IIM Mumbai and an Engineering degree from NIT Warangal.
Read the solution brief: AI Security with Forcepoint and F5: From Data Visibility to Runtime Assurance
