AI Security Best Practices: How to Protect Sensitive Data in GenAI Tools

Lionel Menchaca
Generative AI is now embedded in how employees research, summarize, write, code and make decisions. That shift changes your risk profile because every prompt, paste, upload, connector and AI-generated output becomes a potential data exposure event.
If you are building a program around AI security best practices, the goal is not to publish another policy. The goal is to put enforceable controls around the exact moments sensitive data is most likely to spill into the wrong tool, tenant, plugin or log trail, while enabling approved GenAI use at scale.
Why GenAI changes the data loss equation
GenAI compresses data movement into a single interface. In seconds, a user can paste regulated data “for context,” upload a contract for summarization or connect an assistant to repositories that were never intended to feed AI workflows. The risk compounds when shadow AI adoption outpaces governance and when controls differ across managed devices, unmanaged devices and SaaS environments.
The practical implication for senior security leaders is simple: you are not just approving an AI vendor. You are securing a new interaction model that creates new data paths and new audit obligations.
The best practices below focus on concrete steps you can implement across prompts, uploads, connectors and outputs.
Start with an AI data map
Before you write or refresh policy, build a lightweight AI data map that your team can maintain and update quarterly. This becomes your blueprint for enforcement and your baseline for executive reporting.
Ground this work in an established risk framework such as the NIST AI Risk Management Framework (AI RMF 1.0) so your inventory, controls and reporting map to a repeatable governance model.
Capture four elements:
- Tool inventory: which GenAI tools are in use today, including both approved tools and shadow AI
- Data categories at risk: PII, PHI, PCI, credentials, source code, customer lists, contracts, pricing, roadmaps and M&A materials
- Interaction paths: copy-paste, file upload, downloads of generated output, plugins/extensions and connectors to corporate systems
- Enforcement points: secure web gateway, endpoint controls, SaaS controls and enterprise AI APIs where available
The map is your reality check. If you cannot name the tools, the data and the paths, you cannot realistically enforce AI security best practices.
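One lightweight way to keep the map maintainable is to hold it in a version-controlled structure your team reviews each quarter. The sketch below is one illustrative way to model the four elements in Python; the tool names, tenants and enforcement points are placeholder assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Illustrative AI data map: tools, interaction paths and enforcement points.
# All entries are hypothetical examples.

@dataclass
class AIToolEntry:
    name: str                      # e.g. an approved enterprise assistant
    tenant: str                    # approved tenant identifier, or "unknown" for shadow AI
    approved: bool                 # on the allowlist, or discovered shadow usage
    interaction_paths: list = field(default_factory=list)   # paste, upload, connector, output download
    enforcement_points: list = field(default_factory=list)  # SWG, endpoint DLP, SaaS controls, API

DATA_CATEGORIES_AT_RISK = [
    "PII", "PHI", "PCI", "credentials", "source_code",
    "customer_lists", "contracts", "pricing", "roadmaps", "M&A",
]

ai_data_map = [
    AIToolEntry(
        name="ExampleAssistant Enterprise",   # hypothetical approved tool
        tenant="corp-tenant-01",
        approved=True,
        interaction_paths=["paste", "upload", "connector"],
        enforcement_points=["SWG", "endpoint_DLP"],
    ),
    AIToolEntry(
        name="UnknownBrowserChatbot",         # hypothetical shadow AI discovery
        tenant="unknown",
        approved=False,
        interaction_paths=["paste"],
        enforcement_points=[],                # gap: no enforcement point yet
    ),
]

# A quick reality check: shadow tools with no enforcement point are your first targets.
gaps = [t.name for t in ai_data_map if not t.approved and not t.enforcement_points]
print("Unenforced shadow AI tools:", gaps)
```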
Control shadow AI without slowing teams down
Shadow AI is rarely malicious. It is a productivity response to friction. Your objective is to make the secure path easier than the risky path.
1- Create an allowlist with tenant specificity. “ChatGPT” is not a control statement. Specify the approved enterprise tenant, approved clients and approved integrations.
2- Block and guide, don't just block. If you simply ban AI sites, users route around you. Use granular access controls that restrict risky services while steering users to approved tools.
3- Centralize enforcement. Route AI access through inspectable control points so you can apply consistent policy and measure outcomes.
A high-signal KPI: the percentage of GenAI usage that is visible and policy-enforced versus unknown usage.
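To make "allowlist with tenant specificity" concrete, here is a minimal sketch of what a tenant-aware allowlist check might look like. The domains, tenant IDs and decision strings are illustrative assumptions, not product configuration.

```python
# Illustrative tenant-aware allowlist: the tool alone is not enough; the
# specific enterprise tenant and interaction path must also be approved.
ALLOWLIST = {
    # (service domain, tenant id) -> allowed interaction paths
    ("chat.example-ai.com", "corp-tenant-01"): {"paste", "upload"},
    ("assistant.example.com", "corp-tenant-02"): {"paste"},
}

def evaluate_access(domain: str, tenant: str, path: str) -> str:
    """Return a policy decision: allow, or block with guidance to the approved tool."""
    allowed_paths = ALLOWLIST.get((domain, tenant))
    if allowed_paths is None:
        # Block-and-guide: deny the risky service but point users at the approved tenant.
        return "block: use the approved enterprise tenant corp-tenant-01"
    if path not in allowed_paths:
        return f"block: '{path}' is not approved for this tenant"
    return "allow"

print(evaluate_access("chat.example-ai.com", "corp-tenant-01", "upload"))  # allow
print(evaluate_access("chat.example-ai.com", "personal", "paste"))         # block + guidance
```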
Put guardrails on prompts, uploads and outputs
Most real-world leakage happens at the moment users share context. Effective AI security best practices treat prompts, uploads and outputs as policy-enforced data egress paths, not training topics.
Prioritize these controls:
- Inline prompt inspection: detect sensitive content before a prompt is submitted, then block, coach or require justification based on policy
- Inline upload controls: prevent sensitive files from being uploaded to GenAI tools unless the tool, tenant and use case are explicitly approved
- Output controls: apply policy to AI-generated content before it is downloaded, copied into other systems or shared externally
- Consistency across channels: apply the same control logic across web, endpoint and SaaS so users cannot bypass guardrails by switching devices or apps
To make enforcement practical, define tiers that map directly to actions:
- Never allowed in GenAI: credentials, encryption keys, regulated identifiers, customer lists, unreleased financials and source code repositories
- Allowed only in approved enterprise GenAI tools: internal content that must stay within approved tenants and cannot be used for training outside your boundary
- Generally allowed: public or low-sensitivity content with monitoring and logging
One operational note that matters: keyword blocking is brittle. Classification-driven policy is what scales because it answers “what data is this?” before you decide “is it allowed here?”
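As a sketch of classification-driven policy (rather than keyword blocking), the example below maps a data classification label to one of the three tiers above and then to an enforcement action. The labels and decision strings are assumptions for illustration; in practice they would come from your classification engine and DLP policy.

```python
# Illustrative mapping from classification labels to the three policy tiers.
# Labels are placeholders; a real deployment takes them from the
# classification engine, not from keywords in the prompt text.

TIER_BY_LABEL = {
    "credentials": "never",
    "encryption_key": "never",
    "regulated_identifier": "never",
    "customer_list": "never",
    "unreleased_financials": "never",
    "source_code": "never",
    "internal_confidential": "approved_enterprise_only",
    "public": "generally_allowed",
}

def decide(labels: list[str], destination_is_approved_tenant: bool) -> str:
    """Decide on a prompt or upload based on the most restrictive label present."""
    tiers = [TIER_BY_LABEL.get(label, "approved_enterprise_only") for label in labels]
    if "never" in tiers:
        return "block"
    if "approved_enterprise_only" in tiers:
        return "allow_and_log" if destination_is_approved_tenant else "block_and_coach"
    return "allow_and_log"

print(decide(["internal_confidential"], destination_is_approved_tenant=False))  # block_and_coach
print(decide(["public"], destination_is_approved_tenant=True))                  # allow_and_log
```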
Secure connectors and agent workflows with least privilege by design
As agentic AI assistants connect to email, SharePoint, ticketing systems and CRMs, risk expands from “what users type” to “what the assistant can access and do.” Treat connectors as privileged integration points and apply least privilege from day one.
Design requirements that reduce blast radius:
- Least privilege on every connector: assistants should not inherit broad access by default
- Scope limits: constrain retrieval by folder, record type and time range wherever possible
- Deterministic action gates: require human approval for irreversible actions like external sharing, permission changes, payments or destructive operations
- Connector audit trails: log connector scope, retrieval activity and downstream actions as first-class evidence
If you do not control connector permissions, you do not control the assistant.
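One way to make "least privilege on every connector" reviewable is to declare each connector's scope and action rights explicitly, then deny anything outside that declaration. The sketch below is a simplified illustration; the connector name, folders and action list are hypothetical.

```python
from datetime import datetime, timedelta

# Illustrative connector scope declaration: what the assistant may read,
# how far back, and which actions require human approval.
CONNECTOR_SCOPES = {
    "sharepoint-sales": {
        "allowed_folders": {"/sales/proposals", "/sales/templates"},
        "max_lookback": timedelta(days=90),
        "actions_requiring_approval": {"external_share", "permission_change", "delete"},
    },
}

def can_retrieve(connector: str, folder: str, item_modified: datetime) -> bool:
    """Allow retrieval only inside declared folders and the declared time range."""
    scope = CONNECTOR_SCOPES.get(connector)
    if scope is None:
        return False                                  # unknown connector: deny by default
    in_folder = any(folder.startswith(f) for f in scope["allowed_folders"])
    in_window = datetime.utcnow() - item_modified <= scope["max_lookback"]
    return in_folder and in_window

def requires_human_approval(connector: str, action: str) -> bool:
    """Deterministic action gate for irreversible operations."""
    scope = CONNECTOR_SCOPES.get(connector)
    return scope is None or action in scope["actions_requiring_approval"]

print(can_retrieve("sharepoint-sales", "/sales/proposals/q3.docx", datetime.utcnow()))  # True
print(requires_human_approval("sharepoint-sales", "external_share"))                    # True
```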
Design for prompt injection containment
Prompt injection and instruction hijacking are not problems you can train users out of; they are design problems that need containment. As the UK’s National Cyber Security Centre notes, prompt injection is not SQL injection, so the right response is containment controls, not user warnings. The safest operating assumption is that untrusted content will eventually influence an AI system’s behavior.
For a practical taxonomy of what to design against, align your review checklist to the OWASP Top 10 for LLM Applications and map each risk to an enforceable control.
Apply these controls when models can browse, summarize external content or call internal tools:
- Treat model output as untrusted input: never let free-form output directly drive tool execution or configuration changes
- Put deterministic gates in front of actions: allowlisted actions, strict schemas and hard constraints before an agent can call tools or APIs
- Separate duties: let the model recommend, but let controlled systems decide and execute
- Red-team routinely: test prompt injection and agent workflows as a release gate, not as a one-time exercise
These controls are what turn AI security best practices into a repeatable engineering pattern instead of a set of warnings.
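To show what "deterministic gates in front of actions" can look like, here is a minimal sketch that validates a model's proposed action against an allowlist and a strict schema before anything executes. The action names and fields are assumptions for illustration; the point is that free-form model output never reaches a tool call directly.

```python
import json

# Allowlisted actions and the exact fields each one may carry.
# Anything else the model proposes is rejected before execution.
ACTION_SCHEMAS = {
    "create_ticket": {"required": {"title", "priority"}, "priority_values": {"low", "medium", "high"}},
    "summarize_document": {"required": {"document_id"}},
}

def gate_model_action(raw_model_output: str) -> dict:
    """Treat model output as untrusted input and enforce the schema before any tool call."""
    try:
        proposal = json.loads(raw_model_output)
    except json.JSONDecodeError:
        raise ValueError("rejected: output is not structured JSON")

    action = proposal.get("action")
    schema = ACTION_SCHEMAS.get(action)
    if schema is None:
        raise ValueError(f"rejected: '{action}' is not an allowlisted action")

    missing = schema["required"] - proposal.keys()
    if missing:
        raise ValueError(f"rejected: missing required fields {missing}")

    if action == "create_ticket" and proposal["priority"] not in schema["priority_values"]:
        raise ValueError("rejected: invalid priority value")

    return proposal   # only now may a controlled system decide whether to execute it

# The model recommends; a controlled, deterministic system validates and executes.
print(gate_model_action('{"action": "create_ticket", "title": "Review DLP alert", "priority": "high"}'))
```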
Make compliance evidence part of the rollout
AI adoption expands where regulated data can travel and where you must produce evidence. Build reporting requirements into the program design, not after a legal review or an audit request.
If you need a formal governance backbone for leadership and audit stakeholders, use ISO/IEC 42001 as a reference point for building an AI management system with clear roles, controls and continuous improvement.
Minimum evidence you should be able to produce on demand:
- Which AI tools were used, by whom and from which device types
- Which policies were triggered, blocked or allowed and why
- Which connectors were used and what access scopes they had
- What retention, residency and usage commitments apply to each approved tool and tenant
This evidence also makes operations better. It shows where controls are too strict, where policies are too loose and where shadow AI pressure is rising.
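One lightweight way to keep this evidence producible on demand is to write every policy decision as a structured record from day one. The sketch below shows one possible record shape; the field names and values are illustrative, not a required format.

```python
import json
from datetime import datetime, timezone

def evidence_record(user: str, device_type: str, tool: str, tenant: str,
                    policy: str, decision: str, connector_scope: list) -> str:
    """Build a structured, audit-ready record for a single GenAI policy event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,                       # who used the tool
        "device_type": device_type,         # managed, unmanaged, SaaS session
        "tool": tool,
        "tenant": tenant,                   # which tenant the traffic actually reached
        "policy": policy,                   # which policy evaluated the event
        "decision": decision,               # allowed, blocked, coached
        "connector_scope": connector_scope  # scopes in effect, if a connector was involved
    }
    return json.dumps(record)

print(evidence_record("jdoe", "managed", "ExampleAssistant", "corp-tenant-01",
                      "no-source-code-in-genai", "blocked", []))
```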
A 30-60-90-day plan that delivers measurable outcomes
Days 0-30: Establish visibility and first guardrails
- Build the AI data map and publish an allowlist with approved tenants and connectors
- Route GenAI access through enforceable control points
- Turn on prompt and upload guardrails for your highest-risk data categories
Days 31-60: Expand enforcement and reduce exceptions
- Extend controls to unmanaged devices where feasible
- Apply least privilege and scope limits to connectors
- Standardize policy across web, endpoint and SaaS channels
Days 61-90: Prove, tune and operationalize
- Stand up audit-ready reporting and executive KPIs
- Red-team prompt injection and agent workflows as a standard test gate
- Formalize a quarterly governance cycle for tools, connectors, policies and evidence readiness
The payoff is not just reduced leakage. It is faster enablement. When controls are clear, enforceable and measurable, you can approve GenAI use cases with confidence.
Operationalize with Forcepoint DSPM, SWG and DLP
Security programs break down when they stop at policy and never reach consistent enforcement. Forcepoint supports operational AI security best practices by aligning discovery, governance and enforcement around the data paths GenAI introduces.
- DSPM: Forcepoint DSPM discovers and classifies sensitive data across cloud, SaaS and on-prem environments so guardrails focus on the data most likely to enter prompts, uploads and connectors. AI Mesh helps improve classification by using AI-driven models to identify and categorize sensitive content with less manual triage.
- SWG: Use Forcepoint Web Security to control GenAI access in the browser by allowing approved AI services, blocking risky ones and enforcing in-line protections on prompts, uploads and downloads with centralized visibility and logging.
- DLP: Use Forcepoint DLP to enforce unified controls across web, endpoint, email and cloud so protections follow data in motion, at rest and in use, including prompts, uploads and risky downloads.
If you want a practical reference point for safely enabling GenAI while controlling shadow AI and sensitive data exposure, talk to an expert today.

Lionel Menchaca
As the Content Marketing and Technical Writing Specialist, Lionel leads Forcepoint's blogging efforts. He's responsible for the company's global editorial strategy and is part of a core team responsible for content strategy and execution on behalf of the company.
Before Forcepoint, Lionel founded and ran Dell's blogging and social media efforts for seven years. He has a degree from the University of Texas at Austin in Archaeological Studies.