Skip to main content

Gartner Recommends Blocking AI Browsers – Here’s Why

|

0 min read

Safely enable the use of AI with Forcepoint

Artificial intelligence is in its social media era.  

The unprecedented success of Facebook, Instagram and Twitter spawned a frenzy of new contenders seeking to reshape what social media looked like, ranging from the mildly successful Vine – a video-sharing app arguably ahead of its time – to the now defunct Musical.ly, the lip-syncing predecessor to the wildly popular TikTok.

We’ve begun to see a similar narrative with AI, with new innovators seeking to sow AI into the fabric of wearable gadgets to AI-powered software that wasn’t entirely what it appeared to be. However, the latest frontier for AI is seeking to reinvent something that has become all too familiar: the web browser.

With Perplexity releasing Comet in July 2025 and OpenAI releasing its rival Atlas not long thereafter in October 2025, people across the world now have a new way to interact with AI. However, it’s giving some Gartner analysts cause for concern – enough to encourage organizations to block the usage of these AI browsers.

Gartner Analysts: Cybersecurity Must Block AI Browsers for Now

AI browsers act as virtual assistants, helping to summarize web pages and emails or automate elementary but time-consuming tasks. But these enhancements come with an expensive price tag, according to Gartner analysts Dennis Xu, Evgeny Mirolyubov, and Jon Watts.

Their report, Cybersecurity Must Block AI Browsers for Now, argues that the default settings within these browsers introduce data risk by putting user experience ahead of security.

Gartner points to the autonomous nature of the AI browsers as a threat to data security when paired with authenticated web resources. It notes susceptibility to emerging risks as well as traditional threats as primary causes for concern, warning that the use of these services could lead to sensitive data leakage.

The report shares a variety of security risks that AI browsers could introduce, from the common data leakage, credential abuse and phishing attempts to the more advanced prompt-injection induced rogue agent actions.

As a result, Gartner recommends CISOs block the use of AI browsers entirely, pointing to the potentially ruinous impacts that an incident stemming from these virtual assistants could have.

Where organizations have a higher risk tolerance and want to test the new technology, the analysts detail the five critical risks the AI browsers pose and steps to mitigate them within their report.

Top Cybersecurity Risks Posed by AI Browsers

Think about the web applications that might be open in your browser right now. From Atlassian to Zoho and everything in-between, the tabs in your browser are a treasure trove of personally identifiable information, financial analytics and other proprietary data.

Inviting an AI browser into the mix essentially grants permission for that service – whether it be OpenAI, Perplexity, or a new contender – to ingest and train from that data, if the right settings aren’t in place.  

This risk of data leakage isn’t too different from the challenges plaguing organizations’ use of ChatGPT, Copilot and other AI applications. Protecting data in ChatGPT, for instance, depends on a mix of framework, process and technologies.

But AI browsers supercharge data risk by removing guardrails through automation. Top data threats posed by these browsers include:

1- Prompt injection attacks

2- Unapproved data sharing and/or training

3- Credential abuse

4- Shadow AI and IT

5- Automating security awareness training

How to Mitigate the Risks of AI Browsers

While Gartner details recommended mitigation techniques to counter the five risks it highlights in its report, organizations can safeguard against the emerging risks of AI browsers in a variety of ways.

Assess platform risk

Security leaders keen on deploying the new technology should begin with a careful review of the back end of the service so that they can weigh the benefits with its risks. This may have already been completed if your organization had previously approved the use of ChatGPT or Perplexity.

Catalogue sensitive data

Rampant AI usage – including shadow AI that lives outside the purview of security teams – is reinforcing the importance of accurate discovery and classification of sensitive data through AI-native solutions like Forcepoint Data Security Posture Management. Understanding where this data lives, who has access to it and what type of risk it poses is a critical step in any AI adoption strategy.

Prevent access

Unlike AI applications, there are only a handful of AI browsers on the market. As Gartne recommends, preventing employees from downloading these applications until the organization has decided on their approach to them is a safe alternative.

Safeguard data

It’s now more important than ever that the right users have access only to the data they need to perform their job. If, for instance, a prompt injection attack enables an attacker to gain access to authenticated web resources, CISOs need the assurance their security controls can spot that change in behavior, adapt to the new risk and prevent the attack from escalating with tools such as Forcepoint Risk-Adaptive Protection.

Forcepoint helps organizations across the world safely enable AI to drive productivity while mitigating risk. Talk to an expert today to learn how Forcepoint Data Security Cloud can help your company take advantage of AI, securely. 

X-Labs

Get insight, analysis & news straight to your inbox

To the Point

Cybersecurity

A Podcast covering latest trends and topics in the world of cybersecurity

Listen Now