September 13, 2023

Yes, ChatGPT Saves Your Data. Here's How to Keep It Secure.

Bryan Arnott

Note from Lionel: Just as I was starting to think about my next AI-related topic to write about, Bryan sent this post my way. He did such a good job with it that I thought it made sense for his post to be the next in our ForcepointAI blog series. Without further ado, here's Bryan!

ChatGPT swept the world off its feet in November 2022, and the generative AI revolution it ushered in doesn’t seem likely to disappear anytime soon. Organizations across the world are embracing AI as a do-it-all virtual assistant to review software code, write web content and find insights in financial data, among other things. But what happens to all the information you enter into ChatGPT through your prompts?

Does ChatGPT save your data? Yes, it does – and it probably saves more of it than you realize.

ChatGPT collects both your account-level information and your conversation history. This includes records such as your email address, device, IP address and location, as well as any public or private information you use in your ChatGPT prompts.

Here’s what that means for your business, and how you can keep your sensitive data secure.

 

What types of data does ChatGPT store?

If you work in IT, then what I’m about to say next probably won’t surprise you. Yes, ChatGPT saves your data.

There are two types of personal information that OpenAI says it collects in its privacy policy: personal information it receives automatically, and the personal information you provide.

The personal information ChatGPT receives automatically is no different from any other SaaS app, and includes:

  • Device data, including what device and operating system you use.
  • Usage data, including your location, the times you access ChatGPT and the version you use.
  • Log data, including your IP address and the browser you use.

All this data is used by OpenAI to analyze how its users interact with ChatGPT. But ChatGPT also saves the data you provide, which includes:

  • Account information, including your name, email address and contact details.
  • User content, including the information you use in prompts and the files you upload.

The data you provide is used to help OpenAI train the ChatGPT model to become better at answering questions and helping users.

With OpenAI collecting and storing identifying information about you and the user content you submit, many companies across the world have moved to ban employees from using it.

However, at Forcepoint, we don’t think that’s the right move.

 

The risks of ChatGPT saving your data

When ChatGPT saves your data, the primary risk is that it will result in a data leak or a data breach. Even a routine Q&A with ChatGPT can unintentionally open up a business to data security risks. These include:

  • ChatGPT training from your data and sharing sensitive information, such as intellectual property or personally identifiable information, with other users outside of your organization.
  • OpenAI itself becoming a victim of a data breach, exposing the data your users have submitted.

To OpenAI’s credit, the company has made it possible for users to stop ChatGPT from training its models on their conversations. Users should take the time to turn off chat history, but relying on each individual to manage generative AI security on a user-by-user basis is a recipe for sensitive data to slip through the cracks.

Treating ChatGPT and generative AI as you would any other third-party software vendor is key to ensuring your organization keeps users happy and productive and its most sensitive information secure.

Take the time to develop an organizational view on generative AI that defines who should have access to it, what types of activities it can be used for, and which AI programs can be used safely. This information can then feed a comprehensive strategy for securing generative AI.
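
One way to make that organizational view actionable is to capture it in a machine-readable form that tooling can enforce. The sketch below is a hypothetical Python illustration only; the role names, tool names, activity categories and data tags are assumptions for the example, not a Forcepoint template.

```python
from __future__ import annotations

# A minimal sketch of an organizational generative AI usage policy.
# All roles, tools, activities and data tags below are hypothetical examples.

GENAI_POLICY = {
    "approved_tools": {"chatgpt", "internal-llm"},
    "allowed_roles": {"engineering", "marketing", "support"},
    "allowed_activities": {"code_review", "content_drafting", "research"},
    "prohibited_data": {"source_code", "customer_pii", "financials"},
}


def is_request_allowed(role: str, tool: str, activity: str, data_tags: set[str]) -> bool:
    """Return True if a generative AI request fits the organizational policy."""
    return (
        tool in GENAI_POLICY["approved_tools"]
        and role in GENAI_POLICY["allowed_roles"]
        and activity in GENAI_POLICY["allowed_activities"]
        and not (data_tags & GENAI_POLICY["prohibited_data"])
    )


if __name__ == "__main__":
    # A marketer drafting web content with no sensitive data: allowed.
    print(is_request_allowed("marketing", "chatgpt", "content_drafting", set()))
    # An engineer pasting proprietary source code: blocked by policy.
    print(is_request_allowed("engineering", "chatgpt", "code_review", {"source_code"}))
```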

 

Forcepoint and generative AI security

You can’t outright block generative AI. The technology isn’t going anywhere, and it’s too important a tool to ignore.

Instead, Forcepoint generative AI security solutions enable organizations to unlock the potential of AI, safely.

Forcepoint ONE SSE, part of our Data-first SASE platform, is uniquely capable of monitoring traffic to generative AI applications and preventing access by unauthorized users on both managed and unmanaged devices. Forcepoint ONE uses ThreatSeeker URL categorization and filtering to apply policies to the wave of new generative AI applications that are constantly emerging, ensuring consistent coverage.
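
Conceptually, category-based filtering of generative AI traffic comes down to mapping a requested domain to a category and checking that category against policy. The sketch below is a generic illustration only; the domains, categories and group names are assumptions and do not reflect Forcepoint ONE's or ThreatSeeker's actual API or categorization data.

```python
# A generic sketch of category-based URL filtering for generative AI traffic.
# The domains, categories and policy below are illustrative assumptions.

URL_CATEGORIES = {
    "chat.openai.com": "generative_ai",
    "example-new-ai-tool.com": "generative_ai",
    "news.example.com": "news",
}

# Policy: which user groups may reach which URL categories.
CATEGORY_POLICY = {
    "generative_ai": {"engineering", "marketing"},    # approved groups only
    "news": {"engineering", "marketing", "support"},  # broadly allowed
}


def allow_request(user_group: str, domain: str) -> bool:
    """Allow or block a web request based on the requested domain's category."""
    category = URL_CATEGORIES.get(domain, "uncategorized")
    allowed_groups = CATEGORY_POLICY.get(category, set())
    return user_group in allowed_groups


print(allow_request("marketing", "chat.openai.com"))  # True: approved group
print(allow_request("support", "chat.openai.com"))    # False: not approved
```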

Forcepoint DLP secures ChatGPT by extending data security controls to the web and public cloud, preventing sensitive information from being pasted or uploaded into the chat. It stops data from being unintentionally leaked to ChatGPT, even if the user has turned off conversation history with OpenAI.
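
At a conceptual level, this kind of inspection means scanning outbound content against sensitive-data patterns before a prompt ever reaches the chat. The sketch below is a deliberately simplified, generic illustration; real DLP products, including Forcepoint DLP, use far richer classifiers and policies, and the regex patterns here are assumptions for the example.

```python
from __future__ import annotations

import re

# A simplified, generic illustration of DLP-style inspection of outbound prompts.
# These regex patterns are assumptions for the sketch, not production detectors.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}


def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def submit_prompt(prompt: str) -> None:
    """Block or forward a prompt based on the inspection result."""
    findings = inspect_prompt(prompt)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Allowed: prompt forwarded to the generative AI service")


submit_prompt("Summarize our Q3 roadmap themes")                       # allowed
submit_prompt("Refund card 4111 1111 1111 1111 for jane@example.com")  # blocked
```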

ChatGPT saving data isn’t a security risk in and of itself, but embracing generative AI without a robust data security strategy that takes it into consideration is. Talk with a Forcepoint expert today to learn how your organization can secure the use of generative AI.

 

###

Note: This is the sixth post in our AI In Business series. You can check out all the other posts in the series via the ForcepointAI tag.

Bryan Arnott

Bryan Arnott is a Senior Content Marketer and Digital Strategist at Forcepoint.

Read more articles by Bryan Arnott

About Forcepoint

Forcepoint is the leading cybersecurity company for user and data protection. Its mission is to safeguard organizations while driving digital growth and transformation. Our harmonized solutions adapt in real time to the way people interact with data, providing secure access while enabling employees to create value.