We all know deploying AI comes with its own set of risks. But the benefits AI brings in the form of productivity gains, data and analytics efficiencies, and competitive advantages are impossible to ignore. As a society and an industry, we are better off learning how to embrace the technology safely, moving generative AI tools quickly from the shadows into daily operations under corporate security control. We need strategies that minimize the risks while maximizing the benefits.
2024 Future Insights Series
At Forcepoint, we recommend organizations of all sizes adopt a “data-first” zero trust approach when it comes to using AI tools. As organizations grapple with how to implement AI safely and effectively, they need to create corporate AI and security policies that protect employees and company data across a growing number of use cases, all in a rapidly changing AI landscape that shows no signs of slowing down.
Policies need to cover both build and buy scenarios
Many businesses start by deciding which AI tools to use, then move on to creating corporate policies that set usage guidelines for employees. Where organizations leverage existing GPTs and proprietary AI tools, for example, most are creating AI policies that govern how those tools may be used, especially as it relates to protecting company data. Beyond corporate policies, web security technology such as Forcepoint ONE SWG can limit and control access to AI tools to a select group of employees. And our data security everywhere solution lets businesses create and enforce security policies designed to protect proprietary company data shared with AI tools.
Companies that choose to develop their own GPTs have even more options to consider. Often, their existing corporate and security policies will cover the use of data when custom GPTs are rolled out internally. But what if they decide to offer that custom GPT to customers for purchase?
Large enterprise organizations can go further by fine-tuning existing LLMs or building their own. Dell’s recent partnership with Hugging Face will make it easier for large organizations to build or deploy dedicated server hardware designed to handle the intense workloads required to train LLMs. In those cases, large organizations will need expanded corporate AI policies with guidelines covering the internal data used to train or fine-tune those custom LLMs. Security policies designed to protect proprietary company data will likely need to be expanded as well.
In the less than a year since GenAI’s explosive, disruptive introduction, the AI trajectory has been an exciting one to witness. In a short time, organizations have gone from debating whether to enable AI tools internally to, in some cases, considering rolling out customized GPTs or even fine-tuning or creating their own LLMs. A few short months ago, some of those options would have seemed far-fetched. Now they are viable.
In 2024, AI-related innovations will create new possibilities we’re not even considering at the moment. Moving forward, organizations of all sizes will need to create and expand corporate AI policies that govern how employees can interact safely with AI. And AI security policies will need to extend beyond commercial AI tools to also cover internally developed GPTs and LLMs. At Forcepoint, we offer web and data security solutions designed to future-proof the adoption of emerging technologies such as GenAI, no matter how quickly the technology landscape evolves.