June 19, 2023

ChatGPT on the Impact of Generative AI in Cybersecurity

Part three of a five-part series written by ChatGPT
ChatGPT

We already know hackers are using artificial intelligence, and many companies are banning the use of ChatGPT (for now). We believe generative AI can unlock your employees' efficiency and creativity beyond what the personal computer, the Internet and mobile devices already have. Forcepoint has solutions that empower your employees to harness tools like ChatGPT while keeping your organization's critical data safe and secure.

 

In this five-part series, we asked ChatGPT about the impact of generative AI in cybersecurity. This third post looks at how generative AI is being used in cyberattacks.

What are the different types of generative AI (eg text, video, audio, graphical), and how could each one be used in cyber-attacks? What about attacks combining two types? Please provide specific examples for each threat/attack as this feature will be split into sections detailing each one.

 

Different types of generative AI can be employed in various cyber-attacks, and combining multiple types can make these attacks even more sophisticated and challenging to detect. Here, we outline the types of generative AI and provide specific examples of how they can be used in cyber-attacks:

1. Text-based generative AI
Attack: Advanced phishing and spear-phishing

Example: An attacker uses a text-generating AI model to create contextually relevant and personalized phishing emails that convincingly impersonate a company's CEO. The emails are sent to employees, tricking them into revealing sensitive information or transferring funds to the attacker's account.

2. Video-based generative AI (deepfakes)
Attack: Disinformation and reputational damage

Example: An attacker creates a deepfake video that appears to show a company's executive admitting to fraud or unethical practices. The video is leaked online, causing significant reputational damage to the company and negatively impacting its stock price.

3. Audio-based generative AI (voice synthesis)
Attack: Voice phishing (vishing) and impersonation

Example: An attacker uses AI-generated voice synthesis to impersonate a company's CFO in a phone call, instructing an employee in the finance department to transfer funds to an external account as part of a fraudulent invoice payment.

4. Graphical generative AI (GANs for image synthesis)
Attack: Creation of fake identities for social engineering

Example: An attacker uses a GAN to generate synthetic images of non-existent people and uses these images to create fake social media profiles. These profiles are then used to establish trust with a company's employees, gathering intelligence for a targeted attack or manipulating them into revealing sensitive information.

 

Combining two or more types of generative AI can amplify the effectiveness of cyber-attacks:

  • Attack: Multi-modal deepfake (video and audio)

Example: An attacker creates a deepfake video of a company's executive delivering a fake emergency announcement, which is combined with AI-generated voice synthesis to make the audio match the executive's voice. The attacker shares the video with employees via email or social media, tricking them into taking actions that compromise the company's security or result in financial losses.

  • Attack: Coordinated disinformation campaign (text, images, video, audio)

Example: An attacker launches a coordinated disinformation campaign against a company using AI-generated text, images, video, and audio to create fake news articles, social media posts, and multimedia content. This campaign spreads false information about the company, leading to reputational damage, lost customers, and a decline in stock value.

 

As generative AI becomes more sophisticated, it is essential for organizations to develop robust security measures and educate their employees about the potential risks associated with AI-generated content. This includes investing in AI-driven defense technologies, training employees to recognize various types of attacks, and fostering a strong security culture within the organization.
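To make the "training employees to recognize various types of attacks" point concrete, here is a minimal, purely illustrative sketch of the kind of rule-based check a security team might prototype before adopting full AI-driven defenses. The indicator list, weights, and function name are hypothetical examples, not a Forcepoint product or a production-grade filter:

```python
# Illustrative sketch only: naive phishing-risk scoring based on common
# social-engineering cues. Terms and weights are hypothetical examples.
URGENCY_TERMS = {"urgent", "immediately", "wire transfer", "confidential", "act now"}

def phishing_score(sender_domain: str, claimed_org_domain: str, body: str) -> int:
    """Return a naive risk score: +1 per urgency term found in the body,
    +2 if the sender's domain does not match the organization it claims
    to represent (a common spear-phishing tell)."""
    text = body.lower()
    score = sum(1 for term in URGENCY_TERMS if term in text)
    if sender_domain.lower() != claimed_org_domain.lower():
        score += 2
    return score

# Example: a CEO-impersonation email sent from a look-alike domain.
email_body = "URGENT: please wire transfer funds immediately. Keep this confidential."
print(phishing_score("ceo-mail.example.net", "example.com", email_body))  # prints 6
```

Real defenses layer many such signals (header analysis, sender reputation, ML classifiers trained on known phishing corpora); the point of a toy heuristic like this is simply to show employees and engineers which cues matter.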

ChatGPT

The team at OpenAI trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect...

Read more articles by ChatGPT

About Forcepoint

Forcepoint is the leading user and data protection cybersecurity company, entrusted to safeguard organizations while driving digital transformation and growth. Our solutions adapt in real-time to how people interact with data, providing secure access while enabling employees to create value.