A note from our series editor, Global CTO Nicolas Fischbach:
Welcome to the fourth post in our Forcepoint Future Insights series, which will offer six separate points of view on the trends and events we believe the cybersecurity industry will need to deal with in 2021. Check out the previous posts in the series:
- The Emergence of the Zoom of Cybersecurity
- Inherent Bias in Machine Learning
- People Do People Things
Update: The Future Insights 2021 eBook is now available for download for those of you who want to dig into all six insights in one place.
Here's the next post from Eric Trexler, Vice President of Sales, Global Government:
In 2021 and beyond, disinformation is inevitable as long as people continue to take what they read at face value without doing any additional research. Most Americans are now aware that Russia, through hacks and disinformation, attempted to influence the 2016 election. Admiral Mike Rogers, former director of the National Security Agency, says that, in hindsight, not enough was done to combat disinformation. He told NPR, “I don’t think we really fully understood the magnitude.”
Since then, there have been countless instances of high-profile disinformation campaigns and attacks. In 2018, the Cambridge Analytica scandal came to light, in which Facebook user data had been covertly harvested by the British political consulting firm. More recently, a Guardian headline declared: “Facebook is out of control. If it were a country it would be North Korea.” In a related story, much has been said about the Brexit disinformation scheme considered the “...greatest electoral fraud perpetrated in Britain for more than a century.” Fake explosion videos, election meddling efforts, contrived protests, and more: it seems the Internet Research Agency may have set the template for sowing discord, confusion, and disinformation, one that others will continue to follow and that audiences will continue to believe.
Currently, disinformation is one of the biggest yet most nebulous threats facing democracy. It’s a high-stakes, low-consequence information war. Adversaries are turning technology and our core values against us. How do you combat the abuse of the First Amendment without trashing the spirit of the First Amendment? The Internet was founded on anonymity, which makes disinformation difficult to combat. It’s cheap, it’s easy, and people want to believe what they read when it aligns with their ideas and mindset. At the same time, it’s gotten easier to create deepfakes and malicious bots, as the tools behind them have been democratized and widely disseminated. We’ve even seen the rise of disinformation-as-a-service which, when weaponized against corporations, can be “...extremely painful, knock billions off share prices and cost CEOs their jobs,” Sharb Farjami, global chief executive officer of Storyful, told the Financial Times.
However, the government has more recently turned its attention to big tech and the monopolies those companies have carved out. In fact, a majority of the public now supports regulation of big tech companies. Just as the FCC regulates television and radio in the United States, there is mounting pressure for governmental oversight of social media platforms to rein in the runaway disinformation issue. The Honest Ads Act, for one, would mandate the same transparency for social media advertising that is required of traditional advertising, which is a step in the right direction.
Still, disinformation comes in multiple forms. There is simply no silver bullet to remedy the threat—no single tool that can guide people to truth or safety. Instead, everyone must be diligent about questioning what they see online, as opposed to simply taking information at face value without further thought or inquiry. The good news is that compared to several years ago, there is much more awareness today of disinformation campaigns and their intent, as well as a growing dialogue with social media organizations around the issue.
Additionally, public/private partnerships could help combat disinformation campaigns and bad actors—particularly in the U.S. and U.K., where open standards can leave organizations more exposed to attack. The Carnegie Endowment recently suggested a consortium that brings together academics, large social platforms, and commercial tech companies to ramp up disinformation research. Historically speaking, innovation is largely driven by necessity. While disinformation is a large and growing threat, it is exciting to think what new technology could come from such an initiative, or how social media could evolve to meet this urgent challenge.
In 2021 and beyond, disinformation will continue to grow in scale and scope. And why not? Disinformation campaigns are cheap and easy to mount, while the risks and penalties are nearly nonexistent.