Future Insights - The Rise of Insider Threat-as-a-Service
A note from our series editor, Global CTO Nicolas Fischbach:
Welcome to the fifth post in our Forcepoint Future Insights series, which will offer six separate points of view on the trends and events we believe the cybersecurity industry will need to deal with in 2021. Check out the previous posts in the series:
- The Emergence of the Zoom of Cybersecurity
- Inherent Bias in Machine Learning
- People Do People Things
- Disinformation is Inevitable
Here's the next post from our Chief Strategy and Trust Officer, Myrna Soto:
The biggest threats will come from where you least expect
When envisioning the threats to your organization, malicious nation states or greedy virtual thieves located halfway around the world might loom large. But what if the risk is an undercover employee? What if it’s a person who’s not even real? What if it’s the neighbor you never suspected? In 2021 we’re going to see threats emerge from unexpected places, and sometimes the call will be coming from inside the house.
In the past we’ve thought of “insider threats” as disgruntled employees who walk out of the building with proprietary information hidden in their briefcases. But today your employees may be scattered around the world, you may hire them after meeting only via Zoom, and they may never set foot inside one of your offices. And today you can buy almost anything on the dark web, including “trusted insiders.” In 2021, I expect to see organized cells of recruitment infiltrators offering targeted ways for bad actors to become trusted employees, with the goal of exfiltrating priceless IP. These bad actors will become deep undercover agents who sail through the interview process and clear every hurdle your HR and security teams have in place to stop them.
We want to believe our employees are good people—but the stats tell us that between 15 and 25 percent are not. The only way to find these people before they do irreparable damage to your organization is to understand human behavior and recognize when their activities don’t match their profile.
I believe we’ll see another form of fake identity in 2021, aimed specifically at the financial services industry. According to McKinsey, synthetic ID fraud is the fastest-growing type of financial crime in the United States and is spreading to other geographies. Synthetic fraudsters combine real and fake credentials to build a phony profile good enough to apply for credit. Although such applications are normally rejected by the credit bureau, merely having a file is enough to set up accounts and start building a “real” credit history to apply for bank accounts, credit cards, and loans. It’s almost impossible to tell a real identity from a synthetic one, and since there’s no individual whose ID has been stolen, the real victims are the businesses left with no way to recover their losses.
You would think that modern technologies such as machine learning (ML) could easily identify this kind of fraud. The issue is finding a data set to train the ML on: how do you teach a model to identify a fake persona when fakes are almost indistinguishable from real people?
The answer is to dig deeper to establish identity, using third-party data feeds that show a consistent history or face-to-face verification of a passport or driving license. Over time, businesses can build a checklist of inconsistencies commonly found in synthetic identities and use it to train an algorithm to automatically flag suspect files for action.
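The checklist approach described above can be sketched as a simple rule-based scorer. This is a minimal illustration only: the field names, thresholds, and specific inconsistency rules below are hypothetical assumptions, not an actual fraud-detection product or McKinsey's methodology.

```python
# Hypothetical sketch: score an applicant file against a checklist of
# inconsistencies often associated with synthetic identities.
# All field names and rules are illustrative assumptions.

def inconsistency_score(applicant: dict) -> int:
    score = 0
    # A credit file that appeared only recently, yet claims an older adult.
    if applicant.get("file_age_months", 0) < 6 and applicant.get("stated_age", 0) > 25:
        score += 1
    # Many applications sharing one address can indicate a synthetic "farm".
    if applicant.get("applicants_at_address", 1) > 5:
        score += 1
    # No third-party history (utilities, payroll) corroborates the identity.
    if not applicant.get("third_party_history", False):
        score += 1
    return score

def flag_for_review(applicant: dict, threshold: int = 2) -> bool:
    """Flag the file for manual review once enough inconsistencies pile up."""
    return inconsistency_score(applicant) >= threshold
```

In practice, the rules accumulated in such a checklist could also serve as labeled features for training the ML model the previous paragraph describes, rather than being applied directly.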
The hacker next door
We know from our studies into behavior that it’s easier, and more comfortable, for cybersecurity professionals to believe that all attacks come from external forces and to picture the attackers as devious foreign actors. But the truth is, nation states usually have higher-value targets in their sights than schools or hospitals. They also want to fly under the radar: when stealing the formula for a vaccine, for example, the theft is more valuable if no one realizes anything is missing.
Annoyance attacks or DDoS hits also often have local suspects. For example, who knows a school system’s security better than a student who uses the network, and who has a better reason to cause a disruption?
Miami-Dade Schools found this out the hard way when a 16-year-old student was revealed as the mastermind behind a cyberattack on the first day of school. The network was overwhelmed with DDoS attacks, causing error messages and glitches that disrupted virtual classrooms for days.
When it comes to data breaches, the culprit is more often on the inside. We see many cases of low-level data theft by employees who think they won’t get caught, and certainly a whole host of breaches caused by simple human error or poor security administration. With COVID-19 continuing to push work and education into the home, and hospitals increasingly using telemedicine to treat patients, thousands of enterprises are more reliant on technology, and more at risk from troublesome insiders, than ever before.
The insider threat needs to be taken seriously and accepted as a real risk by security leaders, who should ask tough questions about whether they have the tools and solutions in place to spot and stop anomalous behavior before it’s too late.