August 3, 2017

A Human-First Approach to Predictive Analytics

Richard Ford, Chief Scientist

The annual Black Hat USA conference last month was exactly what one would predict from a mashup of the world’s top hackers, security vendors, and Las Vegas: a vortex of lights, sounds, content, and cocktails, with a healthy dose of tall tales thrown in (“I once hacked a box THIS BIG…”). While grueling, the event is also a worthwhile and fun trip.

Aside from the stream of media and analyst meetings I had there, I was at the conference to present on the tricky problem of ‘pre-crime’ – the idea that we can predict a user’s probability of misbehaving, either accidentally or intentionally. While this sounds like the stuff of science fiction (the talk title was sparked by the Philip K. Dick short story “The Minority Report” and the movie it inspired), technical advances have made these kinds of human-centric predictions quite achievable. And, as we all know, with great power comes great responsibility (thank you, Ben Parker).

The science of this falls under “predictive analytics”, though one would be forgiven for missing the term in the sea of Black Hat buzzwords. The basic idea is that we combine historical data with knowledge about the present to make predictions about likely futures. These predictions aren’t like a Magic 8 Ball – that is, they don’t necessarily come true, and when they do, it’s certainly not by magic. Instead, they are firmly in the realm of real science, and as such their accuracy can be measured, rated, and tuned.
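To make that last point concrete, here is a minimal sketch of the measure-and-tune loop. The features, labels, and thresholds below are entirely invented for illustration – this is not anyone’s production model, just the shape of the idea:

```python
# A toy predictive-analytics loop: fit a model on labeled history,
# then measure and tune it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                   # e.g., off-hours logins, USB use, bulk downloads
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)   # stand-in label: "turned out risky"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The output is a probability, not a verdict -- and its quality is measurable.
p_risky = model.predict_proba(X_test)[:, 1]
for threshold in (0.5, 0.8):                # "tuning": trading precision for recall
    preds = (p_risky >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y_test, preds):.2f}, "
          f"recall={recall_score(y_test, preds):.2f}")
```

The point of the exercise is the last loop: because the prediction is a score, not a verdict, we can decide how aggressive to be and know exactly what that choice costs us in false positives.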

As an industry, we’ve done this with respect to potentially malicious infrastructure before, but the same techniques can become a little disturbing when applied to people. The difference, of course, is that packets don’t have feelings. When you block a website based on a prediction, there is typically not much at stake. That is not to say a false positive doesn’t matter, but it’s all relative. When you apply that logic to a person, however, it is a very different story.

My central thesis at Black Hat was that predictive security analytics with respect to human behavior is a very powerful tool, but also poses a significant risk to employee/employer relationships. This is particularly true when we confuse mitigating a risk with penalizing, or otherwise punishing, a person for something they haven’t done yet.

Let’s take a look at the following example.

The use of predictive analytics to determine that a user is considering leaving an organization is already pretty common. In fact, it’s often used to improve employee retention – a huge positive that helps teams stay together. In the insider threat space, however, the knowledge that an employee is probably leaving can have darker overtones. Many studies have shown that a large percentage of employees take corporate data with them when they leave, so a predicted departure can be treated as a risk indicator: it raises the probability that a data move by that user represents theft. This is both an opportunity and a problem.
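A little back-of-the-envelope arithmetic shows how much a departure prediction shifts the picture. Every number below is an assumption chosen purely for illustration:

```python
# Hypothetical illustration: how a predicted departure changes the odds
# that a given data transfer is theft. Every number here is an assumption.
p_departing = 0.80           # model's predicted probability the user is leaving
p_theft_if_departing = 0.50  # assumed: half of data moves by leavers are theft
p_theft_if_staying = 0.02    # assumed: baseline rate for everyone else

# Law of total probability over the departure prediction:
p_theft = (p_departing * p_theft_if_departing
           + (1 - p_departing) * p_theft_if_staying)

print(f"P(theft | data move) = {p_theft:.3f}")  # 0.404, up from the 0.02 baseline
```

Under these made-up numbers, the very same file copy goes from a two-percent concern to a forty-percent one, purely because of who is predicted to be doing it.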

On one hand, an employer has both the right and a duty to protect its corporate IP, and it is entirely reasonable for an employer to treat such an action as carrying elevated risk. The trick is what is done with that knowledge. One option is to limit that user’s access – but that is essentially penalizing a user for something they haven’t done yet. Another approach might be to increase monitoring, so that if data is stolen, the company has evidence and some avenue of remediation. Finally, one could imagine dynamically and silently encrypting data that is moved off a managed device. If that data moves back to another corporate device, it is decrypted, and all is well. If it is moved to a personal device, it stays protected, locked up in a nice encrypted package.
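As a rough sketch of that third option, the following shows the shape of transparent egress encryption. It uses the Fernet primitive from the Python cryptography package; the device inventory and key handling are hypothetical stand-ins, not a description of any real product:

```python
# A sketch of silent egress encryption: data leaving a managed device is
# wrapped with a key only corporate devices hold. The inventory and key
# escrow here are invented placeholders.
from cryptography.fernet import Fernet

CORPORATE_KEY = Fernet.generate_key()  # in practice: held in corporate key escrow
cipher = Fernet(CORPORATE_KEY)

MANAGED_DEVICES = {"laptop-0042", "desktop-0117"}  # hypothetical asset inventory

def on_egress(data: bytes, destination: str) -> bytes:
    """Silently encrypt data headed to any device the company does not manage."""
    if destination in MANAGED_DEVICES:
        return data                    # corporate-to-corporate: pass through
    return cipher.encrypt(data)        # personal device receives ciphertext only

def on_ingress(blob: bytes, source: str) -> bytes:
    """Unwrap protected data when it returns to a managed device."""
    if source in MANAGED_DEVICES:
        return blob                    # never left the managed estate
    return cipher.decrypt(blob)        # only corporate devices hold the key
```

The appeal of this design is that nothing visible happens to the user at all unless the data actually ends up somewhere it shouldn’t.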

The point here is that context is everything, and we must use these analytics in a human-first way. Each of the mitigations above works technically, but only one really wins from the human perspective. Humans are not predictable, neat streams of 1s and 0s, and shouldn’t be treated as such. Analytics that allow us to predict an employee’s likely future must not be used in a way that disadvantages that employee in the workplace. Instead, they should be used to head off potential trouble, mitigate potential damage, and ultimately strengthen the relationship between the company and its employees.

Sadly, this human-first approach is still in its infancy in the industry. Walking the floor of the exhibit halls, I saw a lot of uses for big data, but my fear is – and remains – that these powerful predictive analytics algorithms will end up being deployed based on numbers, not on people. In that world, we all lose. The only responsible way forward with predictive analytics is a human-first approach.
