In any job, there are good days, bad days, and GREAT days, and last week I got to experience what I considered to be a great day for me – the opportunity to be part of a panel in the “Intelligent Future” track of South by Southwest Interactive.
The panel, entitled “Pre-Crime: It’s not just Science Fiction anymore,” was a whole lot of fun to put together, working with fellow panelists Jennifer Lynch (attorney with the Electronic Frontier Foundation), David Brin (futurist and author), and moderator Joe Brown (editor in chief of Popular Science). Furthermore, the topic is tantalizing, as it speaks to something that is happening today but that most of us don’t think about: predictive policing.
For many readers, the idea of predicting crime might sound like science fiction – and indeed, as the topic of Philip K. Dick’s “The Minority Report,” it actually is. However, the use of Predictive Analytics (PA) to predict real-world crime is already being put to use on the streets of some US cities today. Our panel delved into some of the opportunities – and risks – of this kind of policing.
In case you’re wondering how I fit into this topic, the subject isn’t as far removed from my day job as you might think. Forcepoint’s Human Point System is designed to provide early warning of employee misbehavior or impersonation – it’s essentially “pre-crime” detection, but on a more focused basis. Indeed, much of Forcepoint’s research is designed to discover ways that we can make a prediction about (for example) future data theft and provide protection for a company without penalizing the employee impacted by the prediction – who, after all, has not necessarily done anything wrong yet.
That’s the key with predictive analytics. It’s a two-step process: first, what is predicted; second, what can be done about it. If the answer to the latter question is nothing, then the prediction (even if accurate) is pretty much worthless. Fortunately, when it comes to data protection, there’s quite a lot we can do while still balancing the legitimate interests of both employee and employer. Done right (something we spend quite some time trying to do!), you really can have it both ways.
Unfortunately, when it comes to policing, things can be a bit more complicated. Let me explore that a bit.
First, as the panel explained, there are levels of prediction, ranging from predicting where crimes might occur to who is likely to commit a particular crime. As you can imagine, the impact of that latter prediction is a little bit more personal – imagine being suspected of a crime just because a computer said so… and that’s the heart of the issue. A prediction is just a probability; it does not actually tell the future.
Second, just as with cybersecurity, crime-focused PA makes use of data from the past to predict the future. Any algorithm is therefore only as good as the data it is provided, and if that data is biased in terms of race or gender (or other attributes, for that matter), the predictions it makes will share that same bias. This is a well-known problem in PA – executives like Satya Nadella have made guarding against it a core principle, writing that “A.I. must guard against bias, ensuring proper, and representative research so that the wrong heuristics cannot be used to discriminate.” That’s a strong statement – and one that my fellow panelists and I would agree with, I think. Many would say that there is good evidence of bias in existing crime data – and thus any algorithm built on that data would itself be biased in the same way.
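To make the feedback loop concrete, here is a toy sketch (purely illustrative, with made-up numbers and a hypothetical patrol-allocation rule) of how a naive predictive model can inherit and amplify a bias that was baked into its historical data:

```python
from collections import Counter

# Hypothetical historical arrest records. Suppose neighborhood A was
# patrolled twice as heavily as neighborhood B, so it generated twice
# the arrests even though the underlying offense rate was identical.
historical_arrests = ["A"] * 200 + ["B"] * 100

def allocate_patrols(arrest_log, total_patrols):
    """Naive predictive model: assign patrols in proportion to past arrests."""
    counts = Counter(arrest_log)
    total = sum(counts.values())
    return {hood: round(total_patrols * n / total) for hood, n in counts.items()}

patrols = allocate_patrols(historical_arrests, total_patrols=30)
# The model reproduces the patrol bias hidden in its training data:
# neighborhood A gets twice the patrols, which will generate more
# arrests there, which in turn skews the next round of training data.
print(patrols)  # {'A': 20, 'B': 10}
```

Nothing in this sketch is malicious: the algorithm simply learned the historical pattern it was given, which is exactly why representative data and outside scrutiny matter.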
When it comes to crime, these concerns have prompted real pushback from civil liberties groups and researchers alike. For example, the ACLU and a group of 16 other organizations issued a statement expressing concern about the use of predictive policing, based on precisely these concerns about algorithmic bias. Furthermore, there is research supporting the argument that some predictive algorithms have, with no bad intention, created systems that impact minorities more than others. Nobody wants that.
This all sounds very dark, but it doesn’t have to be. To quote Brandeis, “Sunlight is said to be the very best of disinfectants” – and our discussion aimed to shed light on what is a complex problem.
Transparency is the key to progress here: any system that attempts to predict crime should be subjected to rigorous third-party scrutiny. Additionally, there is real opportunity in thinking about the police role beyond strict enforcement, to include connection and engagement with the community. That’s the ray of hope that shone through: that we can work together across disciplines to help reduce crime, reduce discrimination, and reduce unreasonable surveillance through the application of science.
To do this, we must (MUST!) be willing to engage in a broad social dialogue about the type of society we want to live in, and the price we are willing to pay to reach that end state. To that end, groups like the EFF play a critical role in helping force transparency when it is not readily given, and in providing a voice that speaks up clearly and articulately for those who might be the losers in a predictive future. I applaud their efforts.
David Brin perhaps said it best during our time together. In essence, he argues that it is miraculous that we have come this far as a society, and we need to recognize the progress we have made – as well as the fact that we are as yet an imperfect union. By keeping both the vision and the problems front and center, we can work to fulfill the promise of PA done right.
I hope you take the time to look at some of the links I shared in this post, because PA is coming to a police department – or computer – near you soon. The technology will impact you in many ways, some subtle, some less so. We use it today in computer security, for example. To designers, end users, and defenders alike, I say this: let’s keep the vision, but not be blind to the challenges that vision presents.