Contextualizing trust – how best to protect businesses' critical data

Let’s cut straight to the chase: today’s businesses face real challenges in protecting critical data. And it’s not just the obvious targets, such as healthcare providers and financial services: users happily share personal information via social networks, and the wealth of data tracked by the Internet of Things is mind-boggling. If you have a credit score – and you almost certainly do – there’s a lot of information out there in the ether about you.
Sadly, when I say out there, I don’t just mean in the hands of the credit agencies: the 2017 breach of credit reporting agency Equifax exposed the personal data of some 143 million American consumers. Similarly, in September 2018, the largest breach in Facebook’s history exposed the personal information of nearly 50 million users. In today’s society, even if you think you’re off the grid, trust me when I tell you you’re not actually “off the grid”.
With few good alternatives available, end users find themselves in the unenviable position of having to trust the companies their data passes through. These same companies, though, are in turn forced to decide how much to trust – or distrust – the people, devices, systems, and infrastructure that make up the overall IT environment.
That’s a lot of trust, and that is in fact part of the challenge. People, devices, systems, and infrastructure all have different priorities, agendas, and weaknesses, yet businesses have traditionally painted with a pretty broad brush when it comes to trust. That’s a mistake, because trust can be a powerful ally in the hunt for better, safer, and more secure relationships between provider and consumer.
In the commercial world, businesses could start by taking a page from the government playbook. Government agencies have a long history with trust – one only has to look at the clearance process that validates the trustworthiness of those who access classified data, and at how classified data is isolated from those who do not possess the requisite clearances. Governments also understand the need to balance the dual concepts of trustworthiness and risk acceptance as part of using multilevel secure systems. Mostly, this works well, even against some fairly determined external attackers as well as malicious insiders and spies. It’s easy to focus on the times this approach has broken down, but we also need to be aware of the many times it has performed exactly as designed.
But even this framework has weaknesses. Trust is interwoven throughout, but only in parts of the government, and even there it is sometimes dosed out in a rather “all or nothing” way. The problem is that we typically decontextualize trust, and that stems from how we tend to think about trust differently in the online world.
When we meet someone socially and are getting to know them, most of us take cues from those initial conversations to figure out how trustworthy that person is. That trust, however, is situational. You might trust your new friend to provide a ride to dinner, but not trust them (yet) with the keys to your car. It’s not just about determining whether a person is “good” or “bad”, but whether they are likely to perform a particular action reliably. You might know, for example, that someone is entirely well-intentioned but very clumsy; you might not trust that clumsy person with your most treasured crystal glass, even though they are honest, reliable, and kind. It’s not that they’re bad… they’re just bad with fragile things. It’s a subtle but important difference: the question is not just whether you trust someone, but whether you trust them with respect to a particular action.
Now let’s compare this to how we view trust with respect to computing. Here, defenders often apply a high degree of “inside/outside” thinking along the lines of “what’s inside is good and what’s outside is bad”. Once I’m logged in to a machine or part of an organization, I’m pretty much given free rein within my granted rights. There are some checks and balances: insider threat programs, for example, try to identify those insiders who are a danger to the organization. However, the overarching paradigm is one of trust or distrust, with not much in between. It’s the same with machines: once a machine is placed on the network, we generally trust it completely. This type of “trust” isn’t trust at all – because it’s not situational – it’s more a “permit or deny” privilege-based system... and it’s easy for an attacker to exploit. Essentially, by decontextualizing the trust and reducing it to an all-or-nothing score, we allow anyone who gets into that trusted domain to do whatever they like: the challenge for the attacker is then reduced to how to get in the door.
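The contrast is easy to see in code. Here is a minimal sketch – all names, roles, and policy rules are illustrative assumptions, not any real product’s API – showing the difference between a decontextualized “in the door means trusted” check and a decision keyed to the specific principal, action, and context:

```python
from dataclasses import dataclass

# Binary, decontextualized model: one flag gates everything.
def binary_decide(principal_is_inside: bool) -> bool:
    return principal_is_inside  # once "inside", every action is allowed

# Contextual model: trust is evaluated per (role, action, context).
@dataclass(frozen=True)
class Context:
    device_managed: bool   # request comes from a managed device?
    network: str           # e.g. "corp", "vpn", "public"

POLICY = {
    # (role, action) -> predicate over the request context
    ("analyst", "read_report"): lambda ctx: True,
    ("analyst", "export_data"): lambda ctx: ctx.device_managed and ctx.network == "corp",
    ("admin",   "export_data"): lambda ctx: ctx.device_managed,
}

def contextual_decide(role: str, action: str, ctx: Context) -> bool:
    rule = POLICY.get((role, action))
    return rule(ctx) if rule else False  # unknown combinations: default deny

# The same logged-in analyst gets different answers per action and context:
ctx = Context(device_managed=False, network="public")
contextual_decide("analyst", "read_report", ctx)   # allowed
contextual_decide("analyst", "export_data", ctx)   # denied from this context
```

In the binary model, the attacker’s only problem is getting “inside”; in the contextual model, every sensitive action still has to clear its own situational bar.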
The way forward is fairly clear – wide adoption of fine-grained, context-sensitive trust is the right approach. Furthermore, this contextual model of trust needs to be applied to more than just people on specific programs, but to every entity that interacts with a system, as well as the system itself.
The only long-term solution is to embrace this contextual-trust-based architecture in a consistent and broad manner. The move away from the “inside/outside, good/bad” mindset has already started – behavioral analytics, for example, are a much-needed first step and can be very effective in detecting abuses of granted trust – and it should be encouraged. What’s needed next is to recognize that trust comes in degrees, and to use this contextual trust concept to deliver risk-adapting cybersecurity policies. The world really is about shades of grey; treating it that way will enable us to protect the data that is most important to us.
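The idea of trust in degrees driving risk-adapting policy can be sketched as follows. This is a toy illustration under stated assumptions – the signal names, weights, and thresholds are invented for the example, not drawn from any real analytics product:

```python
# Behavioral signals adjust a baseline trust score in the range 0.0-1.0.
def trust_score(base: float, signals: dict) -> float:
    score = base
    if signals.get("unusual_hours"):
        score -= 0.2   # activity outside the principal's normal pattern
    if signals.get("new_device"):
        score -= 0.3   # first time this device has been seen
    if signals.get("mfa_verified"):
        score += 0.2   # recent strong authentication raises confidence
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# Riskier actions demand more trust; trust is a gradient, not a gate.
RISK_THRESHOLDS = {
    "read_public_doc":      0.1,
    "read_customer_record": 0.5,
    "bulk_export":          0.9,
}

def permitted(action: str, score: float) -> bool:
    return score >= RISK_THRESHOLDS.get(action, 1.0)  # unknown actions: deny

# A user on a new device with fresh MFA lands at a middling score,
# enough for routine work but not for the riskiest operations.
score = trust_score(0.7, {"new_device": True, "mfa_verified": True})
permitted("read_customer_record", score)  # routine access still works
permitted("bulk_export", score)           # high-risk action is blocked
```

The point is the shape, not the numbers: the same session is neither wholly trusted nor wholly distrusted, and the policy adapts as the score moves.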