December 11, 2018

A “critical period” for cybersecurity

Dr. Margaret Cunningham, Principal Research Scientist

The field of cybersecurity is undergoing a critical period, in which lessons from cognitive and behavioral science offer a springboard for advancing the underlying structures and outcomes of information security strategies through a more layered approach.

Critical periods are phases of development in which specific types of insight or information can disproportionately shape the future. For instance, people experience critical periods when learning languages, and animals experience them when imprinting on their parents. Cybersecurity is currently in the midst of such a period, where we have the opportunity to adopt insights from cognitive and behavioral science to advance the underlying structures and outcomes of information security strategies.

The future isn’t just automated

While it’s long been a fear that machines are coming to replace human workers, we’re starting to realize this is unlikely to happen completely, simply because it’s not the best use of either humans or machines. A recent survey by Accenture researchers published in Harvard Business Review found that companies that have people and machines working together achieve better performance; companies that replaced workers entirely with automation saw only short-term benefits. The real power comes when businesses harness the strengths of each. The Accenture team presents two models for this collaboration: humans supporting machines by teaching them, explaining the results they deliver, and maintaining the systems; and machines helping humans by enhancing creativity, communicating information, or taking the physical place of humans in dangerous or exhausting roles.

For the cybersecurity industry, we envision a more integrated approach coming out of this critical period. We face ever-evolving threats that require adaptable tools that can not only react, but anticipate. Unfortunately, machines today aren’t great at dealing with the unknown, unanticipated and unexpected. We call the ambiguous area of unexpected activities the “gray space.” This includes all sorts of human, employee and entity activities that don’t fit into a model we’ve trained machines to recognize.
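
One rough way to picture the gray space is a detector that only acts on activities it can score with confidence and abstains on everything in between. The sketch below is purely illustrative: the `score_activity` function, its features, and the thresholds are hypothetical placeholders, not any actual Forcepoint model; the point is only that a band of ambiguous scores is left for human judgment.

```python
# Hypothetical sketch: a detector whose confidence bands leave a "gray space"
# of activities that are neither clearly benign nor clearly malicious.

def score_activity(activity: dict) -> float:
    """Toy risk score in [0, 1]; a real model would be trained on telemetry."""
    score = 0.0
    if activity.get("off_hours"):
        score += 0.4
    if activity.get("bulk_download"):
        score += 0.4
    if activity.get("new_device"):
        score += 0.2
    return min(score, 1.0)

BENIGN_MAX = 0.3     # below this, auto-allow
MALICIOUS_MIN = 0.7  # above this, auto-block

def triage(activity: dict) -> str:
    s = score_activity(activity)
    if s < BENIGN_MAX:
        return "allow"
    if s > MALICIOUS_MIN:
        return "block"
    return "gray space: escalate to a human analyst"

print(triage({"off_hours": True, "new_device": True}))  # ambiguous -> escalate
```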

Artificial Intelligence (AI) is meant to bridge this gap with machines that can evaluate ambiguous activities, but we’re not there yet. AI works well in certain niches, such as healthcare, marketing, or autonomous vehicles, but it is not yet broadly effective. Our 2019 Cybersecurity Predictions Report states that there is no real cybersecurity AI at the moment, and there won’t be in 2019. Today’s cybersecurity solutions are closer to machine learning, and they require much of the support the Accenture team enumerated in their humans-helping-machines model. That sort of solution won’t be sufficient in the long term.
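
As a rough illustration of what "machine learning plus human support" looks like in practice, the sketch below uses scikit-learn's IsolationForest to flag statistically unusual login records and hand them to an analyst for review. The features, data, and contamination setting are assumptions made for the example, not the approach described in our report.

```python
# Minimal sketch of the humans-supporting-machines pattern:
# an unsupervised model flags outliers, a person reviews and labels them.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical login features: [hour_of_day, megabytes_transferred]
normal = rng.normal(loc=[10, 50], scale=[2, 15], size=(500, 2))
odd = np.array([[3, 900], [2, 700]])           # late-night bulk transfers
events = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)                   # -1 = anomalous, 1 = normal

# The machine narrows thousands of events to a handful; a human decides,
# and those decisions become labels that tune or retrain the model.
for event in events[flags == -1]:
    print("escalate to analyst:", event)
```

The model here only ranks events by statistical oddity; it cannot judge intent, which is exactly the support role the humans-helping-machines model assigns to people.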

Applying cognitive science to machines 

While humans come with their own shortcomings, such as limited processing power, fatigue, and bias, people are much better at discerning the intent of ambiguous activities. This capability takes time to emerge: infants and children incrementally improve their ability to make sense of their environment, and they spend years as adolescents learning to reason through abstract concepts.

Our vision is that machines can similarly evolve and mature, guided by what scientists know about cognition. We can build upon our understanding of how children develop the ability to differentiate between similar things, such as zebras and horses, or of what allows us to immediately recognize that something in our environment is out of place.

While we’re guiding cybersecurity tools through this critical period, the goal is to build on the capabilities that make machines so powerful: the ability to process large quantities of data, unlimited attention and focus, enormous memory, and incredible speed. Ultimately, we’ll have a potent tool to face the cybersecurity threats to come.

Read how cognitive science can help improve cybersecurity tools in our report: Exploring the Gray Space of Cybersecurity with Insights from Cognitive Science.

Dr. Margaret Cunningham

Principal Research Scientist

Dr. Margaret Cunningham is Principal Research Scientist for Human Behavior within our Global Government and Critical Infrastructure (G2CI) group, focused on establishing a human-centric model for improving cybersecurity. Previously, Cunningham supported technology acquisition, research and...

