X-Labs
August 6, 2020

Believe What I Do, Not What I Say - Understanding User Security Habits with Behavioral Analytics

Dr. Margaret Cunningham, Principal Research Scientist

You’ve run out of time to complete your annual information security training, so you find the final reminder email, click the link, and log into the training portal. As the training runs on one of your screens, you continue to send emails, complete other tasks, or browse the internet. If you’re unlucky, your training module strips your ability to multi-task, so you take a trip to the kitchen while it drones on in the background. Finally! It is time to take the exam. On your second attempt you manage an 84%, which is good enough to move on with your day.

While this scenario describes less than ideal training materials and processes (a topic worthy of its own blog post), it reflects what many employees experience. 

Why is this the case?

A few quick searches on security awareness training uncover an echo chamber of articles insisting that you must increase training[1] to address “human error” because “humans are the weakest link in cybersecurity.”[2] [3] [4] The result is that most organizations create training materials that fit into their current compliance-oriented training strategy, while some invest in superior training methods such as mock attacks, continuous education, and reinforcement of key learning objectives.

What you rarely find, however, is data that illustrates how well security awareness training captures individual differences in the real, day-to-day behaviors that are aligned with strong security practices. While training is not going to be replaced at any point in the near future, organizations that are serious about building resiliency against cyberthreats must invest in understanding human behavior.

Recent research[5] shows that there is often a mismatch between a person’s self-reported level of security awareness (such as their responses to a questionnaire or quiz) and their actual behavior when challenged by a security threat.  This means that a perfect score on an annual training module, or a questionnaire filled with desirable responses, may not translate to behaviors that keep an organization safe. 

However, when a person’s actual behaviors (not self-reported documentation) are analyzed, their typical day-to-day actions align with their performance on security challenges. Simply put, people who engage in less safe or less “security aware” behaviors are less likely to pass security challenges (such as a simulated phishing attempt), whereas people who engage in safer, more “security aware” behaviors are more likely to pass them.

In fact, looking at the data in Table 1, you can see that across multiple challenge types, successfully besting the challenge correlated nearly perfectly with measures of security awareness derived from behavioral data. However, the correlation disappeared (or even turned negative!) when performance on security challenges was compared with security awareness responses from a questionnaire.
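To make that comparison concrete, here is a minimal sketch in Python using invented scores and outcomes (not the study’s data), showing the kind of analysis described above: correlating pass/fail results on a security challenge with an awareness score derived from observed behavior versus one derived from a questionnaire.

```python
# Hypothetical illustration only: all scores and outcomes below are invented.
from scipy.stats import pointbiserialr

# 1 = passed the security challenge (e.g., did not click a simulated phishing link)
challenge_passed = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

# Awareness score derived from observed day-to-day behavior (0-100)
behavioral_score = [88, 35, 76, 81, 42, 90, 30, 72, 38, 85]

# Awareness score from a self-report questionnaire (0-100)
questionnaire_score = [90, 85, 70, 95, 88, 80, 92, 75, 86, 83]

r_behavior, p_behavior = pointbiserialr(challenge_passed, behavioral_score)
r_survey, p_survey = pointbiserialr(challenge_passed, questionnaire_score)

print(f"Behavioral measure vs. challenge outcome:    r = {r_behavior:.2f}")
print(f"Questionnaire measure vs. challenge outcome: r = {r_survey:.2f}")
```

With these made-up numbers, the behaviorally derived score tracks challenge success closely while the questionnaire score does not, mirroring the pattern reported in the research.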

Additionally, Bitton and colleagues found discrepancies between self-reported and actual behavior beyond the security challenges. Participants who reported that they would avoid websites with security warnings, never download unsecured/unencrypted files, and use password-protected lock screens often contradicted themselves in practice, visiting flagged sites, downloading unencrypted files, or leaving their screens unlocked.

Conclusion

As our workforce is increasingly bombarded by sophisticated attacks, it is critical to move away from an inflated sense of security built on biased self-report measures and towards a true-to-life behavioral assessment of human resiliency to cyberthreats.

Behavioral analytics can advance an organization’s understanding of employees’ actual levels of security awareness by stripping away the bias inherent to self-report measures and performance on low fidelity training modules. 

Understanding behavior in this manner also provides a meaningful baseline of end user behaviors, which can further an organization’s ability to measure the impact of future trainings or security awareness campaigns. 
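As a simple illustration of what such a baseline could look like, the sketch below (hypothetical event names, invented data, and an assumed training date) aggregates logged risky-behavior events per user and compares counts before and after a training campaign.

```python
# Hypothetical sketch: building a per-user baseline of risky-behavior events
# and comparing rates before and after a training campaign.
# Event names, dates, and users are invented for illustration.
from collections import defaultdict
from datetime import date

TRAINING_DATE = date(2020, 6, 1)  # assumed date of the awareness campaign

# (user, event_date, event_type) -- e.g., exported from a monitoring tool
events = [
    ("alice", date(2020, 5, 4),  "clicked_flagged_link"),
    ("alice", date(2020, 5, 18), "disabled_screen_lock"),
    ("alice", date(2020, 6, 15), "clicked_flagged_link"),
    ("bob",   date(2020, 5, 7),  "downloaded_unencrypted_file"),
    ("bob",   date(2020, 6, 20), "downloaded_unencrypted_file"),
    ("bob",   date(2020, 6, 25), "clicked_flagged_link"),
]

baseline = defaultdict(lambda: {"before": 0, "after": 0})
for user, when, _event in events:
    period = "before" if when < TRAINING_DATE else "after"
    baseline[user][period] += 1

for user, counts in baseline.items():
    print(f"{user}: {counts['before']} risky events before training, "
          f"{counts['after']} after")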

-- 

[5] Bitton, Boymgold, Puzis, & Shabtai, 2020. Evaluating the information security awareness of smartphone users. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1-13; Black Hat 2020 Human Factors Presentation: https://www.blackhat.com/us-20/briefings/schedule/#a-framework-for-evaluating-and-patching-the-human-factor-in-cybersecurity-2069

Dr. Margaret Cunningham

Principal Research Scientist

Dr. Margaret Cunningham is Principal Research Scientist for Human Behavior within our Global Government and Critical Infrastructure (G2CI) group, focused on establishing a human-centric model for improving cybersecurity. Previously, Cunningham supported technology acquisition, research and...


About Forcepoint

Forcepoint is the leading user and data protection cybersecurity company, entrusted to safeguard organizations while driving digital transformation and growth. Our solutions adapt in real-time to how people interact with data, providing secure access while enabling employees to create value.