Rachael: We have such a fascinating guest joining us today. Maria Bada, Ph.D., is a lecturer in cyberpsychology at Queen Mary University of London and a RISCS fellow in cybercrime.
Eric: Welcome, Dr. Maria Bada.
Rachael: This is such great timing. Insider Threat Awareness Month is in September, which makes a fantastic foundation for our conversation today. Let me start at the beginning. How did you even get into this world of cyberpsychology? I'm so fascinated by the path that leads there.
Maria: It took a while. I started in psychology, and my Ph.D. is in media psychology. Back when I was studying, more than a decade ago, the term cyberpsychology didn't even exist. We used to say media psychology. I started looking, from a psychological perspective, at how people behave online.
Along the way, I was very lucky to work with very big universities in the UK, like Oxford and Cambridge. I worked with governments, including the UK government, and also looked at the topic of cybersecurity capacity building: how the skills of a psychologist could be used by the government to develop awareness campaigns and educate the public on safe online behavior.
It's been a long road, but it has definitely been an exciting and fascinating role. Let me not forget the very important work with practitioners and the private sector, which is well ahead in the area of cybersecurity. The learnings for an academic working with the private sector have been brilliant, really.
Eric: You mean the private sector is ahead of the academic sector?
How People Operate Online and Become a Potential Insider Risk
Maria: I think at the moment they work together a lot, but definitely the private sector. If we think about telecoms or the finance sector, they're well ahead in terms of technical knowledge. But the private sector works very closely with academics, funding research and then utilizing the tools developed. I would say they move forward alongside each other.
Eric: You said something that struck me. You study how people operate online. How do people operate online? I've thought about it, but never in that manner, I guess. How would you summarize how people operate online?
Maria: We see a completely different behavior online if we compare it to the same person's behavior offline, if I may say. Within cyberpsychology, there are different theories trying to explain that difference. I can mention the online disinhibition effect. The fact that I don't have someone face to face and I don't interact with them face to face. Sometimes they're anonymous, sometimes they're in a different time zone, which really causes that effect of disinhibition.
It allows us to trust people more easily, share confidential information more easily, and think of people as less risky or dangerous, if I may say. It really leads us into trusting people more, because we lack facial cues. We lack that interaction. We're not used to, or at least we were not used to, interacting with others online.
I guess now teenagers and young children, they become more tech-savvy and they have digital skills. I've done a lot of work around understanding behavior online from the user perspective, but also the criminal perspective. We see differences there as well.
The Potential Impact
Maria: This disinhibition effect can even lead you into becoming a criminal, because you don't realize the potential impact your actions might have online.
Eric: I'm looking at it from my personal perspective, Maria. I am more conservative online. If I put something on LinkedIn or if I send something to somebody, I will write it, read it, and rethink it. Oftentimes, I rewrite it because I feel it's more permanent. I feel it's searchable.
The Google effect, where I'll say anything in my personal life, where I'll walk around the house without makeup on or my hair done, or we could get worse than that. But I'm more carefree in the confines of my own area. I don't know. Rachael, what do you think about it, Miss TikTok and no-multifactor-authentication user?
Rachael: I do see it from that perspective maybe because I do spend so much time on these social channels and looking at the comments. I think a lot about that documentary series. I'm not sure if you got that in the UK, Maria, it was on Netflix. Was it Web Lies? Brian Knappenberger had directed it. It was this great cyber series looking at different perspectives.
One of them was a young boy, a teenager who was basically extorting young ladies, sextortion if you will. He was friends with them. To your point, his online life was nefarious. But then he would hear these girls talk to him about, "Oh my gosh, this is happening to me." And he's the one that's doing it. It's just really fascinating. I think that's a great example of what you're talking about.
Eric: No, that's freaky. That's disturbing.
Awareness of the Potential Risk Online
Maria: It is, but it is happening, especially with online romance scams as well. So many women have been manipulated. To be honest, if I take your comment, Eric, that is not the case for everyone. It could be your background, or it could be your knowledge and awareness of the potential risks online. That is not the case for every kind of citizen, every user.
It really depends on the emotional state you are in, especially if we take online romance scams as an example. Women who get manipulated are in a very fragile emotional state. They want to trust, and they want to be loved by someone, so they are more easily manipulated in that sense. But overall, I think cyberpsychology is looking at online behavior and trends, because that online behavior is not static. It changes.
We see trends. Basically, we're trying to understand how these are really initiated and how we can work towards developing prevention initiatives. I mentioned cyber security awareness campaigns. So how do we really promote online safety by educating people about online risks and harms?
Eric: What's most effective so far in your research?
Maria: That's a big question. That's basically my Ph.D. I think the best solution, if we think about a national-level approach, is to start early on. We need to educate young children. I've been collecting data in different countries and regions on how school children are behaving online. We really see the teenage group being very fragile and at risk online at this point. There are many reasons why.
A Lot of Risks Emerging Online
Maria: To be honest, young children really begin using the internet from the age of five, if not even before that. So that builds up. If there's no guidance from school or from parents, we really see a lot of risks emerging online. The most typical example is cyberbullying and the impact it can have on a child, really affecting their personality and their self-esteem. There's not much you can do. Or actually, there is a lot of work you need to do after that to reshape and protect that child. I'm a big fan of real awareness, education, and building skills at all levels.
Eric: Where does a parent go to get training or education on how to educate their children? When to talk to them, and what to talk to them about? I know I struggled with that. We're on the internet all the time. It was wired, and now it's wireless and wired. The kids have been on it, like you said, from very early on. As I watched my three boys develop (one's almost 29, one's 14), there have been some pretty big generational differences in when they picked up cell phones, when they picked up the internet, and when they got in trouble on the internet for the first time. But where do you go as a parent to get educated so that you can educate?
Maria: There are different resources provided at the governmental level. In the UK, the government has a website providing guidance for parents and teachers, but also there are a number of NGOs really providing that type of information.
[14:27] Multiple Resources Online About Insider Risk
Maria: Also, schools, if you're lucky and your child goes to a school that provides parent training and seminars, then that's a way. But definitely, there are multiple resources online you can really find information about that.
Eric: You're reminding me of when I was at McAfee. Well, it was team building, but it was more of a community effort where we went into an elementary school, ages 5 or 6 through maybe 10 or 11, and did a session on internet security. I totally forgot about that, but okay. So that's really where it starts.
Maria: That's where it starts, or where it should start, to be honest. Now, if we look at the organizational level, what is happening within companies and organizations? We see employees not having that background and that knowledge early on. Companies are struggling to educate their employees and, since we're talking about the insider threat, to avoid the potential intentional or unintentional insider threat.
Rachael: I think, to your point too, a lot of people are so trusting online. Let's say you're approached through a Facebook post, and somehow they weasel their way into your work email. You exchange an email or something like that. It's really fascinating, the lengths that attackers will go to to take advantage of privileged access. There were some stats about insider threat saying privileged misuse is around 20-ish percent. Is that what you are seeing as well?
Eric: We're defining privileged misuse as somebody who has authority and does bad behavior with said authority? Is that intentional or unintentional?
Maria: I would say mainly intentional. That's another point because it really depends on the sensitivity of the data that an employee holds and whether they should be holding that data.
Should Employees Have Access to Sensitive Data?
Maria: Not many employees should have access to sensitive data. Maybe that's the first step to avoiding that potential intentional insider risk. But that is definitely intentional.
Eric: Then we have negligent users. What do you think about them?
Maria: I would say that potentially all employees could be an unintentional insider risk, for many reasons. It could be that you've done your training and you have knowledge of what to avoid. But on a specific day, you're really stressed, you have to do things very quickly, and you accidentally click a link in an email. That makes you a potential insider risk, but that was due to stressful conditions. It's not that you didn't have the knowledge.
I think employees could be a potential insider risk for different reasons, not just a lack of education and awareness. I can open up the discussion around cybersecurity culture and how the environment and ecosystem of an organization can shape all that. Even if you become an unintentional insider risk, you can still save time by reporting what happened immediately, rather than being afraid of the implications of clicking a link.
Even if we go into the serious stuff: even if you receive a ransom message from a ransomware attack, you shouldn't avoid reporting it immediately. Having a no-blame security culture can basically lead to that. But that's a big discussion there.
Eric: We've actually heard that over and over on the show. Early on in the show, we did a lot around trust and we did a lot around insider threat and insider risk.
The Goals of the Insider Risk Awareness Program
Eric: One of the comments we heard over and over again was, "Be open about the program, the goals of the program. Why we're doing it is not to spy on you, but to help protect you and let people know what you're doing. But also recognize that mistakes happen. Don't punish when people make mistakes. Foster that discussion and that openness where if they did something wrong, they feel comfortable asking for help rather than just saying nothing."
Maria: I totally agree, this is the right way. This is how you really promote, as you said, openness. You promote a positive mindset around cybersecurity, because a mistake doesn't necessarily carry consequences. But working with AwareGO, with AwareGO's clients, and other companies within the private sector, I see that this is not the case, really. We see a stricter approach being followed by many large companies now.
For example, there are cases where, if you fail a phishing simulation three times, you could basically be fired. So the implications there are quite serious and employees, of course, are afraid. Fail your phishing simulation once, and you maybe have a meeting with your manager. Fail it twice, and you go up to the CEO. Fail it a third time, and you're fired.
Eric: Why do they do that? To get them to take it seriously?
Maria: Yes. It's a strict approach, basically: no risk is acceptable within the culture of these organizations. So they try to eliminate risk this way. But this way, you destroy any chance of building an open culture.
A Means to Trap People
Rachael: I follow InfoSec Twitter, and I just love the conversations there. They talk a lot about how the IT or security team will intentionally send a phishing email out to people and basically use it as a means to trap them. You clicked on it. It's like, "Why are you tricking me?" There's that trust issue, to your point: "Now the company's trying to trick me into clicking on a phishing email." How do you work in an environment like that?
Maria: That can have really serious implications as well. With AwareGO, we've done a research project trying to evaluate phishing simulations, whether they're effective, and what employees think about them. We actually found that employees did not find them effective. And we have examples of companies where employees considered everything a phishing simulation and would report it to IT.
Eric: That'll show them. You want to mess with me here?
Maria: They wouldn't open any attachments. They wouldn't accept any calendar invites. That brought a huge issue within the company, and it clearly illustrates how these simulations can be ineffective. They could lead to employees being disgruntled and unhappy. So that's really not the way to go.
Rachael: That's already a challenge, I think, with the Great Resignation. I was reading a stat, and let me know if you've heard this one as well: a 44% increase in insider risk in 2022. I'm wondering, is it because of all these leavers? They feel like, "This is my data. It's coming with me." Is that what you're seeing too?
Where We See Insider Risk Emerging
Maria: There are so many reasons. If we look at the profiles of insiders, usually they are the code developer who might leave disgruntled because they didn't get the promotion. They definitely feel a sense of ownership of the code and the tool they developed, so they want to take it with them. That is definitely one of the reasons. But we also see risks emerging because a lot of people have been working from home for the last two years.
We see more unintentional insider risk emerging because you now face the risk of having to work from your own laptop, or having sensitive data stored on an unsecured device. There are new risks emerging. But definitely, when we're looking at insider risk, there are different reasons, especially depending on whether we're talking about an intentional or unintentional insider threat.
Eric: That makes sense to me at some level. I'm not supporting it, but the code writer, they're emotional. They didn't get promoted, and they're probably young. I guess age doesn't necessarily matter. But: I created this, this is my work, this is my art, and I want to take it with me. It's the property of the company. I think we've been around long enough, trained long enough, that we know that.
Legally it's the property of the company, and they've probably been briefed. But I do understand the emotional response. If someone feels wronged or overlooked, how would you recommend companies better bridge that gap? Because you're not going to get every promotion you want. You're not going to get every answer you want. You have these emotional feelings, I understand that. But you can't take the code with you.
[25:11] An Open Cybersecurity Culture
Maria: And not just the code, you can't take the data with you.
Eric: You can't stick it up on WikiLeaks and you can't sell it. I understand you're angry. How should a company, and how should an employee, think through that? Any recommendation?
Maria: I think it goes back to that discussion of an open cybersecurity culture, where an employee feels respected and heard. You don't have to get every promotion, but at least you could be respected. When it comes to insider risk, there is a lot of academic work around understanding insiders' personalities and profiles. Usually, they lack social skills. They don't get along with other employees.
Basically, that lack of social skills and social naivety puts them in a situation where they could be more easily manipulated. In many cases, the insider is not really the one leading the data theft. They could be manipulated by someone else. That could be due to, as you said, ownership of the tool or the code. But it could also be due to financial issues they might be facing at that specific moment.
Other stressors, family issues: there are so many things that could really lead to that. What we're doing at this point with AwareGO is promoting a security culture and ecosystem where employees are open to talking to each other, or even to their manager, without being afraid. If it's about feeling disgruntled, they can really express that.
That communication, or lack of communication, can lead to an insider actually taking action. Even if they're considering selling data, it doesn't mean that they will do it. But it builds up gradually until they actually become an insider risk.
Bad Organizational Cultures Breed an Insider Risk
Maria: Again, that's a long-term process. It might be an employee who is really hardworking, whom you could not imagine being a risk to the company. But after a number of years, usually due to bad organizational cultures, this happens.
Eric: You've said that a few times now. So in a well-functioning culture, the probability of insider risk goes down?
Maria: I would say, yes. Absolutely.
Eric: How do you define a good culture? What do you look for?
Maria: I look for a company or an organization where employees talk to their manager and it's easy to communicate with others. There isn't an authoritarian approach, and the manager treats employees with respect. I think I mentioned that already. We have examples of really bad cultures out there, with employees being desperate, basically. Either you leave, you quit, or you might turn into an insider risk. Which road you follow depends on many things: personality, family issues.
Rachael: It's interesting talking about the disgruntled worker too, because there's this term going around these days, quiet quitting. That's what I always think about. They linger in the background and don't call attention to themselves, but you're not quite sure what they're doing day to day. That's a little scary.
Coming back to remote work, since that is the world we're living in today: how do you know the people you're hiring are who they say they are? We hire people I've never met in person. They work from their homes, and I hope they are who they say they are.
Eric: You may not meet them for a year or two, if ever.
A Trend in the Hiring Process
Rachael: So if you don't have a great culture, to your point, those people may just never integrate. Then they go off and say, "I'm going to do what I want. I'm going to take what I want, and then I'll just go find something better and maybe do it there too." But how do you even manage that with remote workers? In an office I can see how you could better manage it, but remotely?
Maria: I see a lot of companies really struggling with that. Now we see a trend of companies using different approaches during the hiring process. Starting early on, you try to do your best to identify the best candidate. So you might be using artificial intelligence and algorithms to really identify the best candidate. You take them through personality questionnaires. There are many approaches at the moment.
But as you said, being at the office face to face and interacting with one another is definitely the best way to build such a culture. There are approaches that were taken throughout the two years of the pandemic when people were working remotely, such as online coffees, online discussions, and different kinds of initiatives for interacting with one another. They're definitely not as effective as team-building exercises in person, I would say.
Eric: You're looking at somebody on a screen and sharing a drink with them. First of all, I don't drink coffee or alcohol, so I'm out on both of them. I've got my trusty water with me, but I'd rather be in person. When I see Rachael, you can see from her smile. I love Rachael. But when we are together in person, it's just magical. It's so different than online.
Establishing Deeper Connection
Eric: We've known each other for almost five years, but it's so different when we're together. Personally, I have difficulty establishing a bond, establishing that deeper connection, when it's remote. I think I have trouble with that in general, but being online certainly isn't helping.
Maria: All of us. I think, definitely, this is the way it is. Hopefully, it will not be for long. Many companies are now returning to the office.
Eric: But we're seeing employees who don't want to come back, or who want to come back only when they want to. We've talked about this on the show before, and I've talked about it a lot in life. I worry about early-career employees who don't have the ability to build those deep relationships that help their careers early on: the mentorship, just running into somebody when you have a problem, or overhearing somebody talking about it.
It makes me think about your comment on good culture versus bad culture. The likelihood of me feeling disenfranchised, feeling angry about something and not being able to communicate, not being able to talk to somebody. I almost feel like that remote nature raises the risk of insider threat. Not just because it's easier, but because I don't have the ability to develop deeper bonds. I don't have the ability to vent.
I don't have the ability to say, "Rachael, you're not going to believe what happened. What would you do with that?" at the water cooler. I have to set up a session or call her.
Maria: Especially if you are the early-career professional who just joined the company, and joined remotely, building that relationship is even harder.
[33:20] What Makes Someone Become an Insider Risk?
Maria: Of course, I would agree that this could make it easier for someone to become an insider.
Rachael: I think there was an article on BleepingComputer talking about how ransomware gangs are just openly reaching out to employees: "You want to help me out? I'll give you a kickback." What is this about?
Maria: Yes, it's not necessarily new, but it is definitely a trend. We see ransomware attackers targeting random companies now, even small and medium enterprises. And we see them approaching employees and inviting them to collaborate, for example, getting a message saying that if you persuade the company to pay the ransom, you will get 40% of the amount.
We actually have examples involving employees within insurance companies, because insurers are part of the equation of paying the ransom. Attackers communicate with employees within an insurance company, asking them to persuade the company to pay the ransom, promising they will get a huge amount as well. So yes, it's really becoming a trend.
It's quite interesting how attackers who initiate ransomware attacks now seek the assistance of employees. That could be due to many reasons. We see legislation coming, especially in the US, where I think it has already arrived, against paying the ransom, and other countries are following. If no one is going to pay, there's no gain. So attackers are really considering different techniques. To be honest, cybercriminals are utilizing a lot of psychological theories and manipulation techniques. They're studying as well.
Eric: Say there's a ransomware actor in Eastern Europe, Africa, China, wherever, coercing an employee to help them. The employee helps them, and the company discovers it and gets law enforcement involved.
The Ransomware Amounts Are Increasing Exponentially
Eric: The odds of the person in Eastern Europe actually being incarcerated or paying a penalty are pretty low. But the odds for the employee who assisted them, I would say, are rather high. Jail time, fines, lawsuits: the risk equation really does shift within companies. I think that can get shut down pretty quickly with a couple of good examples.
Rachael: Well, risk-reward. I'm just saying that 40% of $50 million would cause one to pause. These ransomware amounts are just increasing exponentially.
Eric: You're saying you'd never even think about the jail time or the consequences, because you're physically in a location where there are laws that can get you. Eastern Europe is a little more like Russia, a little more wild, wild west. Sorry, that's a US expression; it just means crazy and anything goes.
Rachael: Money's powerful.
Maria: It goes back to how you perceive risk. As you said, coming from an Eastern European country, you might perceive risk as not something that might really touch you, especially when we're talking about cybercrime. That goes back to attitudes about law enforcement, attitudes about the government, everything. All that can shape how you perceive the risk of getting caught.
Eric: One of the questions I wanted to ask today is about efforts to mitigate the risk of insider threats. We're seeing more organizations collecting additional data around social media prior to, and even during, employment, and some are requiring personality tests. Is it helpful? Does it provide context? Who uses it? What are the rights of the employee versus the rights of the employer? It sounds like a great idea if you can get away with it and it works. But I don't know.
Who Should We Hire?
Eric: You'd see Rachael out on TikTok watching her dog videos. Is that a good sign for Rachael or a bad sign? Should we hire her?
Maria: It clearly depends. If Rachael is posting photos of the dog saying, "I will be on leave next weekend. I live here. This is my address."
Eric: I'm more worried about her spending hours and hours watching dog videos on TikTok instead of working. Rachael, how many hours a day would you say, or do you think, you're doing right now?
Rachael: I don't know. But it's late at night when I can't sleep.
Eric: An hour, 20 minutes?
Rachael: Two hours-ish at a time.
Eric: Two hours a night? We're looking at 14 hours of productivity a week gone right there.
Maria: But it's her free time.
Eric: This is the problem when we start looking at social media though. I look at it as a waste of time. You look at it as her free time. She should be able to do whatever she wants.
Maria: The thing is, going back to your question, companies can collect data. I mentioned artificial intelligence. They can use different tools to collect data on an employee: their habits, how active they are on social media, potentially their attitudes toward the government, or even their political views. All of that is publicly available.
They're free to do that. But what I'm thinking is, how valuable or useful can that be? If we take the example of insider risk, I can collect all this data. I can even hire an employee and take them through personality tests.
How an Employee Might End Up Becoming an Insider Risk
Maria: But again, that might give me an indication of how they see the world, how they perceive the world, and how they make decisions. That is mostly targeted toward questions like, "Is this employee best placed for this position? Do they have managerial skills?"
But it won't show me how an employee might end up becoming an insider risk down the line. For me, it's all about how you will use the data and how it can be useful. It might be that you identify a psychopath using some of the personality trait questionnaires, which is great. So you won't hire them. But how much other really valuable information will you gain out of doing that?
So it is a trend, especially because organizations and companies are now really trying to minimize risk as much as possible. That is one of the approaches taken.
We see other examples of how not succeeding in phishing simulations might get you fired. So we see more organizations trying different approaches to eliminate risk. But again, I'm not in favor of really extreme approaches. I think there are better ways, and I mentioned this cybersecurity culture approach. Building that sense of belonging in a company might give you much more than a personality test for this specific purpose.
Eric: But can you interpret it? I wouldn't know a psychopath from you, Rachael; I'm not qualified to do that. I've got to feel there's an issue there with inclusion and diversity. You start limiting the candidate pool, which I'm 100% against. I like it a little crazy, and I love the fact that Rachael loves her dog videos, quite frankly. Though I can't understand it, it makes Rachael, Rachael.
[42:39] How You Eliminate Brilliant Candidates
Maria: This way definitely you eliminate maybe some brilliant candidates.
Maria: Going back to artificial intelligence, because there are tools that are actually doing that. They filter CVs based on what you ask them to do, based on your algorithm. Sometimes they eliminate candidates because the algorithm is not programmed to look for characteristics and skills that we would identify while speaking to someone.
So immediately the pool of candidates could be minimized and you lose talent. For me, that is what you lose. You're going by the book, looking for a CV that says you have X certificates and X training, but you're ignoring other skills, which is a huge mistake.
You're definitely losing talent this way.
Eric: So you're obviously not in favor?
Maria: It depends on how you use it, why you use it, and how you have built your algorithm. We've had examples like Amazon, which stopped using artificial intelligence in its hiring process because it was discriminating. We see that algorithms can be biased; you can't really follow a one-track approach. You can use these tools if they make your life easier, but don't stop there. Have an interview, talk to people. You can't just do that one part. So yes, that's where I am.
Eric: As we're wrapping up, can I ask you two questions that I've been dying to get a psychologist on the line to ask?
Eric: Can you explain Rachael's fascination with the dog videos on TikTok?
Maria: Well, we would have to have a discussion about this. I can't just do it on a call.
The Benefits of Multifactor Authentication
Rachael: But it's like a serotonin fix. It's happiness, a chemical fix from puppies and kittens. Who doesn't love that?
Eric: I'm not trained, but what I'm taking out of this is that both the trained and the untrained have no idea about this fascination with TikTok dog videos. One last question, then. Rachael has been in cybersecurity for a good part of her career now. She is very talented at what she does; trust me on that. She prepares for our CEOs, she prepares the briefs you hear her ask about, and she brings data to discussions.
You've heard that today again. Yet she refuses to use multifactor authentication in her personal life, when we've had government officials and experts on the show over and over talking about the benefits of multifactor authentication. In her personal life, it's just too much.
Rachael: It's so much work.
Eric: Any idea why she chooses not to, knowing multifactor authentication is one of the best tools she can use?
Maria: I think she just mentioned that it's too much work. Going back to cyberpsychology, using fear tactics, telling people that if they don't use multifactor authentication they're at risk of this and that, really pushes people away. It leads them into a sense of helplessness, because they either don't know how to set it up or they're too busy to do it. So we need a different approach: definitely not telling people what to do, but maybe telling people what to avoid.
Rachael: Well, what if we made it easier? I think there has to be a way to make it easier, because I have two phones. So if I do it, then I have to grab the other phone.
The Biggest Problem With Security
Rachael: What if I don't have it with me? Then I can't get into my account. Who wants to deal with all that? I just want to do what I need to do. That's really the biggest problem with security.
Eric: People do what they want to do.
Maria: People actually want not to have to do anything. There's a lot of research and work happening at this point on security by design, to really take all the responsibility away from the user and put it onto the technology. We will be seeing that in a couple of years. So Rachael, don't worry. There will be a point where you don't have to do it.
Eric: Trust me, she's not worried. That's the problem. I'm worried for her. Rachael, I'll help you one day. We're going to do late-childhood education with you, and I'll figure it out. We'll both get there.
Rachael: Awesome. Well, Dr. Maria Bada, thank you so much for joining us today. This has been a lot of fun. I really enjoyed this conversation. Thank you to all our listeners. We really appreciate you joining us. We'd love your feedback. Please drop us a line. We’d love to hear what you think about the topics we discussed, but also what you'd like to hear more about. Please don't be afraid to tiptoe over that subscribe button and just pound it in there. Then you would get a fresh podcast to your email inbox every Tuesday. Until next time, be safe.
About Our Guest
Maria Bada, Ph.D., is a Lecturer in Cyberpsychology at Queen Mary University of London and a RISCS Fellow in cybercrime. Her focus is on the human aspects of cybercrime and cybersecurity. She is also a cyber expert at AwareGO.