[0:33] The Cybersecurity Equation: Protecting the Seams
Petko: It's good to be back after the long break and everything. And we're going to talk about all things cyber, our favorite topic. Anything cyber, right?
Rachael: And we're doing work in TikTok too, in an unexpected way, which I'm very excited about as you know. I love TikTok. Let's just jump in. Everyone, please welcome to the podcast, Javvad Malik, he's a security awareness advocate at KnowBe4. And Javvad, you're pretty well known as a security commentator. I was doing the research and you're everywhere. It's amazing. I love it.
Javvad: Thank you for inviting me. I try not to keep track of how many times I'm quoted in different places. I may or may not have a spreadsheet that does that actually. It's called My Vanity Spreadsheet. So whenever there's an argument at home, I pull it out and say, "See, I am a big deal on the internet amongst this small group of people."
Rachael: You coined a phrase, I'd love to talk a little bit more about it, “protecting the seams.” What does that mean? How'd you come to that?
Javvad: Protecting the seams. This was something from a few years ago. I was working for a vendor that had a SIEM product, so it was a play on SIEM and seams. Seams are a tailoring concept: you can take different fabrics and stitch them together, but the seams are always going to be where the weak point is.
That's where you can tear them from, or that's where air can get in, or water, or whatever. From a cybersecurity perspective, we are really good at identifying technologies that we like and we like to bring them into our organization and stack them on top of each other.
Protecting Integration Points
Javvad: We don't sometimes pay enough attention to where the seams lie between those products. There's a natural gap between where one network ends and the endpoint begins, or the perimeter is or isn't, or what have you. So there's all these dead spaces in between our entire technology stack and our security infrastructure.
We are really good at paying attention to where the noise is. And we're not always as good at paying attention to where the noise isn't. That's where we sometimes need to put a bit more focus.
Petko: I want to jump into that one. When we say protect the seams, we're talking about protecting the integration points and making sure we have the right coverage? Because it's not about protecting the SIEM, it's about protecting the integration points. Is that correct?
Javvad: Yes, it's integration points.
Petko: Okay. So I can think of all the scenarios where you have an API, because everything's about APIs lately. If I have an API, at some point I have to change passwords, I have to change tokens, I have to do rotations around that integration point. I can imagine that in a large integration, eventually that thing might go quiet because a password expired.
Then you're like, "Oh, I get no more data." That could be an area where you have a gap, and it's something as simple as a password expiring. And I think your point is don't just look at the tool, monitor your integration points because that's where they're going to attack us.
That's where we're vulnerable.
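Petko's scenario, a feed going quiet because a credential silently expired, is straightforward to watch for. As a rough sketch (the class name and feed names here are hypothetical, not from any particular product), a monitor can track the last time each integration point delivered data and flag any feed that has been silent past a threshold:

```python
import time

# Illustrative silence monitor for integration points: alert when a data
# feed has gone quiet longer than a threshold, e.g. because an API token
# or password expired without anyone noticing.
class FeedSilenceMonitor:
    def __init__(self, max_silence_seconds):
        self.max_silence = max_silence_seconds
        self.last_seen = {}  # feed name -> timestamp of the last record

    def record_event(self, feed, timestamp=None):
        """Call whenever a record arrives from an integration point."""
        self.last_seen[feed] = timestamp if timestamp is not None else time.time()

    def quiet_feeds(self, now=None):
        """Return the feeds that have been silent past the threshold."""
        now = now if now is not None else time.time()
        return [feed for feed, ts in self.last_seen.items()
                if now - ts > self.max_silence]

monitor = FeedSilenceMonitor(max_silence_seconds=900)  # 15-minute budget
monitor.record_event("edr-api", timestamp=0)
monitor.record_event("firewall-syslog", timestamp=800)

# At t=1000s the EDR feed has been quiet for 1000s (> 900), so it is flagged.
print(monitor.quiet_feeds(now=1000))  # -> ['edr-api']
```

In practice you would wire `quiet_feeds` into whatever alerting pipeline you already have, so the gap itself makes noise instead of staying one of those dead spaces.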
Does All Technology Have Integration Problems?
Petko: I was going to ask you, does all technology have these seam problems or integration problems? Is legacy tech more vulnerable, or are modern solutions more vulnerable? I'd love to get your take on that.
Javvad: Yes. We have two different scenarios here. We have traditional organizations that were around before the cloud and migrated slowly into this more digital world. A lot of your traditional banks have to have a mainframe core running all their numbers, and all these legacy apps which they can't really move away from.
But what they can do is put on some interface that can make it more digitally accessible in the modern era.
The challenge we see with that is that you can't always implement your best controls in that scenario, because you're literally just connecting the pipes together to get things working. And what we also see is that for a lot of these organizations, it took a long time, and they still haven't completely updated their risk profile accordingly.
So whereas back in the early 2000s, if a system went down for a weekend, no one really would bat an eyelid because that wasn't mission-critical.
Today, that same organization has probably closed all its physical premises and is now a hundred percent reliant on people transacting online. Now if that goes down for 15 minutes, that becomes a critical, priority-one issue.
So it's about shifting your focus to view where your risks have actually moved and then saying, okay, this bit of duct tape that was holding things together for the last 10, 15 years was okay then.
Shifting the Cybersecurity Equation’s Initial Focus
Javvad: Now we don't have visibility. We don't know how many transactions we're conducting a minute, where the peak flows are. Who's coming? What’re they doing? Do we have any fraud or behavior analysis on this or not? Those are things that we need to start focusing on.
It's less about going out and buying the next shiny product to bolt on top, and more about looking at the gap we have here: how do we bridge that and make it more in line with our risk appetite?
Rachael: So, just as a non-technical person, I imagine some organizations wonder, where do you even start? With so many of these things, when you're getting on a zero trust journey, or a SASE journey, or a protecting-the-seams journey, what is a good starting point for folks looking to figure out where these gaps are?
Javvad: I'd say the best place for most organizations to start is your incident logs. Go back a year or two and see, okay, we've had whatever, 50, 100, 200 incidents in the last two years. What was the root cause of those? Just really dig into the root cause.
And you are going to find that there's probably a handful of issues that cause it time and time again. It could be passwords expiring, it could be that the JML (joiners, movers, leavers) process is broken, it could be a bandwidth issue, it could be human error. It could be whatever. But for most organizations you're going to narrow it down to half a dozen issues.
And that will provide a really good roadmap for you to say, okay, these are things we need to focus on.
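Javvad's suggestion, mining a couple of years of incident logs for recurring root causes, can start as a simple tally. A minimal sketch, assuming incidents have already been exported with a root-cause field (the field names and data below are illustrative, not from any real ticketing system):

```python
from collections import Counter

# Illustrative incident records; in practice these would come from a
# ticketing-system or incident-log export.
incidents = [
    {"id": 1, "root_cause": "password expired"},
    {"id": 2, "root_cause": "human error"},
    {"id": 3, "root_cause": "password expired"},
    {"id": 4, "root_cause": "jml process broken"},
    {"id": 5, "root_cause": "password expired"},
    {"id": 6, "root_cause": "bandwidth"},
]

def top_root_causes(incidents, n=6):
    """Tally root causes and return the handful that recur most often."""
    counts = Counter(record["root_cause"] for record in incidents)
    return counts.most_common(n)

print(top_root_causes(incidents))
# Most frequent cause first: ('password expired', 3), then the one-offs.
```

The ranked list that comes out is exactly the "half a dozen issues" roadmap Javvad describes: the causes at the top are where your small interventions go first.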
[7:29] The Cybersecurity Equation: A Balanced Diet
Javvad: The problem is, if you go to a big conference where there are hundreds of vendors, everyone's going to say, "Well, we focus on this, and this is your biggest problem." And that's okay. You've done a survey and you want us to believe that's the biggest problem. Cool, good for you. But that's just an external perspective.
Look at your own internal logs and figure that out. That will be the best choice for you; it will guide you the best way to fix your issues. And then from there, I'd say just start with small interventions, small improvements, look for those quick wins. That can make a significant impact in the long run.
Because when you look at it from a big transformation project and that's a big sell, you say, "Okay, we need to spend 18 months, 10 million, another six months in consultancy fees." That scares people. But just small interventions can sometimes really help and take you along that journey as well.
Petko: Something I've observed is that a lot of cybersecurity professionals get stuck on the shiny object, because we're always focused on the next thing. Yet how many of us look at the cybersecurity equation like a balanced diet, where we have a little bit of everything?
And your balanced diet might be a little bit different from mine or Rachael's. I might be eating too much protein, Rachael might be eating too many carbs, and you're like, "Hey, we need to balance that out." We can't see it as one thing that fixes everything. It really needs to be tailored to the organization, like a balanced diet.
Technology Is Just a Third of the Cybersecurity Equation’s Problem
Petko: From the cyber standpoint, we get stuck on the buzzwords and we forget about that balance of the cybersecurity equation, and the integration points, and how everything works together because we're so busy on, as you pointed out, the 1% problem that we want to solve it.
Do we take this protecting the seams a little bit further? Is that just technology? Is there other integration points that we should be looking at?
Javvad: Definitely it's not just about technology. Anytime you focus only on technology, you're focusing on a third of the problem at best. We need to look at the processes that underlie everything.
Then we also can't forget the people side of the business, because ultimately it's people who need to use the tools that we're providing them.
It's people who need to interact with them, people who need to make the business go forward. And we can't make everything a hundred percent technology-dependent, otherwise what you're basically saying is, we don't need people.
If you're a business that can automate everything a hundred percent, and not spend money on people, and unions, then that's great. But until you get to that stage, you need to really focus on all aspects of this.
When we look at all of these threat actors out there, at any threat intel feed or any of the reports published by the vendors, you'll see the majority of the time, whether they're nation states or cyber criminal gangs, the question is: how do they get in?
Well, they send you a macro-laden Word document or a spear phishing email, or they'll phone someone up.
Human Interaction Is an Essential Part of the Cybersecurity Equation
Javvad: So the human interaction is just such a big part and that's not something we can just automate away with any technology. Technology can sure help us. We need that in the stack, but we can't only focus on the technology.
Rachael: Yes, I was just going to make an observation. You hear a lot that people are kind of your first line of defense. But a lot of the time, as security, we're asking a lot of those people. Where's that balance? But I will hand that off to you, Petko.
Petko: No, I think you're right. We always lean on technology first, and then ultimately we say, oh, people are the last line of defense. And the dilemma is that we sometimes forget to train them. We focus on, hey, we've got all this tech, and that's where our attention goes.
Well, if your last line of defense is the people, we probably need to make sure they're trained to spot some of these macros, some of these Word documents. Or, in the case of some of the attacks we've been seeing recently that use multimodal AI, combining voice with text, it becomes harder and harder to tell what's real. So it's not just a Word document.
You'll get a phone call asking, "Hey, go execute that PO I emailed you." And you're like, "It's from the CEO, really? Fine, let me search for it." And you forget the fact that it's coming from some random Gmail. But this is the world we're kind of living in.
[12:31] AI Development: Do We Need a Pause?
Petko: I've got a certain view I'd love to get your perspective on. There's a focus on, let's pause some of the AI development out there in the market. And you've seen that open letter. And as a cybersecurity professional, I'd love to get your take on, do we pause? Do we continue? Or do we say we pause, but do continue in the background? There's different views to that.
Javvad: So I think the concerns that are raised by those that want to pause it are extremely valid. We've seen in the past with technology, especially when there's little oversight, there's these inherent biases built into these algorithms.
And if some 20-something-year-old white guys in Silicon Valley are coding this thing, then surprise, surprise, the world is going to look like it's seen through the lens of some 20-something-year-old guys in Silicon Valley. No offense to 20-something-year-old white guys in Silicon Valley.
But that's just what it is. We've seen this in the past with Google Images. People would search for what's beautiful, and there would be a certain view of beauty represented, one belonging to a certain subset of people, not universal. So that is one of the big challenges.
There's a lack of oversight, a lack of understanding of how these concepts are being presented and coming out. And one of the beautiful things we have when humans create anything, a bit of writing, or poetry, or what have you, is that variety, those unique perspectives. If you rely too much on something like AI, you risk everything blending into one single tone.
The Genie Is Out of the Bottle
Javvad: In that regard, there are legitimate concerns behind pausing it. From a realistic point of view, I don't think anyone can pause it. I think the genie is out of the bottle and people are going to keep pursuing this. So whilst we should continue asking for more oversight and more transparency into how these things work and operate, we should be prepared that this isn't going to slow down.
And just take a look at like, okay, how's that going to impact us, our businesses, our security posture, and how that impacts our colleagues across the organization too.
Rachael: Most definitely. There was an article I read, and the fellow may have been in the UK or maybe elsewhere in Europe. I don't know if you read about this. It was a chatbot he was talking to, and he was very concerned about climate change.
And his partner asserts the AI chatbot discussion went down a path where there was a suggestion of, hey, you could potentially help your cause more by giving yourself to the earth, and I'm paraphrasing here. I don't know which chatbot was used, I didn't follow that up at all.
I just thought about what that means for the vulnerable out there. Speaking of bias, it's likely something where nobody ever thought it would go down that path. But are those some of the concerns that you share? And how do you rein that in at this point, when it's starting to get out there?
When Recommendations Are Becoming Extreme
Javvad: That, again, is a very scary prospect, because you are interacting with this bot, and for a large part it's kind of like the suspension of disbelief that this is just an AI. A lot of people just start conversing with it as if it's human. We see that when we watch a cartoon, or puppets, or animation and the like.
We know it's that, but then we just allow ourselves to get immersed into it. That's just how our brains are wired. We just want to make sense of stuff in an easy way that's not scary. And especially for vulnerable people or even non-vulnerable people, I think, you see that this can have a big impact.
We've already seen this with things like YouTube. If you leave the autoplay feature on and you play a video that's mildly on the conspiracy side, you leave it running, and after a few hours you come back to it and it's full-on the earth is flat, and there's this happening, and there are chemtrails in the sky, and UFOs are going to come and pick us up anytime soon, or whatever it might be.
That's not even AI, that's just recommendations getting more and more extreme as you let it play. And this is a problem. We've seen it with Microsoft when they released Tay, their AI bot on Twitter. They thought, well, why don't we let Twitter teach it how to interact?
Yes, that's a brilliant idea: let the completely non-toxic people of Twitter teach a baby how to interact with the world. And it ended up being an absolute disaster.
Lack of Transparency and Dependency
Javvad: And this, again, comes down to the lack of transparency in how these AI platforms are developed and operated. People are coming out with all these techniques. They're like, well, if you put in this prompt, it can bypass some of the filters or the barriers that have been put up.
So you can say, okay, I want you to respond in the manner of Samuel L. Jackson, and you can use all of his profanity or what have you, and then it'll start dropping the profanity filter, and this filter, and that filter. You can easily get it to start behaving out of character, so to speak, with a few simple suggestions.
Then that can get really scary, because, like you said, especially if it's an impressionable person, someone that's vulnerable, a child, someone that's stressed out, you just don't know. We have a terrible history of blindly following technology: how many people have driven into ditches or off the side of cliffs because that's where the GPS said to go? I fear that people will take everything ChatGPT says to heart. Surely it cannot lie to me. And they'll just do it.
Petko: Are we too dependent on technology? To your point about driving off into a ditch because the GPS or Google Maps said go this way, regardless of whether the data's outdated or there are accuracy issues. It feels like even YouTube becomes a rabbit hole: you go down this echo chamber and it just never ends. At what point do we say, look, it's not a technology problem, it's a human problem?
[19:22] The Cybersecurity Equation: Technology, Process, and People
Petko: Eventually it's a mental health problem we have to focus on, not just saying everything's got to be technology. Because to your point earlier, a third is technology, a third is process, and a third is the people. And I feel like maybe we're focusing so much on technology, like, hey, let's hold off AI development.
And we never say, well, how should we interact with it in a way that's ethically viable, and maybe train people, just like we do training for cyber? I'm hoping most organizations are doing some type of training for their employees around the cybersecurity equation instead of just relying on tools.
But just a thought. I guess the question is, why do we focus so much on technology and not on the people? What do we need to build a more effective security culture?
Javvad: This is something I read in a marketing book, I can't remember which one. It said humans have a hyperactive "what" but a lazy "why." We're very good at pinpointing what we want, but we're very lazy, very convenience-driven, in how we arrive at the why. The example given was the familiar question: why did the chicken cross the road?
The problem is that we turn up with the assumption that there's only one chicken, and the other assumption that there's only one reason for it to cross the road. And that's the lazy why, because we want to get to the answer really quickly. If it was a group of 50 chickens and they all started crossing the road, then you could say, oh, okay, maybe it was just following the herd. That's a far more plausible reason.
Falling Into the ”Lazy Why”
Javvad: This is one of the challenges in the cybersecurity equation. A lot of the professionals come from a tech background. We deconstruct every problem into a tech problem and offer a tech solution. That's the lazy why we fall into. An email reaches a user's inbox and it's a phishing email.
Our first instinct is: how can I make better email filters? How can I block it at the gateway? How can I implement DMARC? Those are great controls, and I'm not saying we should dismiss them. But we also need to say: there's going to be a certain percentage that reaches the inbox, and then what do we want the user to do? We want them to do something about it. We want to give them some training, but we don't want to make them all security experts, because that's just unreasonable.
So we just want to say, if you think this is a bit weird, then report it. But then we've got to think: okay, if reporting means the normal process, that's quite a lot of friction, because they normally have to go and raise an IT ticket, it goes up the chain, and what do you do with the email in the meantime?
So then it's like, okay, how do we use the processes and technologies there to say how can we make it a really quick and simple method that they can report it, remove it from the inbox, and then someone gets back to them in that loop to say, "Okay, thank you so much for reporting that. In this case it was or it wasn't a phishing email, but what you've done is help make the organization more secure."
Giving a Human Touch in the Cybersecurity Equation
Javvad: So now you're dealing with people on a more human basis and actually you're building good relations with your colleagues as opposed to just it going into some black hole somewhere and you feel like you're wasting someone's time.
Petko: Most organizations don't even bother to have that capability for their employees, where they say, hey, report this, or give them a button that reports it. Having worked in so many different companies, some have a button, some don't. It just amazes me. They assume the phishing filters have to be 99.99 or a hundred percent accurate.
And then they complain when an email gets missed. So I guess my question is, what else should we be doing to build an effective security culture that ties in the people, the humans? What barriers do we need to overcome?
Javvad: The first thing I always ask anyone that's trying to embark on this journey is: what's your security team's relationship with the rest of the organization? And that's normally really telling. There was an international hotel chain, and they'd done a customer satisfaction survey.
They asked, okay, how was your room? How was the cleanliness, the food, the swimming pool? All that sort of thing, rated on a scale of one to five.
They found that people who had a pleasant check-in experience and scored it highly would score everything else highly too, even though it was the exact same hotel everyone else was staying at. But if you had a poor check-in experience, everything else would score poorly.
So my question often is, what's the first interaction your colleagues have with the security team?
Establish Positive Interactions and Relationships
Javvad: Is it during induction week, when you say, here's the intranet, here's the security policy, 500 pages, read them all? Or is it something where we wrangle them once a year for two hours into a boardroom and give them everything they need to know about the cybersecurity equation?
Or is it that when they do something wrong we are there to beat them over the head with it and say, you've done this wrong, you messed up. So the first thing we need to really focus on is what is our relationship like with our colleagues? And trying to build good positive interactions with them regardless.
We don't need the excuse of an incident or something going wrong to interact with them. We can just go up to them and say, "Hey, we're putting on lunch today, or coffee with security. Or here's something you might find useful for your home environment, here are some password manager tips," or whatever it might be.
And if you build that good relationship, then whatever you do after that, you're starting from a strong position. You can say, "Okay, now we're going to start sending out some simulated phishing emails," for example.
And this is a really divisive topic as well, because some people absolutely hate receiving them. They're like, "You tricked us. You're wasting my time," and what have you. And that comes down to not setting the scene appropriately: having a good relationship and explaining to them, "Hey, we're doing this, and this is going to benefit all of us."
Ease of Reporting
Javvad: The idea isn't to catch you out, it's to showcase to you, it's a bit like a dojo, just a bit of friendly sparring to say, okay, this is something you can look out for and here's the phish button that if you think it's a phish, click it, and we will then let you know either way. And those are the things we really need to focus on because it's a human endeavor we're embarking on at this point.
It's not really about technology. And it's not even about stopping all your attacks, it's about letting people feel comfortable enough to report it to you.
We saw this just recently, where a user on Reddit got smished, SMS-phished: someone sent them a text message saying, "Hey, this is HR, send us the code." And they sent the code. But then they thought, this doesn't feel right. So immediately afterwards they let their security team know, and that's great, that's the exact positive outcome we want.
People are going to make mistakes, that's not a big issue. But if they can recognize it and report it so we can lock it down quickly, that's the outcome we should strive for.
Petko: When you tell the security team, "Hey, I'm going to use phishing against you," you're right. I think most people are like, "I don't want to fail this test, because then it makes our organization look bad." But what we used to do in another organization is, every person that failed it, we'd add them to the next month's testing list, every single time.
Think Before You Click
Petko: We gave them an opportunity to retest, if you will. Not that we were targeting them, but rather helping them raise the bar across the board. So if you failed it, you always got put on the next month's list. And we made sure that everyone was tested at least quarterly.
So that way roughly a third of the organization was tested each month, and if you failed, you got retested the next month. And we had some people going, "I've been doing this every single month for six months." Tells you something.
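The rotation Petko describes, roughly a third of the organization tested each month plus everyone who failed last time, can be sketched as a simple scheduling rule. The cohort-splitting logic and names below are illustrative assumptions, not how any specific phishing-simulation platform works:

```python
def next_month_targets(all_users, month_index, failed_last_month):
    """Pick next month's simulated-phishing targets.

    Splitting the user base into three rotating cohorts tests roughly a
    third of the organization each month, so everyone is covered at least
    quarterly; last month's failures are added on top of their cohort.
    """
    cohort = [user for i, user in enumerate(sorted(all_users))
              if i % 3 == month_index % 3]
    return sorted(set(cohort) | set(failed_last_month))

users = ["ana", "bob", "cam", "dee", "eli", "fay"]
# Month 0 cohort is every third user starting at index 0: ana and dee.
# bob failed last month, so he is retested regardless of cohort.
print(next_month_targets(users, month_index=0, failed_last_month=["bob"]))
# -> ['ana', 'bob', 'dee']
```

Anyone who keeps failing stays on every month's list until they pass, which is exactly the "retest every month" effect Petko mentions.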
Rachael: That's right, because people want to click, Petko. Click, click, click. You get so busy, and you're just opening stuff, trying to keep things moving forward. It's an easy mistake to make, for sure.
Petko: Our internet's just too fast now. Everything shows up immediately, and I want instant gratification on every file. "Oh, look, there's a TikTok video, let me tap on that." Is it really TikTok, though? Check the link.
Rachael: Well, it's hard to tell now with an SMS. That's the one that's getting me. I got one over the weekend that looked like it was from my doctor, saying, "Hey, you've got a balance on your bill." And I had just paid something a couple of weeks before, so I'm like, I don't think I do. So I went to the website, and then I Google-searched the number the text came from, and it's, don't. It's shady, shady business.
But what if I hadn't paid that thing a couple of weeks ago? I'd be like, oh yes, I meant to get back to that, let me click on this shady link. And then who knows what would've happened. That's a lot of brain cells to have to devote to retaining all this information.
Education on Sending Communications
Petko: What I will tell you, though, is it's getting harder. I get some of those things from certain providers or companies. To fit inside your text message, they start using shortened URLs. Not a Bitly, but some other variation they've created. And sometimes you can't tell: is that the company, or is that someone else?
And then you get one: "Oh, UPS is shipping a package to me. Wait, why does it look like it's coming from Poland? That seems kind of odd." The link has an RU on it sometimes, or a Russia reference, or something. It just seems odd. But hopefully we're going to get better at reporting those. The phone is the one I worry about, because I don't think its reporting mechanisms are as mature as the email systems'.
Javvad: No, that's right. And there's also a massive education piece that needs to be delivered to the marketing departments that send out these things. Because sometimes even their legitimate communication looks just like a phishing email or a phishing text.
Rachael: That's so true. We had a vendor send an invoice through, and nowhere in the email did it say their name. You had to open the document. Nothing from the sender in the body of the email, no PO referenced, just, "Why haven't you paid this invoice?" It's like, because it looks like a phishing attempt. And they're like, "Oh, that's really great feedback, thank you." How can you be in the security world and not have already figured that out? But it's hard. It's hard to navigate sometimes, for sure.
[30:25] Complexity in the Cybersecurity Equation Is Not Necessarily a Bad Thing
Javvad: Sometimes making things harder isn't necessarily a bad thing. A lot of that comes back to human behavior: we don't necessarily like things handed to us on a silver platter. We like to use our brains. Betty Crocker learned this when they first came out with their cake mix.
Sorry, I'm going off on a tangent, but there's a point there.
So when they first came out with their cake mix, it was, just add water, stick it in the oven, and you've got the perfect cake. And they could not sell it to save their lives. People would not buy it. So they did a whole bunch of studies, they went to some behavioral psychologists, and the answer was: the problem is that it feels like cheating.
It doesn't feel like you put any effort in. So they redid the whole recipe, and then their slogan was, just add an egg. In addition to the water, you had to crack your own eggs into the cake mix, mix it up, and then stick it in the oven.
Just that slight bit of added complexity made someone like me, who never cooks, feel like Gordon Ramsay. "Oh, look at me, I'm cracking my own eggs, and I'm mixing it all up, and I'm sticking it in there." So there's that sense of accomplishment that goes along with it. And it's similar to what IKEA does.
They sell you bits and pieces and say, "Assemble it yourself." And it's the act of assembling it yourself that makes you go, "I really like this sofa. Okay, don't sit on the left side because it wobbles a bit. But it's great."
A Different Perspective
Javvad: It's the same when we give security awareness training to our colleagues; sometimes it's a bit too easy, a bit too basic. Sometimes what we could do instead is just say, here's the problem: we've got too many people without badges wandering into the office.
What do you think we should do about it? Surprisingly, people often have better ideas than the security team because they actually work in that office. They know what the dynamics are, they know why these things happen and they can often come up with better suggestions.
Rather than just us mandating a base level that would appeal to a three-year-old and say, "Just make sure everyone's wearing a badge and challenge that stranger," because clearly no one else would've thought of that, would they?
Rachael: That's hilarious. But coming back to the training thing, because I do love training myself, and I love being tested, because then you feel like, "I am really smart and I thwarted this thing." Do you see us getting to a place where there's more active training?
I hear about these things, some programs where you're doing something and it's like, "Are you sure you want to send that Excel document that's marked private and confidential to an external email?" I love stuff like that. It's active learning. Why isn't there more of that coming about?
Javvad: I don't know, and I would love to see more of that. Studies have shown that kind of thing is so effective. Password meters, for example: you start typing your password out and it goes from red to green, or from a sad face to a smiley face, and that's been shown to encourage people to choose a stronger password.
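A password meter of the kind Javvad mentions is essentially a scoring function mapped to a traffic-light signal. The heuristic below is an illustrative assumption, not a vetted strength estimator (production meters typically use something like zxcvbn), but it shows the nudge mechanism:

```python
# Illustrative password-meter nudge: score a candidate password on a few
# simple properties and map the score to a red/amber/green signal. The
# scoring rules here are assumptions for demonstration, not best practice.
def password_signal(password):
    score = 0
    if len(password) >= 12:
        score += 2          # length matters most
    elif len(password) >= 8:
        score += 1
    if any(c.isdigit() for c in password):
        score += 1          # contains a digit
    if any(c.isupper() for c in password) and any(c.islower() for c in password):
        score += 1          # mixed case
    if any(not c.isalnum() for c in password):
        score += 1          # contains a symbol
    return "green" if score >= 4 else "amber" if score >= 2 else "red"

print(password_signal("cat"))                           # -> red
print(password_signal("Tr0mbone!"))                     # -> green
print(password_signal("correct-horse-battery-staple"))  # -> amber
```

The user never reads a policy document; they just watch the color change as they type, which is the timely intervention Javvad goes on to describe.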
The Nudge Theory Effect
Javvad: It's just nudge theory in effect. And that's what we need more of, rather than referring someone to a policy that says this is how long and strong your password should be. Again, it's that timely intervention. Listen, I've got children. My youngest is like six.
So he's at the age where he likes to wander off and explore things. So I try and teach him how to cross a road safely and the best time to teach him how to cross a road safely is when we're actually crossing the road. There's no point in me waking him up at 3:00 AM and saying, "Hey, son, now's the time. I'm going to teach you about road safety."
Petko: Or train him when there's no other cars on the road.
Javvad: It's these kinds of interventions, nudging people to make the right choice. You don't even have to technically re-architect stuff in the backend to force people. You just nudge them in the right direction. And people, for the most part, will make the right choice.
It's a bit like recycling. I've got three or four bins outside and I have to separate stuff out into different bins.
Does it really matter that I understand I'm saving the polar bears or stopping the ice caps from melting? Maybe not. What's important is that I'm engaging in the right behavior, and they've made it easy for me to engage in the right behavior, because all the bins are next to each other, they're clearly labeled, and it doesn't take a lot of brain cells on my part to just say, "Oh, let's split it out and throw it there." And that's how we really need to invest in a lot of the future of security.
Decision-Making in the Cybersecurity Equation
Petko: You just made me think, I wish there was a company out there that had a co-pilot that would guide you through these security decisions. Like, "Hey, I got this email," and it would give you a score of the probability of it being malicious or not. So it's that password meter, but for your email, or for anything you have to do in life.
Maybe what I really want is life as a sim game.
Javvad: People have looked at that kind of thing, and there are some products out there, smaller ones, that will flag your email with a confidence rating. The problem with that is that people will then put too much faith into that rating system, so it's about finding the right balance in how you signal people. And the active thing is really good. Even Gmail will tell you, "Oh, you've said you've attached a file, but there's no attachment. Where's the attachment? You dumb arse." And you're like, "Oh, sorry," and you attach it.
Petko: I'm curious, in the UK do you have a different version of Gmail that actually has profanity in it? Because mine does not do that. Mine just says, "You forgot the attachment." There's no "dumb ass" at the end.
Javvad: I built my own Chrome extension that changes all of the prompts to profanities.
Petko: Actually, I want to download that one. That'll be very entertaining.
[37:18] Cybersecurity Equation and the Youth
Javvad: Actually, a few years ago I did write a Google Chrome extension called Uncybered. You can still look for it; I think it's still in the Chrome extension store. The idea was to take the mystery out of some of these press releases and marketing buzzwords that people often use.
So if a press release talks about machine learning, it just replaces the words "machine learning" with "witchcraft." And if it says AI, it swaps that too. You just go reading through it: our product is powered by witchcraft, and unicorns, and some generic analyst firm said this about us.
I found it made the reading a bit more entertaining.
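For readers curious how an extension like Uncybered might work, the core is just a dictionary-driven text replacement run over the page. This is a hypothetical sketch, not Javvad's actual code; the replacement map (including the "AI" entry) and function name are made up for illustration:

```javascript
// Hypothetical sketch of a buzzword-swapping content script, in the
// spirit of the Uncybered extension described above. The SWAPS map is
// illustrative; only the "machine learning" -> "witchcraft" pair comes
// from the conversation. Keys must be regex-safe (no special characters).
const SWAPS = {
  "machine learning": "witchcraft",
  "AI": "unicorn magic", // assumed mapping, not from the real extension
};

// Replace each buzzword, case-insensitively, in a chunk of text.
function uncyber(text) {
  let out = text;
  for (const [buzzword, plain] of Object.entries(SWAPS)) {
    // \b word boundaries so "AI" doesn't match inside words like "maintain".
    const re = new RegExp(`\\b${buzzword}\\b`, "gi");
    out = out.replace(re, plain);
  }
  return out;
}

// In a Chrome content script, you would then walk the page's text and
// rewrite it, e.g.:
// document.querySelectorAll("p").forEach(p => {
//   p.textContent = uncyber(p.textContent);
// });
```

A real extension would also need a manifest declaring the content script and match patterns, and would typically walk text nodes rather than whole paragraphs, but the substitution logic is this simple.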
Petko: That reminds me of so many out there. I think I saw stuff that said it's Viking grade and all these other versions. Oh, we're getting off on a tangent definitely.
Rachael: I know. What else is new? So I do want to be mindful of time, but I'd be curious about your perspective, Javvad, because I'm always thinking about the next generation coming up. They're born with basically an iPhone in their hand. Then couple that with the fact that cyber could really use millions more good minds.
At what point do we start bringing youth into cybersecurity awareness, or cybersecurity learning? Or do you think it's just inherent: when you start using an iPhone, you start figuring it out? I'm quite curious. A lot of these discussions aren't happening by and large, or some of them are happening later, in high school or things like that. But it seems there could be a lot of goodness in getting to them much younger. I don't know.
Cybersecurity Equation Learning: A Parental and Societal Concern
Javvad: It's one of those things where you want to build good habits as soon as they start interacting with this technology. It's like when kids start going to the park on their own, you start telling them about stranger danger. "Don't take candy from a stranger. Don't tell anyone where you live."
All that kind of stuff. In the physical world, we are very good at this from a young age. We start building these sorts of frameworks into children's minds, to just be mindful of things.
We often don't do the same thing online. Often kids will end up accessing content that's inappropriate for their age, or for whatever your personal belief structure is within your home. There is a lot of technology out there that can help, but I do think this is very much a parental and a societal issue that we all need to tackle together.
Parents need to have that conversation with their children. Say, "Okay, you're going to go on things and maybe people are going to message you. If you don't know them, don't talk to them. If someone messages you or sends you something that you think is inappropriate, come and talk to us.
We're not going to get angry with you. We're going to help you deal with it, and we'll teach you how to do it." Because there's no way we can possibly say, on every single platform, these are the lists of do's and don'ts. That's just not going to happen. We don't even know all the platforms they access. But we need to build for them a safe framework so they know, "Okay, if this happens, this is how I think about it and this is what I do."
The Future of Cybersecurity
Petko: I'm curious. Rachael and I are in the US, but you're over in the UK. Does the UK curriculum in primary and secondary school include cybersecurity training? Has it been integrated somehow? Or are we not there yet? Or are you?
Javvad: Not really formally, no. Some of the schools have some newsletters that sometimes go out to the parents and they're like, "Here are some tips and talk to your children about this." But it's not really fully integrated into their curriculum that I'm aware of.
Rachael: Seems like an opportunity there.
Petko: That and finance. Finance is the thing we should be teaching them.
Rachael: Seriously. So there are two favorite closing questions we have. One of the two is: in your many years working in security, Javvad, how do you feel about the future? Are we going to crack that security nut and finally get ahead of the attackers? Or are they always just going to be one step ahead of us?
And that's the juice, that's why we stay in security because every day is a new day and it's exciting. But man, it's kind of tiresome thinking about the next 30 years.
Javvad: Yes. Wow. I would love that kind of job security, wouldn't you? Take me into retirement, off into the sunset. So naturally, I'm quite optimistic about a lot of things. As an industry, because we get to see the latest threats all the time, we don't always stop and pause, and look back and think, "Well, we've actually come a long way."
We've actually done a lot of good. The technologies we have today, the processes, the maturity of organizations today, are way beyond what they were 15, 20 years ago.
A Sucker Is Born Every Minute
Javvad: We're going to continue to improve and it's going to get a lot better from the current position we are in. Having said that, there's going to be newer avenues, newer technologies, newer doors, newer ways to break in. The criminals are always going to be there and it will be an evolving battle.
I hope that in 30 years we won't be talking about the exact same things.
It might be the same principles, but it would be on newer platforms, newer ways of operating. But underlying all of that, like they say, a sucker is born every minute. I don't know who said that. It sounds like something said in the '70s or what have you. But there's always going to be con artists, scammers, and criminals.
They were around long before technology was around, they're here now, and they're going to be around well into the future. So that side of it, fooling people, getting them to hand over their digital wallets or what have you, that will remain. But overall, I am hopeful things will get a lot better in 30 years.
Petko: Rachael, I used to think back to the '90s. Back then an attack was something as simple as a ping of death, if you remember those. That would reboot your computer or crash Windows. Now the attacks are much more sophisticated; they require multiple stages.
And we've gotten better, as individuals and as a society. But ultimately, to Javvad's point, we're adapting and so are they.
Rachael: I had to think about that for a second. That is true. We are adapting.
Petko: So the constant that we have is change, I guess, you could argue in cyber. And we just have to adapt. Which is good job security.
[44:44] Javvad Malik on His Current Book Selection and TikTok
Rachael: What are you reading right now? It could be fun, it could be work-related. But we always like to find out what folks are reading.
Javvad: I'm literally on just page one of it. It's a book by Mehdi Hasan. He's a journalist, and it's called Win Every Argument: The Art of Debating, Persuading, and Public Speaking.
Petko: What drove that book selection? I'm curious.
Javvad: He's one of those guys that shows up on my TikTok feed a lot. He does these 60-second breakdowns of what have you, and he rarely stumbles when he's debating with people. I thought, he's got some good techniques. Then he said he was coming out with this book, and I thought, okay, pre-order. And it just got delivered a short while ago on my Kindle.
Rachael: That's wonderful. I do the same. It's like, what is that whole thing? TikTok made me buy it. I can't even tell you how much stuff I've gotten. And one of the guys I follow, he's written three books apparently, and they're all really well reviewed on Amazon. So I think I got my new series.
Speaking of TikTok, we didn't get a chance to talk about that yet. I love your TikTok channel. I encourage all of our listeners to go check it out because you're hitting hot topics. Things that are in the headlines, news stories, all the things that are really, really relevant. And I love the conversations that you're starting as well.
We talked earlier about the facial recognition technology.
I know that's pretty much a hot button for a lot of people. But it's great to see security channels on TikTok and not just videos about how to fix your hair or puppies.
Cybersecurity Equation Is a Story
Javvad: Thank you. I appreciate it, I really do. You're right, I try to pick a topical story, like a timely story from that week or something. The idea behind it, my thinking is just to get something that's more accessible to someone that doesn't work in security or tech.
And just get them thinking, "Oh, this is a story." Hopefully, the idea is with enough time, you'll get someone that doesn't work in security to say, "Have you ever thought about addressing the problem this way?" And we'll all be like, "We're idiots. We've never thought of it like this."
Because that external perspective is so useful.
Rachael: Absolutely. Well, I know we're kind of at time. So I want to thank you for your time, Javvad, this has been so much fun. I really appreciate you joining the podcast today with us.
Javvad: Thank you so much. I've really enjoyed it. Thank you both.
Rachael: Wonderful, and to all of our listeners out there, as always, thanks for joining us this week and don't forget to subscribe. And until next week, what do we like to say, Petko? Be safe.
About Our Guest
Joining us this week is Javvad Malik, Security Awareness Advocate at KnowBe4. We cover an array of themes including the need to “protect the seams,” understanding where risks are moving, how small interventions can deliver quick security wins, understanding people in the cybersecurity equation, the importance of security awareness training, the AI debate, smishing attacks, and more!