
The Merging Worlds of AI, Cybersecurity, and Physical Threats with David Saunders - Part I
Share
Podcast
About This Episode
This week, hosts Rachael Lyon and Jonathan Knepher are kicking off the holiday season with a fascinating conversation featuring David Saunders, Director of Forcepoint Security Labs and a seasoned cybersecurity professional with more than two decades of experience. The discussion dives into the complex convergence of AI, cyber, and physical security, exploring how the rapid rise of artificial intelligence is reshaping the threat landscape. This includes emerging trends like attacks on backups, the growing sophistication of phishing campaigns powered by LLMs, and the ongoing challenge of keeping security ahead of attackers’ innovation. As the conversation unfolds, we discuss candid insights on everything from the future of backup strategies to the evolving tactics used by adversaries. Grab your favorite holiday treat and get ready for a timely, thought-provoking look at the forces shaping cybersecurity as we close up 2025 and look ahead to 2026.
Podcast
Popular Episodes
Podcast
The Merging Worlds of AI, Cybersecurity, and Physical Threats with David Saunders - Part I
Rachael Lyon:
Hello, everyone. Welcome to this week's episode of to the Point Podcast. I'm Rachael Lyon, here with my co-host, Jon Knepher. Happy holidays, Jon.
Jonathan Knepher:
Yes, happy holidays, Rachel.
Rachael Lyon:
I can't believe it's like right around the corner. And I guess by the time this publishes, it may already be passed. But I have to say, like, every year, right? It sneaks up faster and faster. But I have to say, I started buying presents in July.
Jonathan Knepher:
You're better than me. I got a week and I think I haven't even started yet.
Rachael Lyon:
Yeah, but you buy them so early you forget what you got. So I end up getting people more than one, you know, several gifts.
Jonathan Knepher:
We were walking around Palm Springs and my wife saw a candy store and I think we bought $150 worth of candy as stocking stuffers for everybody. We need to get gifts for genius.
Rachael Lyon:
I love it. I love it. Always, always a crowd pleaser. Candy. So I'm really excited for today's guest. We have joining us another forcepoint person. So it gets to be a forcepoint conversation today. Please welcome to the pod.
Rachael Lyon:
David Saunders. He's an experienced cybersecurity professional with a robust background in threat research and engineering. He's currently the director of Forcepoint Security Lab since about July 2015 and previously held positions such as Research Engineering Manager at Websense as well as working at Surf Control Black Spider, which is one of my favorite company names and activists, messagenet.
Jonathan Knepher:
Welcome.
Rachael Lyon:
Welcome, David.
David Saunders:
Thank you for having me.
Jonathan Knepher:
I'm just excited to talk to you today since we've been working together for so long now.
David Saunders:
Yeah, no, I was just getting. I was going to say the present thing. I know the feeling. Other than that I sit right next to the front of the house by the door. So with presents coming online, it's just this continual stream of running backwards and forwards. But hopefully that won't happen while we're on this conversation.
Jonathan Knepher:
If it does, you'll just have to share with us.
Rachael Lyon:
Exactly. Open it up. We would love to see what you get.
David Saunders:
Yeah, I will.
Jonathan Knepher:
So, David, like, we've been working together on all of our products here for a lot of years, more than 20 years. So, you know, I think this discussion is, you know, what are you seeing today? How are things Changing and so on. Like from previous guests, we've been talking a lot about how cyber and physical and AI are all merging together. I'm expecting you're seeing a lot of convergence and probably a lot of AI interactions on new threats. Can you talk through how you're, how you're seeing that and what's going on?
David Saunders:
Yeah, no, I mean, I think, you know, when you talk about AI, I mean, there's a definite sort of race to AI fi everything, you know, and you know, unfortunately, it does seem as though there's this trend of security being a secondary thought and safeguards aren't there, put in place where they should be. And definitely when you're talking about, say, cyber, physical and AI merging together, we're obviously very familiar with the cyberspace. When you think about the physical stuff, you sort of think about Internet of things. And that became a sort of buzzword a few years back. But shortly following that, you had headlines like My fridge is involved in Director of Hyware attack or Denial of service or whatever, which kind of led everyone to think about these hardware vendors. While they're adding their solution to the Internet, making it more usable, they're also putting that device at risk. So it's not entirely surprising that when you see that merge going on there, that AI, somewhere along the will come into effect. I mean, I read recently the UK national security, Sorry, the National Cybersecurity center reported a threefold increase between cyber and physical attacks being reported.
David Saunders:
There's no clear evidence that's tied to AI, but given what we're seeing with AI and affecting the security industry, it's not surprising that that will hasn't already start to have an impact on that merging of those three. Definitely. I think. And you know, even cybersecurity companies are challenged with the speed at which AI has come about, and we're supposedly the experts in that space, let alone if you're a hardware vendor and you're trying to get into that area. So it's not surprising, you know, that there's going to be a merge at some point.
Rachael Lyon:
Yeah, it's interesting your perspective, because i25 has been a really interesting year, I think, right. There's been a lot of developments and I think you said it one time, right, People are too busy protecting everything to specialize in a single vertical. And I'd be interested in what threat patterns you've seen in the last year and maybe what you see continuing into 2026.
David Saunders:
Yeah, no, sure. And I mean, you're right. I mean, if we think about what I do in XLabs. And our primary focus is always to protect custom, which basically means to be as fast and as quick and automate as much as we possibly can to achieve that. So often when we're looking at attacks, we're not always necessarily looking at patterns initially, but retrospectively we are, because that can sometimes lend us to pick up the next threat if we can see connections and change between those. But certainly if we look at 2025 and what's being, you know, reported, I mean, there are. There are some sort of common themes, I guess, between a lot of what gets reported, which often isn't much, especially with the big attacks. But, you know, there's a lot of talk about lateral movement.
David Saunders:
You know, the tracker gets into an environment and then moves quite swiftly across it. And also, you know, privileged escalation. And those are two challenges that have been with us for a long time, but they can be dealt with. You know, obviously privacy escalations, often to do with, you know, vulnerabilities and unpatched systems, you know, and certainly lateral movement is often around network segmentation, not looking at, you know, security in your organization where you allow access across your infrastructure. So, yeah, so that seems to be a sort of a. Definitely a common theme. I think one of the things that's really interested me, and I certainly hadn't picked up on this before, is backups being targeted. And I think obviously with ransomware and attackers, obviously one of their, if you like, themes is they'll try and encrypt your data and hold you for ransom.
David Saunders:
Clearly, if they can get access to your backups, they really have got you, because that's what most companies protect themselves with, really, in some way. And it kind of also made me think about sort of a parallel where, going back many, many years, in the days when we were sort of dealing with spammers and MX records, and, you know, an MX record would always point you primarily to a primary server and then a secondary server if you failed to connect to it. And spammers would always use a secondary server. And the reason they did was because you'd often think about patching or updating, you'd start with your primary, and therefore your secondary would be. Would be least protected. And in the case of backups, again, a lot of organizations, I think they will think about their primary data sources and make sure they're secure and access is controlled to it, but they wouldn't necessarily apply the same level of security or thought to protecting their backups. And. And certainly the best Case scenario, if the attacker got hold of your backups, they got access to your data.
David Saunders:
Worst case scenario is they also get access to your primary data and then game over. So I think that's an interesting trend where I hadn't heard of that happening. I guess. So that's something. And I guess the other thing that's definitely been in the headlines is the time that organizations have spent offline with some of these attacks. And it surprises that even engineers like myself who understand some of the complexities of the systems are surprised how long it's taking businesses to recover. And it seems to be less reported about what the attack has done, but about how the company took so long and had so much trouble getting back online. And.
David Saunders:
Yeah, and I guess the only final thing, again, that's consistent, and actually this isn't necessarily 2025, but always runs across the core, is that people are still our weakest link. And when we think about AI and where that's coming, as we all know, one of the things everyone thinks of AI, they think of impersonation or simulating a chatbot being human, like. And of course, it's very adept at trying to even push those boundaries of, of duping people, which, you know, are still the weakest link.
Jonathan Knepher:
I want to dig in a little more on this backups thing because this is something that, you know, I think, I think it affects all of us one way or another. Right. Like over the last, over the last 10, 15 years, I can't think of anybody who's still backing up to tape.
David Saunders:
Right.
Jonathan Knepher:
Like, backups are now online to other live systems. And I mean, you brought up ransomware, right? It's like the ransomware wants to encrypt or delete the backups. Like, what do we do? How do you possibly protect those backups?
David Saunders:
Yeah, it is interesting almost. Yeah. Does it sort of lend itself to going back to the old school? I mean, we all know with the. There were challenges with tapes and it wasn't unheard of for someone to check the backup to find it's been, you know, overwritten or, you know, somebody put a magnetic device over it or something like that, you know, and so it wasn't ideal, I guess, you know, in terms of backing up to the cloud these days. But I think, yeah, no, I do think we have to think about isolation and maybe, yeah, backing up to the cloud is great for the immediate backup, but maybe you need some older style traditional backup to physical device and remove off site or something like that. So, yeah, no, I definitely, I mean, I Think companies do need to think about that. They need to think, I mean, you know, but maybe again multiple locations, most organizations probably thinking about that as well, but certainly maybe not having it all online, you know, having some sort of air gap or physical disconnection from it.
Jonathan Knepher:
Yeah, I think that's great advice.
Rachael Lyon:
Yeah. You know, sometimes I advocate David or do we go back to the Stone Age to your point, it's like critical infrastructure. We just take it all offline and get back to okay, I manually turn things on and is that the only way to secure things in this day and age?
David Saunders:
I think it's interesting you say that and something that sort of crossed my mind is when it comes to security, especially web security, you know, we have a, you know, a kind of a switch between categorizing the Internet and actually being able to do real time scanning on content. And that's because obviously, you know, you can only keep up to the Internet some so far you've got to rely on what's there as there are certain websites where the content is very dynamic and you can't trust it and therefore you have to, you have to scan it. But we have this situation where recommendation or whether you would want somebody to go to a site that's not in our database and reality is if you're a business user, we should have everything in our database and therefore if you're going to somewhere that isn't in our database, you shouldn't be allowed to. And as risks with AI increases, you talk about going back to the Stone Age, but maybe we are going to have to think about, you know, you know, limiting, restricting the Internet, which I know goes up against it really and is it is a challenge. But yeah, maybe that again, I try not to be a sort of, you know, you know, a doomsayer if you like, in terms of AI. I mean it's like the Internet, you know, there's very positives as well as negatives to it certainly I think with AI like that, but maybe there are some, some consideration certainly from a business use of what you allow. And again, ultimately with AI and its use of AI Internet, they still have to rely on the same websites, emails, services or something. So ultimately if they are going to get harder and harder to detect and scan, we may need to roll back the openness if you like, able to handle it in a business context certainly.
David Saunders:
Right.
Jonathan Knepher:
I think you bring up an interesting point that I hadn't thought about.
David Saunders:
Right.
Jonathan Knepher:
Like there's so much AI slop out there, new websites coming on online that have nothing new. It's just AI garbage. Right. Like, is this also a strategy that would, like. I kind of feel like we need protection from that AI garbage. Right. Is this a strategy that works there too?
David Saunders:
Yeah, no, I mean, It's an interesting one. I mean, when we talk about legitimate traffic on the Internet in terms of, well, legitimate business traffic, I guess, in that clear sense. And we always have this sort of challenge, really, because you'd be surprised what legitimate business traffic can entail. But equally, when you've got that kind of content out there, it has very little legitimate business use. And again, While it's better generated, I mean, I think if you take the park domains examples, it's a very good thing. They do sit there with content that looks vaguely legit, but rubbish. It just rehashed to make it look like that. Maybe we'll work in the same model.
David Saunders:
Maybe Arsenal need a new category which is effectively like a park domain, but for AI rubbish. Really? That is your spam. AI spam, if you want to call it in a web context. So, yeah, no, it's definitely. I've not heard it again, you've got a good idea. Maybe I'll go back and have a chat with the team and suggest it maybe as something that we could add to our arsenal, so to speak. But yeah, no, definitely. I know where you're coming from.
David Saunders:
Everyone could pull up a website and pull in random content very quickly, make it look vaguely legit and you know, to pull someone in. It is very easy to do with so much content out there and the ability to scan so much. So, yeah, it's.
Rachael Lyon:
I was reading on, embarrassed to say, Facebook this morning, but you know, when we talk about AI and phishing intents, they're getting very, very sophisticated. I mean, back and forth conversations and you hear it a lot with people who are on the job market and recruiters and the back and forth. And the one I was reading about, I mean, it sounded legit. There was a conversation on the phone and then at the end of all of this engagement, turns out the AI was selling services for someone in Nigeria or something like that. But I'd be curious as we look at things like LLMs to help us along with our ideation and social engineering. How are people leveraging these tools? I mean, what are you seeing out there as these get more sophisticated with the common tactics we normally see.
David Saunders:
Right.
Rachael Lyon:
That have been tried and true success stories for security vulnerabilities.
David Saunders:
Yeah, no, I think, I mean, somebody said the other day, and I think it was an Interesting statement, but the era of typos is over. While it was never our primary go to for trying to detect phishing, you could pick up a little bit of noise and certainly, and you know, targeted attacks were always, you know, pretty good in terms of that. But yeah, no, I think that, I mean when we go back to like traditional phishing based attacks and that side of things, if we move beyond, if we think beyond those. When you think about LMMs, I think one of the things that is possibly also potentially a concern is the amount of data that businesses put on the Internet and some of that is intentional because they want to be out there and be known. We're all about selling products of some sort or whatever. But there's also a lot of information that's out there that you don't intend to get out there. It's been actively linked or individuals have put it in their social media media or LinkedIn and with the power of LLMs to be able to pull all that information together and know a lot more about a business than you would want them to necessarily know and how you're structured in your organizations, that will definitely, you know, enable you to, you know, craft more or more effective phishing because you know who you're going after. You've got an insider information effectively that you wouldn't have traditionally or if you got that information, it would have taken days, if not weeks and months of investigative work kind of thing.
David Saunders:
So yeah, that's definitely something. I think when you think about LLMs, beyond the traditional creating better emails, it's the fact that they can have that intel, they can generate that intel that you wouldn't. I think as well with AI and LLMs is the, the ability to just, and especially, I mean, you know, again, phishing attacks, we think of emails, but they can also be audio these days. Right. And you know, it's been reports of where they're able to pick up on hesitation and drop back in the conversation or change the conversation to realize that that person starting to pick up something there, which is, you know, again, quite kind of worrying because traditionally that might not happen. And dare I say it, even some of those call centers based wherever and trying to dupe people into whatever, they're often following a script. And generally even when you, you know, for a minute you're duped into thinking this could be legitimate and you realize it's not, it's because they're off on a script and they follow. Whereas an MM might be more easy because it was able to, if you empathize with the individual if want if a way of describing it, you know, in some way.
David Saunders:
But yeah, no. So I mean, I think, yeah, it is definitely interesting in terms of using about phishing attacks. I think, you know, when we think about AI, particularly in the way it's sort of being used in attacks generally, I think the bit that is really significant to me is that it doesn't typically generate any new attacks. There's nothing you hear about where it's an AI generated idea in some way. It's all about being more efficient, faster, and actually probably the most important thing is cheaper. And there's always been this sort of, I guess, old adage that you don't keep your underwear in your bank vault. And security has always been about a balance between you pay for as much security to protect the valuables to make sure that they're protected in some way, and businesses invest in cybersecurity for the same thing. It's not, maybe not the crown jewels, but it's their data these days that they want to protect.
David Saunders:
And there was a certain assumption that you paid certain amount, you were a big company, you had a bigger amount of budget you could pay for best technology, best vendor out there to protect you, and that was all you needed to do. And there was some element of truth in that because of course attackers have to invest their money, even if that's time, in attacking you. And the harder you make it, the more likely either going to go to someone else or they're going to give up. And when you think about AI, if it makes things cheaper, it enables them to more effectively attack either more candidates or even go after bigger candidates. And again, when we think about 2025, I mean, there's been some very, very big names obviously reported. And again, this is just in my head, but you start to wonder if that is influenced by the fact that AI is making some of these things cheaper and more easier to do at scale. So yeah, that's something that's in there. And again, we're thinking beyond the phishing scenario.
David Saunders:
I mean, there's something that was talked about from my colleagues the other day and again, there's parallels to old stuff that's just been rehashed and that's malicious. When you think about malicious iframe injection where gone by websites were compromised, typically the bottom of the page. I don't know why it was always at the bottom of the page. I guess it's easy to insert it there. That puts an iframe hidden from the person browsing the website. But it would basically run the script in the background. These days when you talk about AI, they talk about malicious indirect prompt injections. What that is.
David Saunders:
It's very similar technique, but what you're trying to do is poison the LMM effectively, because what you're doing is you're injecting into the web page hidden content in exactly the same way, but it's instructions for the LMM and it's to try and steer the MLM when they're using the web page in a different way than it would want to or it was intended to. And there's been interesting reports where they're trying to, if you like, push the LMM away from legitimate products to fraudulent products, for example. And I mean, supposedly one page reported with actually 24 different attempts to, if you like, inject some content into a page to try and poison LLM. I think in that particular case it was just trying different ways because obviously people that are actually using these LMMS are going to become aware of this. But again, it's another interesting thing that's happening now in that space. Sorry, I've gone a little bit of a run here.
Rachael Lyon:
This is great, though. I love it. I love it.
David Saunders:
Another thing again, when we're talking about AI and all this stuff, and again, it came up a little while ago talking about it, and it's really obvious when you think about it, but LMMS or AI stuff is all about learning and training and going back around time and time again. When we think about malware, it's been reported that it's very good at taking a bit of code and changing it and then running it again, changing it, running it again. And it doesn't take much to install a bank of AV engines and then taking some code and running it again, running it again and so forth. That's just about AI is good at that and could do it and be faster. But it's going to be a challenge. It's going to actually be a challenge for the AV community, if not all of us, in some way. But it's kind of interesting for a couple of reasons. Well, one, because the AV industries has always had challenges with polymorchic virus.
David Saunders:
That's nothing new. However, of course, this does mean, though, that there's an element in skill, a higher level of how we're bound to be able to create good polymorphic viruses. Whereas what this is doing is just taking the same thing, but just changing it enough to make sure that the signature won't detect it. So fundamentally, doing an easier task in some way. But AV companies have always had a bit of a balance between signatures and heuristics because they've had this challenge for a while. But is this going to actually mean they're going to have to rely more and more on heuristics because you're going to see more variants. Every malware will be different that ever gets sent everywhere. The combinations of variations will be just too great for them, which is then going to completely shift that AV industry.
David Saunders:
The other thing that brings me to think about is sandboxing and whether or not we'll see a little bit of a resurgence in the sandboxing field as well. Because if you could, you know, the chances are, you know, you'll never have seen this before. AV fight with heuristics, detect it, but the only way you could be sure is to actually run it. And that's why maybe, you know, sandboxing could be a spin off, or a beneficiary, if you like, of some of this AV variations.
Jonathan Knepher:
But with malware, well, you've sufficiently caused a lot of alarm. I mean, I think thinking through all those areas you mentioned, right, like having the adversaries having access to all this data people are putting online and then continue to make tailored attacks. I mean, it's, it's, it's just overall really scary.
Rachael Lyon:
And I hate to do this everyone, but we're going to pause today's discussion right here and pick back up next week. Thanks for joining us this week and as always, don't forget to smash that subscription button and we'll see you next week. Until next time, stay safe. Thanks for joining us on the to the Point Cyber Security podcast, brought to you by forcepoint. For more information, information and show notes from today's episode, please visit forcepoint.com podcast and don't forget to subscribe and leave a review on Apple Podcasts or your favorite listening platform.
About Our Guest
David Saunders, Director of Security Labs, Forcepoint
David Saunders is an experienced cybersecurity professional with a robust background in threat research and engineering. Currently serving as the Director of Forcepoint Security Labs since July 2015, David previously held the position of Research Engineering Manager at Websense from October 2007 to July 2015. Prior roles include Threat Research Manager at SurfControl, Threat Team Manager at BlackSpider Technologies Limited, and Development Team Lead at Activis / MessageNet. David holds a Master’s degree in Information, Communication, and Electronic Engineering from the University of Plymouth, attained between 1985 and 1990.








