Next-Gen Threats: Generative AI, Deepfakes, and Automated Cybersecurity Defense with Petko Stoyanov


About This Episode

In this episode, co-host Jonathan Knepher sits down with Petko Stoyanov—cybersecurity expert and former Forcepoint host—for a thought-provoking discussion about the evolving landscape of AI in cybersecurity. Together, they unpack the shifting trends seen at this year’s RSA conference, exploring how artificial intelligence is transitioning from a marketing buzzword to a mission-critical security feature. Petko delves into the real-world impact of generative AI models, the increasing sophistication of both attackers and defenders, and the pressing need for “security by design” in today’s fast-paced digital environment.

They discuss the new questions CISOs and CIOs should be asking about AI—such as where models are hosted, what data they process, and how to manage risks in regulated industries. Petko shares eye-opening anecdotes about the potential for AI to inadvertently leak sensitive data, the rise of targeted phishing in new languages powered by generative models, and why the CISO role is broader and more challenging than ever. 


      [00:00] AI in Cybersecurity: A Strategic Shift

      Jonathan Knepher:
      Hello, everyone, and welcome to this week's episode of the Forcepoint To The Point Podcast.

      Jonathan Knepher:
      I'm Jon Knepher and I'm here today with our guest, Petko Stoyanov. How are you doing, Petko?

      Petko Stoyanov:
      Hey, Jon. Doing well. It's been a while since we've talked, or since I've been on this podcast. It's been a year or so since we last talked, I think around last year's RSA.

      Jonathan Knepher:
      Yeah, I think it's been a year, and you've had a lot going on that I'd love to talk with you about. How was RSA this year?

      Petko Stoyanov:
      You know, it felt different this year from previous years. One thing is there was a bit more focus and energy around how AI is actually making a difference. Less AI washing, and more "here's how we really do AI." I think AI materially went from being an AI product to a feature that actually makes a difference, with vendors having to tell you how. So we're definitely seeing a lot more of: how do we communicate the impact of AI beyond just adding the word "AI" to everything? Because, as I'm sure some of your listeners know, when people say AI, it can mean so many different things.

      Petko Stoyanov:
      When most people hear AI, they think of ChatGPT, which is a generative model; conversational AI is what they picture. But when security vendors say AI, it's getting harder and harder to tell: is it AI, or is it just an Excel trend line being called AI?

      Jonathan Knepher:
      Oh, yeah, absolutely. And it spans all sorts of machine learning, neural nets, generative AI. How deep does each of those go in any given product?

      Petko Stoyanov:
      Yeah, I think one of the interesting things is the adversary side. We have nation states and hackers using AI against us, the defenders are using AI, and it's a scalability problem: they can throw far more AI models at us, while we have to be very purposeful in how we use AI. And one thing I've noticed with a lot of CISOs and CIOs is they're asking more purposeful questions. They're asking specifically: okay, you have AI. Walk me through what kind of AI. Is it your AI? Is it my AI? Where is it hosted? What kind of data is it processing? Security has become much more top of mind. During RSA we saw much broader interest in things like secure by design.

      Petko Stoyanov:
      So I think it was about a year ago that CISA released this concept of secure by design: let's focus on the top 10 things so you build security in early; it's not a bolt-on, it's early. The CISO of JPMorgan Chase actually pushed out an open letter to the security industry just last month, specifically about the fact that we have to fix the supply chain. It said that security is not a feature, it's the foundation. I think we're going to start seeing a change, with security vendors at RSA and elsewhere now having to bake security in early. And maybe it's an opportunity to help others have security baked in from the start rather than after the fact.

      Jonathan Knepher:
      What kinds of things do organizations need to do to bake that in? Like what are the questions they need to be asking?

      Petko Stoyanov:
      You know, there are a lot of open standards, so let's separate the two: there's baking security in, and there's the supply chain. There's lots of guidance out there around supply chain: having visibility into your supply chain, understanding the shared risk of cloud that actually exists, and knowing what kind of data goes into those clouds or even into AI models. One thing I find most organizations don't realize is that so many are quick to adopt AI, and when they start looking at it they say, oh, I'm using AI. But then: is that DeepSeek, or is that ChatGPT, or is this a model I'm operating in my own environment? And the risk posture, or the risk acceptance posture, of an organization or a group might vary.

      Petko Stoyanov:
      If you're a commercial entity that doesn't work in the financial sector or regulated industries, you might find: okay, I can do this, I'm just serving consumers, not a big deal. But if you're working with regulated customers, healthcare has HIPAA, the financial sector has SEC regulations, and if you're working in the federal government, whether Federal Civilian or DoD, you have CMMC and other regulations that come in. Those have some teeth, so you start to care where your data goes, which model it goes into, and how you bring that model in.

      Jonathan Knepher:
      Yeah, here's a threat vector I don't think a lot of us have thought about. You're talking about all these different generative AI models; what is the risk of there being hidden training in those models that could be woken up with certain inputs? Is that a fear we have to be worrying about?

      Petko Stoyanov:
      I tend to think of any of those AI models like a personal brain, like a child you're training. And right now we're in that early phase of raising the child; we don't know what it's going to pick up. I remember when Microsoft put an AI model on Twitter years ago, and within, I'm going to say, hours it turned racist, you name it, because it was just learning and absorbing.

      Jonathan Knepher:
      I remember that. It was profound how fast it devolved.

      Petko Stoyanov:
      Yeah, they're trying to put the right guardrails in, but honestly, the technology for understanding how to test those models, and even just running them, is so expensive; think about the GPUs and everything it takes. I remember reading that one of OpenAI's early models, I think one of the first ones they trained, took something like nine months to train on something like 100,000 GPUs. Something crazy. Now we've iterated faster. You have nation states like China, which looked at DeepSeek and said, oh, we figured out a way to do it faster and simpler because we found shortcuts in the math. But you're right, with the LLMs we don't know what's in them. And one thing people keep forgetting about any of these large language models is that they're non-deterministic. Asking a question today will get a different answer than asking it five minutes later or five minutes earlier.

      Petko Stoyanov:
      And that's just because of the data you've given it or not given it, or simply the statistical probabilities involved. So you might get unpredictability based on the order in which you ask two questions; depending on the order, you might get a different response. On top of that, you might get a hallucination, where it starts thinking there's a problem that isn't there. One of the legal cases I was reading about was really interesting: a lawyer was using ChatGPT to write his brief, and it was referencing other cases that did not exist.

      Petko Stoyanov:
      So it was hallucinating whole case law, which is kind of profound. I think it's important to look at AI from the standpoint of: how am I using it? Is it an assistant, or is it automating everything? A lot of folks early on are saying, let's use it as an assistant. As we get more comfortable with the deterministic side of it, we can automate it with agents, because we have more predictability there. That predictability allows us to automate, and it might mean replacing, say, a security operations person doing tier-one triage, or something else where we traditionally said, let's just automate that with rule sets, when an AI agent can help do it faster.

       

      [07:53] AI as Assistant or Automation?

      Jonathan Knepher:
      Yeah, I remember that case you're talking about. I believe the attorneys involved ended up getting sanctioned, and it was a pretty big deal. But to your point on reassigning resources to using AI here, what kinds of skills do you think we need to invest in to properly use these AI tools from a security and response standpoint?

      Petko Stoyanov:
      I think it really depends on how we're using AI, right? Let's set AI aside for a second and talk about the age-old debate of best of breed versus platform. If it's a platform approach, you're going to say: I'm just going to go with whatever solution is baked in. I'm not going to worry about creating my own AI; I'm going to let them bring everything I need, and it's there to augment the fact that you don't have a large security staff. Now, if you are a large entity, think 10,000-plus users, you start asking the question: well, the platform works.

      Petko Stoyanov:
      But I need a bit of a best-of-breed approach, where there are certain things I care about; that might be endpoint technology, that might be something else. Or I might need AI for my organization and have to figure out how to deploy it. And it goes back to that conversation I mentioned: if you're regulated, you might have to look at AI a little differently than if you're not. If you're not regulated, you might say: go use ChatGPT, it doesn't matter, use anything you want. It's a form of shadow IT. If you're even a little regulated, you care about what data goes into it.

      Petko Stoyanov:
      It's interesting; I tend to view AI models as kind of the biggest insider risk in the world. They're always listening, they remember everything and never forget anything, and if you say the right words, they might tell you something. We've come a long way from when ChatGPT first came out; at one point it was surfacing data other people had given it. One scenario involved Amazon: an employee at Amazon had asked ChatGPT questions and mentioned internal code words.

      Petko Stoyanov:
      They had mentioned a program that was sensitive to Amazon, and when you then asked it, "tell me about Amazon," it dumped the sensitive programs Amazon was executing on at that time.

      Jonathan Knepher:
      That's crazy. I hadn't heard that incident.

      Petko Stoyanov:
      Yeah, that was about a year and a half or two ago, I think. But it gives you the idea: okay, if I'm sharing an AI model with other organizations, am I comfortable if this data gets out? That's when you start asking: do I need the consumer version? Do I need a corporate-level AI model? Do I need something self-hosted that I can operate in my own environment? I think we're seeing a lot more organizations go toward: let me have my own dedicated AI model, or maybe share the model but have a separate RAG implementation, where the storage of what it learns is separate to my organization and not tied to someone else's.
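
      To make the shared-model, separate-RAG pattern Petko describes concrete, here is a minimal sketch of per-tenant retrieval stores. It assumes the chromadb library; the collection names, documents, and query are illustrative, not anything named in the episode.

      ```python
      # Minimal sketch: one shared LLM, per-organization retrieval stores.
      # Library choice (chromadb), names, and documents are illustrative.
      import chromadb

      client = chromadb.PersistentClient(path="./rag_store")

      # Each tenant gets its own collection, so retrieved context never
      # mixes across organizations even if the base model is shared.
      org_a = client.get_or_create_collection("org_a_docs")
      org_b = client.get_or_create_collection("org_b_docs")

      org_a.add(ids=["a1"], documents=["Org A incident-response runbook ..."])
      org_b.add(ids=["b1"], documents=["Org B vendor risk assessment ..."])

      def retrieve_context(collection, question: str, k: int = 1) -> list[str]:
          """Pull only this organization's documents into the prompt."""
          hits = collection.query(query_texts=[question], n_results=k)
          return hits["documents"][0]

      # The shared model sees Org A's context only when Org A asks:
      context = retrieve_context(org_a, "What is our wire-transfer approval process?")
      prompt = "Answer using only this context:\n" + "\n".join(context)
      # `prompt` would then be sent to the shared LLM endpoint.
      ```

      Which collection gets queried is the whole isolation boundary here, which is why per-tenant storage is attractive for regulated customers.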

      Jonathan Knepher:
      Yeah, that makes a lot of sense. In my own experimenting, it's not that hard to get a model running on your own infrastructure; it might not be as powerful. Is that the direction you think regulated folks need to go?

      Petko Stoyanov:
      Yeah, most of them are going that route. They're going toward something like an Azure OpenAI instance that's just theirs, or a self-hosted model. Depending on the infrastructure, they might even be looking at on-prem; if you're working with the Department of Defense, cloud is great, but they also need a bit of both because of compute. It's funny, I'm seeing a lot more organizations also asking: is it cloud first or cloud smart? Meaning, do I go cloud only, or do I augment with hybrid capabilities where I have some of it on-prem? We had a huge push toward cloud everything, and I think we're now seeing a rethink of that conversation: okay, let's do cloud for this, and this other piece we'll do on-prem, because we've optimized cost or because I care about this data a little differently.

      Petko Stoyanov:
      There's definitely a focus on how you run the models locally, because you need GPUs. You could use GPUs in Amazon, and you could use their PaaS-layer services as well; same thing with Azure. If you want the flexibility to try different models and move quickly, cloud is definitely the way to go. And once you've tuned it and know exactly what your compute looks like, people bring those workloads back on-prem; that's what we've seen.
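
      As a hedged sketch of the self-hosted route described above, the snippet below queries a locally hosted, OpenAI-compatible inference endpoint (servers such as vLLM and Ollama expose one); the base URL, model name, and prompts are assumptions for illustration.

      ```python
      # Minimal sketch: querying a self-hosted model through an
      # OpenAI-compatible endpoint. base_url and model name are
      # illustrative assumptions, not a specific deployment.
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:8000/v1",  # local inference server
          api_key="not-needed-locally",         # no key leaves the network
      )

      response = client.chat.completions.create(
          model="llama-3-8b-instruct",  # whichever model you host on-prem
          messages=[
              {"role": "system", "content": "You are an internal security assistant."},
              {"role": "user", "content": "Summarize today's tier-one alerts."},
          ],
      )
      print(response.choices[0].message.content)
      ```

      Because the local endpoint speaks the same API as the hosted services, you can prototype in the cloud and later point the same code at on-prem hardware, which matches the tune-in-cloud, run-on-prem pattern Petko mentions.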

      Jonathan Knepher:
      And that's what folks we've had here on the podcast have said too: a move back toward hybrid, especially for that sensitive information. One thing I also want to ask about: you see a lot of concern on social media about AI removing the need for more junior engineers, security operators, and so on. Do you think there's a long-term risk of not having the right highly trained, highly experienced people over time?

      Petko Stoyanov:
      So when I look at security now, it goes beyond the traditional SOC. Jonathan, we tend to talk about security mostly within security operations centers, but if you're a CISO, you're going to own compliance, and you're going to own application security too. You might end up owning websites that have to deal with fraud because you're doing e-commerce, and suddenly you're dealing with fraud, which is different from traditional security. So the role of the CISO is much broader than the security operations side of it. And I think AI in general will be critical to upleveling individuals' skill sets. The running joke is: is AI going to take your job? No, it's not.

      Petko Stoyanov:
      But someone using AI might take your job.

      Jonathan Knepher:
      You know, that's a good way of putting it. As that CISO role expands, though, how do we assure the right levels of accountability and transparency if these new, expanded CISO roles are using more and more AI?

      Petko Stoyanov:
      So I think it's them understanding what's in their models, going back to: where is it operating, what's the physical environment, is it sandboxed, what does the compute look like? In many ways AI is just like any regular web app we have out there. And just to be clear, I think it's important to understand that AI as a product is different from AI as a feature. AI as a product would be ChatGPT, DeepSeek, the traditional ones we think of as conversational AI. AI as a feature in existing technology is purposely built for an outcome the customer typically wants. Some will argue that in Excel, if I create a chart and then add a trend line, that's a form of AI, an AI feature. A lot of AI is just fancy math, and the algebra behind that Excel trend line is a form of fancy math. So going back to your question about the CISO: their role is really tailored to the organization. It might start with: protect me from external threats. Okay, now protect me from internal threats.

      Petko Stoyanov:
      Then: protect my transactions to ensure they're processed correctly so my business doesn't fail. What about brand reputation? What about executive protection? The CISO eventually starts owning a lot of those things.

      Jonathan Knepher:
      Yeah, wow. It's a lot to take in, how wide that role is going to be.

      Petko Stoyanov:
      Well, it is already. We're seeing this as organizations get larger and larger: the CISO has to report to the CEO and has to have a direct line to the board, to be transparent about the risk they're seeing. Boards are now sometimes accountable for that risk, especially at public companies.

       

      [15:59] AI Regulation: Are the Frameworks in Place?

      Jonathan Knepher:
      Yeah, absolutely. On the regulatory side, do you think the frameworks are in place to both allow what is needed and appropriately control how this expanding CISO role can use these technologies?

      Petko Stoyanov:
      I think it's constantly evolving. What's interesting is we started early with AI in the government space; back in 2019, before ChatGPT and everything else, we were looking at what NIST has and how we regulate it. MITRE is now looking at AI attack patterns, and OWASP has similar efforts. It's becoming just like what we saw years ago when the MITRE ATT&CK framework became a thing; we're seeing the same thing now for AI systems and models. But whether you think of AI as a feature or as a large language model, you start asking: do I want to test the model, do I use vetted models? If you're using vetted models, or integrating smaller models into agents that automate things, you can manage your risk a lot better. What I worry about is sites like Hugging Face, where you can go download tons of models.

      Petko Stoyanov:
      And what's interesting is we tend to think of just ChatGPT, DeepSeek, Claude, the large AI models. But if you go to Hugging Face, you'll see a million of them out there: one for video, one for text, models for so many different things and tailorings. How do I assess this? How do I decide which ones are right? You typically need data scientists to help vet that. I would not be surprised if, in a future world, product engineering has data scientists who are not just building AI but deciding which models to integrate where. It's going to become much more critical.

      Jonathan Knepher:
      Yeah. By the way, there are so many interesting models on Hugging Face. I've been experimenting with some of the image processing ones, and the fact that you can use them as search engines across your image library is astonishing: astonishing at how good they are, but also at some of the unusual connections they make that aren't necessarily intuitive to a human. Is there a risk there too, that some of those connections leak into other uses of AI?

      Petko Stoyanov:
      Jonathan, I'm curious, what kind of connections are you seeing? I've played with some AI models out there, face swapping and things like that, which look really interesting from a disinformation standpoint. But I'm curious about what you said about it connecting certain things; I'd love to get your take on it.

      Jonathan Knepher:
      Yeah. So I've been using this software called Immich, which is an offline, private-cloud alternative, if you will, to Apple Photos or Google Photos, and you can load in your own models for searching your images. It's been interesting. One day I was looking for a copy of an ID card I had scanned, and searching for "ID card" also found elements of other images that had that kind of information in them. You think of it as just an image classifier AI, yet it can read what's in every image and make connections about what's in those images in kind of abstract ways. It was amazing, and incredibly useful, by the way.
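
      For the curious, the semantic image search Jonathan describes is typically built on a joint image-text embedding model such as CLIP. Here is a minimal sketch using the sentence-transformers library; the file names and query string are hypothetical, and this shows the general technique, not the specific software mentioned.

      ```python
      # Minimal sketch of CLIP-style semantic image search: embed images and
      # a text query into one vector space, then rank by cosine similarity.
      # File paths and the query string are hypothetical.
      from PIL import Image
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("clip-ViT-B-32")  # joint image/text embeddings

      image_paths = ["vacation.jpg", "receipt_scan.jpg", "id_card_scan.jpg"]
      image_embeddings = model.encode([Image.open(p) for p in image_paths])

      query_embedding = model.encode("a photo of an ID card")

      # Cosine similarity between the text query and every image.
      scores = util.cos_sim(query_embedding, image_embeddings)[0]
      for path, score in sorted(zip(image_paths, scores), key=lambda x: -float(x[1])):
          print(f"{float(score):.3f}  {path}")
      ```

      The "unusual connections" come from that shared embedding space: text and images land near each other whenever the model has learned they co-occur, even when the association isn't obvious to a human.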

      Petko Stoyanov:
      There was a company I saw, actually at RSA, that used an image classifier like that. You walk up to a laptop, it takes a photo of you, and then it searches the Internet for it. And it's not just a Google search; it was finding things that weren't in Google search, things that weren't public, is what I'll say. It's interesting how many of your photos are out there: oh, I was at a wedding, and the photographer took photos and put them on some little website no one knew about, not indexed by Google, but somehow indexed by other systems.

      Petko Stoyanov:
      It's amazing. And you can't help but start thinking about those models and disinformation. I think it was a year or so ago we saw that picture of the Pope in the white puffy jacket, if you remember that photo.

      Jonathan Knepher:
      Yeah, I remember that.

      Petko Stoyanov:
      And now we've gone past that. It's not a photo anymore; now we have a video of some government official standing at a podium saying things, and you wonder whether it's real or not.

      Jonathan Knepher:
      And they are so believable. To your point, a couple of years ago you could see the lighting didn't match, or the borders around things looked wrong. You can't tell anymore.

       

      [21:08] The Challenge of Trust in the Age of AI

      Petko Stoyanov:
      Yeah, I actually worry, because honestly there's a thing called the liar's dividend. The more things we see that are just lies, the more we start wondering; it becomes normal to see AI-generated content, and that undermines the trust we have in information we think is legitimate. So it's hard to tell: is this real? Is that real? A deepfake gets produced and then goes through multiple iterations into different content out there, so it gets harder and harder to tell what's real and what's not. That's the part I worry about. What we're going to do with AI around technology is going to be great: finding drug interactions, making our cyber defenses more efficient. That's great.

      Petko Stoyanov:
      But attackers are using this to attack us already. Take it a step further and look at a nation state using it for disinformation, and what that would look like. Or look at Ukraine, where AI has changed how we fight: we used to fight with tanks, and now thousands of drones do the same thing, more effectively and more targeted, and they're orchestrated. AI is going to have such a profound impact on us that we can't imagine it.

      Petko Stoyanov:
      But I think the main thing is how do we get assurances that we don't undermine what we've got out there already?

      Jonathan Knepher:
      Well, to your point, I don't know that it's a question of when that happens; I think it's already happening. You have to believe that various governments are using it for their own manipulation today. How do we know what to trust and what not to? And I worry there will be distrust of past information too. We're going to get so accustomed to artificial information: what is real, what isn't? How do you know?

      Petko Stoyanov:
      Can you imagine questioning all the history you've ever learned from a textbook? Or maybe AI is hallucinating and telling you different facts now.

      Jonathan Knepher:
      Exactly. Do we have to draw a line, like we know what was valid before 2023?

      Petko Stoyanov:
      My kids are using ChatGPT regularly. Instead of asking your parents "tell me about this," now it's "go ask ChatGPT or Alexa." And beyond helping with homework, it's interesting how they iterate through it and ask questions. But maybe I'm hopeful that somehow, as a society, we'll figure this out eventually.

      Jonathan Knepher:
      Yeah, hopefully. Bringing this back to the fundamental security questions, though: what trends around AI, and the rest of what we've talked about, should CISOs and security teams be paying attention to going forward?

      Petko Stoyanov:
      Yeah, so I think we're seeing far more phishing attacks than we've ever seen before. One thing I've noticed: look at Japan, for example. For years Japan wasn't getting many breaches, business email compromises, or phishing attacks, and the reason was that business was done in Japanese, and not everyone outside Japan speaks Japanese. But ever since ChatGPT, it's: oh, let me just translate this into perfect Japanese. And recently the Japanese government, I think it was the Financial Services Agency, started tracking this; they've lost $2 billion just last month alone.

      Jonathan Knepher:
      Holy moly, that's staggering.

      Petko Stoyanov:
      And ever since ChatGPT came out in 2022, they've seen a dramatic increase in phishing attacks; they've said it's already increases of thousands of percent, and it's getting more targeted for them. ChatGPT and other variations are being used to craft targeted phishing email attacks against people in their local languages. And it gets even more targeted: if you speak a different language, they'll figure out what you speak and target you specifically. And then it's a business email compromise.

      Petko Stoyanov:
      The victim authorizes the wire transfer, and it's just amazing. But the trends I'm seeing: security by itself is not going to go away. We saw JPMorgan Chase bring that to the forefront. We're going to see secure by design become much more prominent; CISA released that a year ago, and JPMorgan Chase doubled down, saying we care about security as a foundation. We'll see more of that. And we're definitely going to see AI expanding its role as both a defense tool and a growing threat vector.

      Petko Stoyanov:
      So I think we're shifting from threat detection to automated incident response, with agents and everything else, and organizations that lean into it have to figure out how they manage that risk. That's something I saw at RSA, and I'm really interested in what's happening there. But I'm also seeing a lot more commercialization of technology in the defense sector. Everything used to be built by defense specialists for the government. Now we're seeing a lot more commercial companies come into defense and say: let's build something for you, and if it works for you, we'll then take it commercial.

      Petko Stoyanov:
      Because if you believe the premise that security is critically important as a foundation: historically, outside of government and regulated industries, it's always been about convenience. Convenience first. But in regulated industries, it tends to be compliance or security first, and then I'll figure out the convenience factor within what I can work with. By opening up more commercial companies to work in the defense sector and government in general, we're seeing a different mindset: let me solve this problem differently, because we've already solved it in the commercial world.

      Petko Stoyanov:
      Let me bring those ideas, tailor the security, go defense first, and then bring it back to commercial. Use defense to validate the security and then go back to commercial, because it's getting harder and harder to know what to trust. If we're doing secure by design, you'll do all the checkmarks, you'll do security first, and in essence, if you go defense first, you can bring it back to commercial knowing it's been vetted and validated already.

      Jonathan Knepher:
      Yeah, I want to hit on one bit you mentioned there around automated response. Are we going to get to the case where, at least at the front line, it's a robot war? The bad guys' AIs against our AIs? What kind of world is that that we're moving towards?

      Petko Stoyanov:
      I think it depends. If you look at companies like Anduril and Palantir: Anduril is automating some of the ecosystem side of it from a drone standpoint, and Palantir is doing great analytics, models, and AI decision-making that augments the human side from a battle management standpoint. I would not be surprised if we start having more autonomous drones that don't have a pilot; that could be a UAV, that could be a smaller drone, that could be a buoy sitting somewhere in an ocean, sensing data and sending it back, popping up at a certain frequency. I think augmenting is key. Someone at the Department of Defense was talking about automating and used the terms human in the loop versus human on the loop. Human in the loop was someone who had to approve everything going through, all the decision making and everything else.

      Petko Stoyanov:
      Human on the loop is there watching it, with the ability to stop it if they need to. Hopefully I got that right; I always confuse the two. But human in the loop versus human on the loop is a huge, interesting distinction.
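
      A rough way to see the distinction in code: human in the loop blocks until a person approves, while human on the loop acts automatically and gives the operator a window to abort. The sketch below is a hypothetical illustration, not any particular DoD or vendor system.

      ```python
      # Minimal sketch contrasting human-in-the-loop (approve before acting)
      # with human-on-the-loop (act automatically, operator may abort).
      # All names here are hypothetical.
      import time

      def remediate(alert: str) -> None:
          print(f"[action] isolating host for alert: {alert}")

      def human_in_the_loop(alert: str) -> None:
          """Nothing happens until a person explicitly approves."""
          if input(f"Approve remediation for '{alert}'? [y/N] ").lower() == "y":
              remediate(alert)

      def human_on_the_loop(alert: str, grace_seconds: int = 10) -> None:
          """The system acts on its own; the operator watches and can abort."""
          print(f"Auto-remediating '{alert}' in {grace_seconds}s (Ctrl-C to abort)")
          try:
              time.sleep(grace_seconds)
          except KeyboardInterrupt:
              print("[abort] operator stopped the automated action")
              return
          remediate(alert)

      human_on_the_loop("tier-one phishing detection", grace_seconds=5)
      ```

      In practice the grace window would be a dashboard and an audit log rather than a sleep, but the control flow, and the scaling difference between the two modes, is the same.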

      Jonathan Knepher:
      But if you're on the loop, there's the risk that the person isn't paying attention or isn't reading all of it. There's going to be so much action and data happening that even if you're on the loop, how are you possibly going to stay in control?

      Petko Stoyanov:
      Yeah, I think having visibility into the actions is key. But without automation we can't survive; we cannot have a human sitting there making every single decision. We have to say: we've collected all this data, human, does this look right? And after some iteration: okay, 99% of this we can automate. I think that's what we're going to start seeing. Some of the critical decision making might still require a human at the very end.

      Petko Stoyanov:
      It just depends.

      Jonathan Knepher:
      Yeah. Well, it's a scary world.

      Petko Stoyanov:
      So, Jonathan, I've got to ask you: you took over the podcast after I left, and you're now the co-host. How does it feel?

      Jonathan Knepher:
      You know, it's been a whole new experience for me. I've never really been much of a public speaking figure, but it's been very enlightening, and to be honest, I love it: getting to learn new things, and we have so many great guests that it's been wonderful. And the opportunity to talk with you again as you've gone on your journey with Everfox has been great. Speaking of which, how have things been for you? What's been going on in your life since moving on?

       

      [31:04] Petko’s Journey at Everfox

      Petko Stoyanov:
      Yeah. You know, Everfox is interesting. We're definitely focused on what I refer to as defense tech, technology for the US government. We actually spun out of Forcepoint in October of 2023, so a little over 16 months ago, or a little more than that. It's been really interesting. We're focused on hard problems for the U.S. government and for critical industries that have said: okay, I care about security first. And we've done lots of cool things.

      Petko Stoyanov:
      We've made new acquisitions and integrated new technology. When you're spinning out a company and standing up a bunch of IT and security, you start asking: what do I really need first, what do I need second? Prioritization becomes key. But I'm really excited about how we're looking at the problem with some of the security technologies we've built for the US Government. When you look at AI in general, I struggle to believe that detection technologies will be able to detect every single AI attack out there, from just pure basic math: every time they attack, we have to spend resources to detect it and respond to it, and that's computationally expensive. We have to switch the way we think about it.

      Petko Stoyanov:
      We have to change the mathematics of the game so it doesn't cost us as much, but it costs them more. The way to do that is with certain core technologies we developed for the US Government that are protection first and detection later. That technology has been vetted and validated for decades and works to protect some of the most sensitive networks in the world, including some of the biggest cloud vendors out there, ensuring the right data gets to the right AI model, or the right data to the right classification level. By doing this we're changing the math: attackers don't know if their attack worked or not, and they don't know how much it cost us, because it didn't cost much at all. It gives us the ability to scale and actually flip the mathematics on its head.

      Jonathan Knepher:
      Yeah, that's the game, right? Make sure it's more expensive for the attackers than what they get out of attacking.

      Petko Stoyanov:
      Yeah, that's the part we've got to start looking at in order to switch. We talk about having a skills shortage; that's certainly important. We also talk about all the technology we can implement. But having more detection technology does not solve our long-term problem; telling you something happened just creates a bigger list for the CIO or CISO. What we need to do is focus on protection first and then minimize what's left over, so we can focus on what's really important.

      Jonathan Knepher:
      Yeah, absolutely. Well, before we wrap up, Petko, any comments on what's up next for you that's exciting?

      Petko Stoyanov:
      I think you're going to continue to see Everfox expand, and we're already expanding. We've grown over the last year and we're hiring. I think we're one of the few in government, defense, and cybersecurity saying: look, we're actually hiring, we want to continue growing. So definitely check out the job openings we've got at everfox.com. We have a great onboarding package, including some great foxes we give out that kids love.

      Jonathan Knepher:
      Awesome.

      Petko Stoyanov:
      We end up taking them to conferences and they never come home; everyone takes them home to their kids. It's an Arctic fox that says Everfox on it, and it's a huge hit. But you're going to see us doing some great things. We've already done some great things in the last couple of years, with three acquisitions, and there's more to come and more interesting things to do out there.

      Jonathan Knepher:
      Yeah, absolutely. And Petko, we miss you here at Forcepoint, but we're glad you're doing great things over at Everfox. So please come back and join us on the podcast again; it's been great talking to you today.

      Petko Stoyanov:
      Likewise.

      Jonathan Knepher:
      Thanks, Petko. And to our listeners, thank you for joining us this week. This has been the Forcepoint To The Point Cybersecurity Podcast. Please push that like and subscribe button and join us every week. Thank you very much.

      Petko Stoyanov:
      Smash the subscribe button, right?

      Jonathan Knepher:
      Smash it.

       

      About Our Guest


      Petko Stoyanov, Vice President of Product Strategy at Everfox 

      Petko Stoyanov is the Vice President of Product Strategy at Everfox, where he applies over twenty years of expertise in cybersecurity, engineering management, and go-to-market.

      Previously, he was the Global Chief Technology Officer at Forcepoint, where he led a global threat intelligence team, enhancing product integration and partnerships with OEMs. His extensive background includes a role as McAfee’s Chief Technical Strategist for the Public Sector, where he spearheaded the development and strategy for the company’s government technology roadmap, FedRAMP initiatives, and technical sales enablement. At McAfee, Petko was a trusted advisor to government and commercial enterprise C-level executives, focusing on architecture and risk-based outcomes through value and metrics-driven approaches. Before his tenure at McAfee, he held several senior roles within the U.S. Intelligence Community and Department of Defense, driving significant IT modernization and cybersecurity initiatives that enhanced data security and overall security posture. Petko has held several prestigious certifications, including CEH, CISSP, CISSP-ISSEP, GCFA, and PMP. He earned a Bachelor’s degree in Systems Engineering from George Mason University and a Master’s in Engineering Management from the George Washington University.