
Breaking Down the Human Side of Advanced Cyber Attacks and Social Engineering With Margaret Cunningham - Part II
About This Episode
In this episode, hosts Rachael Lyon and Jonathan Knepher continue their fascinating conversation with Dr. Margaret Cunningham, Technical Director of Security and AI Strategy at Darktrace. With a background in applied experimental psychology and deep expertise in human-centered security, Dr. Cunningham dives into the real-world challenges and opportunities facing today’s cybersecurity professionals.
Together, they tackle everything from the simplicity—and sometimes the limitations—of “safe words” for security, to the complexities of measuring team performance and the persistent struggle to balance risk and resilience. Dr. Cunningham challenges the industry’s tendency to fixate on failures instead of celebrating successes, and she discusses the real impact AI and automation are having—both helpful and misleading—on cognitive workloads, security processes, and human expertise.

Rachael Lyon:
Hello, everyone. Welcome to this week's episode of To the Point Podcast.
Rachael Lyon:
I'm Rachael Lyon, joined here with my co-host Jon Knepher. This week, we pick back up for part two of our conversation with Dr. Margaret Cunningham, Technical Director, Security and AI Strategy at Darktrace. There, she advises on AI security strategy, innovation, data security, and risk governance. A recognized expert in human-centered security and behavioral analytics, Dr. Cunningham holds a PhD in applied experimental psychology and has been awarded multiple patents on human-centric risk modeling, security persona development, and behavior-based threat detection. Now, without further ado, let's get to the point.
Everyday Security and the Psychology of Failure
Margaret Cunningham:
You know, we were talking about, like, what you could possibly do, and how in security we tend to be really complicated and excited to share, like, tips and tricks that are super in-depth. Like, you should use, like, anomaly-based behavioral analytics. And people are like, that means nothing. When really, setting up safe words with your family so that you don't get scammed is pretty easy.
Rachael Lyon:
Smart, too.
Jonathan Knepher:
Yep. So I agree with you there, but it seems like there's a very limited number of smart words or safe words that everybody picks.
Margaret Cunningham:
Oh, that's not surprising. It's probably like pickle, pizza, hot dog, banana. I wonder if those are in the top or if I'm just super weird.
Rachael Lyon:
We should look into that.
Jonathan Knepher:
I know that there's one in the top that is very, very commonly used. But it's been funny, because I've heard that same safe word from multiple people I know for their own families, and it's just been like, you know, y'all picked the same word.
Rachael Lyon:
It's funny.
Margaret Cunningham:
Now I want to know what the word is.
Rachael Lyon:
Leave us hanging, Jon.
Jonathan Knepher:
It's the pineapple.
Margaret Cunningham:
It's pineapple.
Jonathan Knepher:
Yeah. Yeah. I've heard that from multiple people.
Rachael Lyon:
That's not.
Margaret Cunningham:
Now, I don't. I could go on a real tangent about pineapples, but we should.
Jonathan Knepher:
You could. I don't know if we want to leave people's secret word in the mix, but. Yeah.
Rachael Lyon:
You could probably Google it, though. I suspect that there's a nice list out there. Yeah.
Jonathan Knepher:
Yeah, it is. It is like at the top of the list, too. Yeah.
Margaret Cunningham:
Yeah. I mean, I think about this stuff all the time, and I still think that I'm probably personally behind. Like, I'm probably very imperfect in how I've come up with my own strategies or, like, worked with my friends and family. And I try to be pretty proactive about it, and I still feel behind. So I can only imagine how lost people are feeling when this is not the central focus of their, you know, time spent thinking about what to do during the day. So, you know, I try not to be doom and gloom, and, you know, Rachael knows I am obsessive about, you know, how is the thing going to break? What is the human error that's going to cause a misconfiguration? Why are we falling for things? How come we forgot that? How can we recover from interruptions? How can we understand what impacts human performance? All the time. And what I've learned over many years of looking at all of the many fun ways to fail is that we really should be seeing a lot more failure, because ultimately, there are just lots of ways to fail. This has convinced me that we're undercounting a lot of the resilient factors and positive behaviors that people are doing. And that gives me a little bit of zest for life, because ultimately, there are things that people are doing that we might not be aware of that are serving as protective factors at both an individual and organizational level.
Margaret Cunningham:
So I am currently on a tear of getting people to think more about what might be going right, because the ratio of where we spend our effort skews toward identifying problems and reverse engineering problems, instead of identifying successes and asking, like, what's the root cause analysis of that success? We just don't spend time there. And I think that we should. It's also kind of fun, because you get to celebrate people or systems or something.
Rachael Lyon:
Exactly, yeah. And there's not a lot of that happening in cyber. We tend to over-rotate, and, you know, I've mentioned this before, we got onto one of my favorite things I heard you say: kind of, what do cybersecurity professionals and babies have in common? Right? It's the crying, staying up nights, stressed out, inheriting complex environments. But it's almost like you're triggered and trained to look for the things that aren't working and hyper-focus on that, versus, I think, there are so many learnings in the things that you're getting right. It's almost like that dichotomy as well, that you can't get more funding for your cybersecurity programs unless something goes wrong, which seems kind of like the wrong way, maybe, to be thinking about things.
Margaret Cunningham:
If you didn't pick up on this: I love metrics. I don't know if that came through or not. But, you know, I've sat in a lot of reviews for, like, security operations teams and things like that. There are a lot of metrics that we use, like mean time to respond and, you know, mean time to detect, and all these things can almost be like a self-perpetuating issue. You know, if my team responded to 85 incidents last month, they're celebrated for doing that. And then the next month, maybe we did a lot of detection engineering, and we come up with these really great ways of offloading that from my team, and then they do 20. The message that goes there is that, oh, like, maybe you don't need such a big team, or, like, what have they been up to this whole time? Because of the way that we count things and the way that we message value for team performance, especially at the human layer.
Metrics That Mislead – Rethinking Performance in Cybersecurity
Margaret Cunningham:
And so, you know, performance metrics have always been fraught with issues. You know, lines of code as productivity is laughable. But these types of ways of showing that, like, my team is here and we're working so hard can be very toxic for actually creating a successful security program, and even for getting the best out of your tooling. Like, if I have really cool, capable tooling in my tech stack and I use it to its best capabilities, it makes my team look like they're not doing as much.
Jonathan Knepher:
What should the metric be? It can't be number of breaches because that better stay zero.
Margaret Cunningham:
I think it's a mindset shift, and I don't think that most companies are prepared for it. I don't think that it's easy to report, and I've got to tell you, most of the time we're looking for something that's easy to report and communicate. The nuance of "we saved a lot of money by revamping our detection engineering, which reduced our need to respond to incidents by 50%" is actually a pretty sophisticated conversation that sometimes the financial decision-makers are not super keen on. I've seen some people successfully map it to other types of metrics, like changes in their CSF score, things like that, where they're communicating it using business language and making sure that they're highlighting the successes and hard work of their teams differently. But that's an exceptional leadership style.
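To make the incident-count problem concrete, here is a minimal sketch, not from the episode and with entirely hypothetical numbers, of how a raw monthly incident count can penalize a team whose detection engineering prevents incidents upstream:

```python
# Illustrative sketch: why raw incident counts can mislead. The cost figure
# and incident counts below are hypothetical.

def team_value(incidents_handled, incidents_prevented, avg_cost_per_incident):
    """Estimate value delivered: hands-on response work plus losses avoided upstream."""
    return (incidents_handled + incidents_prevented) * avg_cost_per_incident

# Month 1: the team responds to 85 incidents; nothing is filtered out upstream.
month_1 = team_value(incidents_handled=85, incidents_prevented=0,
                     avg_cost_per_incident=4_000)

# Month 2: detection engineering offloads most response work; only 20 incidents
# need hands-on response, but roughly 65 were prevented before reaching the team.
month_2 = team_value(incidents_handled=20, incidents_prevented=65,
                     avg_cost_per_incident=4_000)

print(f"Month 1: 85 incidents handled -> estimated value ~${month_1:,}")
print(f"Month 2: 20 incidents handled -> estimated value ~${month_2:,}")
# The raw count suggests a 76% drop in activity; the value estimate is flat.
```

The point of the sketch is only that the denominator matters: counting prevented work alongside handled work keeps the value story intact even as the raw response count falls.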
Rachael Lyon:
So how do you make that more the norm versus the exception? Right? I mean, because I feel like the industry is wanting to evolve and make change, right? And I think everyone's trying to kind of figure out, how do we move forward? You know, but where do people start? Like, where can organizations start and kind of become more sophisticated in their approach to these things?
Margaret Cunningham:
I mean, I just annoyingly talk about it all the time, and I try to talk about it with people who I don't talk about it with all the time. So, like, there's a lot of importance in connecting with people and learning about what they're struggling with, because there are so many commonalities that, if you listen, you can kind of connect the dots. And I'm very opinionated about the way I think would be great. So it's always sort of a journey, and not everybody's going to agree with your approach or your perspective, and it's just kind of doggedly continuing to talk about it, and also listening a lot. Change takes a lot of time. Every company is different, every domain is different. Risk tolerance? Different. Compliance needs? Different. We are not in the industry of one size fits all, or the industry where technology is something that you plug in and it works.
Margaret Cunningham:
And because of that, the communication requirements, and the consistency and dedication to finding those shared connections across so many different types of settings, take a lot of time and energy. And I think that's why a lot of people who work in this space are obsessed with it. Because if you weren't, you probably left.
Jonathan Knepher:
Constantly changing and constantly new threats. But not everybody out there is keeping up on this. What should our listeners be doing to adapt, like you're saying?
Small Steps, Big Shifts – Driving Change in Complex Systems
Rachael Lyon:
Because I think, like we talked about last time, Jon, to also bring it back to our conversation with Betsy Cooper over at the Aspen Institute: even one person can effect change, right? You just have to take that step. And, you know, so I'd be curious about your perspective there, Margaret. I mean, if one or two people can help bring these things forward in an organization, how can they get started?
Margaret Cunningham:
I think it's okay to do something small. A lot of idealistic people who have a mission in mind, sort of that way of systems thinking, feel very overwhelmed with the intricacies and how much change they can make. And that can be very defeating. I was actually listening to a Hidden Brain episode on how difficult it is for people in, like, the climate change universe, where they feel almost like, I'm part of the problem, and they have that sense of, like, defeat. And I think it can feel that way sometimes in this industry. I would say some of my most successful experiments, and what is change if not an experiment, have been when I've kept it as simple as possible and I picked one or two things to focus on. If it's a metric, if it's, you know, a shift in how you report a summary for that month, do that one thing or those two things. Because when you can start seeing a little bit of a shift, other people start believing that things are possible to shift, and you can start building a bit of momentum. And if it all goes sideways and it doesn't work, the failure doesn't feel as immense. And a lot of experiments fail.
Rachael Lyon:
But that's kind of the point, right?
Margaret Cunningham:
Yeah. I mean, if they all worked, wouldn't that be weird? So I think that having the courage to start small is kind of harder than it sounds, but definitely worthwhile.
Rachael Lyon:
Agreed. And I think of sports teams, right, where we've seen the incremental changes over time, and they all of a sudden go from last place to first place. But it's the long game; it's not going to happen overnight. But I think this does raise an interesting question for me, too. Because of where you sit in the landscape of things, boiling the ocean is a real thing. I mean, there are so many nooks and crannies and focuses and things that, you know, you could put your attention to to effect change. So how do you even plan your time, quarter to quarter, for example? I mean, how do you go about, you know, setting priorities in your space?
Margaret Cunningham:
Some days better than others. So, what's potentially my red flag is that I really tend to find similarities in things, and I try to move to the core issue. When we talk about deepfakes, we talk about social engineering, we talk about voice cloning, I'm like, oh, these are really fancy versions of things that we've seen before. And ultimately, a lot of the basic security posture, the zero trust mindset, a lot of that stuff is still so deeply relevant. If you can help people get over the "this is brand new, this is very scary" moment and say, hey, working backwards, this has a lot to do with account takeovers, access, identity, and ultimately, where you are right now in this process is pretty good. Sure, there's work to do; it could be more consistent or whatever. But if you can kind of bring it back down to some of the core issues, a lot of them are still the same. And that, to me, is how that consistency and long-term planning can be achieved without so much churn and without so much distraction.
Jonathan Knepher:
But I think a big part of it, though, is scale, right? Like, with this new threat, you know, we talked about how you can now be attacked in your native language and not notice, you know, textual differences, and anybody is a target now. Like, there's no reason these attacks can't go against small businesses. So how do we get the technology that the big guys use to all the little folks, and operate at scale?
Margaret Cunningham:
Yeah. So I've seen a really fun shift where some people in startups are specifically targeting small businesses and dedicating their time and effort to small businesses. I happened to be on an airplane the other day, and next to me was a wealth manager. He had 15 people, but they had a lot of assets. And I said, wow, that's so uniquely challenging, because you need enterprise-grade software, but you're not on their radar, and it's not going to make sense. And so we do have a lot of gaps there in the types of technology needed, as well as the teams. So I know that there are some consortiums of security experts who will come in and work as consultants, or, you know, there's MDR, things like that.
Margaret Cunningham:
But I do think that we have a significant gap in sophisticated tooling for small businesses that is affordable and maintainable. It's going to be a problem, and I think they are going to experience quite a bit of pain, given the ease of exploiting vulnerabilities and what we've seen with different types of what I would bet are AI-enabled attacks. And I've got to tell you, my most out-of-date hardware is, like, in my body, and I'd say all of us have that same really cool hardware. It doesn't get better as we get older, and our brains are not, like, creating new pathways and updating. There's no new chip, and I'm not putting the chip in when it does come.
Jonathan Knepher:
I, I don't think we are either.
Margaret Cunningham:
No, I'm kind of anti-chip. I don't, I'm just like, let me just be weird, let me be like kind of a failure, you know, like, I don't, I don't want it. But you know, we can't expect that human beings are going to accommodate and adapt to the types of technology and the types of perceptual trickery that are here now. So it's going to have to be more than human.
Rachael Lyon:
Interesting. So could we go back to the beginning with you, Margaret, though? Because, as Jon knows, I always like to kind of wrap up our episodes with more of the personal journey. Because, as you know, the path to cyber can be an interesting road, and how did you get here? It's a very unique path, right? Particularly as you came in through the psychology route. You know, how did this come about for you?
Margaret Cunningham:
I got really obsessed with human performance metrics. Deeply, deeply obsessed with it. And I was working for Homeland Security, doing human systems integration, like tech R&D, acquisition cycles, operational testing. And in those situations, you have a lot of coordinated work, where people who have brand-new things that they want to see if they work for first responders or disaster recovery would all come together and say, like, how do we plug all this stuff in together? And when I saw it, I was like, oh, there are a lot of issues here for security. It sparked some pretty significant hyperfixation on that for me. And I will just call it that, because I'm like, this makes no sense, and why aren't we talking about the people factor? I started deeply looking at the human component of cybersecurity, both from a tech integration and data science perspective, as well as the operational needs of security teams. And I somehow navigated myself into innovation R&D, where I had no idea that my way of thinking was unique.
Margaret Cunningham:
I had the opportunity to partner with some super, super smart people and build new analytics and figure out what it meant to create software and deploy that software across different types of infrastructure. And it just sort of exploded from there. A lot of times, people say, psychology? You must do security awareness, or training, or user experience. And I have started referring to myself as a backend engineer, or psychologist: a backend psychologist.
Jonathan Knepher:
Love that description.
Rachael Lyon:
That fits.
Margaret Cunningham:
Yeah.
Jonathan Knepher:
Yes, it is.
Margaret Cunningham:
I'm a backend psychologist, and I love everything in the other spaces, but let me tell you, I'm so bad at it. I really like working further left. I love the cognitive issues associated with secure coding, architectural choices that support strong decisions, and ultimately, how we can use tooling to support different types of human decision-making. It's a very unique application of psychology, and I have had the luxury of somehow finding my people, who keep inviting me back to the same party. So I just feel very lucky to be able to work on things I find interesting.
The Human Factor in a Tech-Driven World
Rachael Lyon:
Because it is a fascinating world, and one that I would think we need many more people to be part of, right? Because it's critical, I think.
Margaret Cunningham:
Yeah, it's really hard to navigate. I don't think I've ever had a job title or job description that was like, we absolutely want an applied experimental psychologist to do this. It just, like, isn't the way it's worked for me. And so I would say it's partly my personality, being comfortable with ambiguity, a bit of stubbornness, and a willingness to try and help people understand where I can add value in a space that is not traditionally built for someone like me.
Rachael Lyon:
Random, maybe random question here. I don't know, kind of bringing it back to AI and, you know, the stress and, you know, the ability to build programs and systems. Does AI help us alleviate some of that cognitive load so that we could focus more on experiences and secure by design and all the things that we want to do from the jump, versus trying to plug it in on the back end? Is that one of the opportunities, maybe we see with AI in the workforce, and particularly in security?
Margaret Cunningham:
I think it can get there. But right now, we are bucketing generative AI as AI. And ultimately, generative AI tooling does not take work off of people's tables a lot of the time, because what it's doing is actually creating more data. It's summarizing something, it's explaining something, it's creating code. For me, maybe it's multiple things doing that together, but that doesn't actually help me very much when it comes to making a decision. It gives me a lot more to work through. And so, even if I'm using it as, like, a natural language query tool to go through, you know, threat data and do threat hunting, I still then need to process everything it generates.
Margaret Cunningham:
And because it's not deterministic, it's not going to tell you the same thing every time. I then have an added cognitive burden of questioning whether or not what it's told me is correct. Right now, if we're talking about generative AI, I think it's one of the more deceptive types of helpfulness, because ultimately, it's not reducing the cognitive load for people. There are plenty of other types of AI that can engage in, like, reasoning behaviors for you, or that can help classify things or predict things in ways that can be very, very helpful for reducing the amount of things a human has to go through. But those are very rare. It's a totally different camp. And building those types of systems takes time, data, and maturity. Whereas some of the ease of access and how good it feels to get that fast answer from generative AI systems is slightly deceptive.
Margaret Cunningham:
So I think that's sort of a risk factor. It's also why, you know, Gartner, MIT, and other recent studies have said that there's a lot of failure in this space on AI projects. So I think we're coming to that point where people are starting to ask hard questions. What's the technique you're using? What is the outcome I can expect? What are the pros and cons of this approach? And I'm, like, excited that that conversation is coming out, because there are valid uses for generative AI in security. It's just not always what we're seeing in the tools that are in the market today.
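A minimal sketch of the non-determinism Margaret is describing: a generative model samples its next token from a probability distribution, so the same prompt can yield a different answer on every call, and a human still has to verify each one. The toy vocabulary and logit values below are hypothetical, not from any real model.

```python
# Toy illustration of temperature sampling: the source of run-to-run variation
# in generative output. Vocabulary and logits are made up for the example.
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax over temperature-scaled logits, then a weighted random draw."""
    scaled = [value / temperature for value in logits.values()]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Pretend the model is labeling an alert; these scores never change...
logits = {"benign": 2.0, "suspicious": 1.5, "malicious": 0.5}

# ...yet five identical "asks" can produce different answers.
for _ in range(5):
    print(sample_next_token(logits, temperature=0.8))
```

At temperature 0 the draw collapses to the highest-scoring token and becomes repeatable; any temperature above that trades repeatability for variety, which is exactly the verification burden described above.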
Rachael Lyon:
Sometimes I feel like it's empty calories, in a way, what I get out of gen AI. You know, I'm not satiated, if that's the right word. You know what I mean? I feel like it's always kind of coming up a bit short, and I'm not getting what I want out of it. So I think that's a great point you're making. And I guess the other question is, is this actually AI, or are we still more in the ML space, and we're not actually at AI just yet?
Margaret Cunningham:
We could get really wild about, like, the progression of things that we've called AI over the past decade, and then we poo-poo it three years later and say, oh, no, that was just machine learning. So, I mean, this is just a fun conversation at this point. But I will say this: you kind of get close to what you want from gen AI, and there's a sense of, if you just ask it again, you might get something slightly different.
Rachael Lyon:
Yes, yes.
Margaret Cunningham:
That part of your brain, that anticipation center, is actually the strongest driver of behavior. It's not actually eating the candy; it's anticipating that you're going to eat the candy. That's your nucleus accumbens. That idea that you're going to get the magic is so engaging for people that we return to the same patterns over and over and over again, kind of like hunting for that hot tip from ChatGPT. And so it has that infinite-scroll quality that we all know and love to hate from social media, but personalized and also very promising, which is fun.
Jonathan Knepher:
Yeah, yeah. And I know I've fallen into that, right? It's like, oh, I don't want to write this little piece of code. I'll ask it. And it's wrong. And you're like, let's do it again. Let's see how close we can get. And it's like, holy moly.
Jonathan Knepher:
I could have written this by now from scratch.
Margaret Cunningham:
I came back from a work trip a few weeks ago, and I had, like, a blog idea in mind, and it was very, very weird. It was very niche. It was, like, Margaret-core. I was like, ugh. So I, like, draft it, and I'm like, no, that's not quite right. So then I'm pasting my draft from ChatGPT into Claude, and I'm like, Claude, critique this, make it better. And I'm passing it back and forth.
Margaret Cunningham:
All of a sudden, two and a half hours have passed, and I still hate it. I still know exactly what I wanted to say, and it's not done. I did it that way because it seemed easier, but it was, in fact, much harder. And, by the way, it's still not done, just in case you were curious.
Jonathan Knepher:
And now you're going to have to write it yourself anyway.
Rachael Lyon:
Exactly, yeah.
Margaret Cunningham:
And so the entire process was engaging for me, fun for me, but ultimately the outcome fell flat. And so I do think that we're kind of going to see that happening, and there are going to be a lot of instances where people are going to have to revise their strategy and take a more mature approach in their products and in how they apply different types of AI and machine learning to these problems, or I think people are not going to see the value and potentially rip them out. So I don't know. I mean, I think there's a lot of cool stuff going on, but that emotional pull that we get from having that generative partner is not necessarily giving us the boost that we think, at least not in my case. And I've had friends say, like, it's certainly helping me a lot, and I'm like, that's awesome. Maybe user error. But, I mean, Jon probably uses it much more for coding and still is, you know, spending a lot of time arguing and reviewing and debugging.
Jonathan Knepher:
Yeah, I mean, I've gotten to the point where I've pretty much given up using it for coding. It's just so painful, right? And the whole thing is like an experiment for our listeners, right? Try asking it things where you know the answers and you are an expert, and see how subtly incorrect some of the answers are, and yet how convincing it is. And now question yourself when you're asking it questions that you don't know the answers to.
Rachael Lyon:
That's a whole other.
Margaret Cunningham:
I mean, bringing it back to security, right? We have people saying, like, multi-agent systems, my agent can talk to your agent, and all of these different things. And, you know, MCP, the Model Context Protocol, has a lot of, kind of, core AppSec vulnerabilities. And there are all these different types of what I see as emerging supply chain risk factors associated with agentic workflows and with some of the automations that we are chasing so hard. But I'm actually not a doomer. I'm excited. I think there are lots of ways that we're going to see these types of tools excel, expand, and show their use. But there are going to be, I would say, two camps in how well we execute on that vision: people who build in a way that is robust, dependable, focused on accuracy and reliability, using appropriate types of tools for use cases, and others who are going to throw everything at the wall and potentially not be able to provide the value they've promised. But I'm, like, deeply obsessed with this space.
Margaret Cunningham:
I think a lot about agentic workflows and the amount of freedom we give to these tools based on our prompts, which are also very free-for-all, and the idea of not being able to undo something. Like, I don't know, how did you interpret my prompt? Oh, how did you pick that tool? Oh, how did you do all that stuff together? How would I undo that? Because there's not, like, an audit log for cognition, right? That's a little bit spicy for a Friday, and thank goodness there's no audit log for my own cognition. But when we think about critical dependencies in AI infrastructure, a lot of the things we've demanded in the past are repeatability and the ability to reverse or undo something. And with generative systems and natural language, we strip away a lot of those capabilities and expectations for high-reliability systems. I still think eventually it's going to be awesome. We just happen to be in the fun zone. Can I call it the fun zone phase?
Rachael Lyon:
I don't know.
Margaret Cunningham:
Yeah, yeah, yeah.
Rachael Lyon:
Because, yeah, it's still very much in its infancy in many ways. There's a lot to learn and, yeah, develop.
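As a rough illustration of the "audit log for cognition" Margaret wished existed for agentic workflows a moment earlier, here is a minimal sketch of recording every prompt-to-tool decision an agent makes so it can at least be reviewed and replayed. The agent prompt, tool name, and arguments are hypothetical, not from any real product.

```python
# Sketch: wrap agent tool calls in an append-only audit record so the decision
# trail (prompt -> chosen tool -> arguments -> stated rationale) is reviewable,
# even when the underlying generation is not repeatable. All names are made up.
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class ToolCallRecord:
    prompt: str        # the natural-language instruction the agent received
    tool: str          # which tool the agent chose
    arguments: dict    # the arguments it passed
    rationale: str     # whatever explanation the model offered for the choice
    timestamp: float = field(default_factory=time.time)

audit_log: list[ToolCallRecord] = []

def invoke_tool(prompt: str, tool: str, arguments: dict, rationale: str) -> None:
    """Record the decision before dispatching, so nothing runs unlogged."""
    audit_log.append(ToolCallRecord(prompt, tool, arguments, rationale))
    # ... actual tool dispatch would happen here ...

invoke_tool(
    prompt="Quarantine any host beaconing to the flagged domain",
    tool="edr.isolate_host",                       # hypothetical tool name
    arguments={"hostname": "ws-0142"},             # hypothetical target
    rationale="Host matched a beaconing detection within the last hour",
)

for record in audit_log:
    print(json.dumps(asdict(record), indent=2))
```

This does not make a generative decision reversible, but it restores one of the high-reliability expectations mentioned above: a reviewable record of what was asked, what was chosen, and why.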
Margaret Cunningham:
So, Jon, how are you maintaining your technical expertise if you're automating stuff? I'm just always curious. That's a question I ask people who are, like, dabbling in automations and potentially abstracting their work.
Jonathan Knepher:
Yeah, I mean, the thing is, with a lot of technical things, when you're automating them, you still have to write the same things that you would have done in the manual sense. So, you know, in a non-gen-AI world, right, you're still doing the things; you just do them once, and then you automate them, and you come back to it. Things change, and your automations need updating, and you're kind of always in the mix, so you never stop doing it, I guess, is my point.
Margaret Cunningham:
Do you think that if you were an AI native, like, coming up through college computer science with all of these tools, you could build that repetitive, kind of boring expertise that you benefit from as someone in your shoes now?
Jonathan Knepher:
Yeah, I mean, that is something I'm kind of afraid of, right? There's the issue of bad, insecure code coming out, but also, exactly your point: just going through the motions is how we learn, right? And if you're not doing that... And I think as a person, when you're working on a project, you have to get the big picture of what you're doing. And a lot of these AI tools, right, have fairly limited context, and the output is like, oh, well, you gave me that one function. But does it really fit in with everything else? So I am scared, and I'm scared, too, of the feedback loop, right?
AI’s Promise and Peril – Navigating the Fun Zone
Jonathan Knepher:
Like, what's going to happen years from now, when the next set of AI training is really all on this AI output, right? You're going to distill and lose the creativity that went into it, and the comprehensive thought.
Margaret Cunningham:
Yeah. I have to say, I've been working on a little side quest on understanding paths toward maintaining expertise while we automate, from, like, a cognitive science perspective. Just little things I try and sprinkle into my late evenings. But I'm always sort of fascinated by that balance of human and AI partnership, over-trust from potentially AI natives and under-trust from deeply experienced folks, and what it would look like to calibrate that properly and optimize that relationship and ownership of decisions and processes and things like that. I'm sure I'll figure it out, and then I'll share. It's a little simple thing, but as I go out and chat with people, I do find it's top of mind for leadership and people who are thinking about the future of their workforce, maintaining operations, dealing with disruptions, and the skills required to navigate that on the human side, which we're seeing potentially deteriorate in some ways. What's next for human expertise? We shall see.
Jonathan Knepher:
I think you're right, though, too. It is both sides. There is under-trust and over-trust, and where is the middle? I don't think any of us really knows yet. Maybe we can ask the AI how much we should trust it.
Margaret Cunningham:
That's a great idea. I bet if you ask it and I ask it, it's going to give us the exact same answer.
Rachael Lyon:
You think, though, or not? Yeah. I got to say, I don't know.
Margaret Cunningham:
That was my terrible joke.
Rachael Lyon:
Phew.
Margaret Cunningham:
But okay. The beauty in that, to me, is that it's very highly reflective of people, because I could ask any human and get a lot of different answers. And so demanding that AI systems that are supposed to mirror us, or where the goal is AGI, are going to be, like, one thing is like thinking that people are all going to be one thing. And so I think we have to consider that people aren't perfect, we're all very different, we all have our biases, all of those things, and be a little open-minded about the fact that our AI systems are probably going to have their own little bits of flair.
Rachael Lyon:
It's an exciting time. I think this is a very exciting time, and I can't wait to see how things play out in the next several years. You know, we talk a lot about the next Industrial Revolution and things like that, but it really is pretty significant, this threshold that we're on and where we go with it.
Margaret Cunningham:
Yeah, it's gonna be somewhere.
Rachael Lyon:
But where? That's the question. Where will it go? So many opportunities. I don't know.
Margaret Cunningham:
Yeah, I'm enjoying the ride. I have been having some of the most fun, entertaining, and challenging conversations I think I've ever had in my career in the past two years. And I've probably learned more things, and then had to unlearn a lot of things. And to me, that makes it really, really fun and engaging. So I will be around for these conversations for however long.
Rachael Lyon:
Awesome. Well, then I think we need to have you back, you know, in the next nine to 12 months and see how far we've progressed in that time. Because I think there's going to be a lot happening in the near term and long term that'll be very, very fun to dissect.
Margaret Cunningham:
Yeah, maybe I'll clone a Rachael for you. Awesome.
Rachael Lyon:
I could use that.
Margaret Cunningham:
And then we could do two interviews and see which one's the real Rachael.
Jonathan Knepher:
I'm just curious what you're going to make Rachael say.
Margaret Cunningham:
I'm not going to know.
Jonathan Knepher:
Oh yeah, you're going to make the AI drive the text too.
Rachael Lyon:
We'll see.
Margaret Cunningham:
We'll see what Rachael's digital footprint creates.
Rachael Lyon:
Sometimes I don't know what I'm going to say. So that's very exciting to me. It just comes out automatically.
Margaret Cunningham:
Well, we have that in common then.
Rachael Lyon:
That's right. That's why we get along so well. That's why you're one of my favorite people. Well, Margaret, Dr. Margaret Cunningham, thank you for joining us on the podcast. It's been too long since we've had you on, and we've missed you. But thank you for, as always, a very insightful, thoughtful conversation. You've given our listeners a lot to think about, but very important and meaty things that they should be thinking about.
Rachael Lyon:
So thank you.
Margaret Cunningham:
Yeah, thanks for having me. Anytime.
Rachael Lyon:
Fantastic. And Jonathan, you know I'm doing my virtual drum roll, the lead-up to: what do we want people to do?
Jonathan Knepher:
You need to smash that subscribe button.
Rachael Lyon:
And you'll get a fresh episode every single Tuesday. So until next time, everybody, stay secure.
About Our Guest

Dr. Margaret Cunningham is the Technical Director, Security & AI Strategy at Darktrace, where she advises on AI security strategy, innovation, data security, and risk governance. She provides technical and strategic guidance to ensure enterprise security solutions evolve in response to emerging threats and customer needs. In this role, she collaborates closely with security leaders, customers, and industry partners to advance AI-driven security solutions and best practices.
A recognized expert in human-centered security and behavioral analytics, Dr. Cunningham has spoken at major industry conferences, including RSA and Infosec, and her insights have been featured in leading cybersecurity and business publications such as The New York Times, The Wall Street Journal, BBC, CyberWire, and Dark Reading.
With deep expertise spanning AI security, risk analytics, and behavioral modeling, Dr. Cunningham is a strong advocate for responsible AI and human-centric security design. Before joining Darktrace, she was the Principal Product Manager for Global Analytics at Forcepoint and Senior Staff Behavioral Engineer at Robinhood.
Dr. Cunningham holds a PhD in applied experimental psychology and has been awarded multiple patents on human-centric risk modeling, security persona development, and behavior-based threat detection.