
Breaking Down the Human Side of Advanced Cyber Attacks and Social Engineering With Margaret Cunningham - Part I


About This Episode

In this week’s episode, hosts Rachael Lyon and Jonathan Knepher are joined by the brilliant Dr. Margaret Cunningham, Technical Director for Security and AI Strategy at Darktrace. With a PhD in Applied Experimental Psychology and multiple patents to her name, Dr. Cunningham is a leading voice in human-centered security, behavioral analytics, and the ever-evolving intersection of people and technology.

Together, the trio dives into the fast-changing landscape of AI-driven threats—think voice cloning, deepfakes, and sophisticated social engineering attacks that challenge every notion of trust and identity. From the real-world dangers of phone scams using cloned voices, to high-profile incidents like the Coinbase insider threat and the rise of groups like Scattered Spider, you’ll hear stories that illuminate both the risks and solutions shaping today’s enterprise security.

They explore the future (and limits) of authentication, the importance—and pitfalls—of data collection, and why behavioral analytics are more crucial than ever in spotting anomalies. Dr. Cunningham also shares insights on transparency, industry responses, and the human factors that make cybersecurity so complex and fascinating.

Transcript

      Rachael Lyon:
      Hello, everyone. Welcome to this week's episode of the To the Point podcast. I'm Rachael Lyon, here with my co-host, Jon Knepher. Jon, can you tell all of our friends out there, where in the world are you today? You're clearly not at home.

      Jonathan Knepher:
      Yeah, I'm out in the field. I'm at Desert Hot Springs, here to have a nice weekend away.

      Rachael Lyon:
      Nice. A nice long weekend. For our friends at home, we're recording on the Friday before Labor Day, so Jon is getting a jump on the fun weekend ahead. Good for you.

      Jonathan Knepher:
      Absolutely.

      Rachael Lyon:
      All right, so without further ado, because we're about to have an amazing conversation with one of my most favorite people, Dr. Margaret Cunningham. She's the Technical Director for Security and AI Strategy at Darktrace, where she advises on AI security strategy, innovation, data security, and risk governance. She's a recognized expert in human-centered security and behavioral analytics, and she's spoken at major industry conferences such as RSA and InfoSec. Her insights have been featured in many, many cybersecurity and global business publications, such as The New York Times, The Wall Street Journal, BBC, CyberWire, and Dark Reading, just to name a few. She holds a PhD in Applied Experimental Psychology and has been awarded multiple patents on human-centric risk modeling, security persona development, and behavior-based threat detection. Wow, Margaret, what a busy bee. Welcome.

      Margaret Cunningham:
      Thanks. And I'm really happy to be here today. I have to laugh. I don't know how I ended up doing all of those things, but I can't see myself stopping anytime soon either.

      Rachael Lyon:
      Thank goodness, because this is fun. You work on fun, fascinating, complex things that are so important for cybersecurity right now and in the future. Right. It's at the crux of everything we do.

      Margaret Cunningham:
      Yeah. I have to say, I've been pretty much obsessed with humans for a very long time. And given how everything has changed over the past 20 years, most things that have to do with people also have to do with technology. So my general focus on understanding the relationship between people and technology has kind of pulled me through a lot of different avenues and landed me squarely in cybersecurity. And I've had really the privilege of working with a lot of teams who are very creative, who want to dig in on people plus technology, machine learning, AI. And right now, what I think is the super hotspot is people working with AI, and what's happening both on the offensive side as well as the defensive side. So I feel like I'm entering the final cage with the bigger bosses, and they just keep growing and growing. But all of a sudden, everybody wants to understand the human component.

      Margaret Cunningham:
      And it's been kind of a thrill for me professionally to have so many more rich conversations on this topic.

      Rachael Lyon:
      Very exciting time, I think. Where do we even start, Jon? Where are we going to start today's conversation?

       

      [03:39] Voice Cloning and Fraud Risks

      Jonathan Knepher:
      I want to start with something I experienced recently. So a friend of mine who normally calls me on FaceTime, I get a phone call from him instead, and it looks like a normal phone call. I'm like, that's a little weird. And I answer, and, you know, the call's a little off, but it definitely sounds like him. But he's asking me to do some weird things at work. And this was absolutely a voice-cloning-over-the-phone situation with caller ID spoofing. Like, it was good.

      Jonathan Knepher:
      Where are you seeing, you know, risks to enterprises on not only this, but other AI, deepfakes, and similar things? And what do we really need to do to protect ourselves? Because this, I mean, it really freaked me out when it happened. It's like, what can I trust?

      Margaret Cunningham:
      Yeah, trust nothing.

      Jonathan Knepher:
      No, exactly.

      Margaret Cunningham:
      So first I would say, you probably get a lot of calls asking you to do weird things professionally, because I've known you for a long time, Jon, so that wasn't really the flag. But it is very, very easy to spoof a phone number. It's very easy to make something look like it's coming from mom or dad or your boss. And ultimately, anyone who has their voice publicly available, from meeting recordings, YouTube, speeches, is fair game. A lot of the services that can create custom voices can also learn voices based on audio samples. Yes, it's a bit of a time commitment. No, it's not usually free to get a great one. But it's certainly not hard to do.

      Margaret Cunningham:
      And you have to have basically zero technical skills. So we're seeing a lot of this, because the payoff can be huge. We had an incident where one of our C-suite members had their voice cloned, and it was used to contact an employee who had recently left. They were looking to get access, looking for that person to do some things. Luckily, it was someone who realized that it wasn't a good idea to do those activities, and they flagged it. But there are so many instances where the emotional pressure, the circumstances around the call, or simply being distracted or exhausted makes it much harder to be a reliable barrier between sophisticated spoofing and deepfakes and the end result of losing access or fraud or other types of compromises.

      Rachael Lyon:
      So back in the day, and when I say back in the day, that's 20 years or longer, I don't know, that was last year. It's like dog years lately. But remember movies where 'my voice is my passport' and, you know, that was the ultimate authentication thing? And now with AI, right, voice cloning, deepfakes, all of these things, how do you verify what's real and what's not? You know, I mean, we've got all these multifactor authentication techniques. I mean, is anything working, or do we still have a lot of work to do to get to, you know, a very clear, black-and-white yes or no of what's real, what's not real?

      Margaret Cunningham:
      I think a lot of things do work. I think that if you are consistently applying security controls like MFA, great, wonderful. Don't stop doing that, and don't lose faith in those types of protections, because they can be very, very meaningful in terms of how quickly something can spread, or just some friction to what might be a very easy way in. That said, nothing's perfect. And we've known that for a long time. Even if you feel like you've got, like, the most mature security posture, everything's great. People are like water.

      Margaret Cunningham:
      We are going to find our way. We're going to find the crack. The attackers know this, we know it. I make mistakes, like for 80% of my day.

      Rachael Lyon:
      Yeah.

      Margaret Cunningham:
      So there's no infallible truth. There's no perfect way of dealing with identity. And ultimately, if you make these really high-friction, burdensome workflows, some people say, forget it, and then you have an even bigger problem. So there are trade-offs on ease of use and different types of redundancies and protections that really should be considered. Because if it's so hard or confusing that people aren't doing it, you don't have a chance anyway.
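
As an aside for readers who want the mechanics: the one-time codes behind the MFA discussed here are typically generated with TOTP (RFC 6238), where client and server derive a short-lived code from a shared secret and the clock. Below is a minimal sketch in plain Python; the function names and the `drift` tolerance are illustrative choices, not any particular vendor's implementation.

```python
# Minimal TOTP (RFC 6238) sketch: both sides derive a short-lived code
# from a shared secret and the current time, so a stolen code expires fast.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: float | None = None, digits: int = 6, step: int = 30) -> str:
    """Derive the code for the time window containing `at` (default: now)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, drift: int = 1) -> bool:
    """Accept codes from adjacent time windows to tolerate clock skew."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-drift, drift + 1)
    )
```

The usability trade-off described above shows up directly in a parameter like `drift`: widening it tolerates more clock skew and user delay, but leaves an intercepted code valid longer.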

      Rachael Lyon:
      But what if we're lazy, Margaret? And we are, some of us. As you know, I love Face ID. I mean, that is my, I love it. And I don't want to have to get another device and do a code and do the thing. But apparently that's not, you know, that's not even safe, and I'm really, really bummed. And, you know, what's it going to be? Do we need, like, little things in our blood, you know, like in, what was it, John Wick or whatever those movies are nowadays, where you have that layer of authentication? But, you know, for us lazy people out there, are we ever going to get to just a one-and-done type security measure? Is that a dream? Daydreaming?

      Margaret Cunningham:
      You know, I have a really core urge to help people understand privacy.

      Rachael Lyon:
      Yeah, there's that. Yes.

      Margaret Cunningham:
      And I have this sense of like, oh, wouldn't it be so wonderful if we could all just be magically like chipped in some way. But then. Right.

      Jonathan Knepher:
      But anything you do can end up just being copied or stolen.

      Rachael Lyon:
      Right. Well, I read something, too, about China. They are doing kind of brain-computer interface work now, which, I don't know what that means or where it would lead to, but it is a fascinating topic to that point. And the whole privacy issue as well, right?

      Margaret Cunningham:
      Yeah, I mean, to me it's a little bit Black Mirror. I don't know if anybody watched "Common People." Like, I really don't want to be in a subscription plan, like, running ads by accident.

      Rachael Lyon:
      And it's.

      Margaret Cunningham:
      It actually was such an amazing episode. If you haven't watched it, like, go find it, and then, like, try and fall asleep after. Good luck. But the reality is, there really are a lot of imperfect ways of doing things. And just like we wouldn't, you know, bring some chainsaw to slice a piece of bread in my kitchen, we need to think about the types of tools that we put in place and really deeply understand the use case for them, as well as some of the follow-on risk. So I love behavioral analytics. I think it's extremely powerful.

       

      [10:46] The Privacy Paradox and Data Collection Dilemma

      Margaret Cunningham:
      I love understanding what's normal, what's not. And this actually comes with quite a lot of responsibility on how you capture the data needed to do that, as well as the responsibility of flagging something in a way that reflects the severity or the impact of what's happening. I think a lot about that in terms of what we're doing with technology, the types of layers that we put on people for security purposes, and some of the follow-on risk factors: how much data are we collecting about people, what are we doing with that data, who has access to it, how are the models trained if we're using AI, and ultimately, are we stacking up technology because we can, or is it really driving an outcome that's meaningful, that's worth some of those trade-offs on the data collection? A lot there, but just one of those things that I think about in my spare time.

      Jonathan Knepher:
      Do you think we've swung too far yet on that data collection, or do you think we're still reaping benefits? Because you read about all sorts of data breaches of this data, and now all sorts of entities are collecting all sorts of new types of data that never would have been allowed before. Like, just think of it: I go to the mall and they capture my license plate, and I'm driving home and they're capturing my license plate at every stoplight along the way. It's like, that's a lot of data.

      Margaret Cunningham:
      Yeah. I don't always believe more is better in that case. And I personally think that sometimes we're hoping so desperately to have concrete meaning from data that we figure if we can just get more and more, we'll be able to make machines understand like people understand. And that's funny to me because I got to tell you, I misunderstand a lot of stuff and I make some fairly questionable decisions just like every other person. And the way that we shape attention mechanisms, the way that we learn from the past, what we think might be important in the future, is all built through filters, both for people and for AI. So again, like, I'm very much a build with purpose person versus get everything you possibly can and then make sense of it. Because I think there's a lot of danger in that type of fishing where you're fishing for meaning without being conscientious on the front end.

      Rachael Lyon:
      So, did you have a question, Jon? Because, you know, I could meander down a whole other pathway.

      Jonathan Knepher:
      No, go for it. Go for it. I was kind of thinking about the next phase here, so go for it.

      Rachael Lyon:
      What I'm finding interesting, and you were talking a little bit about this before we got started, Margaret: I love to see the different business reactions, right, as companies are facing these threats. And, you know, you had mentioned Coinbase, which I'd love to talk a little bit more about. And I know that your Darktrace team did a little bit on Scattered Spider, which I'd like to also dig into. But I think it's fascinating, right, when you start looking at how attackers are approaching this. And I think one of the things that you had said in a previous podcast was, we don't take enough time sometimes to recognize what's right, and when people are making the right decisions or doing the right things.

      Rachael Lyon:
      And I thought Coinbase was a really great example of that. If you want to share a little bit more about that with our listeners.

      Margaret Cunningham:
      Yeah, sure. I think I want to say this happened a few months ago, I think, like May.

      Rachael Lyon:
      Ish timeframe. April.

      Margaret Cunningham:
      Yes. And one of the contractors who had access to customer information, actually, I think it was a few contractors, were bribed by an external attacker to provide that data to them. And Coinbase was offered the chance to get that data back for, I think, $20 million. And instead of doing that, they offered a $20 million bounty on the hackers, and I believe did their best to make their customers whole, and were very communicative about that situation. There are a lot of things at play there. One of the risk factors for insider threat is financial distress, and sometimes we have outsourced people and jobs in a way that may not support, you know, living wages, and I don't know that that's the case there. But those factors do come into play on who's susceptible to those types of outreach from threat actors. But not engaging with the threat actor, not paying them out, and actively pursuing them as criminals, instead of, you know, kind of falling to the wayside and saying, oh gosh, let's just fix this, let me give them money...

      Margaret Cunningham:
      I think that is a great approach, because, you know, even with things like ransomware, the more you pay, occasionally it turns you into a larger and larger target, even by industry. So if people are finding that healthcare always pays out ransom, guess what, healthcare is going to be hot. So again, many layers of human component and a lot of long-game strategy on how to respond to these types of threats.

      Jonathan Knepher:
      How does transparency fit into that? It sounds like the Coinbase incident was fairly transparent. I think we all fear that a lot of this goes on that we don't even know about.

      Margaret Cunningham:
      Isn't that fun?

      Jonathan Knepher:
      Yeah, exactly. Do you think it's helpful that there was more transparency around this situation with them putting out that bounty? Is that helpful for the good guys or does it not make much of a difference?

      Margaret Cunningham:
      I think it's a great example. I think it's a wonderful example. But if you look around at other industries that are considered high risk, and at critical infrastructure, things like that, we have different types of expectations. Aviation: completely different types of reporting rules, transparency, visibility on near misses as well as crashes of all types. Nuclear: totally different. We really have different sets of expectations. Other industries like healthcare try to create near-miss databases or hazard-tracking systems. And in cybersecurity, we don't do that.

      Margaret Cunningham:
      We don't love sharing data, we don't love collecting it, and we find that we end up paying a lot for cyber insurance, or all these different things where that weakness or vulnerability isn't a shared learning, it's a dirty secret. Yeah, I was actually trying to do some weird research, I know you're shocked, and I was digging through the RISI database, which is, like, a critical infrastructure incident database, and it's gone. Like, nobody maintained it.

      Rachael Lyon:
      Right.

      Margaret Cunningham:
      So there are all these pockets, like ISACs that share data, or communities that are trying to do this, but the incentive to participate is limited. And so the consistency, the quality of the data, and the access to it are also limited.

      Jonathan Knepher:
      Well, and is it decreasing? Right, like, if that data is no longer available, are you seeing this contraction in transparency rather than expansion?

      Margaret Cunningham:
      I feel like it's getting worse, but that's just me kind of going on, like, you know, finger in the wind. But ultimately, I think there's been a lot of fear in being transparent, and we don't, as an industry, have that type of psychological trust in sharing, or mutual benefit for sharing, which is...

      Rachael Lyon:
      Kind of seems counterintuitive to me, Margaret, because if you're sharing the information, we can all, like, right, all boats rise in that situation. I don't know, like, what's it going to take to turn the tide of secrecy? You know, I don't want to show you my cards, but at the end of the day, this could actually help millions of people if I just open the kimono a little bit.

      Margaret Cunningham:
      I do think that there are a lot of really cool open source projects out there, and I don't know enough about them to, like, put some on blast and be like, oh, these are the most amazing things. But there are many people in the security community who obsessively collaborate and share. I think it's more that...

      Rachael Lyon:
      Is it more on the down low? Sorry to interrupt you, but it seems like it's kind of like these are people I know and trust, so we're going to kind of do it behind the scenes. Not in a more open fashion though, right?

      Margaret Cunningham:
      Yeah, I think it's more that, like, for private industry, so anyone who doesn't have reporting requirements, or anyone who's not, like, a publicly traded company where they really have to, you know, be upfront, sometimes there is that sort of collective, like, some bad stuff happened, let's keep it to ourselves. And that part of the data picture, the part that doesn't get touched by open source, that doesn't really get touched by our active, engaged security community, is where, like, we have some blinders on. But, you know, occasionally we do get a little peek.

       

      [20:28] Social Engineering and Scattered Spider: Human Hacking Goes Mainstream

      Rachael Lyon:
      So could we dive a little bit more into, like, social engineering, though? Because AI and social engineering, this is, like, my favorite topic ever, and I want to explore all facets of it. But we can start with Scattered Spider, because I know that that's a recent thing and you guys have done some research into that. I guess, for the benefit of our listeners, do you want to share a little bit more about this kind of ransomware-as-a-service? And I believe their attack vectors were, like, voice calls and SMS messages, Telegram, things like that.

      Margaret Cunningham:
      They have a lot of fun with.

      Rachael Lyon:
      What they're doing.

      Margaret Cunningham:
      And I will say, they bring youthful energy. If you know about Scattered Spider, they tend to be very young. They're, like, teenagers, 20-somethings, many English-speaking, which in many cases is advantageous. But I will also have to shout out our threat intel team, who does the very hard work on all of this. They've written some really cool blogs on the topic. But ultimately, they go straight for it with social engineering. They have very coordinated attacks. They tend to work, like, sector by sector. So there were some fun things with casinos. They've hit insurance, they've hit airlines. And it tends to be almost like, you see it in one space and it sort of escalates.

      Margaret Cunningham:
      They mess around with MFA fatigue. So just.

      Rachael Lyon:
      And that's a real thing? Yes. Click, click.

      Margaret Cunningham:
      Yeah, fine. Like fine. I'm on a spot.

      Rachael Lyon:
      Stop already. Yeah, exactly. Yes.

      Margaret Cunningham:
      And they go through help desks, and they vish, and they chat with people, and they use a lot of different strategies that allow them to gain access and then use tooling that exists within an infrastructure. So we all think, you know, access control and identity is wonderful. But when you almost have someone masquerading as a user who is supposed to be there, it becomes much more challenging to understand that something has gone awry. And this crew tends to move very, very quickly, and they use flexible approaches that adapt, so they're not using the same tricks every time, which makes it increasingly difficult to catch, especially if you're not using a more sophisticated behavioral analytics approach. So it's been really tricky for people, and very devastating in terms of the damage done to companies.
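
Since MFA fatigue and help-desk vishing both leave traces in authentication logs, one way to picture a first-line defense is a simple rate heuristic: a burst of push prompts the user did not initiate is itself a signal. The sketch below is hypothetical; the event schema and thresholds are invented for illustration, and a production detection would weigh far more context.

```python
# Hypothetical sketch: flag possible MFA-fatigue abuse by counting push
# prompts per user in a sliding window. The event schema and thresholds
# are illustrative assumptions, not any vendor's actual detection logic.
from collections import defaultdict, deque
from datetime import timedelta

WINDOW = timedelta(minutes=10)  # look-back window (illustrative)
MAX_PROMPTS = 5                 # prompts per window before alerting (illustrative)

def detect_mfa_fatigue(events):
    """events: iterable of (timestamp: datetime, user: str, action: str),
    where action is e.g. 'push_sent', 'push_denied', or 'push_approved'."""
    recent = defaultdict(deque)  # user -> timestamps of recent push prompts
    alerts = []
    for ts, user, action in sorted(events):
        if action != "push_sent":
            continue
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > WINDOW:  # expire prompts outside the window
            q.popleft()
        if len(q) > MAX_PROMPTS:         # a burst of prompts = possible fatigue attack
            alerts.append((user, ts, len(q)))
    return alerts
```

Pairing an alert like this with an automatic hold on further prompts removes the attacker's ability to simply wear the user down until they tap "approve."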

      Rachael Lyon:
      And this is a bit of an evolution, right? I mean, generally there's kind of, like, this spray-and-pray approach, right, for a lot of attackers: we're just going to do the same thing and throw it out to the world and see who we can get. And I think we saw that, right, with Colonial Pipeline, if I'm digging in the archives there. But if they're adapting and evolving, and it's, like, was it polymorphic or whatever, you know, that kind of weird changing thing, like, how do you defend against that?

      Margaret Cunningham:
      Well, we're probably going to win some and lose some. I would say, just like everything, there's not a perfect way. But this is not something that we can address by looking for threat signatures and then pushing some model that's going to find it. Because ultimately, it's going to be some type of behavior that they know is going to fly under the radar, and it's not going to look like some signature that you know is malware. It's just not going to look like that. So they're navigating quickly through these environments and covering their tracks most of the time. And they are using kits, and they are being coordinated, and they feel very empowered. They have an entire community of, like, friends who are working together and coming up with fun ways to do this better.

      Margaret Cunningham:
      And I know, right? I'm sitting here, I'm like, sounds like so much fun. Don't worry, I'm not, I'm not journeying.

      Jonathan Knepher:
      Like a, to the dark side.

      Margaret Cunningham:
      I'm like, don't worry, I'm not going to change sides. But as someone who, even in the early days before we had the same type of compute available, was always trying to find different ways of understanding anomalous behavior, it's been really difficult in security, because there are so many things going on that if you pay attention to every small change, you can't do it physically as a human being. And so I do think that we're finally pushing towards better types of behavioral anomaly detection that can be very, very sensitive. So something like a little change in behavior that might reflect a threat actor gaining control of an account, and understanding more quickly what a unique connection or different type of access means for an account, based on a lot of other variables and criteria. And that context used to be very difficult to handle, from, you know, a scalability-of-software perspective or a compute perspective. And now, because we've come up with a lot of different ways of optimizing, and new types of hardware, we are able to pay attention to so much more context around these incidents that I feel like we have a chance. But I don't think it's going to work if people have a straight-up rules-based, you know, this-is-bad, this-is-good perspective on identifying threats.
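
To make that concrete: the simplest form of behavioral anomaly detection is a per-user baseline, where each new observation is scored against that user's own history rather than a global rule. The sketch below uses running z-scores via Welford's algorithm; the feature names and threshold are invented for illustration, and are far cruder than the context-aware analytics described above.

```python
# Illustrative sketch of per-user behavioral baselining: score each new
# observation by how far it deviates from that user's own history.
# Feature names, threshold, and the z-score approach are assumptions for
# illustration, not a description of any production system.
import math
from collections import defaultdict

class UserBaseline:
    """Running mean/variance per feature (Welford's algorithm)."""
    def __init__(self):
        self.n = 0
        self.mean = defaultdict(float)
        self.m2 = defaultdict(float)

    def update(self, features):
        self.n += 1
        for k, x in features.items():
            d = x - self.mean[k]
            self.mean[k] += d / self.n
            self.m2[k] += d * (x - self.mean[k])

    def score(self, features):
        # Max absolute z-score across features; higher = more anomalous.
        if self.n < 2:
            return 0.0
        zs = []
        for k, x in features.items():
            var = self.m2[k] / (self.n - 1)
            zs.append(abs(x - self.mean[k]) / math.sqrt(var) if var else 0.0)
        return max(zs)

baselines = defaultdict(UserBaseline)

def observe(user, features, threshold=4.0):
    """Score first, then fold the observation into the baseline."""
    b = baselines[user]
    flagged = b.score(features) > threshold
    b.update(features)
    return flagged  # True = unusual for *this* user; route to review

# e.g. observe("some_user", {"login_hour": 3, "mb_downloaded": 900.0})
```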

       

      [26:35] Business Process Review: The Non-Technical Defenses That Matter Most

      Jonathan Knepher:
      What, though, should, like, normal people and normal companies be doing to detect and protect themselves here, right? What kind of misconceptions are there around, like, their own, you know, defenses that they have in place?

      Margaret Cunningham:
      Yeah, you know, a lot of it has to do with processes, which is really not very technical. So understanding how people are interacting with various parts of your technology and systems is very, very important. It is not always common sense who has access to things, or who could touch something that has a high impact if it goes sideways. So understanding those business processes can be important, because it can be worth it to put friction in place, or a double check in place. It might be a human double check. It might be different types of authentication. But if you can deeply understand where those risky processes are, you can start kind of hardening your defensive position where you think it would be a great attack vector for somebody like Scattered Spider or something else. So most of the time we're like, okay, this group has access to this, this group...

      Margaret Cunningham:
      You sort of, like, hash it out. But I think it's time that people take a little bit of a deeper look into some of the processes that they can change. It's very human. It's a little time-consuming, I'll say.
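
The process review described here can be pictured as a small audit script: list the high-impact actions, who can perform them, and whether a second check stands in the way. Everything in the sketch below, the action names, roles, and control labels, is hypothetical; the point is the shape of the exercise, not a real inventory.

```python
# Hypothetical sketch of a lightweight business-process review: cross
# high-impact actions against who can perform them and whether a second
# check exists. The data model is invented for illustration.
HIGH_IMPACT_ACTIONS = {
    "reset_mfa",               # help-desk resets are a known vishing target
    "change_payroll_account",
    "export_customer_data",
}

def review(grants, controls, max_roles=3):
    """grants:   {action: set of roles allowed to perform it}
    controls: {action: set of extra checks, e.g. {'callback_verification',
               'second_approver'}}
    Returns (action, finding) pairs that deserve added friction."""
    findings = []
    for action in sorted(HIGH_IMPACT_ACTIONS):
        roles = grants.get(action, set())
        checks = controls.get(action, set())
        if not checks:
            findings.append((action, f"{len(roles)} role(s) can do this with no second check"))
        elif len(roles) > max_roles:
            findings.append((action, "broad access; consider narrowing or adding approval"))
    return findings

# Example run with made-up data:
grants = {"reset_mfa": {"helpdesk_t1", "helpdesk_t2", "it_admin", "contractor"}}
controls = {"export_customer_data": {"second_approver"}}
print(review(grants, controls))
```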

      Rachael Lyon:
      And we are going to pause our conversation here. That's right, this is part one of our discussion with Dr. Margaret Cunningham, and we'll pick back up for part two next Tuesday. As always, thank you, everyone, for joining us for what is always an insightful conversation with security leaders from around the globe. And be sure to subscribe to To the Point on your favorite podcast platform, and you'll get a fresh episode delivered every single Tuesday. Until next time, everyone, stay secure.

       

      About Our Guest


      Dr. Margaret Cunningham is the Technical Director, Security & AI Strategy at Darktrace, where she advises on AI security strategy, innovation, data security, and risk governance. She provides technical and strategic guidance to ensure enterprise security solutions evolve in response to emerging threats and customer needs. In this role she collaborates closely with security leaders, customers, and industry partners to advance AI-driven security solutions and best practices.

      A recognized expert in human-centered security and behavioral analytics, Dr. Cunningham has spoken at major industry conferences, including RSA and Infosec, and her insights have been featured in leading cybersecurity and business publications such as The New York Times, The Wall Street Journal, BBC, CyberWire, and Dark Reading.

      With deep expertise spanning AI security, risk analytics, and behavioral modeling, Dr. Cunningham is a strong advocate for responsible AI and human-centric security design. Before joining Darktrace, she was the Principal Product Manager for Global Analytics at Forcepoint and Senior Staff Behavioral Engineer at Robinhood.

      Dr. Cunningham holds a PhD in Applied Experimental Psychology and has been awarded multiple patents on human-centric risk modeling, security persona development, and behavior-based threat detection.