
Modern SOC Struggles: AI, Data Fragmentation, and Human Limitations with Monzy Merza - Part I

Podcast

About This Episode

Why does the security operations problem stay unsolved no matter how many tools and frameworks the industry throws at it? In Part 1 of this two-part conversation, Crogl co-founder and CEO Monzy Merza joins Rachael Lyon and Jonathan Knepher to unpack the daily reality inside the SOC: data scattered across SIEMs, SOARs, and data lakes; alert volumes climbing 18 to 28 percent year over year before AI even enters the picture; and analyst expectations so wide that no other profession would tolerate them.

Drawing on his time as a U.S. government researcher, nearly a decade at Splunk, a stint leading security GTM at Databricks, and an intentional return to a hands-on SOC role at a Fortune 100 bank, Monzy explains why "put all your data in one place" is a myth, where AI actually delivers value (and where it just adds friction), and why every action an AI system takes must be transparent and auditable. It's a candid look at the physics of security work and what it takes to capture institutional knowledge in a world where data is permanently distributed.


      [00:00] Welcome, Monzy Merza

      Rachael Lyon:
      Hello, everyone. Welcome to this week's episode of the To the Point podcast. I'm Rachael Lyon, here with my co-host, Jon Knepher. Jon, good morning. Okay, so I have something that might be.

      Rachael Lyon:
      I don't know if you want to talk about it, but I was reading an article about the government looking at the private sector for potentially offensive cybersecurity.

      Jonathan Knepher:
      Yeah, well, that is an interesting take on this.

      Rachael Lyon:
      Yes, yes, it's been a topic I've been fascinated with for years, but also a dicey one.

      Jonathan Knepher:
      Yeah, I think there's a lot of precedent on, like, acquiring exploits and zero-days, but to actually have the private sector on the offensive, I'm not sure I'd want to be the vendor selling that service.

      Rachael Lyon:
      Yeah, it doesn't.

      From the article I read, it didn't sound like a lot of people were urgently signing up to be that vendor either, but I just thought it was interesting in terms of cyber news and that particular frame. But without further ado, let's introduce this week's guest. Please welcome to the podcast Monzy Merza. He's a cybersecurity leader and researcher with deep expertise in security strategy, threat intelligence, and go-to-market execution. He's the co-founder and CEO of Crogl, which builds AI-powered knowledge engines for enterprise security operations centers, or more easily said, SOCs. Previously, Monzy served as VP of Security Go-To-Market at Databricks, where he incubated the company's security business. Prior to that, he spent nearly a decade at Splunk in security research, product strategy, and evangelism, ultimately serving as VP and Head of Security Research. He spearheaded research initiatives adopted by thousands of customers, shaped Splunk's $1 billion-plus security portfolio, and significantly expanded its industry influence.

      Rachael Lyon:
      Wow, what a career. Monzy, welcome.

      Monzy Merza:
      Thank you. Yes, great to be here.

       

      [2:30] The SOC's Three Realities

      Jonathan Knepher:
      Yeah, thank you for being here, Monzy. So in other areas, you've described security operations as a problem the industry keeps trying and failing to solve. What does that actually mean to someone sitting in a SOC right now, or running a SOC today?

      Monzy Merza:
      I mean, I think for them the reality is very clear. One, their data is everywhere. Two, the alerts that they get come from lots and lots of different sensors and lots of different sources. And three, the expectation on the operator is just ridiculous. A security operator today is expected to know what an email header looks like. They're supposed to know how to write a filter for tcpdump. And they're also expected to know what the corporate policy is for a particular kind of utilization of AI services or remote services. So that's a pretty wide band.

      Monzy Merza:
      I use a sort of crude analogy: we don't expect the baker to be a cardiologist, but we expect these crazy things from security analysts, expectations that are so diverse in terms of their work, their tool competency, and their domain expertise.

      Rachael Lyon:
      What I find really interesting about your company, Monzy, is that it's basically a collective of people who've been in the industry, recognized a problem, and said, you know what, we're going to come together to actually solve it. Is that kind of the thinking behind it? Before you actually stood up Crogl, you left a VP role and took an individual contributor job at a Fortune 100 bank to sit inside a SOC. Was that part of your information-gathering strategy before launching Crogl?

      Monzy Merza:
      Yes. It is rooted in humility and rooted in a core attitude of service to the community. I was a researcher in the US government for almost a dozen years, and then I went to Splunk, still doing research work, but very customer-forward, and developing products. And so I lost that day-to-day, hand-to-hand combat of what the work is, the physics of the work. It's one thing to write a blog about it or talk to a couple of people; it's another to do it, to wake up in the morning and work on an alert, or to get stuck because the tool is not working or the data is delayed. There's no better teacher than a team that is struggling through that together. And that was very, very intentional. Even the folks at the bank, when I asked them, hey, I want to come and work at the bank, they said, great, you're an executive at Databricks.

      Monzy Merza:
      How big of a team do you want? And I said, no, no, I don't want a team. I want a keyboard, and I want to work on the problems, because these problems are hard, and I'm just confused as to why we haven't been able to solve them. And I really want to create a company, and that's what I'm trying to do. And they said, great, if that's what you want to do, knock yourself out. We'll give you the opportunity to come and learn, you'll get paid, we'll get the work out of you, and what you do with that work afterwards is your problem. I'm really grateful to the folks at HSBC for giving me that chance, and for letting me see what happens on a day-to-day basis and how complex it really is in the world. And it's not just that things are not simple. I hear these things like, the security tool is going to replace people.

      Monzy Merza:
      You either don't understand security, or you don't understand the work, or you don't understand people. How can you say that? Because it's not trivial. The work will shift, and maybe we'll get into that, but for somebody to come in and make that blanket statement is very naive, because they don't understand the physics of the work.

      Rachael Lyon:
      Right. And how can you? Unless you're actually deep in the trenches, you see the cracks in the system and how those issues are being funneled up. So that's great. I'm a big fan of that as well. Unless you roll up your sleeves and get in the mud, you're never going to know how to move forward.

      Jonathan Knepher:
      So.

      Monzy Merza:
      Yeah.

       

      [07:05] Industry Speak vs. Operational Reality

      Jonathan Knepher:
      Can you talk some more about the volume of data, all the different tools that you have to know, and, as you mentioned earlier, all of the disparate skills we expect everybody to have? What does this actually look like for a SOC analyst, and why is it so difficult?

      Monzy Merza:
      Yeah. So I'll contrast this in two ways, in the sense that one is the security industry speak, and the other is what security operators, SOC analysts, and practitioners are living on a day-to-day basis. The industry voice says, put all your data in one place. You listen to any number of participants, and I'm guilty of that. I worked at a company that says "put all your data in one place," which was Splunk. I worked for another one that said put all your data in one place, which was Databricks. And these are great companies; they have very good products. The reality, coming back to the operator, is that data is not in one place.

      Monzy Merza:
      Every significant organization, we call them high-consequence organizations at Crogl, whether it's an electric utility company, we have customers there, or a big bank, a Fortune 100 bank, we have customers there, their data is in lots of places. They have Splunk, they have Databricks, they have log analytics. One customer that we have has three SIEMs and two SOAR platforms. Same organization, right? Same team. There are reasons for that, but that's the reality of what the security practitioner is working with. So for someone from the industry to then come in and say, oh, here is a really nice, clean framework for how you ought to conduct security operations: well, immediately those frameworks break.

      Monzy Merza:
      And so the security operators are now confronted with a competency challenge. As a security practitioner, I was expected to know how to write a query in Databricks and a query in Splunk. I was expected to remember how long the data lag is in a certain region, when the log data would be created, and that it takes a little bit of time for it to traverse and get somewhere so that it is searchable. And I'm also expected to be the domain expert. I'm expected to know how that specific EDR platform works, what the log type looks like, what a threat would look like, and what my playbook is, the manual, human-written playbook, for my organization to respond to an alert. So there is a lot of complexity. I remember one of the things that used to happen is that when there would be an alert, invariably one of the first messages in Slack or something was, who knows how to work on X, or who remembers where the query is? We've all seen that, right? And it's like, well, Bob's really good at that.

      Monzy Merza:
      Bob's not here today, but Sally might know. And you go to Sally, and Sally's like, well, yeah, I kind of tried this thing, and it sort of worked. Here's the snippet, see if it works for you. And then you kind of iterate over that. That's the reality. And then the other reality is the pressure side of this, which is that the work just keeps increasing. And where is the work increase coming from? I've talked to lots of Fortune 500 organizations, and the numbers vary anywhere from 18% to 28% year-over-year growth in alerts.

      Monzy Merza:
      Those are large numbers. 18 to 28, that's just organic growth. This is before we started to really adopt AI in business applications. And so people say, oh, it's no big deal, we're just going to have some AI. Well, think about this from another perspective. Rewind the clock back. Let's say I take you back to 2010, and people are starting to talk about this thing called the cloud. There is no Amazon GuardDuty at that time. There is no Wiz creating alerts.

      Monzy Merza:
      There is no Orca or Lacework or any of these tools. The Capital One breach hasn't happened yet. And I come to you and I say, you know, there are going to be these things, and one day you're going to get an alert that says "S3 bucket open to the world," and you're going to have to do something about that. And you're like, ah, it's fine, it's cloud, it's the same as on-prem. It's going to be a couple more alerts; we're going to be fine.

      Monzy Merza:
      But fast forward to today. We finally realized it was a net new terrain. AI is the same way. It's a net new terrain. Businesses are starting to use AI tools and technologies, and they're already starting to see things in terms of transitive trust issues. There's a whole shadow IT, or shadow AI, that's happening in organizations. So there's just a whole lot of this happening.

      Monzy Merza:
      Right? And so 28%, that's a conservative number. That's kind of a naive number. I think the number of alerts is going to grow by an order of magnitude. And if security teams are already pressured to manage and deal with what they have, there's just no way to scale this out. And this is why we get into these discussions about using AI to help security teams, so that they can keep up and get those tools working. But you still need foundational principles, because even when you do that, you have to make certain assumptions. You have to make the assumption that the data will be everywhere. You have to make the assumption that investigations will require different techniques, and that you can't have static, brittle playbooks that people have to execute on. You have to work under the assumption that the work has to be collaborative, in the sense that when Bob does the work, and when Sally does the work, and then when Alice comes along, Alice is going to benefit from the work that Bob and Sally have done without making the call.

      Monzy Merza:
      Right. Without typing something in Slack. So that's the way we are going to get ahead. And that's been Crogl's approach: how do we capture the best competencies from the team? How do we work in a world where data is distributed and not normalized across multiple platforms, a world where every time someone does security work, they get credit for it, and their work is immediately informing and helping a colleague's work and the machine's work simultaneously?

       

      [13:57] Where AI Actually Helps: Boring Work and Tools That Get in the Way

      Rachael Lyon:
      Digging a little bit more into the AI element, and you kind of touched on this a little bit: I think some have the view that, oh, it's a silver bullet that's going to make things easier, and I can focus on other strategic priorities. But you're also seeing a lot of commentary come out that it actually creates a lot more work. There's, like, was it a brain swap or whatever they call it? It's a really fascinating time for AI in terms of challenges and opportunities. How do you see that impacting security operations teams right now, and how do you see that evolving in the future?

      Monzy Merza:
      I'll just use the example from a Crogl customer rather than pontificating from my perspective. What customers are starting to recognize with Crogl is that there are certain types of tasks that nobody really wants to do. I think we're all done with the 17,000 user-reported phishing emails; no one really wants to work on that one, but it needs to be worked on. There are usages like that, and it has to work for the organization within that data-fragmented world and within that non-normalized world. We just have to work that problem. So there is a series of problems like that that can simply be taken off the table, because they're well understood, and the human can inspect the AI's work and say, hey, I agree, disagree, or modify some things, and then move forward. So that's how customers are using Crogl.

      Monzy Merza:
      The important value that's needed there, though, from an organizational perspective, is that the work has to be transparent and auditable. So the tool has to have that capability. On the one side, there is value here; on the other side, there's a requirement on the AI builder, in this case Crogl, to have a very auditable, very transparent capability, and Crogl does this. That's critical, so that work can be offloaded. There is a second set of work where the human has a bunch of ideas, has intuition on what the work should be, but the tool is in the way. And I'll repeat: the tools are in the way. We found, as practitioners, the tools are oftentimes in the way. It's like, why does this thing not just do that? Why do I have to type this out over and over again? Why do I have to click on three buttons to get somewhere? The tool is just in the way.

      Monzy Merza:
      And so the concrete example for a Crogl customer, which is why they're using Crogl, so that the other tools don't get in the way, is the example of a CISA advisory. A new CISA advisory comes out, Volt Typhoon, twenty-some-odd pages, a really well-written document with a lot of detail. Well, it takes a long time to extract that information, figure out where it applies to your organization, write the appropriate queries, execute them, reconcile all of this stuff, put it back together in a report, and then give it to somebody who's been asking for it now for a week and a half. That's what happens, right? For the Crogl customer, that task now takes a few minutes, because they don't have to remember the schemas, they don't have to remember the query language, they don't even have to remember which system the email data is sitting in. But they know that this is an advisory, they have to work the advisory, and they understand the core pieces. They can look at the work that Crogl did, map these pieces together, and do it in a much faster way, because they have an idea of how to do it. So it is that shift of accelerating the human: on the one side, take the boring stuff away, because nobody really wants to do it anyway; on the other side, enable security teams to exercise their intuition as best as they want by removing this competency barrier, which is more often than not tool-inflicted. It is not the user's problem.

      Monzy Merza:
      You have to do a certain thing on the tool because the tool is designed that way. So customers are eliminating both of those barriers, the mundane and the tool barrier, and accelerating the analyst so that they can do things as best as they see fit, inspect the work, and exercise their intuition. We had a customer who used Crogl for a fraud detection use case. We were so excited. We were like, oh great. And then it was like, wait, we didn't build a fraud detection product.

      Monzy Merza:
      No, no. So the person on the team, let's call him Bob, had this idea that maybe Crogl could be utilized for detecting fraud. He was able to hook up LinkedIn, some Zoom data, and some information from documents, in this case resumes, and was able to figure out that a particular applicant was actually a bot trying to infiltrate the organization, a fake-employee use case, and a very high-value one. So that's where we want to get to. That's a real business value use case that Crogl customers are starting to get value from.

       

      [19:33] The Data Lake Myth and the Case for Auditable AI

      Jonathan Knepher:
      So, a couple of minutes ago, you talked about the desire to collect all the information in one place, and about security tools that push you to normalize everything into a common schema and a common repository. Talk some more about the dangers of that, what actually happens to customers when they do that, and whether or not they should.

      Monzy Merza:
      Yeah. So I think the desire is based on a false assumption, an illogical assumption. The idea is: put all your data in one place. And again, I was the guy who was saying that for a long time, and I just want to say for the record, I was wrong. Because what we told customers was, hey, look, you want to do sophisticated investigations, you want to accelerate your team.

      Monzy Merza:
      Just put all your data in one place, and everything's going to be fine. So what's the false assumption there? This whole thing is predicated on two things. One, a single system is capable of absorbing all kinds of data. Two, when the data gets there, it will be usable by any number of people, and usable in an easy enough way that a person with any degree of competency and domain knowledge will be able to utilize it. That is the underlying assumption. The reality is that organizations get on a never-ending treadmill. Once you get past the first sort of demo use case in a data lake, you get on a treadmill of data hygiene, data cleanliness, data transport, data reduction, and dealing with how fast data gets from the place where it's submitted to the place where it's queryable. Training of users who have to memorize those schemas and those query languages, whether it's SQL or KQL or SPL or Spark SQL or whatever it might be, and remember what to use when, under what circumstances. So this is all friction, friction, friction.

      Monzy Merza:
      And from the leader's perspective it is delays, delays, delays, and from another leader's perspective it is even worse: money, money, money burned. Right? So that's the reality of what's happening in these organizations; they get on this constant thing. I've talked to some organizations where it's like, yeah, we are using three SIEMs, because one is our real production SIEM that we're using today, one is the one that we were trying to migrate off of but couldn't really get out of completely, and the third one is the one that we're trying to move to because we're unsatisfied with our prior two. So now things are everywhere, and that's just at the same layer, because underneath the SIEM layer there are all these data sources, all these different storages, whether cloud storage or on-prem storage, and lots of different data lakes, data stores, lakehouses, and everything else. So that's what's happening. The dream never really got achieved, and we kept saying, you know, just put it in one place and it's going to be fine. But there was another assumption, right, which was that if you do put the data in one place, then the data would be normalized.

      Monzy Merza:
      Well, there is such a variety of data sets, and the data producers change their formats because they're making the data richer. The data formats themselves change, and so you never get off that treadmill. Things are always marginally useful, and you invariably just end up going to lots and lots of places. So if that's the case, and this is why, the first problem we worked on at Crogl was filing the patent for the ability to analyze data no matter where it lives and no matter what the schema is. And we have that patent now. That was critical for us, because we didn't just say, be okay in a world that has multiple data stores; we said, assume a world where data will be distributed across multiple lakes and multiple schemas. That's the world, and we have to thrive in that world.

      Rachael Lyon:
      So I think that's a really fascinating point, because you built Crogl so that every action the system takes is fully inspectable and auditable. And today, you can't escape the agentic AI governance and guardrails conversation. So as a design priority, what are organizations missing if they don't take that approach?

      Monzy Merza:
      I think there are a couple of natural things, right? That's why I like to focus on the physics of the work. If they don't take that approach: one, you can't build confidence in your security operation, because you don't know the work that got done. Even if the AI system closed a bunch of tickets, for example, if you don't have a way to go back and inspect it, then you can't build confidence. So that's the defensive posture for that sort of thing. The offensive posture is to say, well, that's my data. There was work that was done, there were actions that were taken, there was human intelligence applied to that data set. And that's my data.

      Monzy Merza:
      I want the ability to learn from it. I want the ability to improve my operations. I want the ability to make fundamental changes in my risk posture as a consequence of analysis. So if that data is not auditable, not transparent, and not easily available to you, well, you can't have those additional values from this data set. So those are the two broad base principles that get missed. And as a consequence, you're also losing the institutional knowledge, right? People often talk about, well, if Bob leaves or Sally leaves, we lost institutional knowledge. In the case of the AI, it's the same thing, in the sense that if you don't know what's in there and what was done, then it's always leaving. It's the absolute worst-case scenario: you're paying Bob and Sally, and the institutional knowledge is leaving. That's what's happening when your AI system is not transparent or not auditable, because you can't get derivative value from utilizing that system.

      Rachael Lyon:
      And I hate to do this, everyone, but we're going to pause today's discussion right here and pick back up next week. Thanks for joining us this week. And as always, don't forget to smash that subscribe button, and we'll see you next week. Until next time, stay safe.

       

      About Our Guest


      Monzy Merza, Co-founder and CEO of Crogl 

      Monzy Merza is co-founder and CEO of Crogl, the only autonomous knowledge engine for security operations that investigates every alert and continuously learns the environment with speed, consistency, and depth. 

      Previously, Monzy held senior leadership roles at hypergrowth enterprise data companies. He led Cybersecurity Go-To-Market at Databricks, where he incubated and scaled the security business, and oversaw Security Research at Splunk, helping shape strategy across the company’s billion-dollar security portfolio. 

      Earlier in his career, Monzy served as an applied cybersecurity researcher at U.S. Department of Energy weapons laboratories, developing advanced offensive and defensive security capabilities. He has since advised Fortune 500 companies and government organizations on strategic security initiatives.