
Why Human Judgment Still Wins in the AI-Driven SOC with Monzy Merza - Part 2
About This Episode
Crogl Co-Founder and CEO Monzy Merza returns for part two of his conversation with Rachael Lyon and Jonathan Knepher, this time making the case: AI must produce some level of error to count as intelligence at all, and the real bottleneck in the modern SOC is no longer data generation but human selection in a world of AI-driven abundance.
That same abundance is reshaping the attacker side, where agentic systems now let a single operator run campaigns that once required teams of specialists. Monzy unpacks what that shift means for defenders, why he believes SOC headcount will grow rather than shrink, and the three questions every CISO needs to answer before integrating AI in their security operations.
[00:00] Building Trust in AI: Why Some Error is Essential
Rachael Lyon:
Welcome to the To the Point Cybersecurity podcast. Each week, join Jonathan Knepher and Rachael Lyon to explore the latest in global cybersecurity news, trending topics, and cyber industry initiatives impacting businesses, governments, and our way of life. Now, let's get to the point.
Jonathan Knepher:
So I hope this isn't too difficult a question, but I have to ask. As we're building in these AI things, we see open source communities now not wanting AI-generated cases because they don't trust them. I think we've all seen cases where we're building dashboards and things with the AI tools and sometimes the data is just made up. How are you getting confidence out of using AI tools when folks in the SOC are arguably making some of the most important decisions? How do you assure that the data they're relying on is valid?
Monzy Merza:
Yeah, I think there are a couple of different things in your question. One is around the reliability or the confidence of the output. Another really points to the volume of, let's say, requirements in the case of tickets or feature requests that are AI-generated — that's a volume problem. And the other is the ability to consume the output from an AI-generated system, which is your dashboard example: you can create umpteen dashboards, but what good is that? So those are three different facets, and I'll start with the one about confidence. The confidence piece goes back to transparency in many ways: if the system is inspectable and auditable and the system's work is transparent, then somebody can go back, inspect that work, and build confidence within bounds around that system. Now, I'm going to say something controversial here: I think we often have this expectation that the AI system has to be perfect.
Monzy Merza:
There is a spectrum between being completely lost and confused and having small errors, and I think we have to come to grips with the fact that there is a distinction there. We can't treat both of them the same. And I would argue — this is the controversial part — that the system must have some error in it, because it's a probabilistic system, a generative system. If you remove all of the error out of it, then it's just a rule-based system, and then the system can't prompt you or go in directions that were previously not assumed. So that error is very critical.
Monzy Merza:
And we work with erroneous entities every day — we have three of them in the room right now. We all make errors, slight errors in judgment. Sometimes we are very forgiving to ourselves and we call these things leaps of faith, or sometimes we are very high-minded and call them intuition, or aha moments. And we have figured out ways to manage that. Without getting too philosophical about it, as a general principle for people who are buying products, I think it's important to acknowledge that yes, it makes errors, but to be open about the kinds of errors you're willing to tolerate, and then to demand and require your AI providers, Crogl included, to share with you under what circumstances those errors occur and how to deal with them. But I believe you should have a system that has some amount of error, otherwise it's not going to have intelligence. Maybe somebody will come in and correct me on that assumption, but at least up until now, I believe that to be correct.
Monzy Merza:
So now we go to the second one, working backwards: our ability to consume the output. There's now so much text being generated that we have a difficult time consuming it — we live in a world of abundance. Whether you look at how many tweets happen every week, or how many blog posts, or how many podcasts, there's just more and more of everything. So the problem goes back to the human selection problem again: we have to choose. Which is another core principle of why I believe AI is not going to be replacing people anytime soon in any significant job function. The job might change, but the choices have to be made by a human being, because we are choosing to do something, and we're choosing to deploy AI capabilities. I think we're a little bit away from AI just running on its own, making other AIs, and printing the robots and doing all that. We might get there someday, but we're a ways away from that.
[05:29] Where Human Decision-Making Sits in the World of AI
Monzy Merza:
And in the meantime, the human being has to make a choice. So our thought process and rationale on what to consume has to change. Initially the barrier used to be: can you make this beautiful Power BI dashboard for the boardroom? You would go collect everything. But now building any number of dashboards, using any tools, is almost instantaneous, right? So the question is: is this really the dashboard you want? The problem is now inverted. And that makes human decision making — our need to have good rationale and to really understand the outcomes we want — even more important. That's how you solve that absorption problem. It's true in security operations centers now as well. Take Crogl, for instance. Crogl can analyze thousands of alerts a day, and then the question is, who's going to take action? Who's going to sit there and look across this whole thing? Even when Crogl gives you a clear recommendation — say, you need to continue adding a certain kind of data, or remove a certain tool from the environment because it's redundant or not useful — somebody still has to evaluate that and say, are we going to do this? And I would argue that Crogl's output is very accurate in those kinds of use cases. Ask it to bake a cake, and you're not going to get a really good answer.
Monzy Merza:
It's going to say, I don't know how to do this. But ask it to investigate a CISA advisory or investigate an alert, and it's going to go and figure out that alert A is related to alert B, that it's related to the same ticket Bob is working on and the same one Alice is working on, that these two people should be talking to each other, and here is the conclusion and here's the report. Now someone has to act on that. It becomes real in the security operations environment very quickly. So that's the second piece: decision making becomes critically important, and as humans we don't get a way out of it.
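To make that correlation idea concrete, here is a minimal, hypothetical sketch of grouping alerts that share a ticket and flagging the analysts who should be comparing notes. The alert fields, ticket IDs, and names are invented for illustration only and are not a description of how Crogl actually implements its investigations.

```python
from collections import defaultdict

# Hypothetical alert records; in practice these would come from a SIEM or ticketing API.
alerts = [
    {"id": "A", "ticket": "TCK-1042", "assignee": "Bob"},
    {"id": "B", "ticket": "TCK-1042", "assignee": "Alice"},
    {"id": "C", "ticket": "TCK-2001", "assignee": "Bob"},
]

def correlate_by_ticket(alerts):
    """Group alerts that reference the same ticket so related work surfaces together."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[alert["ticket"]].append(alert)
    return groups

def analysts_to_connect(groups):
    """For each ticket with more than one assignee, report who should be talking to each other."""
    findings = []
    for ticket, related in groups.items():
        assignees = sorted({a["assignee"] for a in related})
        if len(assignees) > 1:
            alert_ids = ", ".join(a["id"] for a in related)
            findings.append(
                f"Alerts {alert_ids} all relate to {ticket}; "
                f"{' and '.join(assignees)} should be talking to each other."
            )
    return findings

if __name__ == "__main__":
    for finding in analysts_to_connect(correlate_by_ticket(alerts)):
        print(finding)
```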
Monzy Merza:
So for all of us who are critiquing the system, I think we have a great opportunity to also figure out how we're going to deal with really good quality output and how we're going to action it for a good purpose that's mapped to the business. Now, to your volume question, the third one, where open source projects are saying we're not going to take requirements that look AI-generated: to me that problem has to do partly with human generation, but partly with how software is going to get developed as we go forward. We use a lot of AI in the development of Crogl itself, but we also have human review. We have really strong mechanisms for writing the requirements for the product itself — a human writes the requirements — because there are so many features you can create. But to what end? What's the goal? So I think this volume issue is really about the focus of the human who is making the decision: does this thing go into the feature set or not? Does this particular feature get uploaded or downloaded or ignored? Is this even part of the problem for the product? I'll give you a concrete example. In today's terms, it is fairly straightforward for Crogl Incorporated to create a ticketing system, a case management system, and add that as a product feature. We're not going to do that.
Monzy Merza:
Because as a decisioning team, we're not bounded by what can be built. We exist to serve what the customers want. The customer already has a ticketing system; it doesn't do certain things for the customer. What they want Crogl to do is to solve these very difficult data, competency, domain knowledge, and collaboration problems, and to accelerate their use cases. Those are the problems the customer wants us to focus on, and that's what we want to serve. So I empathize with some of these teams that are getting these AI-generated requirements, because now somebody has to sift through them — maybe an AI has to sift through them — but even so, the volume just keeps going up and up.
Monzy Merza:
Right. So I know that was a very long-winded answer, but hopefully you see the three facets there and why we have to divide this thing up.
[10:07] Offensive AI and Cyberspace as the Fourth Terrain
Rachael Lyon:
Absolutely. I do want to bring it back to what I consider a very interesting topic as well, tying into our opening about offensive AI use. As we know, whatever tools we have, the attackers have too, just like the defenders. And in security we talk a lot about shifting left — how do we get ahead of the threats? As we look at the AI landscape, what is your perspective on offensive strategies and their implications?
Monzy Merza:
I forget who said this, but I heard somewhere that only the government has the monopoly on violence. So when it comes to offensive capability, historically the government has had the authority to do that. That's one aspect of it. Once we get past the authority question — that somebody does have the legal authority to engage in offensive capability — the question becomes how you can leverage offensive capability for a defensive purpose. I've lived in that world and been an offensive operator. At a slightly higher level of abstraction, governments hiring private industry to do offensive work is not new. We have a defense industrial base in our country that has existed for a very long time — essentially, you could argue, from the beginning, when the United States was founded, in smaller forms earlier, and then eventually, right around World War I, in a very orchestrated, well-funded, and organized way. And we have players in that space now that make everything from jet fighters to submarines and very sophisticated weapon systems.
Monzy Merza:
So the core base principle in my mind is that it's a question of terrain. In the nuclear weapons world, we used to talk about the triad — the air, land, and sea capability for the United States' nuclear capability. Those are the three terrains where we operate and engage with enemies, or engage in areas where we want to have influence or protect our interests. So that's one way to rationalize the problem: it's a terrain problem. What does this have to do with AI? Well, I believe cyberspace, the digital world, is a terrain. And if that's a terrain, then you want to protect that terrain and ensure our interests are managed in a way that produces good outcomes for us. So for me it's a natural progression of wanting to manage a terrain for good outcomes for our country. That's the way I look at it.
Monzy Merza:
So then the question is, okay, who is most competent in that terrain, so that we can get an accelerant as quickly as possible and things don't get out of control? And there are certain organizations in the AI space, or in the digital world, that have built these tools before, or whose tools can be dual-purposed to be utilized in that space. I don't know if I answered your question, but that's how I think about it.
Rachael Lyon:
It's a tough question to answer, too. I mean, it's a slippery slope, I think, once you start opening these kinds of doors.
Monzy Merza:
Well, I think that's why legal authority is so important. It's one thing to say we should not use a tool for XYZ, whatever it is — whether it's fire or AI or the wheel, anywhere in the spectrum of human development. The challenge with "I'm just going to opt out; we shouldn't use it for this reason" is that we don't control the behaviors of others. This could turn very quickly into a very deep philosophical conversation, but at the end of the day, somebody will do something with it. And if we don't have an understanding of that system, then we are dependent on the one who's wielding the weapon. Sometimes these are not the most comfortable or the nicest conversations to have.
Monzy Merza:
But there's plenty of history to see what happened when somebody chose or did not choose to make use of a certain kind of technology.
Rachael Lyon:
Definitely.
[15:07] Agent vs Agent Activity
Jonathan Knepher:
Yeah. And I think even ignoring the state use, there are still attackers using AI-driven tools and agent-driven tools to attack industry today. Is this going to evolve into an agent-versus-agent battle? And if it does, what do we, and what do all of our listeners, do in a practical sense to defend ourselves? What's the strategy and the mechanism?
Monzy Merza:
Yeah, I think we have a couple of early examples already. The biggest one is the Mexican government attack that happened probably a little over a month ago at this point, where I think somebody stole on the order of 150 million records of Mexican citizens. That particular attack was one user and one agentic system — multiple instances of agents — in a very sophisticated campaign. So we already have this one good example. For the Capital One breach to happen post-cloud took many years — I don't remember the exact number, but it was more than four. Here we are on a horizon of less than a year or two with this. So we're going to get more and more, and the cost of launching a sophisticated campaign is now an order of magnitude lower than it was before. Just think about how you can deploy an agent or agentic system.
Monzy Merza:
You no longer have to be an expert in understanding certain kinds of data. You don't have to know how to disassemble code or how to craft a particular package. You may have some intuition about how the whole thing needs to be structured, but you don't have to have all of those competencies. So the cost is lower, where before you would have to hire all these different experts in different domains to pull something together, hide yourself, and run a long campaign. The cost is way lower and it's more accessible. And you can imagine that when that happens, more bad actors are going to come to the party, because they will want to take advantage of what's happening. Which goes back to our earlier discussion in the conversation, where I said the volume of attacks is going to go up by a lot.
Monzy Merza:
And because your footprint is changing — with the AI tools and the other tools being utilized — that footprint is expanding, and the cost to attack that footprint is really, really cheap. So it's a very simple economic argument, a supply-and-demand kind of situation: it's game on from a threat actor's point of view. Now fast forward to, okay, what do we do on the defender side? Well, I think the defender has to get in the race in a couple of ways. One, we really have to work to understand how these AI systems work — not just the AI security tools, but the AI business tools, the things that the business users are using on a regular basis. How do those systems work? Historically the security community has been pretty good at this. We learned about the Internet, we learned about operating systems, we learned how the cloud works, how mobility works, how high-speed networking works.
Monzy Merza:
So we have to educate ourselves. We now have this net-new challenge, and we don't get to skip that development stage. We have to learn how these tools work and how these technologies behave. The second thing is that, as we learn that, we have to be ready to imagine for ourselves what the threats to the organization would look like. This is also why I believe the volume of threats is going to increase: the footprint is changing. Going back to our cloud example, the kinds of alerts that are going to happen are alerts we haven't seen yet, the kinds of violations that are going to happen are violations we haven't seen yet, and we're going to want compliance policies on them.
Monzy Merza:
So we have to start to imagine that piece right now. And then we come to the third piece, which is the use of AI tools by the defenders themselves. That's a much later stage. Then we can start to see what tools we can use and what the requirements for those tools are. Now that we have this knowledge of what we imagine the threat to be and what we understand the technology to be, we can have some intelligence around what kinds of tools we use and how we utilize them. So in summary, it's a very long-winded way of saying that, at least in this case, there is not anything significantly new here. We go back to people, process, and products. It's a mantra we have all learned over the course of the last two decades in security practice.
[20:13] Why SOC Headcount Will Grow, Not Shrink
Monzy Merza:
But what has changed is the detail — that has changed significantly. And this is also why I believe the number of people required in security operations is going to increase, not decrease. Their job is going to change. Bob will no longer be working on the 17,000 user-reported phishing emails. But we need Bob and we need Alice, and we're going to need more of them, because there is more to do. I hear this argument from a number of people: well, we're firing our tier-one or tier-two staff. Well, that's great.
Monzy Merza:
And I can kind of see the rationale for that. But at the same time, the argument that we're going to replace all the people with AI — those two are mutually exclusive, because you need to shift Bob's work to a different thing. And you're not going to get some net-new person from outside your organization who's going to ramp up very quickly, understand your environment, and be effective just because they have some AI skills. That institutional knowledge still has to be there.
Rachael Lyon:
Agreed. So, was it March? You hosted the first AI SOC Summit, which I imagine produced some really fascinating conversations. What was the most surprising thing you heard from practitioners in the room?
Monzy Merza:
I think the biggest thing we saw was that, despite all the hoo-ha about people being skeptical, there is this big desire among security practitioners to use AI technologies. And in some respects, when you look back at it, of course that makes sense — people are constantly burdened by all the work they can't keep up with. One of the things we did was hold a hackathon where every attendee had access to use Crogl at the AI SOC Summit, even though there were other people there who could arguably be considered competitors to Crogl. We didn't close the door on anybody, and it was completely ungated, because we wanted to see what people are really trying to achieve and what their expectations are.
Monzy Merza:
Because we hear all of this stuff in a bubble or a vacuum, where it's just a vendor pounding their chest about how awesome it is or what the world should look like. So we really just wanted to put a stake in the ground for the community to have access to something, and for the community to have a conversation amongst themselves and with anybody they wanted to have the conversation with. So that was my biggest surprise: there was huge attraction to saying, okay, I'll use it and I'll see what it does. And we had lots of hallway conversations, not just about AI issues or errors or things that won't work. We spent a lot more time in the hallways and in the sessions talking about the opportunity — how it will help defenders get ahead of the current state they're in.
[23:26] Three Questions Every CISO Must Answer
Jonathan Knepher:
So, Monzy, I kind of want to bring all of this full circle. We've talked a lot about everything involved for the CISOs and SOC managers in our audience. What are the key things they need to ask themselves and be prepared for when they go looking to integrate AI into their SOC?
Monzy Merza:
The first question is: do you want to leave the current condition that you're in? It's almost philosophical, but it's a very important question to ask, because I've been in conversations where it almost feels like somebody is questioning whether they should or shouldn't do this. You have to do this. The data is overwhelming already. The volume of attacks, the degree of competency required, the data sprawl in your organization, the growth of the business — all of those things combined are demanding that you change the way you operate today. That's number one. You have to come to that conclusion.
Monzy Merza:
And if for some reason you're not coming to that conclusion, then you can free yourself of the rest of the hyperbole around this discussion. So that's number one. The number two thing you have to choose for yourself is how you want to live in that future state. You have to have a fairly clear picture of what that future state looks like. Is it more people? Is it fewer people? Do you want to manage your own stuff? Do you want to have a SaaS provider do this? I'll say this out loud: if the expectation from an organization is to have a SaaS service deliver this capability, Crogl is not your answer, because Crogl is a customer-managed solution. We very intentionally built this for high-consequence environments — electric utility companies, financial services institutions, the Department of War — very critical, high-consequence environments that prioritize privacy, auditability, transparency, and full control of the system. It's very difficult to build a system like that that can be customer managed and also be very AI-native and very powerful. But we did that.
Monzy Merza:
So that's who we built it for. That's the second thing: you want to be clear in your mind about how you want to do it. Our customers want to do it in a customer-managed, highly secure, highly private way. Not everybody's like that, and it doesn't have to be, but our customers are clear that this is the way they want it. And the third thing is you want to have a clear argument for yourself about how you're going to maintain and manage your teams, and how you're going to grow those teams to get the outcomes that you want. Without people, none of this happens.
Monzy Merza:
You may have a more focused team, or a team that does different functions — it should do different functions, because the tooling has changed, and so you operate differently. I remember, once upon a time — I'm going to show my age — I had a telephone that you had to crank to generate the electricity. Somebody would pick up the phone on the other end, we would say, I need to call Alice, and the PBX operator would literally plug in the line and connect us. Now I just say, hey Siri, call Mom. So it's a different mode of operation.
Monzy Merza:
I'm still making the call, but the mechanism is very different. You need people to make decisions, but the mechanisms will be different, and so you want to think about what those kinds of decisions are. It falls back into those three buckets: what are the outcomes you're looking for, and what is the people, process, product mix you want to take to achieve that? Now, the good news is there are a lot of good tools out there. There's a lot of good capability evolving — not all of it is awesome, but there is a lot of good capability evolving, and you have choices. And this is an important point for organizations that believe they're forward-leaning or are well resourced: this is that organization's opportunity to influence the roadmap for companies that are starting to solve these problems.
Monzy Merza:
We have extremely deep relationships with our customers. They live in our Slack channels. They give us instantaneous feedback on what they want, and we are able to change the roadmap for them. So that's the opportunity for high-consequence organizations: find companies like Crogl in this space, regardless of the area you're focused on, to get the outcomes that you want. The fourth thing — or whatever number we're on now — and this is kind of a controversial one: I've talked to lots of organizations who have told me they're building this in house. They're building their future AI SOC, their future of agentic security, in house.
Monzy Merza:
I think that's a bad idea. And it's not because I'm selling something. It's because even we as sellers don't roll our own cryptographic algorithms, and as businesses we don't write our own operating systems. Yes, some businesses have the capability and are really well resourced to extend products and services. But you want to hold someone accountable. There is a lot of gnarliness. We've been on this journey since 2023.
Monzy Merza:
There is a lot of complexity to this work, and while some team can get to a 60 or 70 percent solution, or you get some inkling — oh look, I was able to type something into a prompt and get an answer, I was even able to stitch a couple of things together — that doesn't mean you can build a robust product that your business can rely on. There might be maybe a dozen teams in the world who can do that sort of thing. But you don't want to do that. It is important to learn, so do it — but do it to learn how these systems work.
Monzy Merza:
And know that that's why you're doing it. Because building a complete system that will pass your regulatory compliance requirements six months or sixteen months from now, that has auditability, that has continuity when Bob wins the lottery and leaves, that keeps up as the space evolves when Bob can't — for those of us who are living in it and inventing things in this space every day, the space just keeps moving faster and faster. So that's a little bit of an editorial, but I've heard a lot of people say that to me, and it's not a negative because I think I'm better than them. It goes back to the focus point. We are focused — and other companies, even our competitors, are focused — on delivering a capability that the institution can rely on. It's bewitching to think you can build it yourself.
Monzy Merza:
Ninety-nine point something percent of organizations in the world can't actually do that. Things move too fast. There's way too much detail here that is not obvious when you sit at a ChatGPT console.
Rachael Lyon:
Yeah, it is a heady task to navigate through AI transformation, particularly right now when it's seemingly changing day by day, and how to stay ahead of that is very daunting, for sure. Well, Monzy, I'm cognizant of time, and I just want to thank you so much for these amazing insights. It's really helpful for our listeners to hear what's going on in the field, in the trenches, and what people are thinking about, because it definitely helps them move forward with their own strategies and AI transformation journeys. So thank you, thank you, thank you.
Monzy Merza:
Thank you for having me on the call.
Rachael Lyon:
And, John?
Jonathan Knepher:
For a fresh episode every week, smash that subscribe button.
Rachael Lyon:
That's right. Every single week, just smash it. So until next time, everyone, stay secure.
About Our Guest

Monzy Merza, Co-founder and CEO of Crogl
Monzy Merza is co-founder and CEO of Crogl, the only autonomous knowledge engine for security operations that investigates every alert and continuously learns the environment with speed, consistency, and depth.
Previously, Monzy held senior leadership roles at hypergrowth enterprise data companies. He led Cybersecurity Go-To-Market at Databricks, where he incubated and scaled the security business, and oversaw Security Research at Splunk, helping shape strategy across the company’s billion-dollar security portfolio.
Earlier in his career, Monzy served as an applied cybersecurity researcher at U.S. Department of Energy weapons laboratories, developing advanced offensive and defensive security capabilities. He has since advised Fortune 500 companies and government organizations on strategic security initiatives.