Insider Threat & Privileged Users - Ep. 115

Elements of insider threat programs, the role privileged users play, and how #Sunburst has affected insider threat programs. With Mike Crouse, Director of Insider Threat Strategies at Forcepoint, and Jared Quance, US Government Insider Threat Program Manager.

Episode Table of Contents

  • [00:45] Insider Threat Strategies and Programs for the Government
  • [09:29] Heightened Awareness for People on Insider Threat Issues
  • [14:14] A Good Alert Means Instantaneous Notification
  • [21:01] Control Human Behavior Properly
  • [27:56] Insider Threat From Compromised Active Directory Exchange Accounts
  • [35:23] The Ability of the Insider Threat Programs
  • About Our Guests

Insider Threat Strategies and Programs for the Government

Carolyn: Today we have Mike Crouse, who's Director of Insider Threat Strategies for Forcepoint. We also have Jared Quance, who's an Insider Threat Program Manager for the government.

Carolyn: We're going to talk about what a successful insider threat program looks like. How the privileged user factors into that. Jared, I want to know about Sunburst, SolarWinds, that massive breach that has so many names right now. Has it affected the way you do things in your insider threat program?

Jared: Has this recent SolarWinds issue impacted my job? No. Have we worked on it? Yes. My lead engineers checked our SolarWinds implementation and determined that we don't have any issues with it. It's affected us in the sense that we've had to do some work. But it hasn't changed the way we do anything at the office, not yet.

Jared: I wouldn't have expected it to at this point. But I expect a few months down the road, it will have some implications. We will have to make some changes to some things. I just can't see what those are yet.

Carolyn: I know that a lot of things happened once the adversary got in. They came in through privileged user accounts, and they created new privileged user accounts. If they had breached your networks, is that something you would have been able to detect? Like the creation of a new privileged user account by a supposedly privileged user?

Jared: It depends on what you mean by "your." There is a network that me and my team run, so the answer would be yes. We would have absolutely detected that instantly.

Far More Privileged Users

Jared: Now, as far as the multiple enterprises that we support where I work, I'm unsure that they would have been able to detect it. I would venture to say yes, but not in a particularly timely fashion.

Carolyn: Why would you be able to detect it and not every other program out there? That seems like a big deal to be able to say that.

Jared: The network that I run only has about 15 users, so that's really easy.

Eric: If you're an enterprise though, with hundreds of thousands of users, it becomes massively more complex because of the scale.

Jared: It does, and you have far more privileged users. If we see a new privileged user pop up on our network, we'll recognize that that name's not even right. That account shouldn't even be there. But even in an organization that has something small like 20,000 users, what do they have?

Jared: Probably, 20 admins and still have a lower likelihood that one name is going to be recognized if there's an alert set up that a new super user or a new privilege user was created.

Eric: I've worked with organizations where, let's take administrators. An administrator is a privileged user, among others, but it's out of control. They've had thousands of administrators in different components of the business.

Eric: Because people needed to administer this server or this system. That just seems like an unfathomable problem to me. What's another two people, two users, when you have 12,000 privileged users?

Michael: They don't control it, by the way. We've talked about this before: even when somebody gets access as a privileged user, it starts when you come into an organization and somebody gives you access.

The Ability to Control Accesses to Limit Insider Threat

Michael: The first thing they do is say, "Okay, what accesses do you need, especially if you're an administrator?" They set you up, and then you access this. Then you change your job or your role, and sometimes organizations don't revoke those accesses.

Jared: They don't rescind that.

Michael: They just keep on adding. Before you know it, I'm an administrator with accesses I don't even need. It's that ability to control what accesses each individual has, being able to bring them back to reality and say, "Hey, you're changing roles. You don't need this access to database one, two, three anymore. Just bring it back."
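The revoke-on-role-change discipline Michael describes can be sketched as a reconciliation step: on a role change, recompute what the new role actually grants and revoke everything left over. The role names and grant sets below are hypothetical:

```python
# Sketch of access reconciliation on role change. Roles and grants are
# hypothetical; a real system would pull these from an IAM store.

ROLE_GRANTS = {
    "dba": {"database-1", "database-2", "database-3"},
    "helpdesk": {"ticketing", "remote-assist"},
}

def reconcile_on_role_change(current_access, new_role):
    """Return (to_revoke, to_grant) so access matches the new role exactly."""
    entitled = ROLE_GRANTS.get(new_role, set())
    # Anything not entitled by the new role gets rescinded, not accumulated.
    return sorted(current_access - entitled), sorted(entitled - current_access)

revoke, grant = reconcile_on_role_change(
    {"database-1", "database-2", "database-3", "ticketing"}, "helpdesk")
# revoke == ["database-1", "database-2", "database-3"]; grant == ["remote-assist"]
```

The point of deriving access from the role definition, rather than adding grants incrementally, is that the "they just keep on adding" failure mode can't occur: the entitled set is recomputed from scratch every time.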

Carolyn: The recent survey that Ponemon did around privileged users, they surveyed government IT shops, insider threat program shops. They found, I'm looking at my stats right now, that a third of the respondents said they did not need privileged user access.

Eric: They didn't need it, but they had it.

Carolyn: It never got revoked.

Eric: From your perspective as an operator in this space, the recommendation would be: create fewer privileged users, keep it tight, and when they don't need privileges, remove them. How would you, from an insider threat perspective, think about that problem? And how do you monitor privileged users as they expand, or don't you see that as much?

Jared: I muted myself there, hoping I'd snag an idea I'd be able to respond with. First off, every job has its own version of the operator. IT guys generally are the operators in the IT space. Most operators that are successful have a shoot-and-move mentality. They don't have a clean-battlefield mentality.

The Risk That Exists With Fake Accounts

Jared: That's why you end up with dirty Active Directories with tremendous amounts of un-recycled IP addresses, defunct workstations, and old accounts. You can probably go into our enterprise and find some fake account called Captain Kirk. It was an actual account on our network at one time.

Eric: Privileged user or just user?

Jared: A regular user, but still, it was admins having fun, just for the hell of it. They wanted to be able to pull it up in the global address list. You can see that there's a risk that exists there with a fake account. So that's a bad idea, even if you're just having fun.

Jared: On privileged users, I would say keep it tight, but I think there is a balance. It's going to be necessary that you always have people with more access than what they presently need. As a manager, you need your people to have as much access as the worst problem you can conceive of facing in the future.

Jared: The worst problem could be that only one of your 50 people is available. Tier 4 is your highest tier. You have a tier 4 problem and only one person's available, the newest, most junior person. Now you need that person to have the ability to solve a tier 4 problem.

Michael: It's that mission convenience versus security. It's that ultimate struggle: "Hey, I've got a mission, I've got security, and who wins?" Remote work is not helping it, and COVID has heightened it. Even with COVID, you don't have Mike Crouse sitting beside Jared.

Heightened Awareness for People on Insider Threat Issues

Michael: We're admins that can say, "Hey, can you give me access to this now?" I've got to get on the phone, I've got to call Jared. Well, Jared is on leave, or Jared is not available, and I can't get him on the phone. Maybe he's in a Zoom call. We have so many Zoom calls, he can't talk. I have my boss busting my chops going, "Hey, I need access. I need access to this data. Give me that now." Remote work is just causing more of a concern, a heightened awareness for people.

Jared: I don't have a technology in mind, but if there's a way to create task-specific accesses for privileged users, that would be very helpful.

Michael: Or even time-based. You've got an hour to do this and then these permissions dissipate. I always think of it right in the virtualization world. When virtualization came around, it started in the dev test environments. But you could create servers, you could create resource pools. You'd create all these environments, but I don't see the cleanup as much in the virtualization world.

Michael: Not as much as you'd like, as far as cleaning your house on the back end, cleaning the battlefield. I see the same thing with privileged users. Once I've granted it to you, I'm not necessarily going to rescind it, because I don't remember to do that, just like I don't remember to shut these virtual machines down when I'm not using them anymore. It expands the attack surface, and I think it's a hygiene issue.

Jared: It's interesting, you're the first person I've heard say something like that. I had the exact same thought, which was virtual profiles. I have privileged users sitting around who don't necessarily have privileged access to anything.

Resolve Issues That Require Privileged Access

Jared: They're just earmarked as privileged users, because they're my folks that are going to resolve these issues that require privileged access. As a task comes around that needs to be completed, I can create a virtual profile for you.

Jared: That'll respond specifically to the needs of the task. When the task is over, either by time or by completion of a checklist of events, your virtual privileged profile disappears and boots you out. It's Star Trek technology, but I'm not thinking about what exists today, I'm thinking of solutions.
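Jared's virtual privileged profile can be sketched in a few lines: access is created for a specific task and evaporates on a deadline or when the checklist is complete. Everything here, the class, the grant names, the checklist steps, is a hypothetical illustration of the idea, not an existing product:

```python
import time

# Sketch of a "virtual privileged profile": access exists only for a task,
# and dies on a timeout OR when every checklist step is complete.
# All names are illustrative.

class VirtualProfile:
    def __init__(self, user, grants, ttl_seconds, checklist):
        self.user = user
        self.grants = set(grants)
        self.expires_at = time.time() + ttl_seconds
        self.remaining = set(checklist)

    def complete(self, step):
        self.remaining.discard(step)

    def is_active(self):
        # Either condition kills the profile: time is up, or the task is done.
        return time.time() < self.expires_at and bool(self.remaining)

p = VirtualProfile("jquance", {"tier4-console"}, 3600, ["patch", "verify"])
p.complete("patch")
assert p.is_active()      # one step left, still inside the window
p.complete("verify")
assert not p.is_active()  # checklist done: access evaporates
```

This also addresses Michael's time-based idea from the same exchange: the one-hour grant is just a `ttl_seconds` with no checklist dependency.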

Eric: It's almost equivalent to working at Avis and leaving the keys in all the cars, leaving them running because they were used at one time, and not shutting them off when no one's in them or using them. We secure those vehicles, we secure things appropriately, when we're not using them anymore in the physical world.

Eric: I don't know that in the virtual world, whether it's creating user accounts or VMs or systems or whatever it is, our hygiene is good enough. I don't know. I haven't spent enough time thinking about it.

Michael: We might have to baseline. One of the questions I always ask is, are we baselining a privileged user just like we do a regular user? Are we baselining privileged users' activities based upon what they do, not based upon what their role is? I might be looking at a user and say, "I see this user, their role is X, but their activity is Y."

Michael: Then, as we talked about before, how do their activities change? My activities could change, what I'm accessing and not accessing, while my role stays the same. Are we baselining privileged users properly, just like we're doing regular users?
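Baselining on activity rather than role, as Michael describes, can be sketched very simply: learn the set of actions a user habitually performs, then flag anything outside that set. The action names and thresholds below are hypothetical; a real system would use richer features than raw counts:

```python
from collections import Counter

# Minimal sketch of activity-based baselining: profile what a privileged
# user actually does, then flag actions outside the profile. Action names
# and the min_count threshold are hypothetical.

def build_baseline(history, min_count=3):
    """Keep only actions seen often enough to count as habitual."""
    counts = Counter(history)
    return {action for action, n in counts.items() if n >= min_count}

def flag_anomalies(baseline, todays_actions):
    return sorted(set(todays_actions) - baseline)

baseline = build_baseline(
    ["reset-password"] * 5 + ["restart-service"] * 4 + ["read-mailbox"])
print(flag_anomalies(baseline, ["reset-password", "dump-credentials"]))
```

A one-off like "read-mailbox" never makes it into the baseline, so the profile reflects what the user routinely does, not their role on paper, and an unseen action like "dump-credentials" gets flagged even if the role would technically permit it.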

Detecting New Privileged User Accounts for Possible Insider Threat

Michael: Look at SolarWinds or activities like that. The idea is that you have to identify the change. It's not so much the activity; you have to identify the change, whether it's a change in the network baseline, a change in a user, or a change in a privileged user.

Michael: The most difficult thing is baselining an individual, a privileged user, or an admin, so you can see if he or she changes their role or changes their activities over time. To me, that's the hardest thing to do.

Carolyn: We can do that, we can baseline people now. So why didn't we baseline when these new privileged user accounts were created? That's not normal. When I say we, I mean everybody besides me. Why wasn't it detected that these new privileged user accounts were created? And who owns that?

Jared: The question is, where were those user accounts created, and what tools are in place on that device, on that software, to alert you that a new account has been created? How interesting is that?

Eric: Does the actor have the ability to wipe the log showing that it was created, if they're a privileged user? They may have access to delete records from the logs.

Jared: Now, that's particularly interesting, because when you delete logs off of a server, you create a log that you deleted the log. Anybody who's smart is writing alerts for that on servers. Now, this gets into auditing enterprise architecture, which is so boring that nobody wants to do it. It's so much data that nobody wants to do it. It's this large, looming issue that nobody's really dealing with.

A Good Alert Means Instantaneous Notification

Jared: Somebody probably is, but largely I consider it not being dealt with. A good alert would be instantaneous notification, on an enterprise piece of hardware, of that specific event: all the logs were just dumped, and the dumped-logs log was created. That's one that anybody who's doing log analysis would want to be notified of instantly.

Eric: I find that boring.

Michael: Is that data being shared across different stakeholders? That's my point. For example, the log.

Jared: Whose job is it to audit the auditable logs, the security logs, the administrative logs, et cetera, of any given enterprise? How many pieces of hardware out there are generating millions of logs per week, per month, per piece of hardware?

Jared: On any given enterprise running a 50,000-user base, let's say a military type, how many physical servers are we talking about? How many logical servers? And each one of those has its own auditable set of logs.

Jared: Do you know which ones are interesting? Do you know which ones you can completely eliminate from review, and whose job is it to review that one? Here's why I'm telling you this. The very first time I asked the question, and we're talking 2005, I asked, "Who's auditing these three servers, these TSSI servers over here?"

Jared: A particular person said, "I'm actually doing that right now." I said, "Oh good, because I'd like to see the results of your audit." This person said, "Well, come have a look." It was a printed Excel spreadsheet, about that tall.

An Insider Threat Case

Eric: Hold on. That tall being two feet of paper print out?

Jared: With a highlighter, and I just went, "Okay."

Michael: That's the old insider threat days, Jared, that's what we used to do before we had technology. We used to have a bunch of packet logs that we would bring to the CISO. We'd go, "Hey, here's somebody moving data outside your organization. This is how it goes." "What do you mean?" "See this packet here."

Michael: Then you leaf through another 20 pages. "Oh, that packet over here is the same packet there." Leaf through another 20 pages. "Oh, that packet's over there." You've got to stack it up, the network logs and router logs and switch logs, and go, "Oh, here's your insider threat case." You show it to somebody, and they go, "I can't make any meaning of this."

Eric: It's 2020. We've come a long way, haven't we, Jared?

Jared: We scoff at these ideas when we talk about them anecdotally like this, but at least those people were doing those jobs. The story we should really scoff at is when we say, "Hey, who's auditing these servers?" and everybody says, "Not me," "I don't know," or "No one." So I ended up taking the entire enterprise boundary IP schema.

Jared: All of its connections, with packets going out to specific locations, very, very specific protocols. It was such an amount of data. I took it out to an unnameable place and loaded it up into a virtual reality system, tossed on the goggles, and started looking at the Tony Stark array.

Carolyn: You did not do a Minority Report. I love that.

Pushing Things Around a Massive Spider Web

Jared: I pulled a Tony Stark, and I actually found what I was looking for, which was amazing. I took all this data, 40 feet of paper, put it into a VR system, started pushing things around in this massive spider web, and found exactly what I was looking for. It was interesting.

Eric: I was going to say, we talk about AI and machine learning a lot on this show. This to me is a perfect example where machine learning would really come into play. Automating that grunt work, if you will, no longer printing it out. Where human and machine teaming could figure out what's going on and where the interesting areas are to look.

Carolyn: Tell me that's where we are now, Jared or Mike.

Michael: Machine learning is taking over a little bit. There's a Dark Reading article or something that said machine learning or AI is really kind of replacing your tier 1 SOC analyst, that analyst that used to look at a screen, look at each event and each incident, and triage and triage.

Michael: Now what you're seeing is machine learning and analytics taking over that triaging somebody was doing, then providing an outcome, or a result, or an indicator, or a risk to the tier 2 SOC analyst, who is responsible for deep-dive investigating, saying, "Hey, this is something that we really need to take a look at."

Michael: As machine learning and AI mature, we reduce the false positives. That's one of the things the Ponemon study said: one of the big issues they had is having too many false positives. Organizations are dealing with too many false positives.

Too Many False Positives Can Become an Insider Threat

Michael: We see this in the commercial space, and I would love to see if Jared's seeing the same thing: too many false positives that we're dealing with within our technologies, and how they're addressing that. That's one of the biggest risks to any effective program, being able to get through that noise. Jared, do you see that in yours?

Jared: We're far more capable of dealing with false positives now, using things like data scientists to create algorithms and machine learning. I won't go as far as artificial intelligence, because I would argue that might not even exist today.

Eric: That's why I stayed with ML.

Jared: That comment wasn't a dig at you; I've digressed, so let me take it off the table. We're pretty far along with using machine learning to reconcile high rates of false positives, especially to replace our old system of, "What did you say the false positive rate is? 95%? Get rid of it. Get rid of all of the data, get rid of the signature, get rid of everything."

Jared: 95 is too high; I'll lose the 5%. That was the old way. As far as what Carolyn asked about using machine learning to reconcile that log issue, that's a big ask.

Jared: You have to presume a lot. The data you'd need to train a machine learning algorithm, supervised or unsupervised, would have to come from a network that you presumed was already completely sanitized. Then it would have to be populated by users that you'd already presumed were all good guys. So you'd have to set that up outright.

Control Human Behavior Properly

Jared: I'd have to call somebody like MITRE, who knows how to control human behavior properly, prior to the instantiation of an experiment, just to get them to set up the people properly. So I could set up a forensic network clean, then load up a machine learning algorithm that I hope works, then task people to do simulated stuff and start teaching this machine what it can get rid of, to try and handle the false positive issue.

Jared: Other than that, all I can do is supervised learning of past events, where I've noticed that these criteria were met in this particular situation, and one of those criteria was these audit logs, et cetera. Then, if I'm able to detect or aggregate that data so that it can come into a back end and be pushed to this ML and flagged, then I'm cooking with gas.
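The supervised-learning-from-past-events idea can be sketched without any ML library at all: score each detection signature by its precision across adjudicated past alerts, and suppress low scorers instead of deleting them wholesale the old way. The signature names, labels, and 0.5 threshold below are hypothetical:

```python
# Sketch of learning from adjudicated past events, replacing the old
# "95% false positive, delete the signature" rule: score each signature by
# its historical precision, then route only high scorers to analysts.
# All data here is hypothetical.

def signature_precision(adjudicated):
    """adjudicated: list of (signature, was_true_positive) from past cases."""
    stats = {}
    for sig, hit in adjudicated:
        tp, total = stats.get(sig, (0, 0))
        stats[sig] = (tp + int(hit), total + 1)
    return {sig: tp / total for sig, (tp, total) in stats.items()}

past = [("beacon-dns", True), ("beacon-dns", True), ("beacon-dns", False),
        ("usb-mount", False), ("usb-mount", False)]
scores = signature_precision(past)
triage = [sig for sig, p in scores.items() if p >= 0.5]
# "beacon-dns" (2 of 3) survives; "usb-mount" (0 of 2) is suppressed, not deleted
```

Suppressed signatures keep accumulating evidence, so a signature that starts performing can earn its way back into triage, which is exactly what discarding the data and the signature outright forecloses.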

Carolyn: It still sounds really manual.

Jared: Everything is manual at first.

Eric: There's no perfect, there's no magical answer here.

Carolyn: Right, but at some point, there has to be some level of automation.

Jared: You get to it. But nothing's going to rescue you from the massive amount of work you have to do to get there.

Michael: You still need eyes on it, a human to determine what data is useful. To me, there's a lot of data that's really not relevant. We have so much data up there in a big funnel; probably maybe 30, 40% is actually reasonable data that shows business results or outcome results.

Michael: The rest is noise up there. It takes an individual to determine what is useful data that can drive an algorithm.

The Way to Solve an Insider Threat Problem

Michael: Drive machine learning that can give you the results that you need. You can't just start throwing data at the problem. Throwing data at the problem is not the way to solve the problem.

Jared: In a similar fashion, throwing problems at a name like ML isn't a solution either. Throwing problems at a group that you arbitrarily call cyber isn't a solution. It doesn't excuse any actions going forward on an issue like SolarWinds or any other issue that may crop up in the future.

Jared: The question is always going to be, what would have been the quickest way to detect this, and can we implement that now? We talked a lot about these logs, but is that the most efficient and effective way of detecting this? The answer is, I don't know if we've gotten there yet. But say the worst thing they did to us, the critical juncture, was the creation of fake personas in the privileged user space.

Jared: Then the answer is: if we're not detecting new privileged user account creations, why not? Let's do it tomorrow. If we're not detecting the ones that are already there, we need to detect those tomorrow. And if it's important to us, we'll do it manually if necessary, every single day.

Eric: In a lot of cases, organizations did not detect the adversary coming in through the breach in the case of Sunburst. Maybe others were going to discover it in the future. As an insider threat expert, once you recognize that the adversary is in, or likely inside, your network, how do you think through that problem? Where does your mind go? What are the first things you do?

The Adversary Is in Our Network

Eric: The cyber team comes to you and says, "We've got a problem. We don't even know; we believe the adversary is inside our network." You say, "Well, how'd they get there and when?" They say, "Well, we're not sure yet, and 10 months ago, maybe. Maybe as long as 10 months ago." Where does your mind go next? How do you think through that problem set?

Jared: Like an interrogator. I have a lot of different things in my background. I'm not a trained interrogator, per what the Marine Corps would allow me to say about my past with them, but I have been trained in interrogations. Everything that I do to assess a situation starts with interrogation and interview skills. I have been on the receiving end.

Jared: We've believed "this is what's going on" for things that actually happened and things that didn't happen. If you come to me and say, "I believe the adversary is in our network," I say, "Why do you believe that?" I can give you a million scenarios. I got called up at the department level and told, "You need to get down here to us. We have to brief you on this classified breach."

Jared: I go running down there and they say, "This is actually happening." This happened in the federal government somewhere around 2008 or 2009. "You have an active breach in your network." That's what they told me when I showed up. I said, "Okay, I'm going to need more than that. You're talking to the action arm here."

Eric: Just give me a little bit more.

We Have an Active Breach

Jared: They said, "Well, if you come to our meeting next Thursday, the one we have every Thursday, we'll explain more." I said, "You don't understand. I just briefed people with stars on their shoulders that we have an active breach." They said, "Yeah, you do." I said, "Well, I need some evidence of that. Why do you think that?"

Jared: They said, "Well, again, we don't have time to discuss that. We're on our way to a meeting." We couldn't overcome the fact that their need to get to this meeting seemed to exceed my need to understand enough information about this breach to be able to go start working on it. I never got to the root of it in that moment, but a week later, on Thursday, I did.

Jared: It was simply that this organization had a philosophical disagreement with the intelligence community over the origins of a particular intrusion set. They thought that some DNS queries were full, positive indications of adversary implantation. That's one scenario where somebody has come and said, "We believe we've been infected." That's why I think that part of the detection phase lasts so long.

Jared: The people that are actually trying to reconcile it need to get to a core attribute of what you've asserted to be able to put any work into it. They know what their tools are. If you can at least tell me, "I've got a beacon coming out of my network," that's going to at least let me start: "Where's it calling to? Give me the call address, or scan the boundary, give me something."

Insider Threat From Compromised Active Directory Exchange Accounts

Eric: Let's go to Sunburst. Jared, we know the adversary got into our network. They're already here. We know that they've compromised Active Directory, Exchange accounts, and O365 accounts. Where do you go next? How do you start working on that problem? What do you think about it?

Jared: If I already know that they have created privileges or accounts, I start with that. I already have a list that I can actively query of all the people with privileged user accounts. I can look back in time and see that that number's changed, because it's all digitized, because I've dumped, and periodically re-dumped, all of the user account data into my own database.

Jared: I can easily run a differential on that: pull a new load of it today, run the differential, and see what's been created that's new. Then, as a point of caution, I call down to the net defense guys and say, "You need to eliminate all of these accounts." They may say, "Oh, Johnny Stevens was physically in here two days ago. I made that account myself."

Jared: I'll still say, "You need to freeze that." That's starting point one. If I'm on O365, that means I've outsourced to the cloud, which is a word that I hate. Maybe I'm with Amazon, maybe I'm straight up with Microsoft. I'm calling them. That's my second step.
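The differential Jared describes, diffing today's account dump against a prior dump from his own database, can be sketched in a few lines. The field names and example accounts are illustrative stand-ins for whatever a real directory export contains:

```python
# Sketch of Jared's differential: periodically dump the account list into
# your own database, then diff today's dump against the last one to see
# which privileged accounts appeared. Field names are hypothetical.

def privileged_names(dump):
    return {a["name"] for a in dump if a.get("privileged")}

def new_privileged_accounts(previous_dump, current_dump):
    """Privileged accounts present today that weren't in the prior dump."""
    return sorted(privileged_names(current_dump) - privileged_names(previous_dump))

yesterday = [{"name": "jquance-adm", "privileged": True},
             {"name": "cstevens", "privileged": False}]
today = yesterday + [{"name": "jstevens-adm", "privileged": True}]
print(new_privileged_accounts(yesterday, today))  # freeze these first, ask later
```

Because the diff runs against your own periodic snapshots rather than the live directory, an adversary with privileged access can tamper with the directory but not with the history you already captured, which is what makes the look-back possible.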

Eric: You're really looking at the account creation timeframes and narrowing it down that way.

Jared: I don't know specifically what I'm looking for. All I know is that you've told me that I've been popped. And here are some of the activities that indicate that I've been popped.

Telemetry Built Around Users

Jared: Here are the general areas I need to go look: user accounts, well, privileged user accounts. "O365 has been popped" does not actually mean anything to me. "Well, they've infected O365." Okay, well, I'd better call Microsoft.

Jared: If you're going to give me general information, and I don't mean you specifically, I mean anybody, then my reaction is going to be just as general, but with a sledgehammer: "I don't know which privileged user accounts have been infected, so turn them all off."

Michael: You don't even have to have technology to do that, but you'd have to have telemetry, don't you, Jared? You have telemetry built around these users; you can go back and pull your user telemetry to look at a person.

Jared: For every five minutes that I spend dealing with that, they have an additional five minutes on the network. It's a matter of how critical this is, and how broad or how abstract the description is of what's happened to my network.

Jared: If I can take the action that I want to take, or if it's just a recommendation, my reaction or my recommendation is going to be proportional to the risk posed, the threat posed, or the vagueness of the information that I have about the attack.

Carolyn: It comes back around to that old dwell time thing. We're going to have to end here, even though I could probably keep talking about this all day. I want to end with a few rapid-fire questions. I'm going to ask you first, Jared. What have you binged on TV and loved?

Rapid Fire Questions With Jared

Jared: What have I binged on TV and loved lately? Game of Thrones, period. Last binge there, loved it. Cobra Kai, binged it, loved it. I'm an old fan.

Eric: What about Karate Kid?

Jared: Yeah, Mandalorian.

Carolyn: They turned it around. I'll tell you what, the second season of The Mandalorian turned it around for me, but okay. What are you reading right now?

Jared: Other than the notes I made for this, generally Bertrand Russell, The Analysis of Mind.

Carolyn: Do you have a cybersecurity or insider threat must-read?

Jared: I haven't written it yet. No, I don't.

Carolyn: If there's one thing you could change within your insider threat program, within cybersecurity, just wave your magic wand, as Eric likes to say, what would it be? What would you change?

Jared: I would change the amount of external influences that could modify what I do.

Eric: What do you mean by that?

Jared: I would get a synchronized message coming out of the law offices, as it were. All the lawyers would generally have the same tale to tell. I wouldn't have this constant dichotomy of having to worry about what the judge is going to say versus what the court of public opinion is going to say.

Jared: These are two things that are never going to be the same thing, so which am I pandering to while I'm working? Instead of having one group of lawyers say this and the other group say that, one going for the judge and one going for the court of public opinion, I would just have them pick one.

Remediating Hacking Issues or Insider Threat Issues

Jared: Or pick a narrative you want me to play to from a legal perspective. From a policy perspective, I would have the same thing. I'd say, before you leverage this policy, why don't you accept the burden of ensuring that it's consistent with all of the other policies out there? Why is it me, at the user level, who constantly has to reconcile?

Jared: I'm not a policy expert, except that I have to follow a bunch of policies, and I don't even write very many of them. So why is it now incumbent upon me to take this policy that you created, de-conflict it with all the other ones that are out there, and answer for how I'm going to navigate them all? Why didn't you do that from the outset? Why is that not your charge?

Jared: I would have these external influences sync themselves prior to arriving on my doorstep. People still talk about moving quickly: getting onto the target quickly, assessing the situation quickly, remediating hacking issues or insider threat issues. What takes most of my time is dealing with external influences coming in that haven't been pre-synced, and having to spend inordinate amounts of time meeting with people to counter-influence these things back out.

Carolyn: This comes back to the fundamental question on the survey about regulations that the industry has.

Michael: I was thinking about that same thing. There was a question on what the biggest struggle is, and they talk about the increasing number of regulations and industry mandates. That's a struggle I think all the programs have: changing with the different views, even the different views of administrations. Jared is going to be dealing with that again.

The Ability of the Insider Threat Programs

Michael: As we change over to a new administration, new people are coming in. There are going to be new lawyers, new views, new interpretations of laws and regulations. That's going to impact the ability of the insider threat program. Maybe not to the extent we think, but that's just more work, from the practitioner's point of view, that they have to deal with.

Jared: Let me squeeze this in because you hit the nail on the head. My lead technician, my lead engineer, just emailed me last week. He told me that the current form of the NDAA, the National Defense Authorization Act, which hasn't yet been signed, has language in it that obligates us congressionally to cover down on mobile platforms.

Jared: Insider threat monitoring on mobile platforms. I could speak for an hour on that. Now our obligation is a 180-day window to report back to them how we're going to deal with the problem. It may just be a scribble that says, "Can't do it."

Michael: You might as well just hold up a little sign that says, "Can't do that," and put a little dollar sign up there.

Jared: It's actually going to say, you need to design a new mobile device, because we're going to crash everything that you try to put us on.

Eric: That's a real problem. I've spent a lot of time there and mobile is really difficult to monitor.

Carolyn: We're rapid fire now to you, Mike. What have you watched on TV lately that you love?

Michael: The Boys, Amazon prime.

Rapid Fire Questions With Michael

Carolyn: That was awesome. What are you reading right now?

Michael: I just finished reading Mike Tyson's autobiography. That was pretty interesting, just to see all the dealings. He's been in the news; he had that fight with Roy Jones Jr., which I actually watched. I thought it was entertaining. It's entertainment value, so I picked that up and refreshed what a weird life he’s had. I thought that was interesting.

Carolyn: What about a must read for cybersecurity or insider threat practitioners?

Michael: I don't have a must read; I don't read books per se. I go and read articles that come out and get published, and listen to podcasts like yours for cyber. I’d be lying to say I have a must read, because I mostly look on the web. I kind of absorb through the web.

Carolyn: Well, it's kind of the nature of cyber and insider threat.

Michael: It's changing.

Carolyn: You've got to go for those fast hits. That makes sense.

Jared: The book's going to be titled, Nobody Knows What They're Doing.

Eric: Nobody's figured that out, Jared.

Michael: Jared has to figure it out. I'll tell you, if Jared wants to write a book, I will co-author it and we'll have fun.

Carolyn: That'd be awesome. Mike, you wave your magic wand, what's the one thing you would change?

Michael: Within any cyber or insider risk program? There needs to be a tighter coupling between the people doing user activity monitoring and insider risk programs and the people doing cyber. There are too many stovepipes, not only within the government, but also within commercial organizations.

Duplication of Technologies to Solve One Mission

Michael: There isn’t enough communication. They don't share the data, and they don't share lessons learned from what each of them did. There's duplication of technologies that could be used to solve one mission. I would love to get rid of the stovepipes and get a little more communication between them.

Michael: Allow the insider threat program to see into the cyber program, allow the cyber programs to see into the insider threat program, and use those business outcomes to drive to one mission. You're protecting users, networks, data, and systems. They shouldn't be stovepiped. They should be together.

Eric: To Jared's point, we've got a breach, "Hey, I will get back to you next week."

Carolyn: Back to Jared's point, I keep thinking about Tony Stark, and that just made my year.

Michael: Here's a flying suit that he gets to fly around in.

Carolyn: Well, thanks so much you guys. This has been a great conversation as always. Thanks for joining us.

To The Point Cybersecurity was recently named one of the 30 top Federal IT influencers (2019 & 2020) because of fantastic guests. We are always looking for great thought leaders to interview. Please email me with guests you would like to have on the podcast: cford@forcepointgov.com

About Our Guests

Michael Crouse, Forcepoint Director of Insider Risk Programs. Mike works closely with top company decision-makers. He lends key influence in helping them improve employee security behavior by changing the way people think about security; developing new cybersecurity policies, procedures, and technical approaches; and generating real-time, actionable data derived from employee behavior and industry baselines. Mike has over 25 years of experience supporting commercial and federal organizations, starting with the National Security Agency (NSA) and expanding his career with various commercial companies.