
Episode 71

The Intersection Of AI And Cybersecurity


The intersection of AI and cybersecurity with Steve Orrin, CTO of Intel Federal.

Episode Table of Contents

  • [01:17] Where Do AI and Cybersecurity Fit Into the Government Today
  • [05:19] The Pre-Work Required to Successfully Run AI and Cybersecurity
  • [09:54] The Difference Between AI and Cybersecurity in Terms of Accessing Large Data
  • [14:30] Doing the Non-Sexy Work of Data Curation for AI and Cybersecurity
  • [18:55] The Need for a John F. Kennedy to Build a Stronger AI and Cybersecurity Foundation
  • About Our Guest

Where Do AI and Cybersecurity Fit Into the Government Today

Carolyn: Hi, welcome back to To the Point Cybersecurity. This is Carolyn Ford, standing in for Arika Pierce this week, and I am joined by my cohost, Eric Trexler.

Carolyn: This week, we are joined by Steve Orrin, federal CTO at Intel.

Eric: Steve and I go back a good ways. We used to work together at Intel when I was at Intel McAfee, so it's great to be speaking to Steve again.

Carolyn: Well, let's jump right in. We've got a topic today that interests me a lot, and it's quite the buzzword everywhere: the intersection of artificial intelligence and cybersecurity. Steve, where do you think AI fits into the government now, and where is it going to be in five years?

Steve: I think people are evaluating AI in a variety of different areas because it is the hot new topic. It is the thing that everyone wants to get value out of, so you're finding it show up in a lot of places, such as tactical environments, where there are intelligence, surveillance, and reconnaissance use cases.

Steve: Whether you're trying to do object recognition and change detection in the field or from afar, all the way to the complete other side of the camp, logistics management and operational efficiency, we're seeing AI being applied. One thing that's important to start with is a clear definition, because AI can mean a lot of things to a lot of people.

Steve: The best way to look at it is that AI is an umbrella for various kinds of machine learning and machine intuition. It covers everything from the neural networks, the CNNs and DNNs (convolutional and deep neural networks), to classic machine learning and analytics.

The Biggest Challenge AI and Cybersecurity Are Facing

Steve: If you use that bucket, we're seeing AI and machine learning be applied quite literally everywhere. The biggest challenge is how people get value. How do you translate a really powerful pilot, where you've been able to demonstrate some good functionality, into scale or into distribution, so that you can actually get the government-wide or mission-wide value out of it?

Steve: We see a lot of folks who have been playing with AI and applying it to different use cases. Cybersecurity is a key area that's really hot right now: how can AI help both the blue teams and the red teams catch up with the threat actors and the threat adversaries?

Eric: Steve, are you seeing this in practice? I'm hearing a lot of people talk about it. In fact, one of the things I observe a lot is the idea that AI is going to be the solution to all of our problems. Then I remind people, "Well, if we can use that from a network defense perspective or a cybersecurity defense perspective, the adversary can use it from an offensive perspective also." Who comes out on top of that equation?

Eric: Are you seeing it in practice?

Steve: I'm seeing it. Like I said, I see a lot of it in pilot or in labs.

Steve: So, we've not seen wide-scale adoption and deployment of AI solutions in the government yet. I think the successful use cases have been what we call pilot programs, whether it be a lab exercise in a little innovation cell, or, in some more advanced organizations, deployment for one problem area.

Crossing the Chasm

Steve: There are some really good examples throughout the government around object recognition. And by the way, your autonomous vehicle or your Tesla is AI. Being able to do automatic recognition of buildings, trees, people, troop movements, and things like that is being used in these point solutions or point deployments. What we haven't seen is, to use the term, crossing the chasm yet, where it's gone to wide-scale adoption for your average warfighter.

Eric: Production, essentially.

Steve: Exactly. But there are groups getting value out of it today.

Carolyn: Is it just not ready? I mean, is object recognition really the only thing that's ready at this point? Why isn't it ready?

Steve: I think there are a couple of things. It comes back to something that Eric said in the very beginning: it's the sexy new thing that people just want to use everywhere. Proper deployment of an AI and machine learning solution is a process. It starts with making sure you're asking the right questions so that you can get the right answers. A lot of times people just want to sprinkle AI pixie dust on a problem and say, "It will solve my problems." That's not the way it works.

Eric: People will write rule sets around firewalls or something and call it artificial intelligence these days. I'm like, "Wait a minute. We've been doing this for a couple of decades." That's a ruleset. If you see something on this port, stop it. That's not AI.

The Pre-Work Required to Successfully Run AI and Cybersecurity

Steve: That is not AI. If I want to use AI, I need to train that model or that algorithm on a set of given data and then apply it to a problem set. It's a different way of approaching the problem. A lot of people think that you flip a switch and AI turns on. There's a lot of pre-work that happens to get to successful AI.

Steve: People go play with a camera and do object recognition, forgetting that the model was trained on millions of images in advance of flipping that switch. When you start applying AI to new problems, whether it be cyber defense on the network, malware analysis, or operational efficiency for alerting, all of those things require massive amounts of training, data labeling, and data curation in order to get to a point where you can start to see that value. What a lot of people forget is that AI can provide that value, but it's going to take work and it's going to take process. There's no skipping ahead of the line.
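
To make that pre-work concrete, here is a minimal sketch in Python with scikit-learn. It is purely illustrative, not any particular deployed pipeline: a synthetic dataset stands in for the curated, labeled corpus Steve describes, and the model is trained and validated before it ever touches new data.

```python
# Minimal sketch of the AI "pre-work": curate/label data, train, validate,
# and only then apply the model. The synthetic dataset below stands in for
# a real, hand-curated and labeled corpus.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Step 1: data curation and labeling. In practice this is the bulk of the
# effort; here a synthetic stand-in (label 1 = malicious, 0 = benign).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Step 2: train and validate before any deployment.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Step 3: only now do you "flip the switch" on new, unseen samples.
# predictions = model.predict(new_samples)
```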

Carolyn: What I hear you saying is that my sci-fi-loving brain, which wants to go immediately to the human cyborg, is not happening yet. But is it coming?

Steve: In the cybersecurity space, I like to say that we are really at the toddler phase of AI. We're just learning how to walk. We have a ways to go before we're driving the car and asking for the credit card when it comes to cyber applications of AI. In other fields, we've seen massive innovation and really amazing, almost sci-fi-like capabilities being deployed using AI.

Understanding How the Brain Makes Decisions

Steve: Take the human-to-machine interface side, since you mention cyborgs. There are lots of examples out there of brain-to-prosthetic interfaces being used today, where we're using AI to understand and model the brain's functions in order to translate them into machine code.

Steve: On the flip side, there's some great research coming out of the University of California San Diego around understanding how the brain makes decisions and comes to conclusions, and building AI models that are cognitive in nature, so that our computers can be more like a human, as opposed to trying to do it in the other direction.

Steve: That future you want to see, of the cyborg and of AI that's able to really take control and do things, could be coming. I don't think we're ever going to get to the Terminator kind of doomsday sci-fi, but I think we're going to get to a point where AI becomes a part of our daily lives.

Steve: In many cases, it's already there, you just don't realize it. There's a lot of AI behind things like Alexa and your Tesla. Where it starts to really become interesting is when it starts integrating throughout your life. I think that's where we're going.

Eric: So, Steve, why not in cybersecurity? It's a hot field, tons of money flooding into the space, highly unprofitable. You would think that somebody could cross the chasm and really make a difference. Why aren't we seeing it yet?

AI and Cybersecurity as a Complex Environment

Steve: Two reasons. There are probably more, but the two that come to mind are, one, cybersecurity isn't a simple question and answer. It's not "is this a tree or not?" We all know that cybersecurity is a complex environment of detecting what is bad and what isn't, and connecting that back to changing environments and changing threat actors.

Steve: We have an active adversary, as opposed to a tree that's standing still and that I just have to make sure I don't hit. On one hand, the problem is much harder. So when we start thinking about what questions we want to ask the AI, the answer shouldn't be "is this malware?" because then we're not getting the true value. We need to ask harder questions.

Steve: The flip side is, and this is something I've said in multiple places, we need data. Data drives AI. Most organizations don't share threat data with each other. They'll share IOCs, which is nice and important for security operations. But if we can start sharing full tactics and full campaign information, those captures, in a way that meets the legal and liability requirements, we can then train these AIs to actually start detecting some of this stuff.
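
For context, here is an illustrative contrast, as plain Python dicts with STIX-2.1-flavored and ATT&CK-style fields, between the bare IOCs organizations share today and the fuller campaign context Steve describes. Every value is invented for illustration.

```python
# A bare IOC as typically shared today: one atomic artifact, little context.
# All values below are made up for illustration.
ioc = {
    "type": "indicator",
    "pattern": "[file:hashes.'SHA-256' = "
               "'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855']",
    "valid_from": "2020-01-15T00:00:00Z",
}

# The richer sharing Steve argues for: full tactics and campaign linkage.
campaign_context = {
    "campaign": "example-campaign-1",           # hypothetical name
    "techniques": ["T1566", "T1059", "T1041"],  # ATT&CK-style technique IDs
    "kill_chain_phases": ["delivery", "execution", "exfiltration"],
    "related_captures": ["capture-2020-01-15.pcap"],  # hypothetical artifact
}

# A model trained on campaign context learns sequences and relationships,
# not just one-off lookups against atomic indicators.
training_example = {"features": campaign_context, "label": "example-campaign-1"}
```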

Steve: As long as all of that information stays siloed, the AI we apply to those kinds of cybersecurity problems will always be hampered, until we get better algorithms. The innovation opportunity, where someone can come in and really do something interesting, is research into what they call the incomplete dataset problem.

The Difference Between AI and Cybersecurity in Terms of Accessing Large Data

Steve: Most of the AI we build in the world today assumes access to large data. How do we do AI for cybersecurity, where we don't have good data?
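
One research direction aimed at that incomplete-data gap is semi-supervised learning, where a model bootstraps from a small labeled set plus a much larger unlabeled one. Here is a minimal, hedged sketch using scikit-learn's self-training wrapper on synthetic data; it illustrates the idea, not any specific cybersecurity system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in for security telemetry where labels are scarce.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Simulate the incomplete dataset problem: only ~5% of samples are labeled.
# scikit-learn's convention marks unlabeled samples with -1.
rng = np.random.default_rng(0)
y_partial = y.copy()
unlabeled = rng.random(len(y)) > 0.05
y_partial[unlabeled] = -1

# Self-training: fit on the labeled slice, then iteratively pseudo-label
# the confident unlabeled samples and refit.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)
print(f"Started with {(~unlabeled).sum()} labels out of {len(y)} samples")
```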

Eric: Do you think it'll happen?

Steve: I think it absolutely will. I think that given enough time and motivation, we will get there just like we have in autonomous driving, in the bio-interfaces and other places. I just think that it's going to take work and effort, and it's not happening fast enough for some people.

Carolyn: It seems to me that the better we get at using AI for cybersecurity, the more it stands to reason that it's going to become the hacker's friend too. I mean, it's going to make it easier for people to pretend to be me using AI.

Steve: Absolutely. With an active adversary, the techniques that we use will be used by the adversary. But here's the thing to remember: that happens today. So, it's not going to change the game. We just need to get that much better at deploying it. What can make things easier is that we have what the hacker doesn't.

Steve: Until they've compromised your system, the hacker doesn't have full access to your system and your infrastructure. We have better data. If we start using that data to drive our AIs, we'll have better-trained algorithms, because we'll have a broader range of things to differentiate against that the hackers will not. At the same time, we have to recognize that they are already using these technologies.

The Adversarial Networks

Steve: We're seeing things like adversarial networks being used for malware generation. It's been demonstrated at the DEF CON conference, at the Grand Challenge, and in other places. It's an exciting area of research. The adversaries will use it, but we have to take the little advantage we all have in the industry.

Steve: We have the data, and we also have the infrastructure. We know what the IP mappings are, and we know what the ports are supposed to be doing. If we label that data and drive it into the AI, we will get better algorithms than the adversaries, who are dealing with the incomplete dataset problem from the outside.
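
As a toy illustration of that defender's advantage, here is a short Python sketch that labels observed flows against an internal service map, knowledge an outside attacker lacks until after compromise. The services, addresses, and flows are all invented; labels produced this way could feed model training.

```python
# Toy sketch of the defender's data advantage: label traffic against an
# internal map of what each host and port is supposed to be doing.
# All services, addresses, and flows below are invented for illustration.
EXPECTED_SERVICES = {
    ("10.0.1.5", 443): "web-frontend",
    ("10.0.2.9", 5432): "postgres",
}

def label_flow(dst_ip: str, dst_port: int) -> str:
    """Label a flow using infrastructure knowledge an outsider lacks."""
    return "expected" if (dst_ip, dst_port) in EXPECTED_SERVICES else "anomalous"

observed_flows = [("10.0.1.5", 443), ("10.0.2.9", 22)]  # hypothetical traffic
labels = [label_flow(ip, port) for ip, port in observed_flows]
print(labels)  # ['expected', 'anomalous'] -> labels to train a model on
```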

Eric: Do you see the government solving this problem, or industry, or the two together?

Steve: Absolutely this is a government-industry partnership. We are all in the game together, and we're seeing great examples of industry-government collaboration. I'll point to one: DHS has a program called IMPACT, where you basically sign up and get free access to data sets they've collected across multiple government capture-the-flag events.

Steve: They've published open source data sets across a variety of different AI use cases, with security being a key factor there. It's an environment where they're giving away the data to organizations and researchers so they can better train their algorithms and their models. We're also seeing examples of universities that have been funded by the government to create data sets open-sourcing those data sets.

The Most Pernicious Effect of the Technology Revolution

Steve: It's going to take industry-government collaboration, and we're already seeing examples of how that collaboration is leading to better research. For companies, especially startups that don't have 20 Fortune 500 customers to go ping and ask for data, having access to the DHS data sets is really helping them.

Steve: There's absolutely hope, but it's going to take work. There's no magic here.

Eric: Yes, I don't see this as an easy problem to solve, but it's one we need to solve. We have to figure this out. I was reading an article this morning from NSA General Counsel Glenn Gerstell. He talks about another challenge: "What is surely the most pernicious effect of the technology revolution flows from the global border destroying nature of technology and cyber."

Eric: He goes on to talk about how easy and cost-effective it is. There's a line that kills me. "It is almost impossible to overstate the gap between the rate at which the cybersecurity threat is getting worse relative to our ability to effectively address it." And my mind goes to AI.

Eric: We have the money, we create the greatest weapon systems and life-saving systems and everything else in the world. We should be able to stop this problem. We're just not.

Steve: Yes. It's going to take time and concentrated energy and recognizing that there's not a quick win. It's not a switch that you just flip and magic, you get AI.

Doing the Non-Sexy Work of Data Curation for AI and Cybersecurity

Steve: But I think what he's hinting at is that we have to dedicate the time and energy and do the non-sexy work of the data curation. One of the interesting things is that a lot of people focus on individual models.

Steve: For example, looking at multiple malware samples for changes. Sensor fusion and data fusion, being able to look across different data types, is an area of research that's ripe for cybersecurity, because it is a complex problem. It's not an "is it a tree or not" kind of problem.
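
To sketch what feature-level data fusion can look like, here is a hedged example: features derived from different sources (network, host, and email telemetry) are aligned per entity and concatenated so one model can learn cross-source patterns. Everything here is synthetic and illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic per-host feature blocks from three different sensors/sources.
n = 500
rng = np.random.default_rng(1)
net_features = rng.normal(size=(n, 4))   # e.g. flow statistics
host_features = rng.normal(size=(n, 3))  # e.g. process/file telemetry
mail_features = rng.normal(size=(n, 2))  # e.g. phishing indicators

# Fusion step: rows are aligned by entity (host), columns concatenated,
# so the model can pick up patterns that span sources.
X = np.hstack([net_features, host_features, mail_features])
y = rng.integers(0, 2, size=n)  # synthetic labels, purely for illustration

model = GradientBoostingClassifier(random_state=1).fit(X, y)
print(f"Fused feature vector length: {X.shape[1]}")
```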

Eric: But the industry is so fragmented. I know you're at Intel, which is a very consolidated industry. It's so fragmented here. Do you think we'll get there? How do we get there?

Steve: It's going to take innovation from the vendors in the industry working together. It's going to take some disruptors coming out, especially in the cybersecurity space. I think disruptors are going to be the thing that kick-starts a lot of the collaboration.

Steve: The security industry itself has had really good examples of collaborating over time, but oftentimes everyone's saying, "I'm a firewall vendor, you're an antivirus vendor, you're something else."

Eric: Right, we're very segmented. We don't look at it from a problem or an outcome perspective.

Steve: Exactly.

Eric: This is the widget. This is the tool I create. Coming from Intel like you, I look at the CPU as the starting point for the art of the possible. You can get it to do almost anything. In cybersecurity, it's "this is what I do."

How Can We Come Together in the AI and Cybersecurity Realm

Steve: That's where we can learn from companies like Intel and others that are cross-industry and build ecosystems, and see how, in the cybersecurity realm, we can come together to solve that bigger problem. We've done this a few times in the past.

Steve: There was a major effort back in the early days of web security where all the web security vendors got together. They said, "We need a better definition of the threats and the mitigations, so we're not confusing customers," and ended up building a set of standards that all the vendors then adopted to communicate better.

Steve: We've seen the same thing happen in the antivirus world. Just like we have an AI initiative via executive order, we should have an AI initiative for cybersecurity, where we all work to get to the point you made: let's get to outcomes. What can we all contribute to drive those outcomes?

Steve: Whether it be data labeling sharing, classification sharing, or best methods for curating datasets from customers. Ultimately, every security vendor will have their value that they bring to the table. But you're right, we have to shift the thinking toward outcomes as opposed to just widgets. I love the way you put that.

Carolyn: Is there talk in the government of creating an AI group like the one you're talking about?

Steve: For security, I have not heard one yet.

Eric: No, I haven't either. Everybody talks about it for cybersecurity, but there's no unifying component.

The Absence of a Unifying Component to Work for AI and Cybersecurity

Eric: There's nobody who is saying, "We're going to do this." And I would say the same thing about the industry. We'll go to RSA in a couple of weeks.

Eric: Everybody's talking about it, but there's no unifying component that brings us all together and says, "This is a hard problem impacting this world. We as an industry are going to come together and solve this problem."

Eric: It's not like cancer, where multiple people all come together with the same interest and just want to solve the problem and save lives. I don't know. I'm not seeing it, Steve.

Steve: You bring up a good point. We had the Cancer Moonshot and a couple of others. There are groups like the JAIC, the Joint AI Center in the DOD, that have been stood up to tackle, from a DOD perspective, the big problems around AI. Part of their charter is industry collaboration, both through DIUx as well as through things like DARPA.

Steve: There may be an opportunity for a Grand Challenge around AI. That's something we can all advocate for. They have the central location and the rules of engagement to connect to both academia and industry. Maybe it's the JAIC that takes at least the first shot at trying to do something collaborative in AI.

Eric: Maybe they are the lead component. That's a good point. We have the capability, it's marshaling the resources.

The Need for a John F. Kennedy to Build a Stronger AI and Cybersecurity Foundation

Eric: We need a John F. Kennedy to say, "Hey, we're going to put a man on the moon in the next 10 years. We're going to address this problem in the next 10 years." We've just seemed so fragmented in my time in this space.

Carolyn: I brought up the cyborg stuff, but it's very real. Like you said, Steve, it's happening now. It's a little terrifying to me when you say we're not quite to the sci-fi Terminator stage yet, or maybe never will be. Honestly, as I read these articles about the human interface to prosthetics, that seems pretty out there to me. If we don't get something to guide us and govern this, it terrifies me. As a citizen, it terrifies me.

Steve: Well, you're one of the few. It's important to note two things. First, with the brain talking to prosthetics, think about the direction there: the brain is talking to the prosthetic, not the other way around.

Carolyn: But what if somebody hacks that interface and makes the prosthetic do something that I don't want it to do?

Steve: That's exactly why security has got to be built into all of these technologies. Actually, it's not a prosthetic problem, it's an IoT problem. Your prosthetic is a connected device.

Steve: All of us need to do a better job at the fundamentals there. One of the things we've found is that a lot of people ignore IoT security until it's way too late. This is both a design-in problem, the features need to be there, and then the applications that leverage those devices need to take advantage of them.

Major Attacks Coming in Through the IoT

Steve: There are hardware features in place today in most IoT devices to flip on security. Oftentimes, that doesn't happen, from a cost or time-to-market perspective. Or people think, "My IoT device isn't important enough. There's no reason why a light switch needs to have security." That attitude pervades the IoT industry.

Steve: So, part of the story is better security built in, and another part is making sure you have a good supply chain for those devices and the things that go into them, so that you don't have a prosthetic hacked, or a router hacked, or any of those kinds of devices.

Steve: We know that a lot of the major attacks are coming in through the IoT as the foothold, whether it be the classic one from years ago through the HVAC system, or even more modern attacks coming in through routers, modems, or other devices.

Carolyn: You just touched on a topic near and dear to us, the supply chain. Unfortunately, time has beaten us here, so Eric, do you have any more questions before we wrap up?

Eric: Steve, what do we need to do to secure the supply chain? I know you do a ton around supply chain security.

Steve: The top thing we need is visibility and transparency. You can't secure what you don't know. The number one thing is for the vendors in the supply chain to provide transparency, and for the consumers of these technologies to demand that visibility.

Steve: Once you have that visibility, you can then start to apply policies.

Applying Security Controls on AI and Cybersecurity

Steve: Not everything needs to have a 100% trusted, secure supply chain. But you need to know what's in your supply chain to make that value proposition or that policy decision. So, transparency and visibility are absolutely the foundation.

Steve: The next piece is that for certain things, whether it be highly regulated industries, national-security-level applications, or mission-critical healthcare, you need to be able to apply security controls to each step, so that you can verify each component as it comes together into a system of systems, and then verify them at launch and at runtime.

Steve: Not only did my system boot, but did all the components that led up to that point do their job correctly? This is a firmware challenge: being able to verify your firmware and get visibility into the current version you're on, that it's the correct, non-vulnerable version of your firmware, and the same for the software that connects to it.
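
Here is a minimal sketch of what launch-time component verification can look like: hash each firmware image and compare it against a manifest of known-good values. The paths and hashes are hypothetical, and real systems anchor this in hardware (for example, TPM-backed measured boot) rather than a script.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of known-good component hashes, distributed and
# signed out of band. Names and digests are invented for illustration.
KNOWN_GOOD = {
    "bios.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "nic_fw.bin": "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",
}

def verify_component(path: Path) -> bool:
    """Hash a component and compare against its known-good value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return KNOWN_GOOD.get(path.name) == digest

for name in KNOWN_GOOD:
    p = Path("/firmware") / name  # hypothetical staging directory
    ok = p.exists() and verify_component(p)
    print(f"{name}: {'verified' if ok else 'FAILED or missing'}")
```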

Steve: There are lights of hope here. There's an industry consortium that has come together around the transparent supply chain. Intel is one of the leaders here, working with all of our ecosystem, and our competition as well. We're all coming together because we have to solve this problem together.

Steve: We're seeing some really interesting companies out there looking at things like firmware, supply chain, and firmware security, and those are really starting to shine a light. These are some foundational things we can do to get better at supply chain security.

Eric: I'm going to add one more to your list. You've got to care. Suppliers, consumers, whether the consumer is an individual, a household, or an actual corporation, you've got to care about this problem.

Getting the Common People to Care

Eric: What I hear when I talk to common people, whether in a business or just on the street, is that they don't understand the problem.

Eric: They don't see the challenge in the problem, and they certainly don't care just yet. When you explain it, they get it. But until then, we've got to build some awareness. I'm glad to hear the industry is stepping up and working the problem, though. A couple of years ago, this was an unknown issue.

Steve: And things have really started to progress. There's still a lot of work to be done, but I think we're starting to see the interested parties come together. Like you said, working with customers that care helps get the best practices in place, so that customers, as they start to care, don't have to start from scratch.

Eric: Carolyn, we covered quite a bit today, including the futuristic stuff. I love it.

Carolyn: We did. We'll have to get you back, Steve. Thank you so much for being on the podcast.

Steve: My pleasure.

About Our Guest

Steve Orrin

Steve Orrin is the Federal CTO for Intel Corporation, a position he assumed in 2013. Steve has held architectural and leadership positions at Intel, driving strategy and projects on Identity, Anti-malware, HTML5 Security, Cloud and Virtualization Security since joining the company in 2005.

Previously, Steve held technology positions as the CSO of Sarvega, CTO of Sanctum, CTO and co-founder of LockStar, and CTO of SynData Technologies. He is a recognized expert and frequent lecturer on enterprise security. He was named one of InfoWorld's Top 25 CTOs of 2004 and, in 2016, received Executive Mosaic's Top CTO Executives Award.

Steve created the Trusted Compute Pools Secure Cloud Architecture and is a co-author of NIST IR 7904, "Trusted Geolocation in the Cloud." He is a fellow at the Center for Advanced Defense Studies and a guest researcher at NIST's NCCoE. Steve is a member of INSA, ISACA, OASIS, and IACR, and is a co-founder and officer of WASC.

Twitter: @cyphersteve
