Episode 23

AI in Cybersecurity: Balancing Digital Transformation and Trust

In today's world of technology, terms such as artificial intelligence and machine learning are thrown against the wall like spaghetti to see what sticks. But what advances are really being made with these digital transformation technologies, and is government ready to adopt cutting-edge solutions to meet new and emerging threats in cybersecurity? In this week's episode, Milos Manic, professor of computer science and director of Virginia Commonwealth University's Cybersecurity Center, joins the podcast to discuss the Autonomic Intelligent Cyber Sensor (AICS) he and his team have developed with funding from the Department of Energy to detect intruders, isolate them, and even possibly retaliate against them.

Episode Introduction: Digital Transformation

Arika: Hi, and welcome back to To The Point Cybersecurity. I'm your host, Arika Pierce, and joining me as always is Eric Trexler. How are you doing, Eric?

Eric: Good morning Arika.

Arika: Good morning. So Eric, I know you just got back from RSA in San Francisco, and I heard that you could almost play a drinking game based on how much people were talking about AI and machine learning in cybersecurity. Is that true?

Eric: Yeah, another year, another set of topics. I was actually reading a study of 25 years of RSA, one by Wade Baker that goes up through 2015. He kind of studies the hype cycles, if you will, and AI and machine learning have to be at the top at this point. It was cloud, it was the word cyber, but AI and machine learning have to be at the top now.

Arika: Okay. Well, that's what we're gonna talk about this week. I know we've talked about it a bit in the past, but what's interesting is that we have a guest, Professor Milos Manic, professor of computer science at Virginia Commonwealth University, or VCU as most of us know it, and director of their Cybersecurity Center. So thank you so much, Professor Manic, for joining us this week.

Milos: A pleasure to be with you today.

The Solution: Autonomic Intelligent Cyber Sensor (AICS)

Arika: Well, the reason we invited you to join the podcast is because, as Eric said, we hear a lot about AI and machine learning right now, especially in the cybersecurity industry, and about the opportunities these cutting-edge technologies present. You and your team have actually developed a solution called the Autonomic Intelligent Cyber Sensor which, as I understand it, you developed with the Department of Energy, and it's being used to identify and divert hackers and also deploy virtual decoys. So some fascinating work that you've been doing.

Milos: Correct, thank you very much, Arika. So it's really part of a larger suite of tools and techniques that we've been developing over, say, the past decade or longer with DOE and DHS.

AI has always been a part of the cyber game, and you know over time we've realized that there's only so much you can do with traditional techniques.

Now of course we probably wanna go back and define what AI or machine learning really is in the context of cybersecurity. So when we talk about AI, people think of movies, people think of autonomous vehicles, people think of many, many applications. The bottom line is the ability to learn something. The moment you give a machine the ability to learn, the immediate question is: can you control what has been learned, and how it's gonna be used? So I'll stop there, and you'll probably have the next question coming.

The Difference Between Machine Learning and AI

Eric: So the difference between machine learning and AI ... how would you characterize the difference? Let's go there.

Milos: Well, in the last 10 years, depending on the circle of people, I've seen three terms. One is computational intelligence, another is machine learning, and another is AI. AI has probably been around the longest, machine learning maybe the second longest, and computational intelligence is probably the most recent. Computational intelligence includes all the techniques, the algorithms, and typically refers to neural networks, fuzzy logic, and genetic algorithms, evolutionary approaches. Machine learning incorporates many, many others. Depending on whom you ask, it'll be support vector machines, it'll even be decision trees, so it grows wider. AI, however, will probably include anything from human factors to androids to everyday partners in life, if you will. So it's a more overarching umbrella over all of the above.

Eric: Okay, so AI is really a catch-all for the subcategories underneath it?

Milos: Yes.

Eric: To some extent.

Milos: Yes, yes, [inaudible 00:04:59]. And it's easy to pronounce and it's been around the longest, so people can relate to it.

Eric: And it's very interchangeable, as we saw at RSA. When we meet with customers and other vendors and competitors, you have to have AI in your marketing material, regardless of how you apply it. It's interesting.

Milos: Yes, yes, yes.

Arika: AI is the new black, right?

AI is All Around

Milos: Well, it's been around for a long time, so what it means to people may change, but it's overwhelmingly becoming part of any kind of device, anything we as humans interact with. The computer is taking many shapes and forms, from embedded systems to phones, from iPads to cars. So in that sense AI is really all around, from the basic printer next to you that's making decisions, to a more important, mission-critical control system in, say, a nuclear power plant.

AI versus AI

Arika: So Professor Manic, on that same note, you're saying it's all around, it's being used in so many different contexts. You obviously are working on a solution that uses it to prevent and divert hackers. But are the hackers using AI as well? Are we going to be moving into a space where it's actually AI versus AI? The good guys versus the bad guys, in some ways?

Milos: Right, so this is a story as old as humanity. Recently we heard comments from, you know, Elon Musk and Stephen Hawking and Larry Page from Google, and Bill Gates, all raising red flags about how AI may be the biggest event in human history, and it may play both a good role and a bad role. A lot of people are concerned with this. But I will just go back to any technological advancement: anything that makes a translational difference can end up in the hands of the bad guys.

So AI is really no different from any other technological invention. It depends on how you use it, on the defensive or the offensive side.

Now the other story here is that in order to better protect your assets, you probably need to play both sides, to do some probing and investigation to figure out where the vulnerabilities are. So it's not bad to use it for offensive purposes, if that strengthens your defensive capabilities.

Will the government embrace AI for its overall cyber strategy and protection?

Arika: I know you're doing work in conjunction, or in partnership, with the Department of Energy, and sometimes government isn't viewed as cutting-edge when it comes to embracing new solutions, especially as it relates to technology. But do you see government embracing AI and machine learning in its overall cyber strategy and cyber protection?

Milos: Yeah, that's a good question. I think it really comes down to the difference between what we are hearing and what is really happening. I'm fully convinced that government embraced cutting-edge, state-of-the-art technologies a long time ago. They may not be publicly advertised through regular news outlets, but I do believe, and perhaps know, that these techniques were embraced a long time ago.

The Questions Of Trust

Milos: Now, when trying to deploy these techniques and argue the benefits of doing so, some questions inevitably arise. Those are the questions of trust. I know this is one of the things that you probably wanted to touch upon. This has nothing to do, again, with cybersecurity per se; it's just a normal step in adopting technology that you cannot fully understand. Not that I'm trying to defend AI, but there are many other technologies that we use every day without necessarily knowing the intrinsic parts of how they work, right?

We are not experts on rebuilding a transmission in a car, yet we drive it every day.

But cars have been around quite some time and we do have some experience with them, and a transmission doesn't really make as many important decisions as AI does. So this is a whole new paradigm, where human factors and just the psychological step of communicating with and trusting something that is not human, that's a big step ahead of us, and it's not going to be a small or short step. It's probably gonna take a long, long time, but I think we are making some really interesting strides along the way.

The human brain has not been developed to accept a huge influx of information

Eric: It's interesting you talk about a car. I look at autonomous driving, and I think you're getting much more into artificial intelligence, where the car is actually taking information in and making some level of decisions, especially as you get to Level 4.

Milos: That's right.

Eric: Which is a whole lot more advanced than how the transmission works, or how the engine works and interfaces with the transmission. Once again, though, the human driver isn't an expert on the capability, the intelligence capability, of the car.

Milos: Correct.

Eric: It's so advanced, and it can be very powerful ... it can also be catastrophic, right? If something goes wrong.

Milos: Yes, yes. So it's not that the human brain is not capable of making all these decisions; the trick is that the human brain has not developed in a way to accept the huge influx of information and data that's coming from all these sensors [crosstalk 00:12:16] ...

Eric: And the speed and everything right.

What Differentiates Humans From AI

Milos: ... [crosstalk 00:12:19] and [inaudible 00:12:19], right. So there are cameras taking really high-res images. The car is talking to the grid and talking to traffic lights and so on. We are not capable of dealing with so many data sources at the same time and processing them all. However, we are really good at making decisions that are, unfortunately, still not easy to encode in an algorithm. That's where some of the aspects that differentiate us from machines come in, such as the ability to like something, to forget something, to hate something, if you will. These are the abilities that make us human.

Milos: Now the big question is, does this help or not when it comes to quantifiable decisions? I like to throw in an example when I'm teaching an AI class or talking to people about this. Say you are in an autonomous car, and the autonomous car needs to make a fast decision. It needs to hurt someone. There's a mother pushing a stroller with a baby in it, there's an elderly lady on the side, and there's a couple of schoolkids on rollerblades, and there's just no way out. You cannot save them all. What do you do? People provide different answers to this. Now I'll ask you: what do you think the machine should do?

Eric: It would be very bad for me to answer that, because I guarantee it'd upset most of the audience.

Why Insurance Agencies Put A Number To Everything

Milos: ... So the insurance agencies have figured out a way to put a number on everything. There's a cost, right? A human life, a disease, a house, everything. So they will find some kind of quantifiable number. Sometimes that decision will not help the ability to sell the car. Perhaps there's one life in the car and 20 others around the car, but you would never sell that car if you said, well, I will sacrifice the one life for the 20 others.

We have this emotional aspect to our decisions that machines don't, and we have to find the bridge between the two, because we cannot always look at decisions in terms of numbers, but we cannot look at them only in terms of emotions either. Some kind of mix is something we will need to figure out, and I don't have an answer to that, but we'll have to get to that point.
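To make the purely "quantified" side of that bridge concrete: an actuarial-style decision process assigns every outcome a cost, weights it by probability, and picks the action with the lowest expected cost. The sketch below is a minimal, hypothetical illustration of that arithmetic; the actions, probabilities, and costs are all made up for the example.

```python
# A minimal sketch of expected-cost decision making. All actions,
# probabilities, and costs below are hypothetical illustrations.

def expected_cost(action, outcomes):
    """Probability-weighted cost of one candidate action."""
    return sum(p * cost for p, cost in outcomes[action])

def choose_action(outcomes):
    """Pick the action with the lowest expected cost."""
    return min(outcomes, key=lambda action: expected_cost(action, outcomes))

# Each action maps to a list of (probability, cost) pairs.
outcomes = {
    "brake_hard":   [(0.7, 10.0), (0.3, 80.0)],  # expected cost 31.0
    "swerve_left":  [(0.5, 5.0), (0.5, 120.0)],  # expected cost 62.5
    "swerve_right": [(0.9, 15.0), (0.1, 60.0)],  # expected cost 19.5
}

print(choose_action(outcomes))  # -> "swerve_right"
```

The discomfort Milos and Eric circle around next is exactly that this arithmetic is explicit and decided in advance, rather than in the heat of the moment.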

Lawsuits against anyone with anything to do with programmable logic

Eric: It's interesting, because I think as long as the driver isn't inebriated or distracted, texting and driving or whatever, if you're following the laws, you're doing the speed limit, and you're presented with one of those situations, in most cases I believe a jury would probably understand that it's an accident. In the case of artificial intelligence, I guarantee there will be lawsuits against the car manufacturer and everybody else with anything to do with the programmable logic that went into that artificial intelligence, or that interface for the car, because they made a decision, not in the heat of the moment ...

Milos: Exactly.

Eric: ... I swerved, I did the best I could, right? No, it was premeditated, it was thought about; we used the insurance company data to determine the lowest cost, or the lowest risk or damage, whether it's to individuals or whatever it may be. It's interesting if you think about it on that plane.

Autonomous cars are not as smart as human drivers

Milos: Yeah, but we still live in the mental age, so to speak, of machines where we are superior to those machines. That's where Google's stats show that most accidents happen when a human car driver assumes that the autonomous car is not as smart as a human driver and starts doing something that is illegal, starts passing on the other side or something like that. The accident happens because the autonomous car was not able to predict that the human would do something ...

Eric: Erratic.

Milos: ... totally illegal.

Arika: Right the human intervention, yeah.

Eric: Right, because it will only do what it's programmed to do. But in most cases, from what I've read at least, and we'll stay on cars for a second, automated driving is much safer than actually having humans drive.

Milos: Absolutely.

Are we going to get to a point where the autonomous car is thinking at the human level?

Eric: As it evolves. So in most cases it's better. The automated driving capability of cars is programmed by humans, right? Humans are putting in the logic that defines what decisions the car will make, the weighting, if you will. Do we ever get to a point where the car is actually thinking at the human level, or beyond?

Milos: Exactly ...

Arika: Or that we trust that it is. Isn't that also where the trust piece comes in, that we actually believe we're all at the same level?

Milos: Right, right, right.

Arika: From an intelligence standpoint.

Milos: So it goes back to the definition of intelligence as the ability to learn. Once you create that ability to learn, the machine can learn. The moment it starts learning, it can learn good things, it can learn bad things, and it'll continue learning based on what it experiences. Now ...

Applying the same concepts in the cybersecurity world

Eric: So let's take this back to cybersecurity then. How do we apply the same concepts in the cybersecurity world?

Milos: I'm glad you asked that. This is the tough question. I'll jump into what I think is the core of it. We look at specific, actually very well-known, cyber problems, and we apply state-of-the-art deep learning and machine learning techniques, and show that using different techniques, very different techniques, we can achieve near-perfect results. In other words, using different techniques we can solve the problem. This is what we in research have been doing for decades.

Milos: But what we've been looking at very recently, the last five years or so, is: was that decision made for the right reasons? In our mathematical world it boils down to dimensions. You're looking at one feature of a packet, you're looking at another feature of a packet, and so on. If you look at some features, you will make the correct decision. If you look at some other features, you may make the correct decision as well.

The problem is that tomorrow the attack is not gonna follow the predefined path. It will be different. Then some algorithms will fail, and some others won't.

And I think we can easily translate this into how we as humans deal with everyday problems. We may all be right about some decision, but for different reasons that may be emotional, that may be political, that may be societal. We may agree on something, but that doesn't mean the path by which we reached that decision was right. So the question is: did we make the right decision for the right reasons?
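Milos' point, that two detectors can both look near-perfect today while keying on entirely different features, is easy to demonstrate. The sketch below is a hypothetical illustration on synthetic data, not anything from the VCU work: two small decision trees are trained on disjoint feature subsets of fake "packet" traffic. Both score well while the attack leaves redundant traces, but when the attacker hides one of those traces, only one of them keeps working.

```python
# Hypothetical illustration: two classifiers, equally accurate today,
# diverge when the attack pattern shifts. All data here is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def traffic(n, inflate):
    """Fake packets: 4 features, label 1 = attack. Attacks inflate the
    feature columns listed in `inflate`."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 4))
    for col in inflate:
        X[:, col] += 3.0 * y
    return X, y

# Today, attacks leave redundant traces in feature 0 AND feature 2.
X, y = traffic(4000, inflate=[0, 2])
clf_a = DecisionTreeClassifier(max_depth=3).fit(X[:, [0, 1]], y)
clf_b = DecisionTreeClassifier(max_depth=3).fit(X[:, [2, 3]], y)

X_today, y_today = traffic(4000, inflate=[0, 2])
print(clf_a.score(X_today[:, [0, 1]], y_today))  # roughly 0.93
print(clf_b.score(X_today[:, [2, 3]], y_today))  # roughly 0.93

# Tomorrow the attacker hides the feature-0 trace; feature 2 still signals.
X_tmrw, y_tmrw = traffic(4000, inflate=[2])
print(clf_a.score(X_tmrw[:, [0, 1]], y_tmrw))    # near coin-flip
print(clf_b.score(X_tmrw[:, [2, 3]], y_tmrw))    # still roughly 0.93
```

Which detector survives depends entirely on which features it happened to learn from, which is exactly the "right answer for the right reasons" problem.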

The Prediction: How AI Will Evolve

Arika: Yeah, well, I was just gonna ask you what you foresee in the future. It seems like there's so much, obviously, that will happen on the technology side, but also so much that has to happen on the human side. What's your prediction in terms of, you know, five, ten, fifteen years, of where we will see this all evolve, especially in the context of cybersecurity?

Milos: So I was on a panel just last week, a panel on safe and secure AI, and that question was raised, and I said I cannot predict five or ten years out. I will try to predict maybe a year ahead. There are two things. For computer scientists and electrical engineers, this problem has gone way outside our domain. We have to include human factors, we have to include the science of how humans make decisions, and try to learn from that. We'll have to figure out how to quantify what's a good decision for good reasons. We'll have to figure out explainable, trustworthy intelligence. We'll have to figure out what explanation, what level of explanation, is sufficient for us to accept a machine as part of our ... as a peer in everyday life.

Miniaturization: A very important aspect of technology

Milos: I think a very important aspect of technology, one that's a little more predictable, is miniaturization. With IoT, the heavy-duty algorithms we were deploying on GPU units and expensive servers are moving to a very small footprint: devices that will be part of our phones, that will be part of our hearing aids, that will be everywhere. So I think both the hardware and the software will evolve, but the human will probably become one of the dominant aspects in the very near future.

Eric: Interesting. Yeah, Arika, I'll take a stab at that. I think within five years, certainly ten, a large number of the cars we've talked about today will be driving themselves, autonomous driving.

Arika: I can't wait for that.

We'll be no better off than where we are now

Eric: I mean, nor can I. But on the cyber side, I predict we'll certainly be no better off than we are now, probably worse off. From my perspective, one of the differences with autonomous driving is that we're trying to do something for the good. We're trying to automate something to make the world better, to make life easier. On the cybersecurity side we have humans that are still fighting us, that are trying to do nefarious activity, whatever it may be. I think it's a different problem, a more complicated problem, and one we need this type of science and research to help us solve. I don't know that we're there in ten years.

Milos: Well, in a world of everything talking to everything, this enormous connectivity, cyber is gonna be a big part of autonomous vehicles as well.

Eric: Great point, right? So now we know how to drive safely, until the human inserts themselves again and says, I wanna create problems in that capability.

Arika: Or just the hacking of the car.

Eric: That's exactly what we're talking about right?

The Balance That Is Very Difficult To Strike

Arika: There was an article the other day about electric scooters now being hacked, so it's fascinating. Well ...

Eric: Could you imagine zipping down the street at 30 miles an hour on a scooter and somebody hits the brakes on you? That would be a bad day.

Milos: Well, there's always this balance that is very difficult to strike for vendors and manufacturers that want to sell a device that's easy to start using. You don't want to spend three hours setting it up. But then you just start using it without security embedded into it in some way. Simple example: if you just select a very, very easy password, no technology can help you. It can remind you, it can force you, but you can still override it, like we're trying to override autonomous cars that are apparently, or obviously, way more advanced than we are.

Explaining What A Neural Network Has Learned Toward Transparent Classification

Milos: We have to learn to live with this and accept it, but how to accept something you don't understand, that's as old as the human race. I'll mention something: I just got an email this morning that our preprint article on ResearchGate was the most-read article at all of VCU, and I'll read the title: Explaining What a Neural Network Has Learned Toward Transparent Classification. This is what we are trying to figure out: how to make these algorithms transparent to humans.
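The episode doesn't go into the paper's method, but one common, generic way to probe what a trained model relies on is permutation importance: shuffle one feature at a time and watch how much accuracy drops. The sketch below illustrates only that general idea under synthetic data; it is not the technique from the VCU paper.

```python
# A generic transparency probe (permutation importance), sketched as an
# illustration only; this is not necessarily the VCU paper's method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def permutation_importance(model, X, y):
    """Accuracy drop when each feature column is shuffled in isolation."""
    base = model.score(X, y)
    drops = []
    for col in range(X.shape[1]):
        X_shuffled = X.copy()
        rng.shuffle(X_shuffled[:, col])  # destroy this feature's signal
        drops.append(round(base - model.score(X_shuffled, y), 3))
    return drops

# Tiny synthetic example: the label depends only on feature 0.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(permutation_importance(model, X, y))  # big drop for feature 0 only
```

scikit-learn ships a more thorough version of this as sklearn.inspection.permutation_importance; truly transparent classification, of the kind the paper's title promises, goes further by explaining the decision itself rather than just ranking features.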

Arika: Well, it sounds like we have a ways to go. So thank you so much, Professor Manic, for joining us this week on the podcast. The work that you're doing is certainly fascinating, and in whatever context, be it automated cars or cybersecurity, it will be interesting to see how AI transforms technology in the digital age.

Eric: It makes you think.

Milos: Thank you very much, pleasure to be with you. Exciting chat.

Arika: Thanks so much, and thanks to all of our listeners this week. Please continue to tune in, and please give us a rating on iTunes as well as subscribe to the podcast. Thanks, and we'll talk to you next week.
