
Navigating the Maze of AI Governance: Insights on ISO 42001 and New Regulations with Walter Haydock


About This Episode

In this episode, hosts Rachael Lyon and Jonathan Knepher are joined by Walter Haydock, founder and CEO of StackAware. Walter brings a unique perspective from his time in the Marine Corps and Homeland Security, and now leads the charge in AI governance and risk management.

Today’s conversation dives deep into the maze of AI regulation, focusing on the newly emerging ISO 42001 standard and what compliance really means for organizations. Walter unpacks the complexities facing companies as they navigate a patchwork of state, national, and international laws—highlighting the challenges and opportunities presented by Colorado’s groundbreaking AI legislation and Europe’s evolving approach.


Rachael Lyon:
Welcome to the To the Point Cybersecurity Podcast. Each week, join Jonathan Knepher and Rachael Lyon to explore the latest in global cybersecurity news, trending topics and cyber industry initiatives impacting businesses, governments and our way of life. Now, let's get to the point. Hello, everyone. Welcome to this week's episode of the To the Point podcast. I'm Rachael Lyon, here with my co-host, Jon Knepher. Jon. Hi.

Jonathan Knepher:
Hello, Rachel. It's another beautiful summer day out here. How is it for you?

Rachael Lyon:
Okay. Yeah. Rub it in. You and your San Diego living. It's like 100% humidity and 100 degrees in Texas right now, but, you know, it keeps my skin supple.

Jonathan Knepher:
Excellent.

Rachael Lyon:
Gotta find the silver lining somewhere. Well, I'm really excited to welcome this week's guest. Please welcome to the podcast Walter Haydock. He's the founder and chief executive officer of StackAware. And what I love about this is, you know, he found a need in the system and he's addressing it. He launched the company after seeing teams waste money on fancy software tools and time on trivial issues while missing the biggest risks. And AI, of course, is adding much more fuel to this fire. Earlier in his career, he served as a professional staff member for the Homeland Security Committee of the US House of Representatives, as an analyst at the National Counterterrorism Center, and as a reconnaissance and intelligence officer in the Marine Corps.

Rachael Lyon:
What a career. Welcome, Walter.

Walter Haydock:
Rachael, Jon, thank you for having me on.

 

[01:34] Unpacking ISO 42001: The New Standard for AI Management

Jonathan Knepher:
Excellent. Well, Walter, I kind of want to jump right into this because this is a topic that, you know, has come up many times, as you can imagine, on our podcast, an area that we have a lot of interest in. And so maybe if you could just start off with, you know, what is ISO 42001 and what does compliance mean?

Walter Haydock:
Yeah, absolutely. Happy to dive in. ISO 42001 is an internationally recognized standard for building an AI management system. And that is a series of policies, procedures, and technical controls that let you manage the risk of deploying artificial intelligence. So that's what the standard is. Compliance can take a variety of different forms. You can do self-attestation or readiness, where you use the standard as a framework to build your governance program. And then you can actually get an external audit from an accredited auditor who will certify that you are following the requirements of the standard and grant you a document that you can show to customers and other stakeholders proving that you have undergone external assessment of your AI management system.

Jonathan Knepher:
Can you dig a little bit deeper though? Like, what are the types of controls and things that would be in place here?

Walter Haydock:
Yeah, happy to talk about that. ISO 42001 has two main components. There are the clauses 4 through 10, which primarily focus on administrative controls. They include requirements such as having an AI policy, analyzing your internal and external issues, whether they be business incentives or regulatory requirements, developing a measurement and monitoring program, implementing an audit program, and doing management review of key AI objectives and requirements. That is the first part of the standard. The second part of the standard requires selecting and implementing a set of technical controls and requirements related to AI. Those involve examining the governance or the data provenance of the AI systems that you're using and the models that you're using; if you're training or fine-tuning models, looking at the preparation and conditioning that's applied to the data beforehand; having a method for reporting concerns from external parties as well as internal whistleblowers; and then ensuring that your vendors, suppliers and customers all understand the roles and responsibilities of each party in the AI ecosystem.
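For readers who want to turn that description into something trackable, here is a minimal sketch in Python of the two parts Walter outlines, assuming a simple checklist representation; the item names are paraphrases for illustration, not the official ISO 42001 clause or control wording.

```python
# Illustrative only: a checklist mirroring the two parts described above
# (management-system clauses 4-10 plus selected technical controls).
# Item names are paraphrases, not official ISO 42001 text.
AIMS_CHECKLIST = {
    "clauses_4_to_10": [
        "AI policy established",
        "Internal and external issues analyzed (business and regulatory)",
        "Measurement and monitoring program defined",
        "Internal audit program implemented",
        "Management review of key AI objectives performed",
    ],
    "technical_controls": [
        "Data provenance documented for models in use",
        "Data preparation and conditioning steps recorded",
        "Reporting channel for external parties and whistleblowers",
        "Vendor, supplier and customer roles and responsibilities agreed",
    ],
}

def open_items(status: dict) -> list:
    """Return checklist items not yet marked complete in `status`."""
    return [item for items in AIMS_CHECKLIST.values()
            for item in items if not status.get(item, False)]

print(open_items({"AI policy established": True}))  # everything still open except the policy
```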

Rachael Lyon:
So I have a question for you. It might be a two-parter. I'm notorious for doing this, Walter. It's interesting to see the dichotomy between, perhaps, the European approach to AI regulation and the US approach to AI regulation. And when we talk about America, as we know, there's a lot of state-by-state privacy regulation, but not a national policy per se. And it looks like Colorado has some state-level AI regulation, the Colorado Artificial Intelligence Act, SB 205, taking effect in February of next year. I guess the two-part question is, how do companies navigate the path forward when you have state, national, and international regulations, all very, very different, coming into effect? And what should companies be thinking about specifically with the Colorado AI regulation coming online?

Walter Haydock:
Yeah, that's a great question. It's basically the question, Rachael. So definitely a good one. At a high level, the United States is taking a very different approach than Europe, but a similar approach to the one it has taken in terms of data privacy legislation. There was a proposal in the recently passed reconciliation bill, aka the One Big Beautiful Bill, to ban state-level AI regulation for 10 years; that was stripped out by the Senate at the last moment before passage. So now it's going to fall onto the states to regulate, if they want to regulate at all. And that's driving an already expanding web of AI-specific regulations.

Walter Haydock:
You alluded to Colorado, and I'm happy to talk about that. Colorado SB 205, their Artificial Intelligence Act, is clearly based on the European Union Artificial Intelligence Act in that it creates a tiered system of high-risk and not-high-risk systems and requires companies that are developing or deploying them to apply a variety of controls to meet the law's requirements. Interestingly enough, one of the requirements for deployers is to develop a risk management program that meets the requirements of ISO 42001, the NIST AI Risk Management Framework, or another recognized AI governance standard. So happy to drill down more on that, but I just wanted to address the high-level issues first; happy to talk about any of those things in particular.

Jonathan Knepher:
Do you think that Colorado is leading the way here? Do you think a lot of other states are going to follow, or do you think it'll end up being kind of de facto, where, you know, folks just try to follow the state that has the most restrictive rules?

Walter Haydock:
I think what you're alluding to is a high-watermark approach, and a lot of organizations are following that with respect to GDPR, the European Union General Data Protection Regulation. Organizations that operate in multiple jurisdictions are basically just trying to follow GDPR requirements because they're relatively certain that they will be able to follow other jurisdictions' requirements if they use GDPR. And in the US, I think we will see that to a degree, but not entirely. Let me give you a concrete example. Even prior to SB 205, New York City, so the five boroughs, passed a law regarding the use of automated employment decision tools. Those are essentially AI-powered HR systems for hiring and recruiting. And I have already seen some AI HR companies specifically disable their features for candidates located in the five boroughs of New York. So that company does not appear to be taking the high-watermark approach, but rather selectively eliminating functionality from its system for given geographies.

Walter Haydock:
And I think we may begin to see more of that. I think terms and conditions are going to get especially complex and difficult, and require that you attest that you're not in any of a certain set of geographical locations, or that you're not a certain type of data subject, in order to use a certain system.
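As a concrete illustration of the geography-based gating Walter describes, the sketch below shows one way a vendor might switch off an AI screening feature for candidates in a regulated jurisdiction; the jurisdiction code and feature name are hypothetical, not taken from any real product.

```python
# Hypothetical example of disabling an automated employment decision feature
# for candidates in a restricted jurisdiction instead of meeting that
# jurisdiction's requirements.
RESTRICTED_JURISDICTIONS = {"US-NYC"}  # placeholder code for the five boroughs

def ai_screening_enabled(candidate_jurisdiction: str) -> bool:
    """False when the candidate sits in a jurisdiction where the feature is switched off."""
    return candidate_jurisdiction not in RESTRICTED_JURISDICTIONS

print(ai_screening_enabled("US-TX"))   # True
print(ai_screening_enabled("US-NYC"))  # False
```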

Rachael Lyon:
Yeah, just don't get it, just don't take it, don't have to deal with it. Aside from ISO 42001, what else is on the horizon that folks really need to be paying attention to? Because there's a lot swirling out there, and it's kind of hard to organize your thoughts on, you know, as we look to the future, how do we plan for the future?

Walter Haydock:
There are certainly going to be specific requirements rolling out from the various US state-level regulations, like Colorado SB 205. Utah, for example, has rolled out some very specific legislation regulating mental health chatbots, so that is getting very precise. States like Texas are taking a more light-touch approach. They do have AI governance regulations that they have passed, but it's more focused on prohibiting certain beyond-the-pale use cases, I'll call them. And then in Europe, what we're seeing is, well, I mean, I love my European friends, but sometimes their appetite for regulation can exceed their ability to implement it. We're already seeing that with the GPAI Code of Practice that the EU released, which isn't even approved yet by the European Union government.

Walter Haydock:
But according to the EU AI Act, it's already in force. And that's just the tip of the iceberg when it comes to the regulatory backlog, so to speak. Because according to the EU AI Act, there are supposed to be one or several officially recognized harmonized standards which allow companies to achieve a presumption of conformity with certain aspects of the law. But that standard has not yet been published, and the high-risk provisions of the EU AI Act come into force less than a year from now, while that standard is not scheduled to be published until the end of 2025. So we're going to have something of a crazy situation where companies will have at most seven months, and that's assuming that the EU actually meets its own deadline, which it has shown difficulty in doing, to implement a new standard that no one actually knows the content of yet. And I think the European Union made kind of a big mistake in basically looking at ISO 42001 and saying, we see that, we acknowledge it, we like some pieces of it, but we don't like other pieces of it, and then moving to produce their own standard. So frankly, I think one of two things is going to happen. I think there's potential for a delay in the implementation of the high-risk AI requirements.

Walter Haydock:
Now the EU has been pretty clear that they're not going to delay implementation with respect to the general-purpose AI requirements. But, you know, frankly, I just don't think that it's going to be practical to roll out the high-risk requirements in August of 2026. So that's one thing that could happen, or there could be kind of an immediate, last-ditch effort to somehow bring the harmonized standard for the EU into alignment with ISO 42001, or at least make it look very similar to it, such that companies that have already taken steps to comply with ISO 42001 will have a head start and be able to meet the law's requirements in August of next year.

Rachael Lyon:
That's crazy.

 

[12:16] Transparency as a Strategic Imperative for AI

Jonathan Knepher:
Yeah. Given that uncertainty, as well as all of the differing regulatory requirements, how does transparency work into this for the individual companies putting their governance in place? And how can companies basically gain the trust of their customers? I know a lot of people are still very apprehensive about AI use in ways that they can't see and they don't understand.

Walter Haydock:
Yeah, I think that transparency is key to an effective AI governance program. However, company culture and history need to be a big part of how you implement that. My company, StackAware, has from day one been what I believe to be highly transparent in everything that we do. For example, a version of our asset inventory is updated live on a website that anyone can go to and see every system that we have in use. Now, most companies aren't like that. Most companies don't even know what AI systems they're using. But that is how we started from day one. Obviously, as a small organization, it's much easier to do that.

Walter Haydock:
We've posted the majority of our AI management system documentation online, so any company can inspect our policies and procedures with respect to the use of AI. Now, if you are a big organization that has some history and some baggage, moving right to that could potentially be a bad idea, because you might put out some dirty laundry before it's cleaned up, so to speak. So a phased approach to transparency for those older companies might be more appropriate. But for companies that are just starting the AI journey, plan for transparency at the beginning, because it's going to make things easier, and also, you know, it'll help external forces keep you accountable as well.
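For a sense of what a published, machine-readable AI asset inventory could look like, here is a minimal sketch; the fields and the single example entry are hypothetical, not StackAware's actual schema.

```python
# Hypothetical shape for a publishable AI asset inventory entry.
import json
from dataclasses import dataclass, asdict

@dataclass
class AIAsset:
    name: str            # system or product name
    vendor: str
    use_case: str        # what the business uses it for
    data_classes: list   # categories of data it touches
    approved: bool       # approved under the AI policy

inventory = [
    AIAsset("Example LLM API", "Example Vendor", "customer support drafting",
            ["customer contact info"], approved=True),
]

# Publishing a JSON version is one way to let customers inspect every system in use.
print(json.dumps([asdict(a) for a in inventory], indent=2))
```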

Rachael Lyon:
Interesting. I would love to pivot a little bit because I told you before we got started, I really want to dig into this quote you mentioned on another podcast. And I think all of this ladders up to this idea that organizations are building digital empires on sand. Can you tell our listeners a little bit more about that point of view?

Walter Haydock:
I think a lot of companies are moving hard in the direction of implementing AI systems, often on top of a broader effort to digitally transform; those organizations that aren't there yet are still trying to do it without having an effective governance framework in place. I see a lot of indexing heavily on tools, technologies and systems without the correct processes in place, and, even before that, without the right people and culture in place. That is unfortunately going to create an unsteady foundation for a lot of these organizations, because they're not going to have the right puzzle pieces in place before they start building their advanced AI architectures on top of them.

Rachael Lyon:
Right. And you also talk a lot about the three layers, or three risk layers, of AI, which I thought was really interesting. And is your sense, in the companies you're talking to, that people are getting a full grasp or wrapping their heads around these three layers? And also, are you helping them with practical ways to demonstrate responsible AI, particularly when you have audits and other evaluations or public disclosures they have to be in line with?

Walter Haydock:
The three layers of AI risk that we analyze at StackAware are those related to models, applications and agents. A model is the code and weights that comprise an algorithm, and it doesn't do anything on its own, but there are some important characteristics from a governance perspective. The applications are the models integrated with supporting infrastructure like databases, APIs, user interfaces, things like that. And then agents are AI applications that talk to other, potentially AI, applications or deterministic systems. So those are the three categories of AI risk that we analyze from a governance perspective. I'm already seeing a lot of difficulty in separating these concepts logically. And unfortunately, even the ISO 42001 standard, even though I'm an advocate for it, does itself conflate a lot of these characteristics, and it makes it challenging for organizations to understand exactly what's required.
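One way to keep those three layers from being conflated during assessments is to make the distinction explicit in tooling; the sketch below follows Walter's definitions, with illustrative names and a deliberately rough classification rule.

```python
# Models, applications and agents as distinct risk layers, per the definitions above.
from enum import Enum

class AIRiskLayer(Enum):
    MODEL = "model"              # code and weights; does nothing on its own
    APPLICATION = "application"  # model plus databases, APIs, user interfaces
    AGENT = "agent"              # application that talks to other AI or deterministic systems

def classify(has_supporting_infra: bool, talks_to_other_systems: bool) -> AIRiskLayer:
    """Rough classification following the three-layer definitions."""
    if talks_to_other_systems:
        return AIRiskLayer.AGENT
    return AIRiskLayer.APPLICATION if has_supporting_infra else AIRiskLayer.MODEL

print(classify(has_supporting_infra=True, talks_to_other_systems=True))  # AIRiskLayer.AGENT
```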

Walter Haydock:
So, for example, StackAware does not train or even fine-tune any models itself, but we are heavy users of third-party hosted models. And a question that comes up a lot is, well, how do I know what the provenance of the data is for these models that are operated by third parties, especially since they don't give complete disclosures of all their training data? The way that we tackle it is to say, well, we evaluate all the information that they provide publicly, we look at that, and we compare it to an AI governance standard based on the use case of the system. And then if it falls below that standard, that's a risk that requires mitigation, transfer, avoidance or acceptance. So that's how we approach it.
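A minimal sketch of that evaluation, assuming a made-up numeric disclosure score and threshold: compare what the provider discloses against the bar the use case demands, and record a risk that must be treated if it falls short. The four treatment options are the ones Walter lists; everything else is illustrative.

```python
# Illustrative provenance-gap check; scores and thresholds are invented.
TREATMENTS = ("mitigate", "transfer", "avoid", "accept")

def provenance_gap(disclosure_score: float, required_score: float) -> bool:
    """True when public disclosures fall below the bar set by the use case."""
    return disclosure_score < required_score

# Example: a high-stakes use case demands 0.8; the vendor's disclosures rate 0.5.
if provenance_gap(disclosure_score=0.5, required_score=0.8):
    chosen_treatment = "mitigate"  # must be one of TREATMENTS
    assert chosen_treatment in TREATMENTS
    print(f"Risk recorded; treatment: {chosen_treatment}")
```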

Jonathan Knepher:
I mean, I think that's an interesting case that's come up with us a couple times before. There's not a lot of truly understanding the data that's behind these models and their inner workings, and the lack of transparency is there. Are there things to look out for? Right. I mean, yeah, we can read what they've told us, but there's got to be more, right? Like, are some of these potentially backdoored by other state actors, or are there other things we need to worry about?

Walter Haydock:
Definitely, yes. On Hugging Face, you can go ahead and download plenty of models that are completely open source but have very vague or opaque data provenance. And some of them are posted by the Beijing Academy of Artificial Intelligence. We've already seen that the Chinese government has either explicitly or implicitly enforced certain requirements on how generative AI models will respond to certain types of questions; we've seen that through testing of DeepSeek. So it's not difficult to imagine that these models I'm alluding to, which are easy to download for free, will have some sort of inherent direction pre-programmed into the responses. So that's certainly one aspect of it. And then there's another aspect, where there are standard supply chain security measures that you need to apply to any sort of open source code that you're bringing into your production environments.
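For teams pulling open models into production, a first-pass check can at least surface what the publisher chose to disclose. The sketch below uses the huggingface_hub client to read model card metadata; it only reports declared information, which is exactly the limitation Walter flags, so missing fields should be treated as a risk signal rather than proof of anything. The repo ID is a placeholder, and the metadata attribute name varies between client versions.

```python
# First-pass provenance signals from a Hugging Face model card (disclosed data only).
from huggingface_hub import model_info

def provenance_signals(repo_id: str) -> dict:
    info = model_info(repo_id)
    # card metadata is exposed as card_data (cardData in older client versions)
    card = getattr(info, "card_data", None) or getattr(info, "cardData", None)
    card_dict = card.to_dict() if card is not None else {}
    return {
        "author": info.author,
        "license": card_dict.get("license"),
        "declared_datasets": card_dict.get("datasets"),  # often missing or incomplete
        "tags": info.tags,
    }

print(provenance_signals("your-org/some-model"))  # placeholder repo ID
```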

Rachael Lyon:
In your role at StackAware, I'm sure you're talking to CISOs from a number of different sectors. I think you guys work a lot with healthcare organizations, if I saw that correctly. I'd be interested in your perspective, maybe by differing industries, on where you're seeing some organizations or sectors moving a little bit more quickly with AI innovation while, you know, remaining conscious of compliance with regulation. What are the differences that you're seeing, or any of the trends that you're seeing here? Is any industry seemingly ahead of another?

Walter Haydock:
Health care is leading the way in some areas, I would say, and that's because they have both the data confidentiality concerns as well as the compliance concerns when it comes to rolling out new AI systems. And it's difficult to avoid being covered by a regulation when you're operating in health care in the United States. So they have taken some proactive steps to build effective governance programs. And then also, the relative reward of deploying artificial intelligence responsibly is higher in that industry, in my opinion, because we have so many inefficiencies, especially in the US health care system. So in healthcare, they are understanding that AI presents a high-risk, high-reward opportunity and acting accordingly. I'm also seeing in other industries, for example manufacturing and engineering, a fair amount of concern over intellectual property protection, and that is driving the adoption of AI governance programs to ensure that these companies can protect their competitive advantage from being eroded through unintended model training or potentially even data breaches of their proprietary systems.

 

[21:06] Practical Steps for CISOs: Building an Effective AI Governance Foundation

Jonathan Knepher:
So what advice would you give to CISOs when it comes to getting started? Like, assuming they're not in these groups that already have a lot of regulatory controls.

Walter Haydock:
The first three steps I would recommend to CISOs when it comes to AI governance are: one, establish a policy; that is how you communicate the organization's risk appetite, what's acceptable and what's not. Number two, do a thorough inventory of all the systems that are in use. If there have previously been, effectively, shadow AI systems that employees are using, make sure to include those too. You can have some sort of amnesty if there hasn't been a previously enforced policy; just make sure that you get a full list. And then once you have a full list, do a comprehensive risk assessment of all those systems to understand which ones are appropriate for the organization to use and will help it accomplish its business objectives. And then for the ones that are not appropriate to use, identify why not and make that clear to your employees.
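A small sketch of how steps two and three could fit together, assuming invented field names and a toy scoring rule: take the full inventory, including systems surfaced under the amnesty, and sort it into approved and not-approved buckets with a recorded reason for each rejection.

```python
# Toy inventory triage; field names, scores and the decision rule are illustrative.
def assess(inventory: list, risk_appetite: int):
    approved, rejected = [], []
    for system in inventory:
        if system["risk_score"] <= risk_appetite:
            approved.append(system)
        else:
            system["reason"] = f"risk score {system['risk_score']} exceeds appetite {risk_appetite}"
            rejected.append(system)
    return approved, rejected

inventory = [
    {"name": "Sanctioned chatbot", "risk_score": 2},
    {"name": "Shadow AI note-taker", "risk_score": 8},  # surfaced under amnesty
]
ok, not_ok = assess(inventory, risk_appetite=5)
print([s["name"] for s in ok], [s["reason"] for s in not_ok])
```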

Rachael Lyon:
So kind of coming back to employees and culture, we talk a lot about how you can foster and also create a secure culture of AI adoption. What are you seeing, and how are CISOs and legal teams building trust with employees but also setting guardrails for use? I mean, are you seeing any really effective programs here, versus the wild, wild west that I think a lot of companies are facing?

Walter Haydock:
Well, obviously my customers' programs are the best around, so we could just leave it there. No, I'm kidding.

Rachael Lyon:
It's awesome.

Walter Haydock:
So companies that are more effective are the ones that are clear in their expectations of employee use and that also make clear the organization is comfortable with some level of risk. Conversely, the organizations that have the most trouble are those that take absolutist positions without acknowledging the ambiguity, the gray area, that exists. Some examples would be companies that say AI is banned, you're not allowed to use AI. First of all, that's impossible. Every application has some sort of predictive or generative AI baked into it now, unless it's like TextEdit or Notepad. So if you're using anything more than that, you're using some sort of artificial intelligence. Second of all, I see companies put vague restrictions on use without elaborating.

Walter Haydock:
So for example, making sure that use is ethical. Well, I think we can all agree that AI use should be ethical, but what exactly does that mean? So clarifying those types of restrictions is important. And then also, with respect to intellectual property, I see a lot of provisions that say do not violate intellectual property protections of third parties. Okay, what does that mean? Does that mean don't train on third-party copyrighted data? Don't use models trained on third-party copyrighted data? So being very clear about that is important to build employee trust.

Jonathan Knepher:
And how does the CISO need to evolve to ensure that they're meeting these kind of ever-changing needs? Right, like there's something new in AI every month still. Right. We're in this rapid growth phase still.

Walter Haydock:
Your question about the CISO needing to adapt continuously is an important one, and it raises an even bigger question of whether CISOs should even be in charge of AI governance. They have become so de facto because it's related to risk and technology; ergo, that is the CISO's remit. That is how it has developed, and we're seeing this happen by default. However, there are some organizations that put AI governance under the responsibility of privacy, maybe legal, maybe even data science teams, and in some organizations, dedicated AI governance personnel who are managing AI risk from a holistic perspective. So that is a key question: who should be in charge of AI governance? Once you've decided who is going to be in charge of AI governance, then to stay on top of all the developments that are continually popping up, I recommend having a continuous monitoring program. And this means that rather than doing once-annual risk assessments of all your models and vendors, you have a continually updated view of them, which obviously is something that StackAware offers to our customers and is happy to do for you. But whoever's going to do it, make sure that it's getting done, because vendors are adding new capabilities to their products all the time that are changing the risk picture for organizations.
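To make the continuous-monitoring point concrete, here is a minimal sketch that flags any system whose last risk assessment is older than a chosen window so it gets re-reviewed; the 90-day window and the example records are assumptions, not a recommendation from the episode.

```python
# Flag systems whose last assessment has gone stale; window and records are examples.
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)

def stale_reviews(records: list, today: date) -> list:
    """Return names of systems overdue for re-assessment."""
    return [r["name"] for r in records if today - r["last_assessed"] > REVIEW_WINDOW]

records = [
    {"name": "Vendor A LLM", "last_assessed": date(2025, 1, 15)},
    {"name": "Vendor B transcription", "last_assessed": date(2025, 6, 1)},
]
print(stale_reviews(records, today=date(2025, 7, 1)))  # ['Vendor A LLM']
```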

Jonathan Knepher:
I think it's interesting that you brought up the whole question of whether this should be under data privacy or legal. Do you have an opinion, based on all the companies you've worked with, on where you see that working out the best?

Walter Haydock:
I think security teams generally have the organizational or process infrastructure in place to best handle some of these issues. Now, where it gets challenging is when you're talking to the security team about doing environmental impact studies for artificial intelligence systems, which are required under certain circumstances for ISO 42001 for certain types of systems. That's where it gets a little bit challenging; they might be understandably out of their depth in that side of things. Additionally, the security team may be a little more conservative than the organization overall in terms of deployment of artificial intelligence. To that, I would say it's important for security teams never to be the approval or disapproval authority for the deployment of new systems. I think that is never an appropriate role for a CISO or delegates. The business should be leading that and taking ownership of the decision, but also the risk, when it comes to deploying any sort of information system.

Rachael Lyon:
Just curious, you know, because kind of looking at the landscape of an organization for something like AI, do you see a Chief AI Officer or some other role that perhaps reports into the CEO becoming the norm, particularly for multinational organizations, but, you know, even for commercial mid-market organizations? I mean, is that a way forward that people need to be thinking about, so you've got someone who understands the security and the business and the policy, all of that, kind of a specialist to help advise the other teams?

Walter Haydock:
In the medium term, we're going to see a lot of Chief AI Officers pop up all over the place across organizations. And I think this is going to follow the path of the Chief Information Officer. As organizations became increasingly based on digital platforms, we saw the need arise for a senior executive who understood all of the cross-functional needs in order to develop an effective roadmap for technology implementation. Now, in the long term, I think the Chief AI Officer is going to go the way of the Chief Information Officer, where we're seeing in a lot of organizations that the CIO doesn't really exist anymore, because technology has become so inherent to and built into company operations that there's no longer a need for it. So medium term, I think there will be a lot of Chief AI Officers, but long term, I think they will merge into the organization as a whole.

 

[29:10] Walter Haydock’s Career Path into Cyber

Rachael Lyon:
Yeah, that sounds logical. I'm ready to get to the personal questions, John, but he knows this is my favorite part, because it's always so fascinating how people find their way to cybersecurity. And we have a lot of people on the podcast who have started their careers in the armed forces, government, et cetera. So I'd be curious, from your experience, how did your work in counterterrorism translate into and inform how you guide risk management strategies for CISOs today?

Walter Haydock:
Working in the counterterrorism field equipped me to deal with a lot of different data sources and understand the ways to parse them, the ways to analyze things and come up with actionable recommendations, which as an entrepreneur is key, because you have essentially infinite data sources that you could consume. There are hundreds of thousands of newsletters, YouTube channels, and social media feeds that you could review, and you could spend your entire life attempting to chip away at those. But the key thing is pulling out the signal from the noise, identifying the most important step to take, and executing that step in a timely and effective fashion.

Rachael Lyon:
Yeah, so like situational awareness, right? Applying that same idea. Right? Yeah. And then being able to act on it when you have the information you need.

Walter Haydock:
Another important thing that I pulled from my military experience is decision making, which you alluded to. There are very smart people in the private sector who are able to analyze information very effectively and come up with essentially unlimited opinions and potential ways of moving forward. However, those are effectively useless unless you actually implement one of those ways forward or one of those recommendations. So analysis is never lacking in the private sector. Decision making is often lacking, and that is the key difference between organizations that survive and those that fail.

Rachael Lyon:
That's interesting. It's so true. It's so true. I know we're coming up on time. John, do you want to close us out with the final question?

Jonathan Knepher:
I guess, you know, what's kind of the end summary for our CISOs? How should they be moving forward, how should they be interpreting all of this, and how do they keep themselves out of trouble?

Walter Haydock:
With artificial intelligence, the traditional approaches of risk management, mitigate, transfer, accept, avoid, still apply. It's not magic. So having an effective governance program requires analyzing the situation and then making decisions to move forward. However, artificial intelligence brings with it some wrinkles. It has new dimensions. There are new capabilities that a lot of folks are not necessarily intimately familiar with. So if you have the resources, in terms of personnel, in terms of training budget, in terms of technology, then dedicate some budget to getting people smart on these tools and technologies.

Walter Haydock:
And then if you don't have the bandwidth to do that, bring in outside experts, work with your peers to understand what they're doing, copy those best practices to the extent that you're allowed to, and don't try to reinvent the wheel.

Rachael Lyon:
Nice. And I would love to give a shout-out to your Substack. Do you want to share with our listeners the content that you're sharing there?

Walter Haydock:
Yeah. So StackAware's blog is easy to navigate to. It's blog.stackaware.com, and there, on a regular basis, either myself or guest contributors are bringing the latest and greatest when it comes to actionable AI governance and security recommendations. We have a policy at StackAware of dogfooding, in that we always eat our own dog food and we don't create marketing fluff. We only take actionable guidance, stripping out anything that may be confidential or proprietary to our clients, and then making that publicly available. We aren't worried about diluting our competitive advantage so much, and we use those actionable recommendations and guidelines to help educate our customer base and also just get the best practices out there. So feel free to subscribe.

Rachael Lyon:
That's wonderful. It's nice to have resources like that. They're incredibly valuable as you try to make sense of this kind of world that we're moving into. Walter, thank you so much for today's conversation. This has been really insightful and appreciate your perspective. I think our listeners are going to get a lot out of this episode.

Walter Haydock:
Well, thank you very much, Rachael. I appreciate it. And Jon, great speaking with you.

Jonathan Knepher:
Thank you.

Rachael Lyon:
Fantastic. And as always, John, what do we like to tell our listeners to do?

Jonathan Knepher:
Please smash the subscribe button.

Rachael Lyon:
And you get a fresh episode every Tuesday right to your inbox. So until next time, everyone, stay secure. Thanks for joining us on the To the Point Cybersecurity Podcast, brought to you by Forcepoint. For more information and show notes from today's episode, please visit forcepoint.com/podcast, and don't forget to subscribe and leave a review on Apple Podcasts or your favorite listening platform.

 

About Our Guest


Walter is the Founder and Chief Executive Officer of StackAware. He launched the company after seeing teams waste money on fancy software tools and time on trivial issues while missing the biggest risks, all because of poor management and governance. AI only adds fuel to the fire, making harnessing it responsibly a top priority for every business. He was previously a Director of Product Management at Privacera, a data governance startup backed by Accel and Insight Partners, as well as PTC, where he secured the company's industrial IoT product lines. Before entering the private sector, he served as a professional staff member for the Homeland Security Committee of the U.S. House of Representatives, as an analyst at the National Counterterrorism Center, and as a reconnaissance and intelligence officer in the Marine Corps. Walter is a graduate of the United States Naval Academy, Georgetown University's School of Foreign Service, and Harvard Business School.