
Navigating AI Ethics: Human-Centered Design, Regulation, and the Price of Innovation with Erica Shoemate
About This Episode
As AI adoption accelerates, innovation is outpacing governance. In this episode, Rachael Lyon and Jonathan Knepher sit down with Erica Shoemate, founder and principal strategist of EN Strategy Group and former national security leader, to examine the ethical and operational consequences of AI-first decision-making.
From high-profile researcher resignations to “layoff remorse,” Erica challenges leaders to rethink automation strategies that prioritize efficiency over people. She explains why human-centered design must guide AI deployment, why policy should be built alongside technology rather than added later, and how monetization in generative AI raises new privacy and trust concerns.
The conversation also explores youth safety, deepfakes, and the global regulatory divide. Erica makes one point clear: humans own responsibility for AI systems. Organizations that treat AI like critical infrastructure (monitored, maintained, and governed with rigor) are better positioned to build lasting trust while advancing innovation.

Welcome, Erica Shoemate
Rachael Lyon:
Hello everyone, welcome to this week's episode of To The Point Podcast. My name is Rachael Lyon, and here with me today is my co-host, Jon Knepher. Hi, Jon.
Jonathan Knepher:
Hi, Rachael.
Rachael Lyon:
So I am so excited for today's guest, and we're gonna get to dig into some really meaty topics and, um,
I don't know if I should wait to make it the first question or if I should preface it now. Um, I think I'm gonna make it the first question, so I'm gonna create a little drama here. Uh, first of all, please welcome to the podcast, Erica Shoemate. She's an international bestselling contributing author, tech policy leader, and maternal health strategist and advocate. She previously served as a national security leader and analyst at the FBI and across the US intelligence community. Currently, she is dedicated to transforming policy landscapes across multiple industries.
Wow, Erica, what a background. Welcome.
Erica Shoemate:
Uh, thank you so much for having me. Yes, it's truly an honor, uh, to be part of your show today and also to be able to have a bit of conversation with your audience. So cannot wait.
AI Ethics: Why Are Researchers Speaking Out?
Rachael Lyon:
Wonderful. So I already prefaced, I've already set up, I got to ask you this question, Erica, because you have, you know, extensive knowledge, right, in, in, in AI. Um, and in the last week or so, uh, there were two kind of notable departures from, uh, Anthropic and OpenAI, um, you know, researcher types. And, you know, one of them, when they left, they left out of concern of where AI is going. And one stated, humanity is in peril, which is pretty, pretty significant. I mean, what's your take on that? Because this is a hot topic right now.
Erica Shoemate:
Yes, a very hot topic. And it's funny that you asked this question because I was recently on a big AI human-centered design panel about a week ago here in DC. And some of the discussion was around AI and, you know, where we see it going and what leaders should be focused on, and the whole kind of North Star of this panel discussion was truly around human-centered design. And I can honestly empathize with the researchers, not having, you know, inside knowledge of either, you know, resignations or companies. What I do know and what I do understand, based on my own expertise and, and research, and what I'm focused on with human-centered design, is that safety and ethics lens. And the concerns that, you know, have been raised are, you know, valid, because those are things that keep me up at night when we think about, again, what are we allowing this technology to do? And are there humans that are actually monitoring these things? Because also, to tie into this kind of big great resignation of these, you know, two, um, you know, top leaders and researchers, the bigger piece around this is also laying off employees in general, which we've seen a lot of, and anticipating that somehow this technology is going to make these companies more efficient, more effective. And what I've said a lot, and even what I said recently, was that I think there's going to be a lot of layoff remorse. And we have seen it with some companies already asking some people to come back, because they realize that no, AI actually cannot do some of the things that we needed it to do. And then my second piece to this kind of safety and ethics, uh, discussion and concerns is: who is monitoring the AI and auditing it if we're getting rid of the best, or the best of the best are saying no, my values are not aligned, right, with your profit chasing.
Rachael Lyon:
Yes.
Erica Shoemate:
Over humans. And then we, the people globally, should then say, okay, if these people are leaving, who's left behind? And who is then going to hold these leaders accountable? Because regulation is not keeping up, at least where we are. There is some global regulation that's like, you know, in EMEA and APAC, and, you know, those areas. But when you look at the Americas, it is definitely not keeping up nearly as much. And so those are the things that concern me at like a very 30,000, 50,000 feet level. It's like, okay, if these people are saying this, then what is really happening? And is that concern that is being raised— is it even more— should we be even more alarmed —right, right— than even what is being stated.
Rachael Lyon:
It's like, so what's the price of an innovation-first approach versus— yes, innovation and profit, right?
Erica Shoemate:
Because I think that it is totally acceptable to have profit and innovation, right? Sure. I think, though, that where I draw the line is that people, the humans, have to always come before the profit. And I think that that has been the miscalculation, is that the greed, the grift, is more important than the people who will be using the technology, as if people don't get a say in what they decide to consume long term. And so those are the things that I am really thinking about as a kind of theme to these resignations. And also, I would be asking leaders, because of these two big ones, are you anticipating— and if you are anticipating, what is your succession planning for this? One, brand safety, not just PR, brand safety, people safety. And how do we mitigate the things that have been raised? Because people don't just walk away from jobs, they're walking away from something that is inherently not aligned with what they believe will serve the people long term. Right.
Jonathan Knepher:
Do, do you think that it's inherent to the technology, or do you think there are things that can be done to mitigate these risks? And is that regulatory, or, or where would where would those changes come from?
Profit vs People: Who Owns AI Risk?
Erica Shoemate:
Oh my gosh, I love this question. Um, again, it is like the hot topic, right, of the end of 2025 and now 2026. I think that, yes, at the end of the day, people create the technology. And as I've stated before, we cannot just simply point to the technology when it makes a mistake and say it was the technology's fault, because humans created the technology, regardless of what it ultimately does as far as, like, modeling, remodeling. Humans own the product innovation. And I think that is important, before the product even launches, that we understand— when I say we, we the company will understand— who is ultimately responsible if and when, because it will be a when, something goes awry. And if we don't know all the things— because sometimes you won't, because it is technology, right?— at least, what mitigation strategy do we have in place? So to fully answer your question in a very pointed way, humans own the responsibility of the product. And from a regulatory standpoint, I absolutely believe that there should be at least general regulations that point to when there is the most egregious harm. Right.
Erica Shoemate:
And to pretend that we cannot agree on 50,000 feet level policy is asinine. And I would tell anyone that may be watching or listening that there is something that can be done. And the fact that we're still, like, literally having whole discussions around Section 230 here in the US is laughable.
Rachael Lyon:
Right.
Erica Shoemate:
In 2026. So that's just a perfect example too of when we don't want to make a decision, then we're asking who's accountable. I'm going to always say the human.
Rachael Lyon:
Right. And it's— you know, you, you keep hearing kind of rising concerns as well. And I've been seeing some things in coverage about, you know, AI being devious, right? Um, AI being deceptive. There was a Wired article where this fellow set up a pretend company with agents. And one of them lied to him about what they had done. And he called him out on it, called the AI out on it.
Erica Shoemate:
I do it all the time.
Rachael Lyon:
And the AI is like, uh, oh yeah, my bad. You know, I— yeah, you're right. That's not correct. I lied to you. I won't do it again. But yikes. You know, if you—
Erica Shoemate:
Scale.
Rachael Lyon:
Yes. Yes.
Can You Trust What AI Tells You?
Erica Shoemate:
Oh my gosh. This is a— I love the point around this, right? Because I'm always— I call it, uh, I call him chatty catty, whoever, whichever AI assistant I'm like talking to for the day. I'm like, okay, that is not what I asked for. This is incorrect, and that is not even inherently close to what I asked for. And to your point, it's like, oh my bad. And sometimes I'm cackling because I can have humor because I actually understand the technology. But to your point, the seriousness of that is, what about the people who blindly go with whatever AI is outputting to them? Like, the reminder to anyone that is listening is you get out what you put in, right, when it comes to AI.
Erica Shoemate:
And yes, there's a little caveat at the bottom— AI makes mistakes— but let's be clear, that is clearly not enough because people have become so comfortable with the technology that there is a lack of checking, a lack of extra critical thinking, right? And I think that more actually should be done as it relates to the modeling and remodeling. And I think that also this is where, you know, you hear the conversation around bias, biases, right? If you don't know that the technology is not giving you the right information because it's not your lived experience, mm-hmm, right, right.
Jonathan Knepher:
Yeah. Oh, absolutely.
Rachael Lyon:
I'm curious on another level too, because this has also been bubbling up. You know, as we know, ChatGPT is going to start running ads, and there's been a lot of discussion on that. And I think one of the things, when one of these folks left, one of the researchers left, they had flagged kind of this concern about people telling chats, you know, their deepest, darkest secrets, their health problems, and all of these things. And if you start monetizing information like that, what are the implications? Because to your point, people treat ChatGPT like a friend, like a trusted— literally— confidant. And that information is now out in the ether, and it can be utilized or weaponized or, you know, whatever the case may be. And how do you guardrail that?
Erica Shoemate:
I absolutely love this question. As someone who has worked in monetization and advertising, um, trust. Uh, one thing I also would say to your audience— I know you gave a, um, kind of good high-level of where I come from, but I do just want to, like, quickly share with the audience, because I think it'll give some good insight into what I'm, I'm bringing— is that, to your point, I spent more than a decade in national security and intelligence working every different matter you could think about, from counterterrorism, counterintelligence, transnational organized crime, violent extremist groups, things focused on critical infrastructure, national kidnappings, crimes against children, sex trafficking, human trafficking, all the things. And also threats emanating, um, from abroad, and also working in high-threat operations, um, areas abroad. Wow. And being at not only the FBI and some other intel agencies, and then moving from that into big tech at places like Twitter, Amazon, supporting Meta, being an advisor for a lot of startups. I really come at it from a lens also of being a first-generation college grad from inner-city Memphis.
Erica Shoemate:
So all of my professional and educational background, put together with my lived experience, is a lot of times very different from the people that I have worked alongside, been with. Because a lot of times— for your audience, you can't see me— I am a Black, um, woman. My whole world has been different, right? And that's not to say that people, no matter where you come from, don't have your own walks of life. But to get deeper to your point, the guardrail piece: when you think about ads, I actually think, because we're still learning AI and how it works and how it evolves, it has no place. It has no place in this environment that is very sacred. If you think about it, it's exactly what you said. People are telling their darkest secrets, trying to figure it out. Because for once in maybe someone's lifetime, particularly people I know from where I come from, things are expensive— expensive. Getting an attorney, getting access to justice, it's costly.
Erica Shoemate:
Like, it's not, it's not equitable. And I'm not saying that AI is the fix for this. What I am saying is you can at least ask this thing questions that give you a pretty good synopsis of where to go, right? But if now you're monetizing the things that I am inputting, that are honestly confidential in many ways to my life, mm-hmm, how can I actually trust you? This is not just like social media in the ethers of, like, today the sky was blue, and I had an amazing day, and it was rosy. People are having real conversations. And my question is even bigger on not only what are you advertising, but what are you giving the advertisers exactly as it relates to the user data— what types, and what does that look like? And personally, when I'm in this environment, because usually I'm generally working, I don't want to see the ads, right? Right.
Rachael Lyon:
Yeah, same.
Erica Shoemate:
Yeah, right. I— it's— I don't like it. I think it's gonna be more problems than solutions, and we already don't even have policy, generally speaking, at least where these parent companies are, that's going to keep that environment as tight as it should be, as it's doing piloting rollout or whatever, because I'm sure it's going to be in smaller pieces. Yeah. However, I think it has no place when we don't fully understand how it, how it's going to operate in the next even 2 to 3 years.
Social Media Parallels, Youth Safety, and Parental Controls
Rachael Lyon:
It's like the early days of social. Jon and I were talking about that, uh, you know, just kind of the Wild West a little bit.
Erica Shoemate:
Yeah, I grew up on social. Yeah, literally. And, in the beginning, I loved it, loved it. It kept me connected. It— people I hadn't seen in years, and was like, are they okay? And being able to stay connected, it still has a place, right? But it's the monetization piece. I remember when it became really wild. I'm sure you guys remember. And not that it's perfect now, but it got really interesting for a while, right? Yes.
Erica Shoemate:
And a lot of cleanup happened. But why would you do that to technology that people are using legit in their everyday workings, their everyday lives? And even though they may have been working on this, the people who own this technology for a decade, most people are just now starting to even have somewhat of an interaction in the last 2 years, and some even more recent. I think it's, it's not going to be— it's not going to be good.
Rachael Lyon:
What's interesting there, and then sorry, Jon, I know you have a question, but I— It's okay. You know, I just do wanna— What I find so interesting as well, Erica, is these leaders of these social media companies don't let their children get on social media.
Erica Shoemate:
No!
Rachael Lyon:
So, I mean, that seems very telling.
Erica Shoemate:
Uh— It does. As well, but— 'Cause they know the psych behind it, right?
Rachael Lyon:
Mm-hmm.
Erica Shoemate:
And— How it's designed, yes. Yes, and how it's designed and what is, what is in— It's addictive. It is, absolutely. Addictive.
Rachael Lyon:
I love my TikTok. I'm not gonna lie, they reel me in with the end of the videos. Yes.
Erica Shoemate:
And I don't even have an account, like, for so many reasons. I turn my notifications off on my social media, um, because I understand also how my brain now processes things. And as someone who has a late diagnosis of having a very neurodivergent brain, just in the last couple years, I understand now why that has been very important to me. And for my own daughter, there are certain apps that she absolutely cannot have. She has begged for certain things. Begged.
Rachael Lyon:
Yes.
Erica Shoemate:
And I am like, absolutely not, because I know the, the good, the bad, the ugly, and the indifferent. And I am not willing to— and just recently, I'm so glad that I, in many ways, stood my ground because there's something that recently happened in the community where one of these apps did some real harm. Wow.
Jonathan Knepher:
Mm-hmm.
Erica Shoemate:
And it has not been good. Wow. Um, and it, it was just too close to home. And it was just a reminder for me that this is the right thing. I am standing on this, and this is why. And I think that, as I remind parents, when they don't understand these apps and their kids are asking for them, they have to get their heads out of the sand. And it's not for them to be experts on it, but you cannot not understand how these things work, how chats inside of some of these places work, and the codes that happen, and all these different things. And then this is what I say to society and to leaders who are operating these companies, and to lawmakers: I think that it is absolutely unfair and unrealistic to say that because a company has now provided parental controls, that is enough to not create regulation on these companies.
Erica Shoemate:
And this is why I'm saying that— all of us remember that our biggest protection used to be around kids. Everyone had a duty. Everyone, right, in society had a duty to protect kids, not just the parents. And now, in a society where we generally have two, you know, income households— if you're in a two-parent household, parents work, they're not stay-at-home parents, generally speaking. So you want them to work, provide, give these kids all this amazing stuff, and you want them to now have parental controls. And let's say you have a tween or a teen that's going through hormonal changes, and you have to say, I need to check your phone, or I need to check your iPad, and I need to keep my sanity, right? And let's hope that you don't have the super smart kid who has another phone that doesn't have this information, because the kids have to share this access with the parents, and we know what a teenager is like.
Rachael Lyon:
Oh yes, yes indeed.
Erica Shoemate:
I think it is not only unfair, I actually think it's irresponsible to put that much weight on the parent and the parent alone. And I think that we hold a greater responsibility, not just as a society, but as the lawmakers, as the leaders of these companies, to do more, to say more, and call a spade a spade.
Jonathan Knepher:
So how do you reconcile, though, the conflicting things here, right? Like, like, as I unpack that part of the conversation, right? AI convinces people its answer is right, whether it is right or wrong. Mm-hmm. We need parental controls. Right. We need regulation. But yet privacy and equity are still important. Like, like we don't want these parental controls to require every one of these online sites to collect all of our personal data just to prove we're old enough. How— what's the end game to reach all of these, these needs concurrently?
Erica Shoemate:
Oh boy, you do not want my, like, very direct response to that. So, um, I will say, as much as I am truly a believer in privacy and, and data protection when it comes to kids, right? This is where I not only struggle, I am like, I would rather go harder if it's going to protect the kid and then scale back, than to, like, do little and then kids are still being harmed. And then we're trying to now amend the law that it took 20 years to get, right?
Rachael Lyon:
Right.
Erica Shoemate:
And so when it comes to kids in particular— and when I say kids, I am saying the very young up until at least 15, 16, that's when you kind of start to kind of round out a little bit more as to like kind of what's happening in life. Yeah, it's not perfect, but that's generally speaking. That's when I say, okay, you have some more autonomy to then be able to say, no, I get to own this part of me, and this is what I can now have— say that mom or dad, you can have access to this. But short of that, at 13, you get no autonomy with your parents, right, of sharing. What? Like, at 13— and I was a very mature kid. Like, my mom literally would tell you she never had to worry about me, but that didn't mean that I wasn't curious. And my mom was also very open and transparent with me. Again, going back to parents, some parents sometimes head in the sand, or they don't fully understand.
Erica Shoemate:
If you are that parent— again, I'm not judging; what I am saying, right, is everybody has a different way of parenting. And if that 13-year-old has a parent that is not fully in the know, they run a risk of potentially being in spaces and places that, if there were some more guardrails from a regulatory standpoint, they may have never had access to. And so in some ways, I do agree with the Aussies on the 16-year-old rule, you know, you can't be on social media up until then. And I know there's the argument of, like, but what about the kids— let's say you're in foster care— who are being harmed? These are ways that they can have an outlet, right? I get that. I still think that at least from 13 to 15, there needs to be more regulation around our children, because that's when you're growing the most hormonally. You're changing the most, and the frontal lobe has not even fully developed, you know. So that's what— that would be my answer. I know some people will hate it, but I, I do strongly believe in that.
Rachael Lyon:
I'd love to dig a little more into guardrails and, you know, particularly around GenAI. You know, there, there are some, and I'd be interested in your perspective on what these are, but also where they fail. And, and I, I'll say, uh, a friend of mine, um, she was talking— I forget which one, maybe it's Claude or whatever— but she basically asked it, um, how could I start a cult, right? Just, just to see what would happen. She's a behavioral, you know, psychologist, and the prompts went through such that they ended up writing a manual for her on how she could be a cult leader and start getting members, right? It literally was a play-by-play book on—
Jonathan Knepher:
In excruciating detail, by the way.
Rachael Lyon:
Yeah, exactly. I mean, Jon and I know this person, uh, pretty well.
Erica Shoemate:
Okay.
Rachael Lyon:
Um, but she was able to do it, and then she washed it through another Gen AI that helped her, you know, better frame it up a little bit.
Erica Shoemate:
Yeah.
Rachael Lyon:
Do the little nice things and— I mean, are we supposed to be able to do that, Erica?
Erica Shoemate:
I would think it would flag it. I don't— I guess the better question for me is, like, what was the initial prompt, right? Right. Because I have done things to honestly test the guardrails, right, on, like, abuse, um, like, you know, global use policies of what you can and can't do. And sometimes I have been surprised what it doesn't catch versus what it does. I will say what I feel like some of the biggest guardrails have been around is things around political, right, conversation. It is quick to be like, I can't give you that, I can't tell you that, when, when I'm being honest, like, sometimes the question is not even political, right? And it's very interesting. And this is— that is where it's tighter guardrails, not surprised, but interesting, um, in this day and time.
Erica Shoemate:
But to create a cult, my question around giving this whole, like, manual is, one, what was the initial prompt, right? And two, was the cult framed as a cult?
Rachael Lyon:
It was. No, she was very definitive in what she was trying to accomplish. And, you know, I think she kind of massaged some of the prompts as she went along, where it's like, you know, it didn't give her what she was looking for. You see, you just change the language a little bit, and then you get what you need. Neat. And, you know, so it was a very lengthy conversation.
Erica Shoemate:
Okay.
Rachael Lyon:
That she had, but she was very, very definitive that, okay, I want to start a cult and, okay, how can I get started?
Erica Shoemate:
And, and then— and, and I'm almost interested, okay, when it comes to cult, like, yeah, cult, we usually socially— this is nothing good. I guess the other question was, is this cult— are they talking about harm? Are they talking about, um, you know, what kind of mindset, I guess, is the bigger question. Because sometimes it could have been, I'm not flagging the word cult by itself, but if it was paired around other keywords that signaled harm, right, then that could very well be what the model was waiting for, again, with me not having seen it. Yeah, but sometimes, like, what I know— because again, um, I understand, like you said, the machine modeling of how the sausage is made— some of the phrasing around it can sometimes trigger whether the models are going to flag it or not. So I, I think that that would be more so too, because cult— again, cult, I would signal, is not good. But depending on the model, I'm wondering if it was waiting to see if it had, like, harmful trigger words around it. Like, are we going to be doing things— this is so weird— but, like, related to, like, bodily fluids or— Right, right. You know, rituals.
Rachael Lyon:
Exactly. Yeah, sacrifices or whatever the case may be, right? Yes. Yes.
Erica Shoemate:
So then I'm like, it should immediately be flagging, right? But if not, yes, then that is a problem. And two, I'm always telling people when they see issues like this inside of these systems to then report it directly to the services as well, the platforms. Definitely. So this is very interesting.
Rachael Lyon:
It was, yeah.
Jonathan Knepher:
So how do, how do we connect together, though, like, responsible AI use, right? Like, because, like, take this, this scenario, right? Like, she was doing it, uh, as part of research. I think one could use the technology, you know, from an entertainment standpoint, right? Like, she could have been writing a book on this, who knows, right?
Rachael Lyon:
Right.
Jonathan Knepher:
But, but this connects together, like, where do you, where do you draw the line on the ethics and the security of these platforms, you know, compared to, to what you talked about before. Like, people are using this in lieu of having an attorney, right?
Rachael Lyon:
Right, right.
Jonathan Knepher:
Is that, is that good or bad? Like, on one hand, like, there's an argument that, like, you know, well, you really should have a real attorney, but if the other option is you have nothing, like, it's better than nothing, isn't it?
Rachael Lyon:
Right?
Erica Shoemate:
Yeah, yeah. Nope, I think that this is— it is, and it is a fundamentally very difficult, um, thing. But I think, again, this goes back to my, my, my thoughts around that 50,000-feet kind of understanding. And like, that's not to say, like, let's just do the bare minimum, but as we learn more, then we do more, right? And as it relates to this responsibility and ethics, I think some of the things that I think about are the risk teams around us, like from the, the integrity, risk, compliance side, and, and how those teams are thinking about some of these issues. Maybe we slow down the launches when needed, because I know that there's this whole kind of belief that we should be the first, the fastest. And being the first and the fastest— is that actually what we want and need?
Rachael Lyon:
That's the question.
Jonathan Knepher:
Is it?
Rachael Lyon:
Yeah.
Erica Shoemate:
And the reason why I'm saying it in, in that fashion, right, like almost rhetorically, is because what I always say to leaders that are, that are builders and building technology is that if you're trying to launch, I hope— and I would actually say you should require it— that policy and governance are being built alongside this innovation as it is headed to launch, not as an afterthought, right? Because what I have seen from my experience is that policy, a lot of times, is built after the product, bolted on after.
Rachael Lyon:
Yes, yes, yes, yes.
Erica Shoemate:
And it's so reactive, versus doing the, the assessments— the threat assessments, the risk assessments— on these product features and innovation. And a lot of that is tied to profit, right? I know, right? This is where I, I would even challenge board members— some people who may be listening that are on boards— to think about: profit is important. Like, a company needs to be healthy. But if the company is financially healthy, then at what point is, is it okay to be like, I can wait a bit to be transparent with my consumers, right? And do the right thing by them. Because if I'm doing the right thing by them, the more likely they're going to want to stay with me anyway, because they believe in my brand, right? So that if and when something does go awry, because we've been so transparent, they're more likely to stay in the game with me, because we have done these things. And we will communicate early and often about what we are doing when something goes bad, because we know that, that cyber, you know, warfare, cyber attacks, they're inevitable in this day and time, right? But I definitely say to every single leader builder that we must be creating policy alongside the guardrails. And I think that that doesn't necessarily fix ethics, right? Doesn't necessarily fix all the issues. But what it does do is it gets— it gets the, the builders to think earlier and often. Like, you have to come at it with critical thinking, versus trying to have— I call it the Band-Aid policy— to try to fix the issue that has been caused by the technology.
Erica Shoemate:
And, and is it better— I loved the last part of what you said— is it better to have something versus nothing? I depend— it— I would say it depends on what the something, right, versus the nothing, right, is.
Policy in Innovation: Integrating Risk Management
Rachael Lyon:
Because that's, that's ultimately the challenge, right? I mean, there is this, I think, need to accelerate operations or operationalize AI in companies, right? And you don't wanna slow things down 'cause there's all this pressure. And you're, you're seeing out there some research reports about, yes, they're seeing increases in productivity because they are adopting these tools, and, and workers are starting to actually work longer hours, right? Even though they're utilizing these tools. So it's, it's becoming this vicious cycle of, how do you sustain that over time, you know, as well as take advantage? So it's, it's kind of like this multi-threaded, you know, existential threat of, you know, yes, we need ethics in AI, but I also need AI to be productive and sell more products and scale. And absolutely, you know, it's— for leaders, I mean, what would be your advice to leaders on how to navigate this path forward?
Erica Shoemate:
Yeah, I think, for one, I would challenge any leaders, especially the top decision makers who are deciding who gets laid off and who stays: let's be a bit more transparent. If you're laying people off, admit that it's not just because you overhired in 2021 or 2022. This goes back to the human-centered design piece. If you're going to lay someone off in this oversaturated market, where we know finding another job will be very difficult, then if you have any level of a soul and morals, and I'll put it even closer, if you are someone who had to work very scrappy to get where you are, and now you have a healthy company, remember what it was like when you did not have a lot. And I am sure that at one point you said, I want to create a company that provides jobs, that lets people have a decent life. So I would challenge leaders: let's not lay off people anticipating that AI is going to fix it, until you know the AI is going to fix it. Because what I also see, Rachael, on the flip side, is that yes, AI is doing great things, but what some companies are also experiencing is that the AI they have implemented is not having the impact they thought it would have.
Rachael Lyon:
Right.
Erica Shoemate:
And scaling-wise, it's actually not producing better output than the company was getting prior to the implementation. That is where I always say a human is not replaceable. Yes, there are some mundane tasks that AI should absolutely be taking over. However, when it comes to institutional knowledge, AI cannot replace that, because some things are just in our heads, and there are certain nuances of human connection and emotional intelligence that you may be able to train AI to approximate, but it's still not human. So there's a nuance here that says we still need humans as part of this integration until we have, I won't say 100% certainty, but a greater assurance that this technology is going to do what we want it to do. And even then, you may lean out certain parts of your operations, but can those people then be moved into different parts of your organization? I'm speaking to this specifically because of this oversaturated market of people inside these organizations, tech companies and the like, who are being laid off. And they will not find a job. They will become freelancers.
Erica Shoemate:
They will be left trying to figure it out. And they have families to feed. Unless you are in healthcare, according to the most recent numbers, particularly here in America, the job market is very tough. And I'm like, why would you do that to someone when you actually have the ability to keep people gainfully employed and still make a profit, because you're experiencing great profit margins that have your company financially healthy? So if I had a conversation with a leader, those are the questions and thoughts I would bring to the table. They may not like me when I leave, but I would absolutely state that upfront.
Jonathan Knepher:
Yeah, I think you bring up a great point there. We need to treat AI as a tool and not a replacement, right? AI in the hands of a skilled operator is powerful, but you can't just let it go, right?
Erica Shoemate:
You just can't let it go. And even to that CEO or COO or CFO: like you said, you can't just let it go and take your hands off of it. One thing I recently brought up in a conversation, and I mentioned earlier that I'm focused on critical infrastructure, is this: if we consider AI to be critical infrastructure, and in some ways it is, right, cybersecurity and all those things, then let's treat it like a critical infrastructure segment, a bridge. In normal scenarios, you would not allow that bridge to just go unchecked: no maintenance, no real inspection. Because I just said it is critical infrastructure. And let's take it even further: it is a hard target, and for those of us in the cybersecurity and national security space, we all know what that means, right? So if we're saying AI is critical infrastructure, a bridge, a connecting tool, we would have all types of extra frameworks, criteria, and guardrails, looking almost every single day at what it's doing, not just how it's performing for my dollars. What are the threats? What are the concerns? What are we doing to mitigate them? So again, to add to what I would be asking leaders, and what they should be thinking about: if AI is part of the critical infrastructure inside your organization, it is single-handedly holding up your platform.
Rachael Lyon:
Well, that's the thing. You're feeding all of your sensitive data into these AI platforms and just kind of trusting it's going to work out, that it's not going to take on a life of its own or make its own decisions about how to use that information. And they do act independently, let's say, even when there are guardrails. Those guardrails can fail. And that's pretty tough.
Erica Shoemate:
And they will fail sometimes. I've seen it while working on some of the most egregious things: sometimes the guardrail didn't catch the word. Going back to your point, it wasn't that the guardrail wasn't set up; something in the code did not click. And that happens, right? There was a bug that needed to be fixed, and the fix did not happen. But if you aren't doing regular maintenance, problems slip through. We have to treat AI, particularly as it relates to a company's infrastructure, with real care.
Erica Shoemate:
Literally, it is a pearl. It is so delicate, so precious. It's worth so much, and it can hold so much. But we have to know where it's going. What is it doing? How is it morphing? Is it morphing into something positive? Is it being shaped into the necklace, or into a bunch of nothing that no one can even make out what it's supposed to do? Because we want it to be the beautiful necklace that holds everything together, but we know that if that infrastructure is intruded upon in any major way, the whole thing falls apart. Now you are out even more money, because you did not invest in the things I'm mentioning up front, and you thought humans were expendable. You sold a dream that you were going leaner, when the reality was you went leaner for profit rather than for a better overall product.
Rachael Lyon:
It's definitely about what the context is.
Erica Shoemate:
It's the context. Most definitely. Context matters here.
The Future of Regulation: Prediction and Implications
Rachael Lyon:
So as we look ahead to 2026 and beyond, what is your prediction, in some ways: are we going to get there with regulations in the near term, or is it going to be like social media, which was essentially self-regulated for a really long time? Do you see that being the reality for the next however many years? And what are the implications or cascading effects of that, do you think?
Erica Shoemate:
Yeah, I think that, being here in the U.S., we won't see a lot of full-on hard policies in the next two to three years, which I think is unfortunate. I do love some of the things being rolled out at the state level. You do have states doing different things, but if we're doing it state by state, it's also very disjointed, right? And we have to do something, so I'm not saying states shouldn't act. But it would be nice to have a more holistic approach. I know there have been some recent callouts for new executive frameworks and the like, but I think we need something more concrete, even at the 50,000-foot level of frameworks and governance, and we're just not going to get that here.
Erica Shoemate:
It's going to be, to me, very similar to Section 230, which we talked about and talked about. Yeah, we got some little changes, but there was never a carve-out specifically for regulation of platforms here in America. We did get that in the EU, and we have gotten it in other regions of the world. So I do see the rest of the world continuing on; I think we'll see the EU leading and creating safer and better policies. Maybe it will be too much policy initially, and that will cause some heartburn from a company profit standpoint. So I'm not going to sit here and say that isn't a real implication of too much policy.
Erica Shoemate:
But I think some of that policy is going to be helpful for youth and child safety, right? And in the long term, from an innovation standpoint, I will challenge lawmakers to think about how we compromise without stifling that innovation, and allow people to continue to innovate. I think it's okay to have a bit more risk when children aren't the end users, right? So there can be a greater carve-out for that, but it doesn't have to be an all-or-nothing approach, and I feel like that's kind of where we are. I know deepfakes have been a thing, particularly around youth safety.
Erica Shoemate:
And I am glad to see, even here in America, not only discussions but real pushes to make policy changes. I know some people say, well, First Amendment, and, well, if it's not really the person, then how are we saying it's a problem, right? However, this goes back to
Rachael Lyon:
kids.
Erica Shoemate:
If it's a child and that thing looks like them, what are we even talking about, right? You're twisting yourself into a pretzel for literally no good reason. Take it down. Kids and young adults have committed suicide and are no longer with us because of what happened in their high school or middle school years, because the pain was too much, even from just sexting and things of that nature. And so to say, well, it was created with technology, so it's not technically them? But does it look like them? Are people saying it's them? It goes to NIL in many ways, right? Name, image, and likeness. And again, the threshold for me is our youth: when it relates to kids, this should be a no-brainer for anyone. With adults, that's where we get into a more nuanced conversation. But kids don't have the ability to decide what's out there about them and what's not. So looking forward, I think there is going to be some movement in that regard, but as a whole, I don't foresee us here in America moving nearly as fast or as far.
Erica Shoemate:
There are some executive orders pushing for faster innovation, right? And I always ask: at what expense, if there's no framework to balance moving faster? And around the world, I think we're going to continue to see a tightening of regulation, specifically the carve-outs for children, because children have been significantly harmed over the past 20 to 25 years, since the dot-com age and social media. Definitely.
Rachael Lyon:
Erica, this has been so much fun. I want to be cognizant of your time, of course, but thank you so much for this conversation. I've really enjoyed it, as I'm sure Jonathan can see from all my tangential discussion points. Your perspective is incredible and insightful, and I'm really excited that our listeners have a chance to hear from you.
Erica Shoemate:
Yeah, thank you so much for having me. I really hope your listeners enjoy it and find it helpful. You may not always agree, but hopefully you can understand the perspective it was coming from: not just personal opinions, but lived experience and things I have actually worked on. So thank you.
Rachael Lyon:
Thank you. And of course, to all of our listeners out there, thank you for joining us for yet another really awesome guest discussion. And please don't forget, Jonathan, drum roll please.
Jonathan Knepher:
Smash that subscribe button.
Rachael Lyon:
That's right. And you get a fresh episode every single Tuesday. How amazing. So until next time, everybody, stay secure.
About Our Guest

Erica Shoemate, Founder and Principal Strategist of EN Strategy Group, LLC
Erica L. Shoemate, MPA, is an International Best-Selling Contributing Author, Tech Policy Leader, and Maternal Health Strategist and Advocate. Erica previously served as a National Security Leader and Analyst at the FBI and across the U.S. Intelligence Community. In 2017, as detailed in the 2021 best-selling anthology to which she contributed, Special Delivery: From Pregnancy to Toddlerhood (A Little Perspective), Erica used her strategic analysis expertise to save her own baby’s life. Currently, Erica is dedicated to transforming policy landscapes across multiple industries. Her National Security experience is deeply woven into all of her work as a policy consultant, maternal health strategist, and advocate.
Erica is a Tech Policy Leader with extensive knowledge in cutting-edge technologies like Generative AI and AdTech. She advises ChatBlackGPT, an empowering platform providing accurate information to the Black community. As a dynamic Public Speaker, Erica shares her insights on Tech and Public Policy, Maternal Health, and Breaking Down Workplace Barriers. Her unwavering dedication lies in crafting inclusive policy frameworks for a better future.
Listen and subscribe on your favorite platform