
Exploring Rowhammer, ECC, and the Future of Secure Data Storage with JB Baker


Podcast

About This Episode

Welcome to a brand new episode of To The Point Cybersecurity, brought to you by Forcepoint! This week, hosts Rachael Lyon and Jonathan Knepher dive into a side of cybersecurity that doesn’t often get the spotlight: the ever-evolving world of memory, storage, and hardware security. They’re joined by JB Baker, Vice President of Marketing and Product Management at ScaleFlux—a seasoned expert with more than 20 years of experience at top companies like Intel, Seagate, and LSI.

Coming fresh off the buzz of DEF CON and Black Hat, Rachael and Jonathan kick things off by discussing grassroots cyber initiatives, before shifting gears to critical threats like Rowhammer attacks and new vulnerabilities emerging as AI transforms our approach to data and memory architecture. JB unpacks the complexities of error-correcting codes (ECC), new approaches to memory protection, and how open-source, community-driven projects are reshaping data center security.

From the impact of quantum computing on the encryption landscape to the ongoing power challenges facing data centers, this episode is packed with insights, real-world examples, and a look at how the future of hardware security will shape everything from AI to edge computing.

      Rachael Lyon:
      Hello, everyone. Welcome to this week's episode of To The Point Podcast. I'm Rachael Lyon, here with my co-host, Jonathan Knepher. John, hi.

      Jonathan Knepher:
      Hello, Rachael.

      Rachael Lyon:
      Well, I want to ask you something. You know, we're recording this, everyone, in August, right on the heels of Black Hat and DEF CON. And I always love seeing what news comes out of those events. And I feel remiss that I had not heard of this before. Have you heard of the DEF CON Franklin initiative?

      Jonathan Knepher:
      I have not heard of the Franklin initiative. I know of DEF CON, of course.

      Rachael Lyon:
      Sure. Of course, yes. So apparently it was stood up, I guess, a year or so ago. And it's a pilot program that pairs white hat hackers with water utilities. The pilot program looked at Oregon, Utah, and Vermont. So basically these cyber folks are volunteering their time to help look for vulnerabilities in water utilities and help them close some of these gaps to shore up defenses. I just think it's a wonderful project, and they're looking to scale to address the more than 50,000 water utilities we have in the U.S.

      Rachael Lyon:
      It's very ambitious and I just think it's wonderful. And I would encourage everyone to go to the DEF CON Franklin website to learn more, because I love these kinds of grassroots initiatives to help solve big problems.

      Jonathan Knepher:
      Yeah, absolutely. I will definitely be checking that out as soon as we're done today.

      Rachael Lyon:
      Absolutely. All right, well, without further ado, I'm excited to welcome to the podcast JB Baker. He is Vice President of Marketing and Product Management at ScaleFlux. He's got more than 20 years of experience in enterprise storage, specializing in secure, high-performance solutions for data centers. And since joining ScaleFlux in 2018, he's led efforts to advance computational storage, a critical technology as hardware security takes center stage. Before joining ScaleFlux, JB played pivotal roles at Intel, Seagate, and LSI. Welcome, JB.

      JB Baker:
      Thanks for having me. This is gonna be a fun conversation, like we were talking about earlier.

      Rachael Lyon:
      We haven't really looked at security through this lens before, so I think this is going to be a very illuminating conversation for our listeners. John, you want to kick off?

      Jonathan Knepher:
      Absolutely. So, JB, I think we'll start with Rowhammer. We've all heard of it from a decade ago, but it's coming back into the light now. Maybe you can explain to us how these attacks work and what we're seeing now that's allowing them to succeed.

      JB Baker:
      Sure. So Rowhammer attacks go directly after weaknesses in the memory cells themselves. They're targeting that physical hardware by, as the name implies, just hammering the cells over and over again, repeatedly accessing the same rows, which can then cause kind of a noisy-neighbor effect and cause adjacent cells to flip. Now you've got data corruption, which could be hidden. And when you cause those bits to flip and you've got that data corruption, there's the potential for changes in privileges and system crashes. And all of this can happen without the traditional malware signatures. So they're coming in and bypassing traditional software defenses by manipulating that memory directly.
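JB's description of hammering rows until neighboring cells flip can be sketched as a toy simulation. Everything here is illustrative: the memory layout, the `FLIP_PROBABILITY` disturbance rate, and the access count are made-up stand-ins, since real Rowhammer behavior comes from DRAM physics, not a fixed per-access probability.

```python
import random

# Toy model of a Rowhammer-style disturbance: repeatedly "accessing" one row
# gives each bit in the physically adjacent rows a small chance of flipping.
random.seed(7)

ROWS, BITS = 8, 16
memory = [[0] * BITS for _ in range(ROWS)]
FLIP_PROBABILITY = 1e-6  # hypothetical per-access disturbance rate

def hammer(row: int, accesses: int) -> None:
    """Access `row` repeatedly; its neighbors may suffer silent bit flips."""
    for _ in range(accesses):
        for neighbour in (row - 1, row + 1):
            if 0 <= neighbour < ROWS:
                for bit in range(BITS):
                    if random.random() < FLIP_PROBABILITY:
                        memory[neighbour][bit] ^= 1  # corruption, no malware signature

hammer(row=4, accesses=100_000)
flipped = sum(sum(r) for r in memory)
print(f"bits now flipped in rows adjacent to the hammered row: {flipped}")
```

Note that only the neighbors of the hammered row ever change, which is the point of the attack: the corrupted data belongs to someone else.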

      Rachael Lyon:
      Just to ask, John had mentioned this before we started talking here, that it's a problem with no solution. Is that correct, or is it just a very difficult one to solve for?

      JB Baker:
      Yeah, I mean, it's very challenging to solve. They're also very hard to execute in the real world. We've seen much more of this in academic situations, experiments, and tests. I can't point to one where such-and-such company has been affected by a Rowhammer attack. But there are new vectors of attack opening up. We've got the expanse of AI. That's all you hear about, and hopefully everybody has a few shares of Nvidia.

      JB Baker:
      But with AI growing, the way in which processors access data and the amount of data that they access is changing dramatically. Traditionally, all of the DRAM was attached directly to the CPU, right? And that's how you get at it. But as processing capabilities have grown faster than memory's capacity and memory's bandwidth, we've created this memory wall, where the processors themselves get starved for data because they just can't get data in fast enough through the memory. To address that, to get more memory with traditional architecture, you just have to put in more servers, more x86 servers. Not to use the processors for their capabilities, but just to get that additional memory to feed the GPUs. Okay, so long background there.

      JB Baker:
      But so the new vector: we're solving the memory wall challenge by enabling you to put DRAM on the PCIe bus instead of directly attaching it to the CPUs. This newer technology, called CXL or Compute Express Link, is just getting its earliest deployments this year; we should start to see some ramping next year. It's governed by an industry standards body; you can go check it out at the Compute Express Link organization. And so now the DRAM is on the PCIe bus, and that gives malicious actors another vector to try to reach into. Long answer.

      Rachael Lyon:
      I know, it's great.

      Jonathan Knepher:
      So, I mean, that's interesting. You know, now it's almost like going back to the old ISA days where the memory is just on the bus. Does this have a performance impact? Because I would assume you're going to want to do this in environments that need high-speed access to lots of memory. What's the performance impact there?

      JB Baker:
      Well, it's definitely, you know, significantly slower than going to just the local DRAM, because you're introducing the latency of going across the PCIe bus itself. But it's still an order of magnitude better on latency, and better able to access small chunks of data, than traditional SSDs. And that's your other alternative: if I can't go directly to the DRAM today because I don't have enough capacity, now I've got to go out to SSDs. And while as an SSD guy I think of them as ridiculously fast, they're still a couple of orders of magnitude worse on latency than going to DRAM. Right. And they just can't deliver as many small IOs either.

      Jonathan Knepher:
      So what are.

      Rachael Lyon:
      So go ahead.

      Jonathan Knepher:
      Yeah, I was going to ask: what are the attack vectors that you're seeing in this space? Like you mentioned, we're opening up a whole new area. Give us some examples.

      JB Baker:
      Well, yeah, I mean, it really is. By putting the memory on the PCIe bus, now there's a different way to go in and try to attack that memory to cause these errors. And I'm not an expert on the hacker software to get in there; I'm a hardware components guy. So we're very focused on what we do in the hardware to prevent errors from happening down at the bit level and prevent those attacks from getting in in the first place, with the encryption and the data protection that way, and the ECC advances to protect against these data flips.

      Rachael Lyon:
      So you opened the door to ECC. This is a great segue. Can you explain the role of error-correcting codes in ensuring data integrity within modern data infrastructure? Can you talk a little bit more about that and the interplay here?

      JB Baker:
      Sure. So, to go non-technical, ECC is like a spell checker. It detects and corrects errors, like typos, that might occur when you're saving or retrieving the data. But there are a lot of limitations on the ECC. Like a basic spell checker, going back to my early versions, maybe it can only fix certain types of errors: there's one letter wrong in a word, and it can tell you that this letter is wrong. But if you had multiple letters wrong, it can't predict what the right correction is. So if those errors get to be too complex, the conventional ECC can't find or fix them.
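The spell-checker analogy maps directly onto classic single-error-correcting codes. A minimal sketch using the textbook Hamming(7,4) code (chosen purely for illustration; the episode doesn't say which code any given product uses) shows one flipped bit being fixed, and two flipped bits being silently miscorrected:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]  # list index i holds position i+1

def hamming74_decode(c):
    """Return (data bits after correction, syndrome). Syndrome 0 means clean."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-based position of the suspect bit
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bit the syndrome points at
    return [c[2], c[4], c[5], c[6]], syndrome

data = [1, 0, 1, 1]
word = hamming74_encode(data)

one_err = list(word); one_err[5] ^= 1            # single flip: corrected
print(hamming74_decode(one_err)[0] == data)      # True

two_err = list(word); two_err[1] ^= 1; two_err[5] ^= 1
print(hamming74_decode(two_err)[0] == data)      # False: it "fixes" the wrong bit
```

The double-error case is the dangerous one JB alludes to: the decoder confidently corrects the wrong position, turning detectable corruption into silent corruption.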

      JB Baker:
      And now you've got the problem of data corruption, which can lead to system crashes. So that's where we've got to innovate and evolve the ECC code to better protect against multiple errors. And this really gets exacerbated by, as I said, AI and how fast it's accessing memory. If you think of how often an error is going to happen: each cell of memory has a specified error rate, 10 to the minus X bit errors per access. So if you access it more frequently, you're increasing the instances. You grow memory, and now we've gone from gigabytes to terabytes of DRAM, so you've grown how many cells you can potentially have errors in. You've expanded the blast radius, because you're sharing this information across multiple systems.

      JB Baker:
      They're working in parallel together to solve a problem. And if you have an error in one, that can bring down multiple GPUs, an entire cluster. And that's a very expensive problem to have. And as we've advanced from one production node to the next, the cells in the DRAM and the flash get smaller and smaller, which makes them more vulnerable to errors. So we're trying to access the memory faster, we're making it bigger, and we're making it inherently worse. So you've got to improve the error correction.
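The scaling argument in the last two answers (more cells, times more accesses, times a fixed per-bit error rate) is simple arithmetic. The rate and capacities below are hypothetical placeholders, not figures from the episode:

```python
# Back-of-the-envelope scaling of expected bit errors, following the
# transcript's framing of a "10 to the minus X" error rate per access.
PER_BIT_ERROR_RATE = 1e-18          # hypothetical errors per bit per access

def expected_errors(capacity_bytes: int, accesses_per_bit: float) -> float:
    bits = capacity_bytes * 8
    return bits * accesses_per_bit * PER_BIT_ERROR_RATE

GiB, TiB = 2**30, 2**40
old = expected_errors(64 * GiB, accesses_per_bit=1e6)   # smaller, slower era
new = expected_errors(4 * TiB, accesses_per_bit=1e8)    # bigger, AI-driven

print(f"expected errors, old configuration: {old:.4f}")
print(f"expected errors, new configuration: {new:.1f}")
print(f"growth factor: {new / old:.0f}x")                # 64x capacity * 100x accesses
```

Even with the per-bit rate held constant, 64x the capacity and 100x the access frequency multiply into a 6,400x jump in expected errors, which is JB's point about why the correction has to improve.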

      Jonathan Knepher:
      So you bring up all of these great points. One question I've had for a long time, just personally, is why hasn't there been widespread adoption of ECC in individual workstations and the consumer market? It seems pretty exclusive to server-side components today.

      JB Baker:
      Well, I think all of the memory DIMMs in themselves are going to have ECC protection, right? So they have some inherent protection against errors at the component level within the memory. And then when you go to your SSD, it's going to do RAID, if you're familiar with that: redundant array of independent devices, we'll call it, instead of disks like it used to be in the past. So we're going to protect against errors in the media itself. But as we expand to faster and broader access to these, you've got to improve that ECC. And then also with CXL, when we move the DRAM out onto the PCIe bus, now you've introduced another potential place for errors to happen as you transfer the data across the PCIe instead of directly from the DRAM to the CPU across just the memory channels. So we're introducing new opportunities to create errors, and hence we've got to correct them better.
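The RAID idea JB mentions, redundancy that lets you rebuild data from a failed unit, can be sketched with single-parity XOR, the simplest RAID-style scheme. The byte strings standing in for flash dies are invented for illustration:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three data stripes plus one parity stripe, computed as the XOR of all data.
stripes = [b"DATA-0", b"DATA-1", b"DATA-2"]
parity = reduce(xor_bytes, stripes)

# Lose one stripe; rebuild it from the survivors plus the parity.
survivors = [stripes[0], stripes[2]]
rebuilt = reduce(xor_bytes, survivors, parity)
print(rebuilt)  # b'DATA-1'
```

One parity stripe recovers any single lost unit; surviving a second simultaneous failure requires stronger codes, which is the same single-versus-multiple-error theme as the ECC discussion above.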

      Jonathan Knepher:
      Yeah, absolutely. And tying this back to rowhammer attacks and other malicious events, can ECC actually help us there or is that a losing battle?

      JB Baker:
      Oh, it absolutely can help there. The traditional ECC methods start to fall short, and that's where Rowhammer attacks could come in and knock out these cells. When we do advanced ECC techniques, like the list decoding technique ScaleFlux has developed, instead of only being able to fix one or a few symbols per cache line, we're able to detect and fix many more errors within that cache line. And that gives you much better protection against the Rowhammer type of attack.

      Rachael Lyon:
      And you mentioned, as we basically increase computational workloads, right? If we're looking in your crystal ball ahead, and we start throwing quantum computing in there, what do you think is going to transpire as we get closer to that?

      JB Baker:
      Yeah, I mean, looking at what's coming at us, there's quantum computing, and we're going to need quantum-safe encryption to protect against that. I don't know that that's so much an ECC change, but it is definitely a security threat, and we're going to need that quantum-safe encryption. Further out, there's homomorphic encryption. I see some of that, but it requires so much computational brute-force horsepower that it's not practical today, particularly at these high speeds that we're talking about for memory and flash access to feed the AI processes. And then AI itself can help solve, but can also create, issues for security. I've seen a couple of things where some of the quantum-safe encryption may not protect against AI-based attacks, because they use different methods to try to break the encryption. So while a quantum computer may not break through the quantum-safe encryption, an AI program might. We're early in these stages; I'm reading and hearing about these different vectors just like you guys are.

      Jonathan Knepher:
      I want to go back real quick on the new ECC techniques. What are the types of strategies you're using? I think a lot of us are really only familiar with the older ones, like Reed-Solomon and single-error-correct, double-error-detect strategies. And it sounds like the world's come a long way since then. Can you talk a little bit more about that?

      JB Baker:
      Yeah, I can cover some of it to some extent. So list decoding has been around for a long time; it was evidently invented back in the 1950s, but hasn't been applied much lately. A challenge with it, particularly around DRAM, has been trying to execute this different style of decoding fast enough. When we're in memory, we've got to be able to do our ECC detect-and-correct within one to three clock cycles for the CPU; otherwise the latency is too high and we're causing problems there. And then you make this trade-off of, well, I'll take the potential of additional errors versus lowering my performance.

      JB Baker:
      What our research team has done with the list decoding is create a way that they can protect that 64-byte cache line, same as traditional ECC, with a single long code word. That enables super fast, low-latency decoding, as well as being able to fix that additional number of errors. This is not a perfect analogy, but if you've done sudoku: on the easy puzzle they've filled in a lot of the numbers for you, so you look at an empty cell and can very quickly identify, oh, only a one or a two can fit there, right? You look at a cell on the hard puzzle, and it's like, oh, that might be a 1, 3, 5, or 7, and you've got to do additional work, with more latency, to narrow it down to the correct response. That's what we are doing with this list decoding: reducing the latency to solve that more complex error and identify what the correct data should be.
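List decoding itself can be illustrated with a deliberately tiny toy code. This is only a conceptual sketch, nothing like ScaleFlux's actual scheme or its latency: where a unique nearest-codeword decoder is forced to commit (and miscorrects), a list decoder keeps every candidate within a radius, like the sudoku cell that might hold a 1, 3, 5, or 7.

```python
from itertools import product

def hamming_dist(a, b):
    return sum(x != y for x, y in zip(a, b))

# Tiny code: each 2-bit message is repeated three times -> 6-bit codewords.
codebook = {m * 3: m for m in product((0, 1), repeat=2)}

def list_decode(received, radius):
    """Return every message whose codeword lies within `radius` of `received`."""
    return [msg for cw, msg in codebook.items()
            if hamming_dist(cw, received) <= radius]

# True message (0, 1), codeword (0,1,0,1,0,1), hit by two bit errors.
received = (0, 1, 1, 1, 1, 1)
print(list_decode(received, radius=1))   # [(1, 1)]: nearest-codeword miscorrects
print(list_decode(received, radius=2))   # [(0, 1), (1, 1)]: list keeps the truth
```

The two errors push the received word closer to the wrong codeword, so a unique decoder confidently returns the wrong message, while the list decoder's candidate set still contains the correct one for a later stage to pick from.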

      Rachael Lyon:
      Interesting. Coming back to power: I know we keep hearing a lot about the energy required to fuel all these advances, be it AI, quantum, et cetera. There's this Open Compute Project that I just learned about from our briefing with you, and I would love it if you could explain more about it. It seems like it got its roots in 2009 through a Facebook project, yes?

      JB Baker:
      Yeah. So Facebook, now Meta, and Microsoft were the core drivers behind the Open Compute Project. Those two were the early hyperscalers. And what we saw was a problem where each of the hyperscalers was driving to unique requirements, and that was impairing the industry's ability to deliver. Because we've all got limited R&D dollars, every one of us, and it's like, man, do I develop the thing for Meta, do I develop the thing for Microsoft, or for Amazon, or the general-purpose thing? So in order to accelerate innovation and improve efficiency, both on the R&D side and in the hardware itself and the entire infrastructure, Facebook and Microsoft got together and kicked off the Open Compute Project. And now there are dozens of industry players in it.

      JB Baker:
      Nvidia, I believe, is now a member; they've been at the OCP conferences. And OCP develops a common set of requirements, which makes it faster, better, and cheaper for all of us technology developers to bring innovations to market to satisfy these massive-scale customers, and then leverage those advances out to other enterprise users who need them for their infrastructure to be competitive and secure, but who don't have the scale to drive specific innovations themselves.

      Jonathan Knepher:
      Has this driven changes or evolution in the infrastructure and particularly in the realm of security of these platforms?

      JB Baker:
      Yeah, absolutely. On the infrastructure side, they've driven standards on form factors, power envelopes, and specifics around latency targets and performance. As an SSD and memory guy, I look at those aspects of the specifications the most. But also, recently, Microsoft kicked off the Caliptra security project, and it's been adopted into OCP and expanded there. Caliptra is easily findable on the Internet for further details as well. Caliptra is a hardware root of trust, a security block; that's where it starts. So as you are bringing your system up, you can go out to the devices themselves and be assured that it is the right device, it has the right firmware, all of these things, through this hardware IP block that sits within the controllers. So in the SSD controllers that ScaleFlux has developed, in our latest generation, we've integrated the Caliptra IP. It's circuits within the controller itself, and that ties into commands that come from the host to double-check things.

      JB Baker:
      One cool thing with Caliptra is that it's an open-source project, and there's a plus and minus on open source, right? You think, oh, open source, so it should be easier to hack. But not really. And on the advantage side of open source, if there is a hack, you have such a massive community to solve the problem, fix it, patch it, and push it out immediately, right? So you've got all of those security experts contributing to making this better, as opposed to having individual companies each try to solve it, or having a malicious actor come in and break down one company's data protections, where it's all proprietary, nobody can help, it's a longer fix, and maybe it's still not as secure. So we're really excited about Caliptra and its future as we go through advances. I think we're at 2.1 now on Caliptra.

      Rachael Lyon:
      I love that, kind of a global community coming together to make change. And as you look at the future: it looks like the Open Compute Project started with the data center and has expanded to the telecom sector as well, and edge infrastructure. So what does the future look like beyond that? What are the other problems they're going to be tackling in the next five to ten years, let's say?

      JB Baker:
      I think the core problems are fairly consistent. We've got the challenges around power, and not just for green reasons, of how can we do this in a sustainable fashion. If you can't improve your power efficiency, then you won't be able to improve the performance, because we're already at the stage where data centers have a power cap. And with all of the GPU and AI workloads coming in and consuming massive amounts of power, you've got to drive efficiency, because it's not like I can take my 10-megawatt or 20-megawatt data center and suddenly give it more power. That's kind of where it is, particularly in urban locations like New York City. We talked to customers there who are running, maybe, a data center that's only a floor. They're not going to get more power.

      JB Baker:
      So in order to improve performance, they've got to improve efficiency. And then you go to the hyperscalers, where they're running those massive data centers. We're talking about putting in small nuclear reactors, built in concert with data centers, just to supply power directly to that data center and not to the community, right? So that's the level of power consumption that's being driven. So efficiency is tremendous. And with that comes heat dissipation and thermal management, so you've got liquid cooling and immersion cooling, which are two separate things.

      JB Baker:
      And then carrying those out to edge environments, where you don't have as much control over the overall environment. A data center is very climate-controlled; you can design that thing and optimize all of the airflows and the liquid cooling. You get out to the edge deployment, and particularly, you mentioned telco: we've got devices that are sitting out on the street corner, micro data centers, the far edge, and components have to be insanely power-efficient out there to expand their capabilities and deliver on instantaneous inferencing and decision support.

      Rachael Lyon:
      Just to ask, and this is maybe a left-of-center thing: this whole power concern, as we look ahead, is prevalent everywhere; we can't escape it. What do we see as the solve here? And I know this isn't really your specialty, but I'm curious since we're on the topic. There's always a way through, but does it potentially become a have-and-have-not situation ahead, right? Who can afford to do it and execute, versus those who can't? And then what dynamics does that create in the next five, ten, et cetera years in industries? I think it's a fascinating topic, and I don't know if anyone has the answer, but I'd be interested in your perspective.

      JB Baker:
      Yeah, I'm not sure on the have-or-have-not, but I don't see a slowing in the increasing demand for power. Everything I've read and seen, all projections, say data centers are going to grow as a percentage of total global power consumption. What we can do, through efficiencies, is rethink the data pipeline. I'll go back to where my core knowledge is, around memory and storage: re-architecting how we store, access, and deliver data to the processors, so that we avoid wasting idle power, we reduce the power that's used to move data, and we get more work per watt out of all of this by improving that entire data pipeline. That helps mitigate the demand, right? So let's say that the demand for AI processing flops goes up 10x. Well, if we can do that with only a 20% increase in power consumption, that's a massive mitigation. And each generation of processors succeeds in giving you significantly more performance per watt.
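The 10x-flops-for-20%-more-power example works out like this. The 2x-every-two-years cadence is JB's SSD figure from the next answer; applying it to performance per watt in general is an extrapolation for illustration:

```python
import math

# If compute demand grows 10x while the power budget grows only 20%,
# performance per watt must improve by the ratio of the two.
compute_growth = 10.0   # 10x more processing demanded
power_growth = 1.2      # only 20% more power available
required_perf_per_watt = compute_growth / power_growth

# At a hypothetical 2x-per-2-years efficiency cadence, how long to get there?
years_at_2x_per_2yr = 2 * math.log2(required_perf_per_watt)

print(f"required perf/watt improvement: {required_perf_per_watt:.2f}x")
print(f"years needed at 2x every 2 years: {years_at_2x_per_2yr:.1f}")
```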

      JB Baker:
      I don't have the recent Nvidia improvements off the top of my head, but in the SSD world we are typically, every couple of years, giving you twice as much data access for the same power. We haven't increased the power envelope for these SSDs, but the speed and performance are doubling. So there's that mitigation. As for the haves and have-nots and where the limitations come in, we're already seeing that with data centers getting delayed on construction because they don't have power to supply them once they're built. And it takes years of going through regulations and permitting and whatever to bring up a new power plant. So that's where we're starting to see things like the Stargate project, where in the first campus (and campus might be an understatement) that they're defining to build in Texas, there's co-located power generation with these massive data centers. It's probably a small city in effect, but you've got these massive data centers, and then you're going to be bringing up additional power capacity there, right?

      JB Baker:
      Just to provide for that new demand.

      Rachael Lyon:
      Jonathan, this is your world.

      Jonathan Knepher:
      Yeah, I mean I think that's where we have to go.

      JB Baker:
      Right.

      Jonathan Knepher:
      Because I think there's kind of a fundamental issue with data centers' load on the grid as a whole. A data center popping offline and onto generator power is going to wreak havoc on everybody else, right? So, yeah, I think that's kind of the only way out: they have to do their own thing. But then you're going to have the problem, too, of what if they can't power themselves? Now you're going to have compute capacity falling offline. It's a whole separate problem.

      JB Baker:
      Yeah, there's no one magic pill for this. It's the classic, it takes a village. It takes government investment in infrastructure, because this capability is strategic for various countries. It takes the hyperscalers, the colo providers, the infrastructure teams, and local municipalities with the utilities. It takes a combination of traditional fuels, renewables, and potentially these small nuclear reactors, which, from my understanding and my limited research, are very different from the Three Mile Island style of reactors. And then down to the technology developers like ScaleFlux, Nvidia, AMD, et cetera, improving the efficiency of the components themselves to help slow that demand increase.

      Rachael Lyon:
      It's an interesting challenge, because I've lived in New York City in the summer, right? The brownouts or the blackouts, because the energy consumption just from air conditioners is too much for New York. And living in Texas, we are all keenly aware of the challenges in the state. When we think about that infrastructure, it seems like there would need to be significant investments, right? When we look at national power grids, or how they all connect, where does that money get found? I mean, there's a lot you can do with efficiencies, or trying to create your own nuclear reactor, whatever it is. But ultimately, longer term, this is a problem that needs to be addressed.

      Rachael Lyon:
      We already know that now, but it's increasingly going to get more significant. So who owns that, JB?

      JB Baker:
      That's a great question. That's the multi-trillion-dollar question, right? Again, I don't think there's single ownership; it's distributed ownership. We already see it in the US, and I'm sure other countries, with federal involvement through initiatives, whether it's loans that are available or direct investment. There are states and regional municipalities, there are the private utility companies, and the hyperscalers and the data center builders themselves are having to invest in it. That may well just be part of, hey, I'm going to go build this multibillion-dollar data center, and oh yeah, there's another billion dollars, or whatever it costs, to build the power plant that's going to be sitting next to it. And then, of course, you need the infrastructure to deliver the fuel to the power plant and all that stuff.

      JB Baker:
      So there are lots of players that will be involved in that, and there's a good reason for them to be involved: there are profits to be had for all.

      Rachael Lyon:
      Absolutely. So we always like to end our podcast, and I know we're coming up on time, with more personal-perspective questions. I know it sounds a little daunting. I'm always kind of curious, and John knows what I'm going to ask: how did you find your way to this world? When you were going through school, did you think, I'm going to be in storage and I want to do product development? Or was it some happy accident, a happy discovery, that got you on the path to where you are today?

      JB Baker:
      Well, I would say, going back, I always wanted to be in what I considered the high-tech industry, whether it was computer hardware or computer software. So coming out of business school, you know, you're interviewing, and I got the job offer from Intel to get into the computer industry and the high-tech arena. Now, specifically storage, that's kind of an accident. Within Intel, I was in a role that wasn't directly producing the new technology, and I'm not an engineer, so I was more on the business and marketing side. And one of my friends was over in the storage division, and they had an opening in product management. And they said, hey, we've got all the engineers we need.

      JB Baker:
      We need somebody who's got more of that business thinking, making the trade-offs on different features and products and looking at the markets. And so I got in and just got immersed in the technology. And 25 years later, I'm still in storage.

      Rachael Lyon:
      That's fantastic.

      JB Baker:
      And it's like I joke: I play an engineer on TV.

      Rachael Lyon:
      That's wonderful. All right, John, you get the last question of the day. I did this to you last time.

      Jonathan Knepher:
      What's my last question of the day?

      Rachael Lyon:
      Yes.

      Jonathan Knepher:
      So, JB, what's next in both storage and cybersecurity, from your point of view?

      Rachael Lyon:
      Your crystal ball, if you will. Yes.

      JB Baker:
      Storage: there are some very interesting things coming up for the data pipeline to feed the AI beast. I think I wrote something on this recently. The memory-storage hierarchy is evolving. We used to have just the on-chip caches, with a couple of layers there; then you went to DRAM, then you went to hard drives. A couple of decades ago, or 15 years ago, SSDs got added in there, and I guess tape went out.

      JB Baker:
      But SSDs got added in the middle. More recently, last year, HBM, high-bandwidth memory, got deployed; that's now attached closer to the GPUs themselves to provide that larger cache. Awesome. I mentioned CXL earlier; that's now another tier in this memory-storage hierarchy. There's an initiative from Nvidia called Storage Next; their public codename is SCADA, S-C-A-D-A. And that's a new way of accessing the data that's in flash, to align things with the small IO, the very fine-grained IO sizes, that AI wants.

      JB Baker:
      Right now flash can't do that, and we waste effective bandwidth; we can't fill the pipe with the IO sizes they want. We can fill the pipe for traditional x86 processors running traditional analytics and database processes, no problem. But AI, not as much. And there's massive-capacity storage out there, so now, between compute SSDs and hard drives, there are these super-high-capacity SSDs. All of this is coming to build on that efficiency and deliver new ways of accessing more data more quickly, to give you better efficiency out of your GPUs. On the cybersecurity side, I don't know as much about what's coming, but as we talked about earlier, there are things around quantum, and there's going to have to be some direct AI involvement.

      JB Baker:
      Some companies are already doing this, and I believe it may become more broad: integrating more ransomware detection and correction capability down into the storage devices and memory devices themselves, to give you additional protection techniques, as opposed to only catching things once they're in through the mistake of an individual. I think human error is still the primary source. But that gives you corrections out in the background, to again make those more efficient and avoid a lot of data movement. So I think all of those things are coming.

      Rachael Lyon:
      It's a fascinating future. I keep seeing this analogy, you guys have probably heard it: if data is the new oil, then AI is the new oil refinery. And it's so apt when we start looking at the world ahead and all the changes coming.

      JB Baker:
      Refer to it as the data pipeline, which again goes with oil or fuel or whatever.

      Rachael Lyon:
      Exactly.

      JB Baker:
      And your storage and memory there and networking are all part of that pipeline.

      Rachael Lyon:
      Exactly, exactly. Well, thank you so much for your time, JB. I appreciate all the insights, particularly in the storage realm. We've never had this conversation before in the five years of this podcast, so it's fun to bring a fresh perspective, because these are areas we absolutely should be thinking about and looking at ahead. So thank you.

      JB Baker:
      Well, thanks for having me on. This is a lot of fun.

      Rachael Lyon:
      Awesome. And Jonathan, I'm going to drum roll. What do we want our listeners to do?

      Jonathan Knepher:
      Smash that subscribe button.

      Rachael Lyon:
      And get a fresh episode, I love to say, every Tuesday, delivered right to your inbox. So until next time, everybody, stay secure.

       

      About Our Guest


      JB Baker, Vice President of Products

      JB Baker is a technology leader with over 20 years of experience in enterprise storage, specializing in secure, high-performance solutions for data centers. Since joining ScaleFlux in 2018, he has led efforts to advance Computational Storage—a critical technology as hardware security takes center stage. Before joining ScaleFlux, JB played pivotal roles at Intel, Seagate, and LSI, and has a proven track record of driving growth through innovative product development.