Can Companies Actually Get Ahead of Zero Days? Skeptics Talk

May 22, 2025
 

Zero days have long been cybersecurity’s unsolvable puzzle—but are defenders closer than ever to staying ahead?

In this episode of Exploited: The Cyber Truth, host Paul Ducklin sits down with Steve Barriault of TrustInSoft and Joe Saunders of RunSafe Security to unpack whether organizations can actually prevent zero-day vulnerabilities—or if reactive patching will always be the norm.

Steve discusses how formal code verification used in avionics and other safety-critical industries can push risk to near-zero. Joe brings in the reality check, especially for embedded systems and critical infrastructure, with insights into how runtime protections can mitigate threats when patching isn’t possible.

Whether you’re a software developer, security architect, or working to secure embedded and legacy systems, this episode offers a real-world look at the layered approaches needed to manage today’s zero-day risks.

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Guest Speaker – Steve Barriault: VP of Sales & Solutions Engineering North America, Japan and Korea at TrustInSoft

LinkedIn

Key topics discussed: 

  • Why zero days remain so hard to prevent and detect
  • How formal verification can help eliminate vulnerabilities
  • What makes embedded systems especially vulnerable to zero-day exploitation
  • When patching isn’t possible, what protection strategies still work
  • How upcoming regulations like the EU Cyber Resilience Act are changing the game
  • What a multi-layered defense actually looks like in practice
Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security. 

[Paul] Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin. And in this episode, I’m joined by not one but two guests. We have Steve Barriault, who is VP of sales and solutions engineering at TrustInSoft.

[Paul] Hello, Steve. 

[Steve] Hey. How’s it going, Paul? 

[Paul] It’s very good. And I have our regular guest, Joe Saunders, founder and CEO at RunSafe Security.

[Paul] Hi, Joe. 

[Joe] Greetings, Paul. Great to be here with Steve. 

[Paul] Yes. It’s great to have a three way chat, isn’t it?

[Paul] Our provocative question in this episode is, can companies actually get ahead of zero days? Just for listeners, before we start, let’s make absolutely clear what we mean by a zero day. It’s a type of bug or exploitable security hole that not only exists in some product or service, but that was found by the bad guys, the attackers, first. And it gets its name because, basically, even if you are the most switched-on sysadmin in the world and you apply your patches the instant they come out, there were zero days in which you could have patched in advance.

[Paul] Steve, should companies that write code and publish solutions, particularly in the embedded space, OT space, be focusing their efforts on writing code with no bugs? Or is it okay just to hope that nobody finds them and then go and try and fix them if they do?

[Steve] Yeah. It would be very difficult for me to say it’s okay and just paper over it, because I think it’s pretty clear from the data that we have from MITRE, for example, where roughly 50% of all the problems they found, not only with security but also with quality, are essentially overflows. If you have a runtime error in your program, that’s a gateway for somebody like the attackers you described to exploit, maybe to get access to the system or just crash it.

[Steve] Because let’s be frank here. It’s like there’s some people, they just want to smash their windows, and that’s how they create chaos. 

[Paul] That was an unfortunate choice of words, perhaps. 

[Steve] Yeah. Okay.

[Steve] Sorry. 

[Paul] Windows and other operating systems are available. 

[Steve] Exactly. Now the question is that, can you detect that in advance? You need the right tools in order to do that.

[Steve] So say, for example, you have your code. You do want a tool that is actually going to tell you, exhaustively, without false negatives, whether or not a runtime error is present. If you’re going to do this, it cannot be at the cost of spending enormous amounts of money and time, because nobody can stay in business that way. So, for example, in the case of formal methods, which is what my company does, it’s all about having a tool that empowers you to use them but doesn’t require you to be a math major. And in most projects out there, you just don’t have only your own code anymore.

[Steve] You’re actually relying on code that is coming from vendors or open source software. And you will need, you know, like, additional lines of defense. Even if your code is perfect, then the interaction between all of that code is going to be questionable. So that’s it. I would think that Joe might have something to say about that. 

[Paul] Yes.

[Paul] We’ve talked in previous podcasts about the problems of the supply chain being potentially enormous. Perhaps hundreds, maybe even thousands of separate software modules even for tiny simple products. If you want to do some kind of formal mathematically based analysis of all of that code, how do you do that without making a job that’s so complicated that you can never finish? 

[Steve] One way you can go about trying to solve these problems is to test, and requirement-based testing, you know, just testing functionalities.

[Steve] It’s great. The problem is that for runtime errors, like, you know, overflows, out-of-bounds accesses, divisions by zero, often it’s just a subset of all the realistic values that will trigger the error. So that’s problem number one. Problem number two is that if you’re going to try to test for these problems and they don’t have an immediate effect, then you may actually miss them. So instead of doing this, you take a mathematical approach.

[Steve] So you have a formal method tool that actually analyzes what your code looks like and makes a mathematical representation of it. And then it says, okay, those are the entry points. Maybe I’m reading a sensor here, maybe I have a user interface there. And then these are unknowns.

[Steve] So by default, I’m going to assume that this is full range. It can be anything. And it cascades these ranges through that model. So that’s how it’s able to say, at this point in time, that that divisor, that denominator, can or cannot be zero. It’s a mathematically intensive tool.

[Steve] However, you don’t need to be a math major to use it. It’s like computer scientists will get what it does. Then you press a button, and then you get the diagnostic out. It’s actually pretty fast too. 
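For readers who want a concrete picture of the kind of defect Steve is describing, here is a minimal sketch in C; the function, names, and values are invented for illustration and are not taken from TrustInSoft’s tooling. An exhaustive value analysis that treats the sensor input as full range flags the possible division by zero, even though a handful of “realistic” test inputs would never hit it.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical scaling routine: 'raw' stands in for a value read from a
 * sensor, so an analyzer must assume it can be anything in its full range. */
static int32_t scale_reading(int32_t raw, int32_t reference)
{
    int32_t delta = reference - raw;  /* zero whenever raw == reference      */
    return 1000 / delta;              /* possible division by zero: range    */
                                      /* propagation reports it exhaustively */
}

int main(void)
{
    /* Fine for this input, but nothing stops raw == reference at run time. */
    printf("%d\n", scale_reading(90, 100));
    return 0;
}
```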

[Paul] Presumably, the idea is that you’re hoping to show that the way you have written your code will insulate it against certain bad behaviors, even ones you haven’t necessarily thought of, which reduces the exposure to an attacker in the future who says, you know what?

[Paul] I’m not going to order five pints of beer. I’m going to order 32,767 pints of beer, which no one would ever do in real life, and I’m going to see what happens at that point. 

[Steve] Certainly. 

[Paul] So you don’t need to try all of those scenarios. You’re modeling the way the code operates in such a way that you can conclude that even if someone were to supply these extreme inputs, your code would handle them safely.

[Steve] Exactly. 

[Steve] And the thing that is very special about formal methods is that, by default, it guarantees no false negatives, meaning that if the tool doesn’t tell you there is an overflow there, it’s just mathematically impossible for one to happen. That is what has been missing so far in the story of all of these tools: often you have tools that do something much more basic, you know, some inference. But the problem is that, ultimately, you either get a lot of false positives, and people end up saying, I don’t have the time to review all of these.

[Steve] And at the same time, you have the possibility of false negatives. But formal methods actually let you avoid that. They have been around for ages. It’s just that, with the amount of computing power we have, we’re at the point where it’s actually feasible with a few gigs of memory.
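Paul’s 32,767 pints are not just a joke: a total that fits comfortably for every order a tester would think of can silently go wrong at the extreme end of the input range. A hypothetical sketch (the till routine, prices, and 16-bit total are invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical till routine that keeps a 16-bit running total in pence.
 * Requirement-based tests with realistic orders (a few pints) all pass. */
static int16_t order_total_pence(int16_t pints, int16_t price_per_pint)
{
    /* The arithmetic is done in 'int', but 32767 * 350 does not fit back
     * into int16_t, so the conversion silently mangles the total. Analysis
     * over the full input range reports this; spot tests almost never do. */
    return (int16_t)(pints * price_per_pint);
}

int main(void)
{
    printf("5 pints:      %d\n", order_total_pence(5, 350));      /* 1750    */
    printf("32767 pints:  %d\n", order_total_pence(32767, 350));  /* garbage */
    return 0;
}
```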

[Paul] You’re not aiming to replace all the other good things that we do, but merely to enable people to start in a better place.

[Steve] Yes. And I would say this. If you’re going to renovate your house, there’s a lot of tools in your toolbox. You’re not going to be hammering down nails with anything else but a hammer. And it’s a little bit like that.

[Steve] The whole concept of code quality and code security: we think that our tool has something very important to provide to the market. But do we have the ambition to replace everything? No. What happens if my program, once it’s linked, calls into that library, and then the library has control and does something, something that I do not like?

[Steve] I can handle it from the code side and be suspicious of whatever comes back from the library, but I cannot control what the library is going to do, can I? Or, for example, if it’s open source software, that can still create trouble, and then you need to have a contingency plan for that. So those multiple layers of security are hopefully going to make sure that those zero-day bugs are nowhere to be found, or are so few and far between that they are not exploitable.
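Steve’s advice to be suspicious of whatever comes back from the library can be made concrete with a small defensive wrapper. This is a generic sketch with invented names, not any particular vendor API: the point is simply that any value crossing the boundary gets range-checked before it is used as a length or index.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define FRAME_MAX 64

/* Stand-in for a third-party call we do not control. It claims to fill 'buf'
 * and return the number of bytes written, or a negative error code, but a
 * buggy or malicious implementation could return anything. */
static int vendor_read_frame(unsigned char *buf, size_t buf_len)
{
    const char sample[] = "sensor-frame";
    size_t n = sizeof sample < buf_len ? sizeof sample : buf_len;
    memcpy(buf, sample, n);
    return (int)n;
}

/* Wrapper that refuses to trust the value coming back across the boundary. */
static int read_frame_checked(unsigned char out[FRAME_MAX], size_t *out_len)
{
    int n = vendor_read_frame(out, FRAME_MAX);
    if (n < 0 || (size_t)n > FRAME_MAX)
        return -1;              /* out-of-range answer: fail closed */
    *out_len = (size_t)n;
    return 0;
}

int main(void)
{
    unsigned char frame[FRAME_MAX];
    size_t len = 0;
    if (read_frame_checked(frame, &len) == 0)
        printf("got %zu bytes\n", len);
    return 0;
}
```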

[Paul] So, Joe, maybe I can bring you into the discussion now by saying that this sounds very much like a part of Secure by Design.

[Paul] Not a replacement, but merely something that you can do from the beginning and during your development process. 

[Joe] Absolutely. Yeah. And Secure by Design, obviously, is meant to elevate the awareness of how to boost your software development process in a way that does enhance security and try to minimize effort through automation, through best practices, and through good tools. And good tools include things like different forms of static and dynamic analysis and even fuzzing and others.

[Joe] But certainly, a great tool is formal methods and using that in the process, especially for those applications that deploy in critical infrastructure where there is a lot of concern and where there are safety applications in transportation and automotive and in aviation and in other industries. Safety matters a lot as it does in industrial automation and in energy facilities and data centers. 

[Joe] And so you want to incorporate the best tools into your software development process, and certainly other techniques that ensure your software is resilient, so that the software you deploy can do what it’s intended to do and attackers can’t insert nefarious actions the developer never intended. And, ultimately, what we want to do is put tools in the hands of developers and ensure that software is resilient, so unintended consequences don’t happen in the wild, especially those that may have grave consequences or national security implications.

[Paul] And, Joe, in previous podcasts, you’ve talked about the concept of secure boot.

[Paul] In other words, you’ve got your fantastic code. It’s been checked. It’s got no integer overflows. It’s got no memory overflows. It’s perfectly formed.

[Paul] You’ve built this firmware blob. How do you make sure that when someone gets to the pump station, when someone gets to the tidal weir, they haven’t messed with it without you realizing? That is an important way of adding what would effectively be a zero day, isn’t it? A hidden backdoor that an attacker can use until somebody notices it for the first time.

[Joe] Yeah. Absolutely.

[Joe] And the goal, of course, is to ensure that the software that an operator or an asset owner deploys matches what the manufacturer shipped in the first place. With secure boot, since you’re signing the software as the producer or manufacturer, you’re authenticating that the software matches at the time it is loaded on the device. That gives you the confidence that you’re running the exact same software that was shipped. And of course, you still want to do additional checks along the way to prevent compromise at runtime, or to ensure that the components haven’t changed once that software has been loaded. There is an extensive problem out there with dynamic libraries that get incorporated at load time, and you can imagine that software that gets loaded into memory could in fact change at some point along the way.

[Joe] So we want to minimize all that risk between good code quality production, formal methods, shipping, and then ensuring that it runs, you know, the legitimate software that’s on there. And you’re still not out of the woods. Right? You still have to ensure that there’s no other form of manipulation at runtime when that software’s operating. Certainly, all these things dramatically reduce the attack surface overall.
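For readers who have not seen a load-time authenticity check before, the sketch below shows the general shape using libsodium’s detached Ed25519 signatures. It is illustrative only: a real secure-boot chain anchors the public key in ROM, fuses, or a hardware root of trust and verifies every boot stage before handing over control, rather than reading keys and images from ordinary variables the way this self-contained example does.

```c
#include <sodium.h>
#include <stdio.h>

/* Returns 1 only when the signature matches this exact byte sequence, so any
 * post-shipment tampering with the image makes the check fail. */
static int image_is_authentic(const unsigned char *image, unsigned long long len,
                              const unsigned char sig[crypto_sign_BYTES],
                              const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES])
{
    return crypto_sign_verify_detached(sig, image, len, vendor_pk) == 0;
}

int main(void)
{
    if (sodium_init() < 0)
        return 1;

    /* Simulate the manufacturer's signing step so the example is self-contained. */
    unsigned char pk[crypto_sign_PUBLICKEYBYTES], sk[crypto_sign_SECRETKEYBYTES];
    unsigned char sig[crypto_sign_BYTES];
    const unsigned char image[] = "firmware-image-bytes";

    crypto_sign_keypair(pk, sk);
    crypto_sign_detached(sig, NULL, image, sizeof image, sk);

    /* The loader's decision: boot only if the image verifies against the key. */
    printf("authentic: %s\n", image_is_authentic(image, sizeof image, sig, pk)
                                  ? "yes, safe to boot"
                                  : "no, refuse to boot");
    return 0;
}
```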

[Paul] And even if you have perfect code that you believe has no zero-day holes in it, or you hope it doesn’t, and you’ve shipped it, particularly in things like embedded devices that may be in hostile environments, and by hostile I mean there may be very high pressures or temperatures, or they may be subject to enormous vibration, the hardware can still misbehave, can’t it? So you can still get unexpected results. So at RunSafe, you have a way of protecting code at the time it’s built, wrapping it in a cocoon that makes it much less likely that it will fail, either because of a software oversight or because something actually goes wrong with the hardware on which it’s running.

[Joe] Yeah.

[Joe] So what we do at RunSafe is we don’t actually wrap the code. What we do is we relocate where functions load into memory in a way that prevents an attacker from knowing where those buffer overflows, where those ROP gadgets are. 

[Paul] Alright. If you were to wrap it, that would make it essentially a different program, wouldn’t it? It would be a whole load of extra code, which might use more memory, might change its runtime behavior, might affect its ability to respond in real time.

[Paul] So you’re getting the same code, you’re just cunningly rearranging it so that nobody can reliably predict how they might exploit it. 

[Joe] Yes. And with that, we want to enable safety-of-flight capabilities and ensure that software can run in the safest way and in the most trusted environments. And so, with that said, what we do is make it very difficult for attackers to find those ROP gadgets and ROP chains in the first place, and we really try to help people understand what that exposure looks like, quantifying the risk before you deploy so you have a good sense of whether it’s known risk or latent risk you’re not aware of, risk that you can solve for even when a patch is not available.
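RunSafe’s load-time function randomization is finer-grained than the whole-image ASLR most operating systems already apply, but the core idea, that an attacker can no longer hard-code the addresses a ROP chain needs, can be seen with a trivial experiment. This sketch is ours, not RunSafe’s: build it as a position-independent executable on a system with ASLR enabled and run it twice, and the printed addresses differ from run to run.

```c
#include <stdio.h>

/* Two arbitrary functions whose addresses a ROP-style exploit would need. */
static void maintenance_mode(void) { puts("maintenance"); }
static void normal_mode(void)      { puts("normal"); }

int main(void)
{
    /* Compiled with something like `cc -fPIE -pie`, these addresses move on
     * every run, so they cannot be hard-coded into an exploit. Function-level
     * load-time randomization goes further by also shuffling where functions
     * sit relative to one another, which whole-image ASLR does not change. */
    printf("maintenance_mode at %p\n", (void *)maintenance_mode);
    printf("normal_mode      at %p\n", (void *)normal_mode);
    return 0;
}
```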

[Steve] It’s interesting what you’re saying, Joe.

[Steve] It also points out something that, you know, you see often in embedded software. Right? It’s like they don’t have the ability to just multiply the number of defensive lines of code or the number of things they need to do in order to ensure security. Like, there’s only a certain number of CPU cycles available. It’s a self-contained environment.

[Steve] So if things fail, first of all, consequences can be dire. But, second, it’s like for the design of this thing, like, it needs to be hopefully correct from the ground up. Because, otherwise, if you keep on patching this and just adding more and more and more steps, then eventually, you end up with something that is not usable, that it’s going to miss timing, and that’s not good either. 

[Paul] Yes. Because if you’re botching a code path in order to avoid a particular erroneous condition, it means that it might take longer to respond, or it might use more memory, or it might react in a different way.

[Paul] Maybe I can put a question particularly about embedded devices. They’re not able to enjoy exactly the same kind of public scrutiny, and they don’t command the sort of bug bounties that, say, Windows or Apple do. 

[Paul] So perhaps there are zero days in there that could be found if someone decides to go looking. So how would you go about dealing with zero days in an environment like that? 

[Joe] Well, I think the problem with legacy code and critical infrastructure is a function of how long these assets are intended to last.

[Joe] And if you are managing an energy grid, you expect to capitalize embedded devices over five, ten, fifteen, twenty, thirty years. And it’s unlike maybe an IT system where, you know, maybe a lot of software is web based, and you can make changes to it pretty easily and pretty quickly. And the fact is there’s billions of devices out there. There’s hundreds of billions of lines of code, and a lot of it is not memory safe code. It’s got vulnerabilities in it.

[Joe] It’s got these latent vulnerabilities, and it’s very difficult to figure out the best way to administer patching on all of that software, so you need ways to avoid any catastrophic or grave consequence from an attacker who could go after those devices. And what we’ve seen from the US government is reports that China is inside US critical infrastructure and is targeting memory-based vulnerabilities, and these are the kinds of vulnerabilities that allow an attacker to take over a device and do something that wasn’t intended in the first place. And so the problem is pretty severe, and we can’t simply rewrite all the software that’s out there in a reasonable amount of time. Now, there’s a lot of work to be done, and there’s a lot of rework that will happen, and there are new devices being shipped every day. But even among those shipped today, many have memory-based vulnerabilities in the code.

[Joe] With that said, you do need to find ways to disrupt the economics of that equation, and find ways to asymmetrically change the equation in a way that reduces the attack surface and doesn’t force people to rewrite software. And I, for one, am a big proponent of memory-safe languages, things written in Rust and others, yet the idea that that’s all going to change overnight, in an ecosystem that’s built around all these other software components, all these developers with other skills, you know, with all these compilers and different systems that have to interoperate and, as Steve alluded to up front, have to work in safety-certified environments, is not realistic. And so the problem is pretty extensive, and, you know, you need to take a multi-layered approach to solve the issues in existing code out there today. 
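The memory-based vulnerabilities Joe mentions are overwhelmingly variations on one pattern: data whose length the attacker controls is written past the end of a fixed-size buffer. A deliberately condensed, hypothetical example of that pattern:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical request handler showing the classic memory-safety bug:
 * a fixed-size stack buffer and a copy with no bounds check. */
static void handle_request(const char *payload)
{
    char name[16];
    strcpy(name, payload);   /* a payload longer than 15 bytes overruns 'name'
                                and starts overwriting adjacent stack memory,
                                which is exactly what an attacker building a
                                ROP chain is looking for */
    printf("hello, %s\n", name);
}

int main(void)
{
    handle_request("operator");   /* harmless on friendly input */
    /* handle_request() with an attacker-sized payload is where a takeover
     * would begin; bounded copies (e.g. snprintf) or memory-safe languages
     * remove the pattern at the source. */
    return 0;
}
```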

[Paul] Because, Joe, as you’ve pointed out before, even if you were inclined to rewrite all your software in Rust and you could do it magically in one week, there are still some environments where Rust is not yet ratified for use, because it’s considered, if you like, the new kid on the block. There are some places where you just can’t do that rip and replace even if you wanted to.

[Paul] Now Steve, you actually have a product, don’t you, that deals with Rust code on the grounds that even if your Rust code is perfect, it may need to interact with unsafe code. For example, to read sensors or to manipulate other parts of some embedded system like a lathe, where you, the author of the Rust code, don’t get to decide how that other bit works. You’re compelled, if you like, to work with the sins of the past. 

[Steve] I’m probably going to earn a few rotten tomatoes here from the Rust community. 

[Paul] Yeah. A lot of them are quite keen on Rust being “perfect”, with big air quotes. 

[Steve] Exactly. Rust, is it an improvement on what we had before? Yeah. I mean, it’s safer, but can you have runtime errors within Rust?

[Steve] Yes. Even safe Rust. We have code out there that is written in many languages. Increasingly, it’s going to be written in Rust. It will need to be maintained.

[Steve] People, from my perspective, from what I’m hearing from the industry, do take a look at what they need to do in order to bring it up a few notches from a cybersecurity standpoint, because now they’re very much under pressure, saying, I have this infrastructure, I won’t be able to rewrite the code from the ground up immediately, it’s going to take years, but I need to start somewhere. So one way of doing this might be that when you have a code base and you want version two, for example, to be safer, you consider the rest of the code base, basically the dependencies that this specific subset of the code base has, as being suspicious. Formal methods will let you do this, and that matters especially in embedded software, because the cost of being wrong is actually pretty high.

[Steve] There’s also, I would say, an ethos, because of quality regulations and standards, of saying, well, before I ship, I should be doing certain activities. And I think that, increasingly, what you’re seeing is standards in cybersecurity for exactly that purpose, because people realize that this is a problem. I will just mention the FDA, which has long had its principles of software validation; in September 2023, they came up with the equivalent for cybersecurity because they want to be ahead of the curve. They want to make sure that people cannot hack some kind of medical device, and then suddenly, you know, people may be injured or worse.

[Steve] So I’m kind of encouraged there, but is that going to be a journey? Absolutely. It’s not going to happen overnight. 

[Paul] Joe, perhaps this is an opportune moment to mention something that I know you’re very keen on because we did a whole podcast episode on it, and it was all I could do to rein you into the time we had available. And that is the effect of something like the EU’s Cyber Resilience Act, which basically has both carrot and stick to make people make their software better if they don’t want to do it themselves.

[Paul] What effect do you think that will have on zero days and latent bugs that maybe some XYZ Typhoon hacker already knows about? 

[Joe] Well, I think it’ll certainly improve the ability to reduce bugs in software. Why? Because, as you say, you can implement regulations with both carrot and stick. And in this case, if there is a bug that results in a cyberattack and your product is liable, then by 2027, you’ll have to pay a fine, and that fine can be pretty significant.

[Joe] And so I do think that that has an effect on people. Will it fix everything? No. Is it the only approach? No.

[Joe] Carrots also work, and carrots can work in terms of grants and credits and other things to encourage people. And you can also explore the notion of a safe harbor from cyberattack liability, in terms of developing your own best practices that align with industry expectations and the like. What’s very hard, though, is to litigate whether someone made their best effort. And so I do think that when it’s very clear there’s a regulation with the potential for a fine and associated liability, it is a wake-up call.

[Joe] And some of our customers, their cross functional teams are working on ways they’re going to reduce the likelihood of that fine being imposed on their products. They don’t want defects. And so for formal methods, for software memory protection, for other security tools, I think investing in those products in advance will help you get ahead of that potential liability down the road. 

[Paul] Yes. Because we’ve spoken before about the opposite side of the coin to Secure by Design, haven’t we, which is Secure by Demand.

[Paul] And you made a joke a couple of episodes ago about how that sounds very strict. But we’re really talking about supply and demand, aren’t we? That if people prefer to buy software where security is part of the value, if they’re saying that’s what we demand in the marketplace, then let’s hope that’s where the suppliers will tend to go. 

[Joe] Absolutely. And if those discussions lead to back and forth about who’s responsible for which piece, that’s better communication than not talking in the first place.

[Joe] And so if you are someone who manages assets across your infrastructure and you are asking your suppliers about their security practices, you may want to ask them how they are reducing problems or bugs in their code. You can even ask them about their formal methods and their software memory protection. These are good questions to ask, because I think product manufacturers and producers of technology need to know that their customers care as much as regulators or end users ultimately do. 

[Steve] We should be speaking with the people who are worried about these things, like directors of cybersecurity and the like, because there’s definitely an effect from having a cyber bug in a car, for example. It’s probably going to be taxing your revenue for quite a while, because suddenly, to your point, Paul, people in the market are going to say, hey.

[Steve] That car, I heard it’s not that good, because they had some kind of cybersecurity flaw that got exploited. I would also add that it’s important for us to communicate with the users, the people who are going to put the infrastructure together, like the developers, for example. 

[Steve] Sometimes when people are faced with tool manufacturers and we are telling them, like, you need to focus on quality, they get it, except that they are already overworked. I will tell you this: cybersecurity is achievable with tools that are not actually going to make your lives miserable. For example, in our case, formal methods, like, some people may have had these classes in college.

[Paul] Oh, they think, oh, no. I can’t go there. It’s far too complicated. 

[Steve] Nothing like that. It’s accessible.

[Steve] It’s actually a tool that is industrial grade. And I’m sure, Joe, you’re going to tell me that your offerings are exactly the same. We’re in the business of making it possible for people to actually use these methods and these tools in order to do their job and to do it efficiently. 

[Joe] Absolutely. I mean, practical applications of hard science, I think, is the point of companies that are trying to bring those things to market.

[Joe] Bringing an application to market that makes it easy for technologists to adopt and deploy without ruining their lives, without slowing them down, and certainly elevating the ability to deploy safe secure code in market at a reasonable price is the goal of both companies. 

[Steve] Absolutely. 

[Paul] Now, given that, by definition, if you like, a zero-day vulnerability is something that you managed to bake into your software without realizing it, and that neither you nor any of the other good guys then found, clearly this is quite an interesting battleground. In the next five years, if we can just finish up, what do you think are the biggest changes that companies will need to make if they want to minimize the number of zero-day vulnerabilities, which concomitantly, of course, reduces the number of times they need to patch in a terrible hurry because something horrible has happened?

[Steve] So I would say, like, definitely, there needs to be more rigor in the development cycles. I think that it’s already started, for example, in automotive. It’s much more: okay, we know that there’s a problem there, it needs to be addressed.

[Steve] It’s a clear priority. Again, with the caveat that you need to ship code at some point in time. Indeed. And I think that the only way you’re going to be able to achieve that is by having better tools, because people have been complaining to me about tools that are swarming them with bogus false positives and false negatives. That’s the worst of both worlds.

[Steve] Right? So we need to be more systematic. We need to demand more of these tools. Open source hardening would be a question as well, because if you have open source software in your code, it’s possible that many people have the same open source software in their code, and then that opens the door for somebody to step in and harden it for everybody.

[Joe] Yeah. Exactly right. And I agree. Adding more tools that help development teams automate, testing for vulnerabilities, fixing vulnerabilities, certainly implementing formal methods, and adding in other protections is the way to go. And I think elevating our software development life cycle management and discipline, and incorporating automated tools, is one step.

[Joe] And I also think, to Steve’s point, there is so much code being used, and so much open source software being used by product manufacturers out there shipping technology to critical infrastructure, that there need to be ways to prioritize where to begin and what to focus on. And so what we want folks to do is consider the risk across their entire software supply chain, find ways to identify vulnerabilities or latent risk in software components, and really figure out a way to prioritize which components of your bill of materials to address first. So if you can start by identifying a Software Bill of Materials that represents your entire software stack, look at the vulnerabilities and the risk in the software, prioritize those things, and then start to develop the tools around those components and packages where you need to boost your security overall. Then you can start to have an effect and boost resilience overall without slowing down your developers.

[Paul] And rigor: when you said that, Steve, I thought, yes, that’s what we need. A bit less vibe coding, a bit less move fast and break things, and a bit more rigor. When it comes to board level, that doesn’t mean we expect everyone to go out and do a degree in advanced mathematics. It just means that you have to want your software to work properly, for the greater good of all.

[Paul] Joe and Steve, thank you so much for your time. This has been absolutely fascinating, getting two intriguing and important angles on improving code quality to crush zero days. So, hopefully, there will be fewer and fewer bugs left in the code we produce in the future. That is a wrap for this episode of Exploited: The Cyber Truth. If you found this podcast insightful, please be sure to subscribe and, of course, share it with everyone else in your team.

[Paul] Don’t forget everybody, stay ahead of the threat. See you next time.
