Firmware forms the foundation of all embedded and connected devices—but it’s often overlooked in cybersecurity discussions. In this episode of Exploited: The Cyber Truth, Joseph M. Saunders, Founder and CEO of RunSafe Security, explains why attackers are increasingly targeting firmware to gain persistence and control across critical sectors like healthcare, automotive, energy, and defense.
Joe details how firmware determinism, third-party dependencies, and complex supply chains create high-stakes vulnerabilities. He also shares practical strategies for breaking determinism to thwart attackers, understanding firmware Software Bills of Materials (SBOMs), and implementing protections at build time to reduce risk.
Whether you’re a CISO, security leader, or device manufacturer, this episode provides actionable insights to secure the foundation of your systems and strengthen resilience across your enterprise or operational environment.
Key topics include:
- Real-world ways adversaries exploit firmware vulnerabilities
- Risks inherited from third-party firmware and complex supply chains
- How “shifting security down the stack” enhances trust for all systems above it
- Practical steps CISOs, security leaders, and device manufacturers can take to harden firmware
Speakers:
Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.
His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.
Joseph M. Saunders: Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.
Episode Transcript
Exploited: The Cyber Truth, a podcast by RunSafe Security.
[Paul] (00:01)
Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.
[Joe] (00:19)
Greetings, Paul. Look forward to the discussion.
[Paul] (00:22)
Very intriguing title this week, “Risk Reduction at the Core: Securing the Firmware Supply Chain.” Historically, we sort of divided computing devices into hardware and software, didn’t we? Software you could load off a disk, and the hardware was the actual chips and the resistors and the wires and the capacitors and maybe the keyboard. But these days when we talk about firmware, it is part of the system and it typically includes the operating system and the applications that an embedded device will run. And it can’t just be fixed by loading a different one from disk next time you boot up, can it?
[Joe] (01:04)
Yeah, it presents all sorts of issues, security issues different from general IT infrastructure in part because this firmware, these embedded devices, are deployed across critical infrastructure and they may be in parts that are hard to reach. So not only is the technology a little different, but where it gets deployed and how it’s used is different than a user in an enterprise. So these IoT devices are special and also very vital for how our economy works and our infrastructure works.
[Paul] (01:37)
In the old days, firmware was burned into a ROM, read-only memory, in a way that it could not be updated. You literally had to desolder or pull out the chip and put in a new one to get an upgrade. These days with flash memory, you can update firmware automatically without changing the chips, which is very convenient for the good guys, but it also introduces a massive risk because what we can update easily when we really need to, the bad guys may be able to update without authorization when we don’t want them to.
[Joe] (02:13)
Yeah, certainly risk for the good guys, opportunity for the bad guys, for exactly the reasons you say.
[Paul] (02:22)
It sounds a lot worse when you put it like that. Our risk, their benefit. And we have to try and turn that on its head, don’t we?
[Joe] (02:29)
Yes, and so there’s always that trade-off that comes with progress: the ability to offer updates, to reach these connected devices, and to actually do updates on-device, if you will. That means there may be enhancements, there may be additional security benefits offered down the road, or other capabilities that are introduced. And for that benefit, it does mean that there is exposure.
[Paul] (02:55)
So this really is a critical attack vector, isn’t it? Because it’s not just as though, well, maybe some of us might have booby-trapped online meeting applications on our phones that would be unfortunate for us and the people we have meetings with. This could be malware, malicious software that gets embedded in devices that are quite literally all over the country and that may not easily be able to be updated again for days or weeks or months because of the fact that they are in far-flung places and they can’t just be updated as easily as a mobile phone or a web app.
[Joe] (03:38)
You think about embedded devices in operational technology networks, embedded in maybe the water supply system or the energy grid, or even in a manufacturing plant. Part of the concern is that, with the proliferation of these devices, and as they’ve gotten smarter and more connected and involved in massive portions of critical infrastructure, there is a real consequence to disrupting that infrastructure: disrupting the operations, disrupting the energy grid itself. We don’t want to let that happen.
And so with that, there’s always this issue of commercial organizations shipping products into infrastructure. That infrastructure may also be managed by another commercial entity. And so there’s a whole relationship in thinking about how to secure not only access to the infrastructure, but the devices themselves. The example I always like to use to illustrate why this attack vector is so significant is that of cooling systems inside data centers. You might think, you know, that’s just a good supporting capability that keeps data centers at the right temperature. But with the proliferation of large language models, we actually have an increase in energy consumption. And in order for these systems to keep operating, there may be controllers and sensors and other things around the data centers taking temperature readings or doing other things. And if those things get compromised and the cooling system fails, then the very large language models inside that data center won’t be able to process. The idea is that in this kind of infrastructure, disruption is a consequence that could cause economic loss or, in the case of the water supply, even worse. So it’s very important to think about how you secure these devices, because there’s always the potential that these types of devices can be compromised and attacked.
[Paul] (05:39)
Yes, and if you think about water supply, there’s also the other side of that: the ability to drain water away successfully and control what happens to the waste. All of these things are typically controlled these days by hundreds of thousands, possibly millions of individual components like valves, each of which is manipulated by some kind of embedded device with its own firmware. And so if one of those has a bug, and it’s used in thousands of pump rooms or wastewater processing plants around the country, a bug in one could actually cause a disruption to all of them, even if that wasn’t what the attackers originally intended. They might build a tool to attack one city, for example, and suddenly realise, hey, this works in all 50 states and the District of Columbia.
[Joe] (06:33)
You’re exactly right. The proliferation of these devices across geography means that there is then suddenly an attack vector that is beyond geographic bounds. And so whereas these individual points or plants or portions of infrastructure may not have been as connected in the past, now the common attack surface is the very software that’s common across all the systems and all the geographies. And that common software means if there’s a vulnerability in one, there’s a vulnerability in all of them. And if an attacker has figured out how to compromise that device, they can replicate that. I always like to say that one of the great promises in software is the ability to produce millions and millions of copies of the same code. And when you put that on a piece of hardware, all those components will operate the exact same way. With all the same inputs, you’ll get the same outputs. It’s deterministic.
[Paul] (07:29)
It’s not like an old analogue system like the centrifugal advance on the points in a car. It mostly worked the same, but it did depend on temperature and metallic wear and all sorts of external factors. With a digital system, in theory, for the same input you should get the same output every time. And that’s great for testing, but it’s also a terrible risk: if something goes wrong on one of them, theoretically it goes wrong on all of them, possibly at the very same time.
[Joe] (08:01)
Yeah, so that determinism also then works in the favor of the attacker. Yeah. If they compromise one, if they reverse engineer one that they bought, say, off eBay, or someone had an extra one and they got their hands on it.
[Paul] (08:15)
Before people think, well, this kind of stuff doesn’t come up on eBay, bear in mind that almost all of the famous ATM hacks that have been demonstrated over the years have been done by people like the late great Barnaby Jack buying used ATMs off eBay, sometimes for just hundreds of dollars a time. Go think.
[Joe] (08:39)
Exactly right. Sometimes there are legitimate reasons to buy that hardware, because you might be developing something new and you want to test things. So there is a legitimate market for those things, but that very market then also creates the opportunity for someone to do perhaps something nefarious with it. I guess that determinism ultimately works in the favor of the attacker as well. There’s a vulnerability on one, there’s a vulnerability on many, as we just said. That’s one of the tricks. It’s very hard to protect some of these devices for different reasons, and I’m sure we’ll get into that.
[Paul] (09:14)
Maybe we could get into that right now, Joe, and you could just say something about how do you build a system that is essentially deterministic, yet it is not identical in a way that thwarts the attacker.
[Joe] (09:28)
The way I like to describe it is: how do you make those devices remain functionally identical, but in fact be logically unique, so that you break that determinism for the attacker? The way we do it at RunSafe, of course, is we insert security protections. When you build that software, produce that binary, and put that binary on those devices, we add in security as it’s getting manufactured. Then, when that software loads on the device out in the field and gets mapped into memory, we randomize where those functions go. We relocate those functions in memory, uniquely, every time that software loads on that device. And what that means is you are breaking that determinism.
If there’s a vulnerability in function ABC that always loads at memory location ABC, well, on your device, Paul, it may load at memory location XYZ; on mine, it might be at memory location QRS. And there’s a further benefit, if you think about other ways these devices could be protected: one of the big constraints in infrastructure, of course, is power and compute capacity, if you will. If you can’t increase those things easily, and you can’t add new software onto a device because it has limited power and limited compute resources, then you can’t put agents on the device that might monitor things. Approaches like that run into the limitations of the infrastructure: they could change behavior, could slow things down, could change the support process, and introduce risk in and of themselves as well.
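To make that concrete, here is a minimal, purely illustrative sketch of the load-time function-layout randomization Joe describes. It is a toy model, not RunSafe’s actual implementation: the function names, sizes, and base address are all invented for the example.

```python
import random

# A toy "firmware image": named functions with sizes, roughly as they
# might sit in a binary's text section. Names and sizes are invented.
FIRMWARE = {
    "read_sensor":   0x40,   # sizes in bytes
    "check_limits":  0x80,
    "drive_valve":   0x60,
    "report_status": 0x30,
}

def load_image(seed):
    """Simulate one device loading the image: place each function at a
    randomized location. Functionality is unchanged; only layout differs."""
    rng = random.Random(seed)        # stand-in for a per-boot entropy source
    order = list(FIRMWARE)
    rng.shuffle(order)               # relocate functions uniquely per load
    layout, addr = {}, 0x0800_0000   # illustrative base address
    for name in order:
        layout[name] = addr
        addr += FIRMWARE[name]
    return layout

# Two devices load the same, functionally identical image...
device_a = load_image(seed=1)
device_b = load_image(seed=2)

# ...but an exploit that hardcodes drive_valve's address as seen on one
# device will, in general, point at the wrong bytes on the other.
print(hex(device_a["drive_valve"]), hex(device_b["drive_valve"]))
```

Functionally identical, logically unique: same inputs, same outputs, but the attacker can no longer assume that the memory map they reverse-engineered from one unit holds anywhere else.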
[Paul] (11:10)
You may not even have kilobytes to spare, let alone megabytes or gigabytes. So you can’t just keep adding new stuff to it, can you? You have to deliver exactly the same machine instructions, but shuffle the deck so when you deal it you get the same game, but not by adding lots of extra blank cards that make it behave significantly differently.
[Joe] (11:32)
Right, because as you increase the resources on these devices, the cost goes up. Also, if you consider, say, satellites or flight software or drones, airplanes, even military weapons, one of the key considerations, of course, is: can you deliver software on time? But also size, weight, and power. You can’t just necessarily increase the hardware that you put on some of these systems.
You’re also increasing the cost. You want to avoid increasing size, weight, and power in a lot of these systems. If you’re introducing other hardware to monitor these devices on transportation or autonomous systems or space systems, or even in aviation in general, or weapons programs, it comes at a cost. There’s a downside to that, and it’s just as important as not overloading that embedded device inside the data center or in the energy grid.
It’s also true in these transportation and space applications, where you can’t afford new hardware in the first place.
[Paul] (12:38)
To put it in perspective for those of our listeners who might have worked on things like web apps: you don’t even have to wait ‘til tomorrow. You can just do the update before the next person visits the website, and you go, whew, we dodged that problem. There’s none of that in the embedded space really, is there? Not only is it impractical because of the complexities of updating; there are also regulatory and operational reasons why it cannot be done.
[Joe] (13:06)
Yeah, there are definitely regulatory reasons. And I would just say, as part of the ecosystem to distribute this kind of technology, someone producing a component or firmware for a device may be delivering that to an OEM that’s producing a broader system overall. And so that creates a layer of distribution; that OEM may have a distributor who’s then putting these systems or these devices into the infrastructure itself.
That could be yet another layer. Then you start to think about: how do I push updates through two or three or four layers, and who supports that? What’s the cost, what’s the timing, and how often can that even happen? You can’t do it continuously, for sure; there’s a cost to doing it, so it’s only happening periodically. And I think in a lot of these systems, one of the challenges is that even though you can provide some of these updates, a lot of them take a long time to even reach the final end product.
You can imagine if a technician has to go around and touch all these systems to update them, because there might be security reasons or others. The complexity of the problem is high, given the number of suppliers, and you certainly have to understand: what software, what operating system, what components, what libraries are on my devices, and how at risk are my devices in the first place? So it poses a pretty significant security question for the supply chain itself: how do you get reliable information? How do you assess the risk of the software that’s on those devices? What do you do about it? How do you manage your supply chain? And then think about the number of developers that touch these kinds of devices: those thousands of suppliers are using open source software.
They themselves are using third parties for some of the development. And then they themselves are developing some of it in-house. I think the best approach is if the manufacturers have a good set of standards on what has to be done before those devices reach the manufacturing plant itself.
In my view, what needs to happen, given the distributed complex software supply chain implied across these thousands of vendors, is a way to understand what’s contained within the Software Bill of Materials on these devices I’m deploying in my infrastructure and what are the vulnerabilities associated with those and what, if anything, has been done on those devices to prevent exploitation, to protect those, to monitor those, so that we don’t find ourselves in one of these situations.
[Paul] (15:41)
Joe, when you say Software Bill of Materials, or SBOM for short, that’s BOM not BOMB, that’s not just the recipe for how you might make the software if you wanted to start from scratch. Because that just tells you what you thought you wanted to put in it. It doesn’t necessarily say what actually went into it when you built it. And it doesn’t vouch for the fact that nobody had fiddled with some of those ingredients before you baked the cake.
How do you make sure that the person who’s one upstream from you is delivering you what you thought so you can deliver the right thing to the guy who’s one downstream from you?
[Joe] (16:22)
Yeah, I think generating a Software Bill of Materials as close as possible to the time you produce the software binary that’s getting deployed on that device is the best approach. And it’s for the reasons you’re starting to describe: if you do it after the fact, that’s like determining what measurements went into the baked cake you just produced. If you do it before you bake the cake, well, what about that little chef’s magic touch?
And in this case, the chef is actually the build system, and maybe even the compiler that converts the source code into the software binary in the first place. What actually gets produced when you deliver that binary isn’t as straightforward as the original developer’s plan for what was going to go into it. Ingredients get added at the final moments.
[Paul] (17:16)
Or they get substituted with things that somebody thought, what’s the difference between sodium bicarbonate and sodium carbonate? Surely it won’t make a difference. So how do you equalise all that in something as complicated as the firmware supply chain for embedded and critical devices?
[Joe] (17:34)
You do need to produce those Software Bills of Materials as that software is getting compiled. Otherwise you may end up with margarine in your cake and not butter. And you know, that would be catastrophic.
[Paul] (17:47)
Yes. So Joe, there are tools out there that claim they can essentially taste the cake after it’s been baked, and you’ve said in previous podcasts that they can actually do a very good job if you’ve got nothing else. But by “very good job”, they might get 80% of the ingredients correct. Not knowing 20% of what went into your cake sounds like an awful gap.
[Joe] (18:13)
Well, it comes at a cost. It comes with chasing false positives and false negatives. Yes. And the reason for that is that what the binary-based SBOM generation tools do is look for components and then make assumptions, usually based on heuristics: if I see the following types of libraries or components in this software, then I’m going to assume that you also have X, Y, and Z. We can’t just rely on heuristics.
We will miss things, we’ll be chasing the wrong components. And when we identify components and associate vulnerabilities with them, it only compounds the problem when that component doesn’t actually exist in that binary. What I think is a great opportunity, and it’s where RunSafe fits, is to be able to apply security protections and almost be independent or agnostic to what those architectural implications are. We work across instruction sets, we work across operating systems.
We support many different kinds of build toolchains. If you think about an organization that may produce a lot of different gadgets and have slightly different build tools, or different settings for the kind of embedded Linux that goes on these devices, it does matter. And you need a standard process to capture the ingredients at build time, so you have an authoritative SBOM you can trust. What seems a relatively simple request, hey, what software is in my device, is actually the product of a very complex ecosystem, as we said. And what we really want is to have really good information about what’s in the device, and to make that easy for developers to produce upstream, so downstream users know with confidence how to assess the risk of a device in the first place.
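To picture what an authoritative SBOM captured at build time might look like, here is a hedged sketch that records the libraries a build actually consumed and emits a minimal CycloneDX-style document. The library names, versions, and file paths are hypothetical, and a real pipeline would hook the build system itself rather than take a hand-written list.

```python
import hashlib
import json

def component(name, version, path):
    """Describe one library exactly as the link step consumed it,
    hashing the actual artifact instead of trusting a manifest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "type": "library",
        "name": name,
        "version": version,
        "purl": f"pkg:generic/{name}@{version}",
        "hashes": [{"alg": "SHA-256", "content": digest}],
    }

def build_time_sbom(linked_libs):
    """Emit a minimal CycloneDX-style SBOM from the build's real inputs."""
    return json.dumps({
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [component(*lib) for lib in linked_libs],
    }, indent=2)

# Hypothetical usage: the static libraries this firmware link actually used.
# print(build_time_sbom([("zlib", "1.3.1", "build/libz.a"),
#                        ("mbedtls", "3.6.0", "build/libmbedtls.a")]))
```

The design point is the one Joe makes: because the inventory is taken from the build’s real inputs, with hashes of the actual artifacts, there is no heuristic guessing after the fact.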
[Paul] (19:59)
What are the regulators and perhaps just as importantly, what are customers and the industry itself thinking about how they might reduce these risks and improve their supply chain?
[Joe] (20:11)
Let me actually respond with three different regulatory frameworks, if you will. Yes. In the medical device arena, at least in the U.S., there is a requirement to produce a Software Bill of Materials for medical devices. So that’s one set of requirements: you have to produce it, know what’s in there, and then also address the vulnerabilities. The second one is relatively new in the United States, and it’s coming due: you have to be ready by 2027, and you have to have the framework put together by March of 2026. That’s for the automotive industry. Part of the issue is, if you think about all these components, the U.S. wants to make sure that you’re not incorporating components developed by an entity from China or Russia, because there is a supply chain risk. So that’s a new requirement in the automotive industry. And in fact, just to point out how complex a request like that can get: for automotive companies who want to sell their cars in the United States, you have to ensure a component doesn’t come from China or Russia, as I said, but you as the OEM, the product manufacturer, the brand we know and love, whether it’s Ford or Honda or BMW, if you’re going to sell your car in the United States, you have to go four layers deep into your supply chain to ensure that there isn’t a component originating from an entity in one of those countries.
So that’s a pretty significant demonstration of how complex it is. The third one is in the EU, there’s the Cyber Resilience Act.
[Paul] (21:43)
I was hoping you’d mention that.
[Joe] (21:45)
Of course, I saved the best for last. The way I think about it, and we’ve talked about this in the past, Paul, is that if you’re going to have to do this anyway, don’t cut corners. Invest in a Software Bill of Materials that will pay dividends in other ways. And what I mean by that is something that can help set you onto a path to boost resilience, to boost your security posture, maybe to improve your software supply chain…
[Paul] (21:56)
Absolutely.
[Joe] (22:13)
…practices in the first place.
[Paul] (22:15)
I guess what you’re saying is, if you have the words “checkbox compliance” floating around anywhere in your company, boot those words out right away. Build your software, build your products in a way that they naturally tend to comply. And if they don’t, well, you can easily go back and fix it. We should do this because we want to, not merely because we need to.
[Joe] (22:39)
Yeah, I think it’s a good business practice to think about security as an element of quality, to think about software bugs as an element of quality; safe systems, by definition, are quality systems. And I think the key is building a really solid foundation and methodology, and benchmarking the process you go through, to make sure that it’s repeatable and reproducible. If you can do all that, and do it in a timely fashion, then you’re on your way to being a high-quality software development organization. And let’s face it, software development is hard. Getting rid of all bugs is an impossible task, but at the same time, you do need to meet the standards, and probably exceed the standards, if you’re going to be perceived as a provider with any semblance of quality in your product. If you think about organizations, they have their own internal governance.
Yes, policies that product teams need to adhere to, in addition to industry standards. And so when you have a combination of policy, internal governance, and industry standards, those are good controls to put into your software development and really operationalize, so that you’re producing in a consistent way. Let’s not forget, though, that there is an adversary out there. There’s sort of a hidden stakeholder, the adversary: if you don’t produce something of decent quality, or with some semblance of security, or a safe system, then attackers will find it, because you’re exposed.
There are a lot of forces at play here to ensure that you develop high-quality software that’s safe and secure. And ultimately, differentiation in general is a good reason to invest in security, and to invest in understanding what all those components are and being able to communicate transparently, with good customer service, to your customers. Then they know if they’re exposed, they know they can communicate with you when new vulnerabilities come out, and they get a straight answer. Why would you take an incomplete approach and just check the box? You might as well invest, because otherwise you’re going to be chasing your tail down the road.
[Paul] (24:50)
Yes, I was speaking at a conference in Berlin last week, and Berlin Airport was famously affected by what was basically a supply chain attack. The company that runs their check-in services and their bag-drop services was taken off the air, and the airport was in chaos. The next time some airport goes out to buy check-in software, they might be asking more difficult questions than they did in the past. With that in mind, what are your top recommendations for CISOs, or for software engineering teams, who want to be able to produce software that passes what you might call the truth-in-engineering test? Where do you start, if you haven’t done so already?
[Joe] (25:36)
We live in a digitally interconnected world and we all benefit from it. We all benefit from the convenience: the ability to scan things, and doors open, and you just walk right in, in a trusted environment. There are lots of good ways that we’ve applied technology and connected technology to improve the quality of life and the creature comforts we expect in today’s world. And with that, it does mean that the CISO you mentioned does have a very difficult job, and it’s challenging.
[Paul] (26:08)
It’s not just what you buy, it’s as much what you supply to the next person along.
[Joe] (26:14)
Right.
And so my top recommendations are to certainly ask for a Software Bill of Materials, and to use it in a way that helps you get vulnerability data and risk data about the devices, so that you can really prioritize which systems, which vendors, which devices you need to look at more carefully, and work with the suppliers. My second recommendation is to have a seat at the table, ask questions, give feedback, and be constructive and not just demanding.
[Paul] (26:47)
Yes, nobody’s perfect, but if everyone gets a little bit better, that’s a lot more productive for the future than one or two companies becoming fantastic and everyone else lagging behind. Because there will be a little bit of everybody’s ingredients in most cakes.
[Joe] (27:03)
Yeah, for sure. And so: generate a Software Bill of Materials, use that information to help you prioritize what areas to look at, engage your suppliers, and give them the feedback you’re seeing based on the risk you’ve prioritized. And then go even a step further: are there ways you can ask for Secure by Design practices? CISOs are in a strong spot. The catchphrase, of course, is Secure by Demand, but the idea is to ask for security built in, for the benefit of your ecosystem, your network, your operations, and your environment.
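As a rough illustration of the triage Joe describes, here is a small sketch that joins SBOM components against a vulnerability feed and ranks devices by their worst exposure. Every device name, component version, advisory ID, and score in it is a made-up placeholder.

```python
# Toy triage: join SBOM components against a vulnerability feed and rank
# devices by their worst known exposure. All data below is invented.
SBOMS = {
    "pump-controller": [("zlib", "1.2.11"), ("busybox", "1.31.0")],
    "valve-gateway":   [("zlib", "1.3.1")],
}

VULN_FEED = {  # (component, version) -> (advisory id, severity score)
    ("zlib", "1.2.11"):    ("ADVISORY-0001", 7.5),
    ("busybox", "1.31.0"): ("ADVISORY-0002", 8.1),
}

def triage(sboms, feed):
    """Return (device, worst severity) pairs, most exposed device first."""
    ranked = []
    for device, components in sboms.items():
        hits = [feed[c] for c in components if c in feed]
        worst = max((score for _, score in hits), default=0.0)
        ranked.append((device, worst))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

for device, worst in triage(SBOMS, VULN_FEED):
    print(f"{device}: worst severity {worst}")
```

In a real pipeline the feed would be live vulnerability data keyed by package URLs or CPEs, but the logic, worst exposure first, is the prioritization being described.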
[Paul] (27:35)
Yes, because neither secure by design nor secure by demand, opposite ends of the market if you like, neither of those can really work on its own.
[Joe] (27:46)
Yeah, it seems like we need secure by engagement, secure by collaboration. And that’s really the point. It’s two ends of the same problem, and you’re both stakeholders. The operator is a stakeholder, the manufacturer is a stakeholder, and we need cooperation in between. We need best-of-breed Software Bills of Materials, and we need them done at build time, so everyone has a clear, visible view of the vulnerabilities and the risk at play.
[Paul] (27:51)
Absolutely. So if we can just finish up by putting, if you like, the shoe on the other foot, and imagining the CISO at the supplier who is providing software and firmware to third parties down the line. What can they do in terms of things like vulnerability disclosure? How do you deal with vulnerability disclosures in a way that is not doing your company down, but at the same time isn’t inadvertently misleading people or making things sound less severe than they really are? How do you tell, if you like, truth with honour?
[Joe] (28:46)
Well, it’s a really good question. And I think in the old days, we would sort of avoid telling people about vulnerabilities until we had a fix or a patch. Yes. And then we’d race it out there and say, you’ve got to do it immediately. And that just creates tension in communications: hurry up, implement this. Well, how long have you known about it? I’ve known about it for three weeks, and you’re just telling me now?
[Paul] (29:07)
Suddenly it’s your fault. What do you mean you haven’t patched in half an hour? Well, why didn’t you fix it four months ago?
[Joe] (29:10)
Right.
Exactly. So that’s not really a healthy relationship. Now, if you build security into your products and you have protections that prevent exploitation even when a patch is not available, guess what that means? It means you can disclose immediately upon finding it. If you already have protection in place, if you build in security, then you can make your communications more transparent, more functional, less dysfunctional, and gain confidence by disclosing. It’s like a low-key flex, as we say these days: bad news, there’s a vulnerability in your system, but we’ve already got you covered. You’ve demonstrated that your software development process, your security investment, your support process, and your communication with your customers are all in alignment. You’re not facing that trade-off of “should I disclose or not”: you’re disclosing, and inspiring confidence in your customer.
[Paul] (30:14)
This is something that affects all of us who are involved in the software industry in any way. We can’t just wait for somebody else to do it all for us. We all have to work together. And it’s very much as the Air Force might say, per ardua ad astra. Through hard work, you can reach the stars. So thanks to everybody who tuned in and listened. That is a wrap for this episode of Exploited: The Cyber Truth.
Thank you so much to Joe Saunders once again for his thoughtful and insightful deliberations. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like us and share us on social media as well. Please also don’t forget to share us with everyone in your team so they can hear Joe’s words of wisdom. Remember everybody, stay ahead of the threat. See you next time.