Designing Security into Life-Critical Devices: Where Innovation Meets Regulation

November 06, 2025

 

As connected healthcare evolves, medical device cybersecurity has become inseparable from patient safety. In this episode of Exploited: The Cyber Truth, RunSafe Security Founder and CEO Joseph M. Saunders joins host Paul Ducklin to discuss how medtech organizations are designing security into devices from day one—embedding protection across concept, development, and maintenance phases.

Joe unpacks what “secure-by-design” really means in practice, how the FDA’s new Secure Product Development Frameworks (SPDFs) are shaping engineering collaboration, and why cultural change is essential to making cybersecurity a core part of product quality.

This episode offers practical guidance for developers, compliance officers, and product leaders on:

  • Building security into device lifecycles—not bolting it on later
  • Meeting regulatory expectations while accelerating innovation
  • Managing post-market security for long-lifecycle devices
  • Earning trust and ensuring patient safety in connected care systems

If you work in medtech, regulatory compliance, or embedded security, this discussion will help you understand how to stay audit-ready, innovate faster, and lead with security-by-design.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security.

[Paul] (00:04)

Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello Joe.

[Joe] (00:05)

Hey Paul, looking forward to today’s topics of discussion.

[Paul] (00:24)

You’re sort of half working, half vacationing, aren’t you, in the mighty state of California? Where the weather is probably a bit better than in a lot of the rest of the United States at this time of year.

[Joe] (00:35)

It is perfect weather, it’s cold and crisp and I went for a nice long walk this morning so that’s the vacation side. And then the flip side is I’m working a full day today. So both work and vacation at the same time I guess.

[Paul] (00:47)

Since we’ve spoken about life-affirming experiences, like going for a walk on a crisp and beautiful morning, let’s delve into this week’s topic, Designing Security into Life-Critical Devices. Our subtitle is Where Innovation Meets Regulation. Joe, in the healthcare industry, traditionally it’s all been about technology and fancy new stuff, hasn’t it?

But cyber security is now becoming a very, very pressing concern.

[Joe] (01:23)

At the high level, we’ve got requirements in the US FDA requirements to build in security and ensure that medical devices are protected from cyber attack. We can all understand why. It’s absolutely incredible the number of devices. And of course, there’s different classes of devices. The common denominator is these devices are connected. They are doing life critical functions and security is important because the consequences are stark. And so what we want to ensure is that these medical devices are safe, are secure, are resilient. It comes back to then the software development practices of incorporating security into your processes overall and building secure medical devices from the get go.

[Paul] (02:14)

And that’s not something that is traditionally associated with that kind of device, is it? If you’re building some kind of new amazing surgical robot or upgrading your fantastic MRI scanner to give even more detail, as a scientist and an engineer, you want the technology to be as fancy and as amazing as possible. But as you say, with thousands or even tens of thousands of these embedded devices, in the average hospital these days, there’s quite a lot that could go wrong, possibly at the same time.

[Joe] (02:52)

Yeah, if you rewind about 10 years, the FDA started to get serious about cybersecurity of medical devices, and we’ve come a really long way since then.

[Paul] (03:02)

That’s quite an interesting thing to think about, isn’t it? The FDA, for listeners outside North America, is the Food and Drug Administration. Yes. And it’s now cybersecurity that falls under its remit, as much as, or even more importantly than, its traditional remit.

[Joe] (03:22)

Therein is sort of the trap from a product development perspective. As you said, we want to develop new products that are innovative, to boost quality of life and quality of patient care. The trap, as I say, could be that you don’t think about security as much as you do about the innovation and the patient care aspects, which of course are primary. My thought, though, as we’ll get into further, I’m sure, is that it’s not mutually exclusive. You don’t have to sacrifice innovation for security’s sake. And the FDA has tried to set the expectation, and of course medical device manufacturers are complying. Incorporating security into your software development process, where it can be an enabler of all the security expectations that the FDA is setting while you focus in on the innovative features and capabilities of the medical devices you’re trying to produce, is the right mindset to have ultimately. We’ve used the phrase, and we talk about it all the time.

Secure by Design is the right mindset.

[Paul] (04:25)

Secure by Design is kind of like saying, take this idea of checkbox compliance and just throw it out of the window. Get to a point where you tend to comply because you actually thought about and acted on all these security issues right from the start.

[Joe] (04:43)

I believe the right mindset is to incorporate security from the start, just as you said, and not bolt it on later. And to do that gets back to some basics in software development. Would love to dive into that deeper as we think about this challenge.

[Paul] (04:59)

Let’s dive into that at least a little bit right away and talk about Secure Product Development Frameworks, or SPDFs for short, which are now strongly emphasised by the Food and Drug Administration’s latest cybersecurity guidance.

[Joe] (05:18)

If you think about the medtech arena and you look at secure product design, it means threat modeling, identifying a risk management framework, ensuring you understand what the expectations of the FDA are, and then developing that secure architecture for your devices, especially if there are communications on those devices. And of course, authentication of devices and their use within your overall system. The idea then is not only to build in all those security controls that you would have, but also to generate a Software Bill of Materials to help communicate all the different software that goes on that device.

[Paul] (05:59)

So that Software Bill of Materials means that you are able to come up with a list of ingredients in your product, which in turn means that if one of them is later found and publicized to have a vulnerability, which is something we discussed in that great podcast recently with Kelli, you know that it affects your product and you know that you need to take stock of that and come up with some answer for your customers. And that is a strength and not a weakness, isn’t it?

[Joe] (06:34)

It is a strength and not a weakness. And part of that is because we know not all the software that goes into these devices is originating from the medical device manufacturer itself. They have suppliers, they’re using open source software. And so really understanding not just the mix of those components that are on the device, but what is their origination? What’s the provenance of that software? And what are the vulnerabilities with all that software that comes into your device ultimately? Again, from third parties, from open source and in the software that the medical device manufacturer produces themselves. Developing a Software Bill of Materials to understand all those components, making sure that you’re not violating any open source license restrictions, but then also identifying all those components so you can more effectively identify the vulnerabilities associated with them and address those. 

Because part of what the regulations ultimately require is to mitigate the vulnerabilities on those devices. It all comes together. You generate a Software Bill of Materials, you understand that software provenance, you identify the vulnerabilities, and then you find ways to mitigate those. And if you have a robust methodology for security integrated into your software development process, then you have a better chance of minimizing all of those effects and reducing the cost of supporting, from a security perspective, your overall compliance. You have a better, more robust, more resilient product that you put out in the field.
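The loop Joe describes, generate an SBOM, identify the components, match them against known vulnerabilities, can be sketched in a few lines. This is an illustrative sketch only: the component names, versions, and CVE mappings below are made up for the example, and a real pipeline would consume a full CycloneDX or SPDX document and query a live feed such as NVD or OSV.

```python
import json

# A minimal CycloneDX-style SBOM fragment (illustrative component names only).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "zlib", "version": "1.2.11"},
    {"name": "vendor-rtos-lib", "version": "4.2.0"}
  ]
}
"""

# A toy vulnerability feed mapping (name, version) to known CVE IDs.
# A real pipeline would query a service such as NVD or OSV instead.
KNOWN_VULNS = {
    ("openssl", "1.1.1k"): ["CVE-2021-3711"],
    ("zlib", "1.2.11"): ["CVE-2018-25032"],
}

def match_vulnerabilities(sbom_text, vuln_feed):
    """Return a list of (component, version, cve) hits found in the SBOM."""
    sbom = json.loads(sbom_text)
    hits = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        for cve in vuln_feed.get(key, []):
            hits.append((comp["name"], comp["version"], cve))
    return hits

for name, version, cve in match_vulnerabilities(SBOM_JSON, KNOWN_VULNS):
    print(f"{name} {version}: {cve}")
```

The point is the join: once the ingredient list exists in machine-readable form, matching it against published vulnerabilities is a lookup, not a research project.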

[Paul] (08:08)

So it’s not just a case of knowing what ingredients went in, in case you need to vouch for one of them later after the product is fielded. It also provides a mechanism for making sure that you aren’t accidentally putting in ingredients, when you build the software product, that you didn’t expect, because somebody substituted something without asking, with the best will in the world, or, much worse, some devious attacker tricked you into substituting something without you even realizing it.

[Joe] (08:44)

Yes. People often will just say SOUP: software of unknown provenance.

[Paul] (08:52)

Yes, I have a cryptography library of some sort. Yeah, but which one? Yes. Which version? What options did you use when you compiled it? All of those things can be critical, can’t they?

[Joe] (09:05)

If you’re producing a device that has vulnerabilities in your software that is of unknown provenance, then you are putting patients at risk. You are putting hospital systems and healthcare systems at risk. So really understanding the provenance of the software and the vulnerabilities associated with them while mitigating and addressing them is of utmost importance. And how could you argue otherwise? That’s why I think the industry has come so far.

[Paul] (09:31)

Exactly.

[Joe] (09:35)

Safety is of utmost importance to ensure patient care. And with that, understanding the underlying software components is an obvious step: you have to have a good feel for what’s in these devices you’re manufacturing and producing.

[Paul] (09:49)

And Joe, if we can just zoom in a little bit on the creation of a Software Bill of Materials, you have some quite strong opinions about how and when those should be created, don’t you? Some people will say, well, you just need to know all the source code that you could pick from. So let’s go through the larder and write down everything that’s in there. And we’ll know that it must be at least one of those, we hope. And another way says, well, you wait until the cake’s been baked and then you get some taster to come in and figure out what went into it. But the so-called build-time Software Bill of Materials, where you identify, log, and manage every single component, package, or even file that actually gets used in the baking of the cake, in your opinion is a much, much stronger way of vouching for what you’ve made.

[Joe] (10:40)

Exactly right. And if you think about the different ways you can generate a Software Bill of Materials, you can do it from source code. Yes. And you’re sort of assuming what’s going to be put into that software, you sort of have a plan, but that doesn’t tell the whole story. And as you say, if you try to do it from the binary working backwards, like the baked cake analogy, trying to understand its ingredients, you don’t quite get all the way there either. The best moment to get the full picture is at software build time, where you have perfect visibility. You have 100% completeness in identifying all those files, all those components, all those packages that are used to create that ultimate medical device product. Going back to the purpose of understanding the underlying risk in this software, it’s safety and it’s security and it’s patient care. And so why would you cut corners? Why would you take an inferior approach when there are perfectly good ways to be more complete?
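Joe’s point about build-time SBOMs, recording what actually went into the artifact rather than what might have, can be illustrated with a toy build step. The `build_with_sbom` function below is a hypothetical sketch, not RunSafe’s implementation: it “builds” by concatenating inputs, and logs each file it actually consumed, with a hash, at the moment of the build.

```python
import hashlib
import pathlib

def sha256_of(path):
    """Hash a file's contents so the SBOM entry pins an exact artifact."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def build_with_sbom(source_files, output_path):
    """Toy 'build': concatenate sources into one artifact, and record every
    input file actually consumed, with its hash, as the build happens.
    This is the build-time approach: the SBOM reflects the real inputs,
    not a plan derived from source or a guess recovered from the binary."""
    sbom = {"artifact": str(output_path), "inputs": []}
    with open(output_path, "wb") as out:
        for src in source_files:
            src = pathlib.Path(src)
            out.write(src.read_bytes())
            sbom["inputs"].append({"file": src.name, "sha256": sha256_of(src)})
    return sbom
```

Because the manifest is produced by the same step that produces the binary, an unexpected substitution shows up as a changed hash or an extra entry, rather than going unnoticed.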

[Paul] (11:42)

By getting to the point that you can actually demonstrate or begin to field your product faster, surely that’s got to be better for your innovative engineers than having them think they’ve finished a product and then run around in circles for two or three years afterwards trying to get it from, hey it works in the lab, to we’re allowed to sell it in the field.

[Joe] (12:04)

That’s why I think it comes back to the overall architecture and software development practices. If you can standardize your overall architecture and software development approach as much as possible. Now you can’t do it perfectly on all devices because the requirements are different, but the extent to which you can standardize, what that means is you have economies of scale. You can address the same vulnerabilities across your products and you can build the same disciplines into your software development process in the first place.

[Paul] (12:36)

In other words, if you’re going to make mistakes on product A, then it’s very much better if you consciously and practically avoid making the same or similar mistakes on product A plus 1, A plus 2 and A plus 3. As you proceed, your development times should in theory get shorter and shorter with better and better results.

[Joe] (13:01)

And more innovation over time because you’re freeing up your resources to build new innovative products and features going forward.

[Paul] (13:09)

Yes, and if there are going to be vulnerabilities, particularly exploitable ones, in the future, having that bill of materials means that you know where those problems are and what you need to do at a minimum and perhaps at a maximum to fix them. So it’s not just about being proactive, it’s about being able to deal with bad situations more effectively if they should occur.

[Joe] (13:38)

My feeling is we shouldn’t be recreating the entire effort every single time. And if you can standardize a bit and then identify those imperfections in process, those software bugs, those vulnerabilities that are unique to that device, you’re really working on the exceptions and you’ve got the overall process and platform secure. And what that means is your developers are more efficient, your products are less costly and they have a higher security posture overall. 

Why would you solve the same vulnerability over and over and over again across 20 or 30 different products when you could standardize your overall approach? I think you’re right. I think focusing in on that overall process means that when you do identify an exception, a software bug or vulnerability, you have the focus to mitigate it. One of the challenges out there, I think, is that there are a lot of false positives that end up consuming developer time. I do think that with a robust methodology that focuses on true positive vulnerabilities and minimizes the false positives, developers will be more efficient and your products will be more secure and safe.

[Paul] (14:38)

Absolutely.

So Joe, what do you have to say to the kind of person who still thinks that something like a detailed Software Bill of Materials just tends to advertise what vulnerabilities you might have so openly that it actually makes you more of a target, and gives you less protection than if you had a little bit of secrecy slash obscurity in the mix?

[Joe] (15:20)

Security researchers who end up developing exploits because they have found bugs or weaknesses in components that you have, they’re going to find vulnerabilities whether you publish your SBOM or not.

[Paul] (15:33)

Absolutely, yes.

[Joe] (15:35)

The better approach is to embrace those facts and build an SBOM that allows you to communicate what you’ve done and to ensure that you’ve addressed as many of the issues, if not all the issues as possible. And I would argue going a step further, adding in security protections that anticipate that new bugs will be found is a good way to do that. The question becomes, if you want to obscure things and hide from them and not communicate, you’re going to get exploited because you probably don’t have your eye on the ball.

[Paul] (16:08)

Absolutely.

[Joe] (16:09)

Your overall approach is so much better embracing transparency, embracing communication of your vulnerabilities, and addressing them in a proactive way that the chances of getting exploited go way down. And it’s not about publishing the Software Bill of Materials because that’s easy enough for an attacker to identify what the underlying components are in the software. It’s about engaging and managing your risk as opposed to trying to obscure your code.

[Paul] (16:39)

And those attackers can just use binary analysis, can’t they? But if they do find a problem, the one thing you can be sure of is that they are not going to tell you. Whereas if you are open and honest about all of this stuff, you actually, as you say, not only are better prepared yourself and less likely to be caught out by something bad, you also have a very good chance that one of the good guys will find the problem and disclose it to you responsibly, for the greater good of all.

[Joe] (17:10)

And you bring up an excellent point. If you think about what an attacker does to analyze a binary, and by a binary, I mean the software that’s deployed on one of these medical devices, of course, there are tools out there that security researchers use, whether they’re attackers or, let’s say, the good guys trying to identify vulnerabilities that can be fixed ahead of time. There are binary analysis tools, as you say, that really look at what are the underlying weaknesses in those software binaries, on which, or against which, an attacker can build an exploit. And a good example of that is looking for the underlying return oriented programming gadgets, ROP gadgets or ROP chains, that exist in compiled code in software binaries.

[Paul] (18:00)

Those are fragments of already-executable code that you don’t have to poke in there, so that if you can just deviate the flow of control very slightly, you may actually be able to get the software to misbehave so that it nearly, but doesn’t quite, crash, and when it has nearly but not quite crashed, you, the attacker, end up controlling what it does next. Yep. Unauthorised, unwanted, uncontrolled, unregulated.

[Joe] (18:28)

By doing that, what you’re doing is accessing legitimate functions. Just as you say, the attacker can string them together in a way that wasn’t originally intended by the software developer. And the reason I went into that specific detail is that we asked, should I publish a Software Bill of Materials? The answer is yes. You want to communicate transparently with your customers. Why? Because attackers have these other tools to really understand what’s going on. I know you know this, Paul, but if you do a search on eBay, you can probably buy a gadget or a device. Yes. On eBay. And so what does an attacker do? They buy one of the existing devices, because people are selling off their devices and recouping some cash. When you buy one, what can you do with that? Well, you can analyze it for months on end. They don’t need to look at a Software Bill of Materials.

[Paul] (19:02)

Yes.

[Joe] (19:22)

They’ve got advanced tools to analyze the binaries, look for those underlying ROP gadgets, ROP chains, find their points of entry. They’ll look at communications that they can leverage to gain access. That’s what leads to exfiltrating data from devices. That leads to manipulating the code to do something it’s not supposed to do. What that means is, yes, it’s a big tall order to take on folks like that. Yes, they have lots of time to prepare, in some circles you might say preparation of the battlefield, so they can execute their exploits later. It’s still an economic equation. 

Yes. If you do the right steps to thwart even the best prepared cyber attacker who might spend five, six months developing exploits to work reliably on a target device, if you can thwart them and make that much harder, they’re simply going to look elsewhere. You want to be robust enough where they don’t want to spend six months on your effort because it’s going to be wasted effort. All of that comes back to the software methodology, incorporating security into the practice in general.
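To give a flavour of the ROP-gadget hunting Joe and Paul describe: real tools such as ROPgadget or Ropper disassemble a binary and catalogue short instruction sequences that end in a return. The minimal sketch below does only the first step, locating byte windows in raw x86-64 machine code that end in the `ret` opcode, against a tiny synthetic blob; it is illustrative, not a working exploitation tool.

```python
RET = 0xC3  # x86-64 near-return opcode

def find_gadget_candidates(code, max_len=5):
    """Scan raw x86-64 machine code for byte windows ending in RET.
    Real tools disassemble each window and keep only valid instruction
    sequences; this sketch just locates the candidate windows."""
    candidates = []
    for i, byte in enumerate(code):
        if byte == RET:
            start = max(0, i - max_len + 1)
            candidates.append((start, bytes(code[start:i + 1])))
    return candidates

# A tiny synthetic blob: a "pop rdi; ret" sequence (5f c3) among NOP filler.
blob = bytes([0x90, 0x90, 0x5F, 0xC3, 0x90])
for offset, gadget in find_gadget_candidates(blob):
    print(hex(offset), gadget.hex())
```

The asymmetry is the point: an attacker with a secondhand device and months of time can enumerate these sequences from the binary alone, with no SBOM required, which is why obscurity buys so little.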

[Paul] (20:30)

Joe, you mentioned there the issue of data exfiltration and of remote code execution, where you get some code to execute, leading to perhaps much greater, much worse things being fetched, installed, and used later. How should manufacturers rethink their defenses for medical devices that are now, very loosely speaking, almost always on the internet, rather than completely self-contained and disconnected from it?

[Joe] (21:01)

You make a good point, which is these devices are in fact generating a bunch of data that is used for good reason to help monitor effectiveness and care of the patient. If anybody’s been in a hospital, you have seen all the screens, all the monitors. And when you look at all these devices, whether they’re taken home with the patient or incorporated into the hospital, they are connected because they’re communicating results and data about effectiveness of care. You may go to a specialized spot where there’s a medical technician that’s going to do the MRI or do the x-ray, let’s say, and that information will all be shared to radiologists who might be in a different physical location altogether. And those radiologists are reviewing multiple scans and results and communicating back. 

If you look at that whole ecosystem, the cloud environment, the communications channels, and the connected devices, quote unquote, on the edge, then you can imagine that you have to think about security in the broader sense: the cloud, the communications, and the devices. And so it’s a lot for a hospital system to think about. I do think there needs to be zero trust architecture built in there, encrypted communications in there. And then of course, protecting the software itself on those devices. 

Why? Because as we have seen in many industries, it’s these open communications channels that enable somebody to get on device and exercise their cyber attack or their exploit. And it’s this connectedness that benefits society by improving both the efficiency and effectiveness of patient care, but also creates that exposure to attack methods that a cyber researcher or cyber exploit developer will rely on to administer their cyber attack in the first place.
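One concrete piece of the encrypted-communications posture Joe mentions is making sure a device-side TLS client refuses unverified servers. Below is a minimal sketch using Python’s standard `ssl` module; the CA bundle path is a hypothetical deployment detail, not something from the episode.

```python
import ssl

def make_device_tls_context(ca_cert_path=None):
    """Build a client-side TLS context that refuses unverified servers.
    ca_cert_path would point at the hospital system's CA bundle
    (a hypothetical deployment detail used for illustration)."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.check_hostname = True                      # reject name mismatches
    ctx.verify_mode = ssl.CERT_REQUIRED            # reject unverified certs
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # no legacy protocols
    if ca_cert_path:
        ctx.load_verify_locations(cafile=ca_cert_path)
    return ctx
```

Pinning the trust anchors to the operator’s own CA, rather than the public web PKI, is one common way to narrow the open communications channels Joe warns about, though the full zero-trust picture also covers device identity and authorization.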

[Paul] (23:01)

So to summarize the ethos of Secure by Design, you might think of it in a way that reactive security is a heck of a lot easier if you get the proactive part of security right in the first place.

[Joe] (23:16)

100%. I think of it as almost an optimization equation. Yes. And you want to maximize security and minimize effort. If you can do everything right from a Secure by Design perspective, and look at your overall security architecture and have security built in, then when something does happen, it should be the exception, and it should be more easily addressable at that stage.

[Paul] (23:21)

Yes.

So Joe, to finish up with all of what you’ve said in mind, how will we actually know that we’ve reached what you might call a good point in healthcare security? That we’ve really embraced proactive security and Secure by Design, from innovation in embedded devices all the way to care delivery in clinics and care homes and doctors’ surgeries and hospitals?

[Joe] (24:15)

I would like to put out that challenge of looking at the Software Bill of Materials and the communication of those materials from medical device maker to hospital system, and ensuring that everybody is looking at and reviewing those. That’s one milestone I have. It seems simple, but when people are doing that on a consistent basis and can communicate things like mean time to resolution, or how long it takes to get a product approved and on the market, those are some measurements. But I think the act of producing a Software Bill of Materials and communicating it is an important step. 

Over time, what we ought to see is that there are faster response times in mitigating vulnerabilities when they are found down the road, and that there is more time for developing new features and less time chasing false positives and unknown risks. So I think there’s got to be this balance towards proactive cyber defense. And what I would like to see across the board is a robust software methodology, a willingness to share the Software Bill of Materials, and ultimately fewer false positives on the vulnerability side and faster time to resolution when in fact a vulnerability gets introduced into the ecosystem.
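Mean time to resolution, one of the measurements Joe suggests tracking, is simple to compute once vulnerability records carry reported and resolved dates. A minimal sketch follows, using an illustrative record format (pairs of ISO-8601 date strings) that is not from any standard:

```python
from datetime import datetime

def mean_time_to_resolution(events):
    """Average (resolved - reported) across vulnerability records, in days.
    'events' is a list of (reported, resolved) ISO-8601 date strings --
    an illustrative record format chosen for this sketch."""
    deltas = [
        (datetime.fromisoformat(done) - datetime.fromisoformat(found)).days
        for found, done in events
    ]
    return sum(deltas) / len(deltas)

# Two illustrative records: resolved in 10 days and in 4 days.
history = [("2025-01-01", "2025-01-11"), ("2025-02-01", "2025-02-05")]
print(mean_time_to_resolution(history))  # 7.0
```

Watching this number fall over successive products is one concrete way to see whether standardized, secure-by-design development practices are actually paying off.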

[Paul] (25:39)

So, Joe, if you will forgive me using a trifecta of cliches to close out with, in cybersecurity it really is a case of if you don’t measure it, then you cannot manage it. It’s also a case that security is very definitely a journey and not a destination. And perhaps even more importantly, it should be treated as a value to be maximized rather than a cost to be cut to the bone.

[Joe] (26:09)

I agree 100%, Paul, and I think that’s a great summary.

[Paul] (26:13)

Excellent. I wish we didn’t have to stop because you can probably hear that Joe is getting more and more passionate and I just wanted to hear more and more of that. But that’s a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. Thanks to Joe for his passion, his enthusiasm and for looking attentively to the future. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media as well, and don’t forget to recommend us to everybody in your team so they can listen to Joe’s passion. And please remember, stay ahead of the threat. See you next time.
