Open source is everywhere in embedded development — from compilers and build tools to libraries pulled in to hit tight delivery deadlines. But while open source accelerates progress, it also introduces risks that compound quietly over time.
In this episode of Exploited: The Cyber Truth, Paul Ducklin sits down with RunSafe Security Founder and CEO Joseph M. Saunders and embedded systems expert Elecia White to explore how early open-source decisions shape security, reliability, and maintainability long after devices ship.
Elecia explains how “just one package” often brings transitive dependencies that teams never intended to support, monitor, or defend—and why those decisions matter more in embedded environments where updates may be rare, costly, or operationally complex. Joe adds perspective on why visibility at build time, not after deployment, is essential for understanding risk.
Rather than arguing against open source, the discussion focuses on how to use it deliberately:
- Evaluating libraries before adoption
- Limiting dependency sprawl
- Designing firmware boundaries to contain third-party risk
- Maintaining build-time SBOMs that support fast response
- Planning for vulnerability disclosure and long-term support
If you’re responsible for embedded products that must remain secure and reliable in the field for years, this episode offers concrete guidance for balancing speed today with resilience tomorrow.
Speakers:
Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.
His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.
Joseph M. Saunders: Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.
Guest Speaker – Elecia White: Principal Embedded Software Engineer
Elecia White is a principal embedded software engineer, author of O’Reilly’s Making Embedded Systems, and host of Embedded.fm. She enjoys sharing her enthusiasm for engineering and devices. Her resume includes children’s toys, a DNA scanner, inertial measurement units, Fitbit, and a variety of robots.
Episode Transcript
Exploited: The Cyber Truth, a podcast by RunSafe Security.
[Paul] (00:02)
Welcome back, everybody, to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello, Joe from a WeWork in Washington, DC.
[Joe] (00:21)
Greetings, Paul, excited for today’s conversation and look forward to the topics we’ll be covering.
[Paul] (00:27)
Me too, not least because our guest this week is Elecia White of Embedded.fm. We’re going to be talking about balancing speed and security, the open source dilemma in embedded development. So Elecia, why don’t you start by telling us a bit about yourself and how you got to where you are in 2025?
[Elecia] (00:53)
As you said, my name is Elecia and I have been doing embedded development for decades now. 8051s were where I started out.
[Paul] (01:04)
That’s old school!
[Elecia] (01:08)
Doing Fortran on VAXes?
[Paul] (01:10)
You can get treatment for Fortran. You can get over it.
[Elecia] (01:15)
These days, it’s Cortex-M4s and single board computers. I do consulting for a number of folks, mostly medical or critical safety, or deep scientific instruments. And I wrote the book, Making Embedded Systems for O’Reilly, and I host the embedded podcast, Embedded FM. I’m really happy to be here and excited to talk about security, although it’s a topic I’m always a little afraid of.
[Paul] (01:46)
Right, let’s kick off with the concept of open source software, which almost everybody uses, especially if they have an iPhone or an Android phone or a Windows laptop or a Mac, even though those feel like proprietary systems. In the embedded world, open source has made huge strides in recent years, hasn’t it?
[Elecia] (02:12)
It has, although it depends on how you define both open source and embedded. You mentioned phones. I don’t really consider those embedded. Those are platforms. Those are computers, not microcontrollers.
[Paul] (02:26)
Yeah, I was just mentioning those as examples of computers people have that have a lot of open source in them, even though they may not realize quite how much.
[Elecia] (02:35)
And open source, there are libraries that we all know about. And then there are the vendor libraries that come from the chips, and there are compilers and tools. So open source comes in through many different avenues.
[Paul] (02:51)
So you could have a completely proprietary system, but the compiler, the linker, the entire build environment, the source code management system used and all the servers that are used to build that code could be open source, even though the final result is closed. So what are the big risks for open source users in general, especially those that maybe haven’t been using it for terribly long and for embedded developers in particular?
[Elecia] (03:10)
Indeed.
[Elecia] (03:24)
One of the biggest difficulties for open source software is the licensing. If you use all the tools, usually there are no licensing issues, but if you compile the code in, then you have to start looking at the licenses. You can’t compile in somebody else’s copyrighted code. It’s not only morally wrong, it may make your company go out of business. So licensing is one of the first things to even consider.
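Elecia’s point about checking licenses before code gets compiled in can be partly automated. As a minimal sketch, not from the episode: the file extensions and license strings below are assumptions, and flagged files are candidates for human review, not an automatic verdict.

```python
import os
import re

# License markers worth flagging before code is compiled into a proprietary image.
# Illustrative patterns only; a real review should also check SPDX identifiers broadly.
COPYLEFT_PATTERNS = [
    re.compile(r"GNU General Public License", re.IGNORECASE),
    re.compile(r"GNU Lesser General Public License", re.IGNORECASE),
    re.compile(r"SPDX-License-Identifier:\s*L?GPL", re.IGNORECASE),
]

def scan_tree(root, exts=(".c", ".h", ".cpp", ".py")):
    """Return paths of source files whose header (first 2 KB) mentions a copyleft license."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="replace") as fh:
                    header = fh.read(2048)  # license headers sit at the top of the file
            except OSError:
                continue
            if any(p.search(header) for p in COPYLEFT_PATTERNS):
                flagged.append(path)
    return flagged
```

A hit here does not mean the code is unusable; it means a human needs to read the actual license terms before that file is linked into the shipped binary.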
[Paul] (03:52)
Even when you ignore the quality, the licensing, as you say, can be quite tricky. And we’ve spoken in a recent podcast about the problem of something like the GNU General Public License, where you can have a million lines of code that you’re allowed to use without revealing your source code or how you’ve modified it. You incorporate one tiny module that is covered by the GPL, the General Public License, copy left, and suddenly bang! All of your code has to be revealed. And if you decide not to bother and hope no one will notice, good luck with that.
[Elecia] (04:29)
I mean, I don’t want to say it’s a security risk; it’s a company risk. In embedded systems, you are compiling them together. It’s not like Linux where you can run an open source program and get the results to a file or something like that. Your proprietary, your algorithm code is going to be compiled into a single binary with the open source code.
[Paul] (04:32)
Yes.
[Joe] (04:52)
Yeah, and those modifications, when that software is distributed, have a significant effect on the owner of that embedded software maintaining their licensing. If you do this incorrectly, or you ignore some of those licenses, you could be subjecting your proprietary code to the same licensing conditions by virtue of distributing that product.
[Paul] (05:17)
And even if you decide, well, I don’t mind making all of my proprietary code open source, I don’t really have a problem with that. If you haven’t been ready to do that, if you’re not familiar with the vehicles you need for distributing that code in compliance with the licenses you have, even if you want to comply, it can be a huge undertaking to do that within the time allowed by someone who might be suing you for violating things like copyleft licenses.
[Elecia] (05:46)
So that’s the first place to start, but we wanted to talk about security, right? Because that’s where you start worrying about open source code and whether or not you’re dependent on one person having not put in a back door or keeping up to date or all of the things that we need to depend upon. So, I mean, licensing is the first thing I look at, but after that, there are more cans of worms.
[Joe] (06:09)
I have one client that calls it their Kevin problem. And Kevin happens to be an open source developer who not only built something interesting and useful and did it in a very efficient way, but he may or may not want to provide support or updates. Major organizations are dependent on quote-unquote “Kevin,” who may not resolve an issue or may not think the issue is worthy of solving. I’ve had customers who say, I need to solve my Kevin problem and I need to solve the related security implications if Kevin doesn’t provide an update.
[Paul] (06:45)
The XKCD 2347 cartoon springs to mind: that beautiful architectural diagram of some giant software project, with a huge hole in the structure and a little strut about the size of a cocktail stick holding up one side, labelled as code that’s maintained by one guy in Nebraska. There’s no reason why that person should feel compelled to continue for no money, and there’s also no reason why they should be held liable if you choose to use their product without paying them, or negotiating with them about whether they wish to support it.
[Elecia] (07:23)
When we talk about open source being free, it is important to define free. We’ve had lots of people say it’s free as in freedom. You can do what you want. It gives you lots of openness. It’s free as in beer. You don’t have to pay for it. But we don’t really talk about the idea of open source software sometimes being free like a puppy.
[Paul] (07:44)
Yes. The person who’s building it may be doing it for fun and they may decide they want to take it in any direction they want.
[Elecia] (07:53)
And when you get a puppy, sure, it’s fun and everybody loves it, but it’s not cost-free. It’s not free as in beer. It’s not free as in freedom because your puppy is going to need to be walked and need to be trained. And when we work with open source software, you do need to take that responsibility of understanding your dependencies and planning for the future for when Kevin goes on vacation for a couple of years.
[Paul] (08:21)
Elecia, do you have some horror stories that you can share and perhaps then balance those with success stories where people have decided to adopt some open source project and it’s either gone horribly wrong or been spectacularly useful for them or some amazing combination of both perhaps?
[Elecia] (08:41)
A bit tough. One of my favorite open source projects, which most people probably don’t even realize exists, is the CMSIS libraries. ARM, who is the vendor for the Cortex-M lines and a bunch of other Cortex lines of processor, and this is the same ARM that probably made your processor if you’re running a Mac. They have this set of libraries that all of the different chip vendors use, Nordic and ST and NXP, and there’s this hardware abstraction layer that makes it so that these chips are similar between different vendors because their cores are the same. The Cortex-M is the same between NXP or Nordic or Renesas.
There are also cryptographic libraries and neural network libraries and they are intended to be optimized for the processor. They aren’t always, but there’s a wealth of both information on how these libraries work, how this math works, as well as a trust that this has actually been looked at by a lot of people. It’s using something that is well tested and well understood and decades, well, maybe not decades, but at least one decade old.
[Paul] (09:58)
If it’s 1.1, that counts as a plural, I think. We can have decades. So that’s a case where you kind of assume that the vendor, because they are a very large, wealthy, important company, have probably not only done their homework, but have invested in this because it’s a good investment for them, to persuade people to buy their product line. But in other cases, you can have open source projects that almost everybody somewhere has relied on at some time that have almost been overlooked or just taken for granted by the community. Do you think companies should be more prepared to give back to the community than perhaps they have been?
[Elecia] (10:40)
Yes, I would love that! Some of the places I’ve worked have done that, and they should. Okay, this goes back to my free as a puppy thing. Because you do invest your time and your money into your pets, and you need to do that same thing for the dependencies you’re creating. If somebody had said, look, I am going to use this library, I need to be partially responsible for its security.
[Paul] (10:46)
Yes.
[Elecia] (11:09)
If my security depends on this library’s security, then it is my responsibility as an engineer, as a developer, as a company who’s releasing this, to contribute either time or money. Yes, of course.
[Paul] (11:21)
Or both, of course.
And that money doesn’t necessarily mean paying someone; it could mean doing some testing on your own time and providing those results, clearly written up, to the maintainers in a non-judgmental way so they can use them. Or responsibly disclosing problems that you find, even if the open source license doesn’t require you to publish things that you’ve found.
[Elecia] (11:44)
Even realizing their documentation could have been written better and adding a few commas. It’s a place to get started.
[Paul] (11:52)
Yes, I agree with that. One flaw that I found with quite a few open source projects is that the documentation and even the comments in the code tend to rot away over time because that is not the exciting part that most developers want to work on. And yet it is a place where even not very technical people can contribute deeply by making sure that the product is, as you say, well documented, which means it will probably be better tested.
Which also means it’s easier to review.
[Elecia] (12:24)
Agreed. Totally agreed. And it helps you understand the code as you go through because you aren’t just reading the code and seeing the lines scroll past. You’re reading the code enough that you can check the comments or add a little bit of documentation.
[Paul] (12:38)
Because if you don’t know, and you go and try and find out and the documentation doesn’t tell you, that probably means that no one else knows as well, so what’s useful to you is going to be useful for the community as a whole. Paying back is self-serving altruism. It will help you, and it will help everybody else, and given that you’ll probably depend on some of their code in the future, then what goes around comes around.
[Elecia] (13:03)
One of the things that works with open source is finding the developers who work on it and asking them to make changes, paying them to make changes, as contractors or consultants. If you’re paying Kevin, it’s a lot easier to get his attention.
[Paul] (13:17)
Yes, indeed. Now, Elecia, I guess another problem that is often overlooked when people decide to adopt Open Source Project X is it typically comes along with dependencies Y, Z, P, Q, and a letter from an alphabet you’ve never heard of before. And that’s not always obvious, is it?
So how do you sort that out before you dig a hole for yourself so deep that you will eventually inevitably fall into it?
[Elecia] (13:50)
That’s a good question. It’s tough. As you were speaking, I was like, okay, import matplotlib, import numpy, import opencv. Checking faces is going to be trivial because I have all these libraries. And yet I have tried to compile opencv. It was very much like a balloon that when you get your hands around it, it just squishes out. I think you already know about software BOMs where you make a list of all the things you depend on.
[Paul] (14:18)
Yes, that’s BOM, as in “bomb” without the B at the end.
[Elecia] (14:20)
Right, right, right. Bill of materials. Making a list of everything you depend on and the versions that you depend upon is critical for shipping products, because if you want to be able to reproduce what you put out, you need to have a static idea of what it was. Once we have a list of what they are, and maybe how to install them and their versions, now you can start looking at the security pieces for each one of their submodules.
If you have submodules on submodules, this is one of those areas where you start thinking, okay, now I have dependencies that go really deeply. Maybe I should be considering whether I want to import NumPy so that I can get a matrix multiply when I could do a matrix multiply myself. There is no single good solution, although SBOMs and documenting what you have is the first step.
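The pinned list Elecia describes can be captured mechanically. A minimal sketch, with made-up package names and versions and a deliberately simplified JSON shape (a real SBOM would use a format such as CycloneDX or SPDX):

```python
import json

def make_sbom(dependencies):
    """Build a minimal software bill of materials from pinned (name, version) pairs.

    A pinned list is what lets you reproduce the exact image you shipped.
    """
    return {
        "sbomFormat": "minimal-example",  # illustrative label, not a real spec field
        "components": [
            {"name": name, "version": version}
            for name, version in sorted(dependencies)
        ],
    }

# Hypothetical pinned dependencies for an image.
pinned = [("numpy", "1.26.4"), ("opencv-python", "4.9.0.80"), ("matplotlib", "3.8.4")]
print(json.dumps(make_sbom(pinned), indent=2))
```

The point is less the format than the discipline: names and exact versions, written down at the moment you build, so the list can be diffed against vulnerability advisories later.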
[Joe] (15:19)
And a lot of people do not generate an SBOM when they do their builds. They are creating an SBOM, say, after the fact from a binary, in hopes of painting a picture of everything that’s actually in that software image. When in fact, a lot of the things that get pulled in won’t show up there, and a lot of heuristics won’t find them.
Generating a Software Bill of Materials, not just from source, but doing it at build time so that you see those transitive dependencies and see all those underlying files and all those components that truly comprise the full picture, is vital. And I agree with you that if you don’t have that complete picture, you don’t have a complete picture of your security posture, let alone a total understanding of just how bloated that code may become, because you’re missing some of the underlying files and components in the software in the first place.
[Elecia] (16:12)
Bloated code can be slow as well as a security problem. You may not have enough flash. It’s easy to just pull in these big libraries. Okay, this solves my problem. But does it create 10 others?
[Joe] (16:26)
And it’s amazing you say that, because in a constrained-power, constrained-compute environment, wow, do you have to get creative and precise. Because it becomes an optimization question of performance, security, and all these other things. So I’m with you 100%. For me, constrained environments necessitate being comprehensive and complete, and probably creative, in how you approach some of these problems.
[Elecia] (16:52)
Joe, you were talking about a compile-time solution. I usually do this SBOM as part of making sure that somebody new can build my system. You’re doing this at compile time?
[Joe] (17:04)
We sure do. We do it at build time because of all those components that might get pulled in, whether it’s from the compiler or part of the build process in general. Consider a cake. You might be able to derive the ingredients from the baked cake, and you’ll get pretty close. You also might be able to look at the initial ingredients in the recipe and assume what’s going to go in there. But to actually know what the build system does, you need to see all the file opens as they are processing through to create that software image, or that binary that results from it. Without that, you don’t have a full, complete picture of everything that goes into it, and therefore of whether your system is at risk, at least from a security perspective.
[Elecia] (17:43)
So this is a continuous integration step where you check files, not the final binary.
[Joe] (17:49)
You’re enumerating all the files and components as you’re generating the binary. I guess the core philosophy I have is you want to keep those things as close together as possible so that they remain the same picture or they represent the same information.
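One concrete way to observe the file opens Joe describes is to trace the build on Linux, for example with `strace -f -e trace=openat -o build.log make`, and then post-process the log. A sketch of that post-processing step, assuming strace’s usual line format:

```python
import re

# Matches strace lines like: openat(AT_FDCWD, "/usr/include/stdio.h", O_RDONLY) = 3
OPEN_RE = re.compile(r'openat\([^,]+,\s*"([^"]+)"[^)]*\)\s*=\s*(-?\d+)')

def files_opened(strace_log):
    """Return the set of paths successfully opened during a traced build."""
    opened = set()
    for line in strace_log.splitlines():
        m = OPEN_RE.search(line)
        if m and int(m.group(2)) >= 0:  # negative return means the open failed; skip it
            opened.add(m.group(1))
    return opened
```

This is only a sketch of the idea, enumerating what the build actually touched rather than what the source claims to need; production tooling would also follow child processes, filter out system noise, and map paths back to named components.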
[Paul] (18:05)
It strikes me that, in a way, you and Elecia are coming at this from slightly different angles, but very importantly, from the same side, if you like, of the equation, namely, before you actually deliver the software. In other words, the security implications have been thought of beforehand; you’re not trying to figure out what happened afterwards. So, Elecia, how do you do that? How do you decide in advance what is likely to be depended upon by the dependencies that depend on the dependencies, and so on? How do you solve this transitive closure problem, or whatever you might call it, so that you don’t get sucked into dependency bloat by mistake?
[Elecia] (18:49)
My impractical advice is to use large libraries, CMSIS, vendor HALs, RTOSes you buy, that don’t depend on other things, thereby limiting what you need to pay attention to. If you are only pulling in three libraries from very trusted sources that have no dependencies, then you only have to monitor those three things. But when you start getting into BLE and Wi-Fi, that process doesn’t work as well. When you start thinking about single-board computers, that process doesn’t work at all. Which is why I really prefer the smaller processors, where it is a viable option. The list of dependencies is the place to start. A lot of companies don’t even get that far. Start there. Don’t worry about all the rest of this quite yet.
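The “dependencies that depend on dependencies” problem Paul raised is, literally, a transitive-closure computation over the declared dependency graph. A small sketch with a made-up graph:

```python
def transitive_deps(graph, root):
    """Return every package reachable from root in a {package: [deps]} graph.

    The size of this set, not the length of your import list, is what you
    actually have to monitor for vulnerabilities.
    """
    seen = set()
    stack = [root]
    while stack:
        pkg = stack.pop()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical graph: one "innocent" direct dependency pulls in four packages.
graph = {
    "my-firmware": ["tls-lib"],
    "tls-lib": ["bignum", "rng"],
    "rng": ["entropy-src"],
}
print(sorted(transitive_deps(graph, "my-firmware")))
```

Elecia’s advice maps directly onto this: three trusted, leaf-node libraries give a closure of size three, while one convenience import can silently multiply the set you are responsible for.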
[Paul] (19:34)
So it sounds as though in some cases, choosing a, let’s call it a smaller chip, not one of the latest and greatest system on chip computers like say, Apple’s, where I believe their current ARM chips, what do they have, 200 billion transistors? The smaller and simpler the chip, A, the cheaper it will be, and B, the simpler and easier to manage, corral, and make vulnerability predictions about your software will be. Which, in the embedded space, may have to live in the world for 5, 10, or 20 years, rather than 5 or 10 weeks like the average web app.
[Elecia] (20:14)
Separating into smaller processors is one way to solve some of these problems. Of course, you do have the scale problem: if you want a lot of things to happen, then you’re buying a whole bunch of chips, and that’s no longer cheaper than buying one chip that can do lots of things.
[Paul] (20:29)
So what practical things can you do during the long-term development process? Talking about developing and building and adding features to a product that’s running on hardware you’ve already chosen, it’s difficult to go and change it. What sort of guardrails can a development team put in place that reduces the risk that they will import a package that they later come to regret? How do you choose wisely and choose early?
[Elecia] (20:59)
Professionalism. A lack of professionalism indicates a lack of attention to detail, a lack of processes. For all that it is impressive that one guy was maintaining software that crashed so much of the internet, you don’t want to be the person who depends on them. If it depends on esoteric tools, if it depends on tools that someone wrote in their garage because making a new language called Rockstar was fun, don’t bet your company on some weird tool that could go away at any time. If you want a green flag, you want a good level of professionalism. They don’t need slick marketing. They don’t need to be pushy. But they should treat it as though it is a profession.
[Joe] (21:42)
I agree.
[Paul] (21:43)
Joe, do you want to say something about vulnerability disclosure and what you can read into how a project or a company or an individual deals with problems that come up in their software? What they say when things go wrong.
[Joe] (21:58)
Yeah, Elecia calls it professionalism, and I agree. And as you suggest, Paul, there are indicators of some of that professionalism, some of that discipline, some of that robust process that demonstrates an awareness that process matters. Part of that process, of course, is not just how often you do your updates and how much you disclose, but are you transparent, and do you report or disclose vulnerabilities? That kind of professionalism gives the ecosystem confidence that you’re on top of problems. And let’s face it, nobody writes perfect code. But if the contributor, if the maintainer, is able to not just communicate and be transparent, but also respond in a timely fashion, those become good indicators of who has a robust process. It becomes very obvious who maintains their lawn, who trims their bushes.
And so those are indicators also of maybe higher code quality, maybe safer code, secure code.
[Paul] (23:03)
So, Elecia it strikes me that if you’re expecting the people upstream of you to have a decent, responsible vulnerability disclosure process, then you need one yourself. And if you’re just a small development team or a small company, a new startup, you need to plan for that in advance, don’t you? What do you need to have in place to make sure you can let your customers know what they need to know to keep the world safe?
[Elecia] (23:30)
My answer is: we run around screaming like chickens with our heads cut off, and then we firmware update.
[Paul] (23:35)
Well, you mentioned the words firmware update. Sometimes, even if it’s easy to create a new firmware blob, and even if it’s easy to tell your customers about it, in the embedded world, it’s not always easy to push that out. What precautions can you take that will make it safer and easier and more palatable for your customers if there is a crisis that they need to react to?
[Elecia] (24:00)
Firmware update is one of my favorite architectural topics to talk about, and one of my least favorite implementations. Devices do live for decades, and they live in difficult-to-reach places: space, the bottom of the ocean, in trucks and ships with only occasional maintenance.
[Paul] (24:19)
In pump rooms that have not been unlocked and opened since 1989, the location of which has been forgotten.
[Elecia] (24:24)
Exactly.
And transmission can be difficult and expensive. Think about small communication pipes, like maybe a satellite phone. And then there’s the pain of creating a release that is up to date and tested and customers demand security, but hate updates. Do you update automatically? I think we’ve kind of been saying we don’t want our dependencies to update automatically, but we also want our customers to update reasonably before they get attacked and then blame us.
[Paul] (24:58)
It sounds as though a fundamental part of this is the old cliche, less is more, can really, really help in the embedded world. The less you bite off, the less you have to chew, the less likely it is that your customers will need to change things very often.
[Elecia] (25:15)
Ideally, and yet we do want our systems to do everything they need to do, and to be good, and to use all of those features that we paid for on our chip. So think about firmware update from the get-go. It is not something you add in the last week before you ship. And if it means you have to burn a couple of boards because you got it wrong, that’s okay. You have to plan for firmware update.
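Planning for firmware update from the get-go often means something like an A/B slot scheme: the bootloader picks the newest slot whose image validates, and falls back to the other slot if an update was corrupted. A simplified sketch of that selection logic (the field names and CRC check are illustrative, not any specific bootloader’s API):

```python
import zlib

def valid(slot):
    """A slot is bootable if its stored CRC matches its image bytes."""
    return slot["crc32"] == zlib.crc32(slot["image"])

def choose_boot_slot(slot_a, slot_b):
    """Prefer the highest-version slot that validates; fall back to the other."""
    candidates = [s for s in (slot_a, slot_b) if valid(s)]
    if not candidates:
        raise RuntimeError("no bootable image: enter recovery mode")
    return max(candidates, key=lambda s: s["version"])

# A corrupted update in slot B should fall back to the older image in slot A.
a = {"version": 1, "image": b"fw-v1", "crc32": zlib.crc32(b"fw-v1")}
b = {"version": 2, "image": b"fw-v2", "crc32": zlib.crc32(b"fw-v2") ^ 1}  # deliberately wrong CRC
print(choose_boot_slot(a, b)["version"])  # falls back to version 1
```

The design point is the one Elecia makes: the fallback path has to exist before the first unit ships, because it lives in the bootloader, which is the one piece you usually cannot update in the field.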
[Paul] (25:34)
Right.
[Elecia] (25:43)
You have to plan for when your dependencies become non-maintained and what you’re going to do as a company, as an engineering team. And if you are biting off more than you can chew, if you are pulling in things you don’t need, if you think about how much it will cost to maintain that library, how much it will cost to stay up to date on its security, it starts becoming a cost benefit analysis.
Is it cheaper for my developers to do it where I have control or is it cheaper to pay somebody else to do it? Or is it cheaper to use the open source and keep an eye on it? They’re all very viable options. You need to engineer it. You need to be disciplined. You need to be professional.
[Paul] (26:26)
That sounds like security by design to me. It’s like crossing the road. Stop and look both ways. Don’t just run in blindly because one of these days it will end very badly. Joe, do you have anything to add to that?
[Joe] (26:39)
Yeah, I was just going to say, I do think that the tendency for many organizations is to do what’s expedient, do what’s going to get the product shipped in the first place. And sometimes people kick the can down the road, so to speak. Elecia outlines a couple of really good questions to ask yourself, including how these updates are going to be managed. I would add maybe a fourth one, I know she listed at least three: if you can take off…
[Paul] (26:53)
Yes.
[Joe] (27:07)
…the exploitability of certain kinds of vulnerabilities, and have that built in so you can reduce the potential for exploitation even when a patch is not available. Obviously, ways to address whole classes of vulnerabilities, rather than relying strictly on patching and updating, are near and dear to my heart. In that regard, Paul, coming back to your point: think secure by design, whether it’s in the planning phase, the architectural phase, or in the security mitigation strategies in general. All of those things do need to be carefully thought through. Expediting to get something out the door feels good in the short term, but can carry huge costs in the long term. To the extent that people can have more forethought about some of these trade-offs that Elecia identifies, it makes for better business practice, certainly better software development discipline, and ultimately a more resilient system.
[Paul] (28:08)
Joe, I think that’s a great way to end, because you’ve cemented what Elecia has been saying all along: that this is not something that you do afterwards. It’s not something you come back to when you’ve already thrown the dice. We do need to think about secure by design rather than papering over the cracks. And certainly, when it comes to open source, contributing back to the community, as Elecia said earlier, it’s not just a moral matter. It’s actually good for you as well as good for everybody else.
So that is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media. And also don’t forget to recommend us to all of your team so they can listen to the wise words of Elecia and Joe as well.
So once again, thanks to everybody who tuned in and listened, and remember, stay ahead of the threat. See you next time.