What the 2025 SBOM Minimum Elements Mean for Software Supply Chain Security

October 23, 2025

 

The updated 2025 SBOM Minimum Elements from CISA and DHS introduce new guidance on what data and context software producers should include in every SBOM they generate. In this episode of Exploited: The Cyber Truth, host Paul Ducklin sits down with RunSafe Security’s Kelli Schwalm, Director of SBOM, and Joseph M. Saunders, RunSafe Founder & CEO, to explore the updates, how they affect embedded software development, and why more detailed SBOMs are a benefit to software security.

Kelli explains the new technical standards, from component-level cryptographic hashes to generation context metadata that clarifies how SBOMs are produced. Joe discusses how these changes move SBOMs from static compliance artifacts to living tools for transparency and risk reduction.

They also dive into:

  • The challenge of implementing SBOMs in embedded and legacy systems
  • How build-time visibility improves vulnerability management
  • Why accurate license and dependency tracking is key for compliance and security
  • The future of SBOMs in protecting critical infrastructure and national resilience

If you’re a software builder, security engineer, or policymaker, this episode offers practical insights for adapting to the new SBOM landscape.

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Special Guest:  Kelli Schwalm, Director of SBOM, RunSafe Security

Kelli Schwalm is SBOM Director at RunSafe Security, where she leads the team developing RunSafe’s unique approach to generating build-time SBOMs for embedded software, particularly software written in C/C++. Prior to joining RunSafe, Kelli worked on embedded security technologies for mission-critical systems with a focus on Linux Kernel development.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:01)

Welcome back everybody to this episode of Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:20)

Greetings, Paul. Great to be here.

[Paul] (00:22)

And today we have a very special guest indeed: Kelli Schwalm, who is Software Development Director, leading SBOM development at RunSafe. Hello, Kelli!

[Kelli] (00:35)

Hi, I’m excited to be here as well.

[Paul] (00:37)

And we have some surprisingly important stuff to talk about. This week’s title is “What the 2025 SBOM Minimum Elements Mean for Software Supply Chain Security,” which is quite a mouthful. Let me just introduce that by reminding everyone that SBOM, without a second B at the end, doesn’t blow things up; it gets things into order, because it stands for Software Bill of Materials.

The important thing behind this episode is that CISA and the DHS, that’s the Cybersecurity and Infrastructure Security Agency and the Department of Homeland Security in the US, have just released the first major update to their minimum elements for a Software Bill of Materials since 2021. This raises the bar for what suppliers should provide in their Software Bill of Materials. So Kelli, let us start at the beginning. What has changed in this new draft of the minimum SBOM elements? What have they added, and why does it matter?

[Kelli] (01:49)

They made some major and minor updates to a lot of the elements, which are either newly required, weren’t required previously, or now need a higher level of detail to really satisfy that minimum level of what an SBOM should represent. So for example, a component hash is a new field that is required within the SBOM, and that really just identifies exactly what the component is.

A component can have the same name, it can have the same properties, it can have the same information, but across versions, and depending on who supplies it, that hash can change. We like to talk about how a file can also represent a component, and that file can be modified throughout the lifetime of a build; we want to report those as distinct components. Other major changes are identifying generation context and dependency relationships, including requiring transitive dependencies.
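
A rough illustration of that component-hash idea: the Python sketch below computes a SHA-256 digest for a single file-level component and records it in a CycloneDX-style entry. The field names follow the CycloneDX JSON schema, but the file path and version are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def component_entry(path: str, version: str = "unknown") -> dict:
    """Build a CycloneDX-style component record for a single file, including its
    SHA-256 hash so the exact bytes can be identified even when the name and
    version stay the same across builds or suppliers."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "type": "file",
        "name": Path(path).name,
        "version": version,
        "hashes": [{"alg": "SHA-256", "content": digest}],
    }

if __name__ == "__main__":
    # Hypothetical path: any file pulled into the build could be recorded this way.
    print(json.dumps(component_entry("third_party/zlib/inflate.c", "1.3.1"), indent=2))
```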

[Paul] (02:48)

Now, transitivity, for our listeners, is something that many people won’t have heard of since primary school, where you learn about all the special rules of addition and multiplication. But loosely speaking, transitivity means that if A depends on B, and B depends on C, then there’s no getting around the fact that A depends on C as well, and that set of links really matters.

It’s not just enough to go back one step in the chain, is it? You might have to go back two or three or four, depending on the industry you’re in.
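
Transitive dependencies are easy to picture as a graph walk. Here is a minimal sketch, with invented package names, that expands a map of direct dependencies into the full transitive set that the updated minimum elements expect an SBOM to capture.

```python
def transitive_deps(pkg, direct):
    """Return every package reachable from pkg, not just its direct dependencies."""
    seen, stack = set(), [pkg]
    while stack:
        for dep in direct.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical map: A depends on B, B depends on C, so A depends on C as well.
deps = {"A": ["B"], "B": ["C"], "C": []}
print(sorted(transitive_deps("A", deps)))  # -> ['B', 'C']
```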

[Kelli] (03:23)

Absolutely, and that’s a very frequent scenario. So when a user installs a package on their system, that will often install other packages that are also required. So it is very important to track that, as vulnerabilities can be present in any one of those dependencies.

[Paul] (03:36)

Yes.

So it’s not quite like a cake recipe where it will expressly say, you need 50 grams of sugar and 150 grams of sodium bicarbonate. There might be an ingredient that just says, put in 200 grams of this other product. And when you get that product out and you look at its list of ingredients, there are 50 more things in that, and each one of those has 50 more things in it. And all of them make a difference in the end.

[Kelli] (04:08)

Yes, and as someone with a food allergy, I love that analogy because I’m frequently typing all of the ingredients on every component I add into my recipes.

[Paul] (04:19)

Yes, that’s a great analogy for transitive dependency, isn’t it? And that can really make a huge difference in software. You think you’ve got one dependency, but in fact you can have a whole web of intrigue in the background, any component of which could be buggy because it’s not looked after well, or even worse, as we’ve seen recently in many cases, could have been deliberately modified by cybercriminals or even state-sponsored actors who’ve possibly spent a long time building trust in the ecosystem so they can poison something far upstream, with the intention of attacking some specific target downstream and to hell with the consequences for all the other people they affect as well.

[Kelli] (05:03)

Absolutely. And the way we identify vulnerabilities is really interesting, because we rely on package identifiers through CPEs, which are Common Platform Enumeration entries. So that’s essentially an ID for a package that is well recognized, and it comes from a central database. CPEs identify a package, which would be a dependency. They’re a way for the entire industry to recognize what that package is named, what it’s called, and how to attribute it to CVEs, the Common Vulnerabilities and Exposures, where each CVE is an ID for a vulnerability. So there’s a strict mapping between CPEs, the package ID, and CVEs, the vulnerability ID.
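
A hedged sketch of the lookup Kelli outlines: given a component’s CPE identifier, match it against a table of CVE records. Real tooling would query the NVD or an offline vulnerability feed; the hard-coded table below is purely illustrative.

```python
# Illustrative only: real CPE-to-CVE data comes from the NVD, not a hard-coded dict.
KNOWN_CVES = {
    "cpe:2.3:a:zlib:zlib:1.2.11:*:*:*:*:*:*:*": ["CVE-2018-25032"],
}

def cves_for(cpe: str) -> list:
    """Return the CVE IDs recorded against a component's CPE identifier."""
    return KNOWN_CVES.get(cpe, [])

print(cves_for("cpe:2.3:a:zlib:zlib:1.2.11:*:*:*:*:*:*:*"))
```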

[Paul] (05:49)

So the idea there is that if you have a particular CPE identifier in your product, and then one of those CPEs turns out to have some critical vulnerability, you will not only know about it, you will be able to track it down and fix it.

[Kelli] Correct. 

[Paul] So there’s an awful lot at stake here isn’t there?

[Kelli] (06:09)

There is. And interestingly, CPEs are only created as matching CVEs need them, so there can be a backlog before a CPE is even reported to link to that CVE.

[Paul] (06:23)

So you mean that unless and until someone figures out that a particular package might be bad, it’s kind of assumed to be harmless. Yes. And it doesn’t appear on the list. Now, is that why this idea of individual cryptographic hashes for the actual packages you use has been introduced? So you can say, the files that I used are exactly these, for the sake of repeatability. Is that the idea there?

[Kelli] (06:48)

I think there’s certainly a significant impact from that. For example, a supplier might modify a package. Many RTOSs, or real-time operating systems, package things like their own version of GCC, which may or may not be modified; we don’t really know. So those cryptographic hashes are able to answer the question, is this modified? They also make it possible to attribute a package to the vulnerabilities from a different supplier if needed.

I don’t know if that’s the intent of the cryptographic hashes, it’s certainly a byproduct that’s very compelling.

[Paul] (07:22)

It certainly sounds as though it forces people to provide a more definitive description of what they actually used, rather than just the title that it happened to have when they downloaded it.

[Kelli] (07:34)

I’m going to go on a tangent a little bit, but I think something that’s very cool about SBOMs is that they comply with specific schema standards through CycloneDX or SPDX. There’s also, I think, SWID, which is generally not used from what we’ve seen. Because these are well-defined schema formats, there are so many tools to analyze the data. So really, when it comes down to it, more data is better, because this isn’t meant to be reviewed by a human. This is meant to be reviewed by some tool that takes in the JSON, for example, and can extract all the information it needs and analyze it.
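
Because the formats are well-defined JSON schemas, pulling the data you need out of an SBOM takes only a few lines. A minimal sketch, assuming a CycloneDX document saved as sbom.json:

```python
import json

# Assumed input: a CycloneDX JSON document; the file name here is hypothetical.
with open("sbom.json") as f:
    sbom = json.load(f)

for comp in sbom.get("components", []):
    hashes = {h["alg"]: h["content"] for h in comp.get("hashes", [])}
    licenses = [lic.get("license", {}).get("id") for lic in comp.get("licenses", [])]
    print(comp.get("name"), comp.get("version"), hashes.get("SHA-256"), licenses)
```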

[Paul] (08:14)

So presumably the idea of that is that it forces everybody to describe their packages A) in a machine-readable way, which helps with automation but does no harm for humans, and B) so that you don’t have to spend hours and hours translating everybody’s data, perhaps slightly incorrectly, because everybody is singing from the same song sheet. Now Kelli, there’s a new item called

[Kelli] (08:40)

Yes.

[Paul] (08:44)

Generation context. Can you explain something about that?

[Kelli] (08:49)

Yes, so there are different categories of SBOMs and that really just defines when they’re generated. There is a binary-based SBOM that’s just generated on a resulting binary, no source code included.

 

[Paul] (09:04)

That’s basically taste the cake and go, yeah, probably a bit too much sugar in that. But maybe you don’t quite taste the arsenic that somebody snuck in there. Yes.

[Kelli] (09:10)

Yes.

Sometimes that is very useful. For example, with my food allergy, I can’t have gluten, so I will have my husband taste test food to be like, mm.

[Paul] (09:28)

Now that’s the way to do it.

[Kelli] (09:30)

This has happened a few times. I’ll look and be like, nope, that’s not safe. However, it doesn’t give as much information, nor is it as reliable, as going straight to that ingredient list. And again, going with that analogy, my husband might taste the food and know there’s gluten in it just because of the texture, but he doesn’t know what the actual flour mixture is. So you still don’t have that ingredient list you’re expecting. You’re more so looking for a particular ingredient.

[Paul] (09:58)

Sounds like an excuse to buy a gas chromatograph just for hobby use at home.

[Kelli] (10:02)

You have a good point.

[Paul] (10:05)

Ha ha ha ha ha.

[Kelli] (10:07)

Another category is a source-based SBOM. So that’s when you’re just taking a static look at the source code. Right. And you’re saying, I know everything that could go in. So that’s all the ingredients in your pantry. You know everything that could go into that recipe. However, not all of it may actually go into the recipe. So you might have three types of flour, but only two are mixed in.

And so then my favorite category is the build-time SBOM, where you really are just sitting at build, at compilation, and you’re seeing everything that’s coming in. So that’s someone essentially watching the recipe unfold, watching every ingredient go into the mixture and recording it.

[Paul] (10:50)

Presumably that provides strong protection against A) accidents and B) malevolence.

[Kelli] (10:58)

Yes, absolutely. And builds can be very complicated. They can also call out through external resources to then download even more dependencies that may not be on the system prior to the build.

[Paul] (11:12)

In other words, someone could in all innocence say, here’s my bill of materials, but they haven’t taken into account or haven’t noticed that when they do a build, they actually suck in more things. Perhaps a rogue update that just gets blindly accepted without human review, which means that the bad guys get their way. Yes. So Joe, if we just switch to what you might call the bigger picture for a moment, how do these regulatory changes move SBOMs into something that actually drives resilience and as you like to call it elevation of process in the software supply chain?

[Joe] (11:50)

I think increasing transparency is ultimately the goal so that software teams can share with their customers exactly those ingredients that are in the products that they deploy in their infrastructure, in their operational technology networks. With that level of transparency by sharing the information, including having complete and correct identification of components, you’re better able to share vulnerabilities and risk that might be in a software system.

If I put my RunSafe hat on for a minute, I take it even a step further, to help increase the security posture. That is, we want to encourage people to disclose vulnerabilities so everybody knows that a vulnerability exists, so the corrections and the patches can get applied. But if you’re already protecting the software, you can disclose with confidence. So if you have things like RunSafe security protections on the software and you identify a vulnerability, well, great news. You should share it with everyone so everybody knows.

And you should tell your customers you’re already protected. So the idea is not only to be transparent, but to disclose more readily, sooner in the process, so everybody can increase their defenses, and we don’t blindly assume we just won’t be attacked. If you combine good insight from having the best approach to generating a Software Bill of Materials along with code protection, then you have the best opportunity to link to all those vulnerabilities, to share with your customers, and ultimately to boost transparency with your customers overall. And if you’re just relying on checking a box because you’re not that serious about security, then you’re probably going to miss some of these things and your customers will ultimately pay the price.

[Paul] (13:31)

Now I know that a lot of embedded software is still written in C or C++. It’s very well understood. The development tools are readily available even for esoteric embedded systems. But SBOMs in the C and C++ world have some special challenges of their own, don’t they?

[Kelli] (13:52)

I love talking about this because it is such a headache. Okay, so really with any kind of legacy C/C++, or really any embedded software development, anything goes. I like to refer to it as the wild west of software development.

[Paul] (13:56)

Eee, that’s great! I’m all in.

[Kelli] (14:13)

People will copy and paste specific versions of files right into their code base. And you might see a commit that references it, where a commit is really just a name assigned to a change set, a name and a description. And that may be your only indication, if there is even version tracking in that repository. Other projects might use Git submodules, but Git submodules are kind of the best of the worst.

So it’s a way of tracking those packages through some means.

[Paul] (14:47)

So that’s where you divide your project into some sort of logically useful subdivisions rather than just having one giant file called program.c.

[Kelli] (14:58)

Yes, which also can happen.  Unfortunately.

[Paul] (15:01)

Ha ha ha ha!

Now there’s also another hassle with even modern C code, let alone legacy code, isn’t there, that when you’re supporting lots of different chipsets or lots of different device types as they’ve evolved over the years, you can have what’s called conditional compilation, which depends on all kinds of build-time settings. So you look at the code and you’ll have 17 versions of some, say, hashing algorithm, and which one gets selected cannot be determined unless and until the software is actually built.

[Kelli] (15:36)

Yes, that’s very true. And I like to use an example that kind of ties everything together. So in a compilation environment where you’re really building for a chipset, you might also build a library that is a dependency of that binary. That library may have the same name whichever chipset it’s compiled for, and in the same compilation process it has been linked into the final binary for that chipset. You have no way to differentiate the copies across the different chipsets, because they are proprietary, they have the same name, they’re in the same location, they have almost the same everything.

[Paul] (16:14)

Probably have exactly the same version number in, won’t they?

[Kelli] (16:17)

Correct. So build time is great because then you can look at everything coming in during that process and say, well, this library depends on these source files that are particular to this chipset.
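
A toy sketch of that disambiguation: two libraries with the same name and version, built for different chipsets, are only told apart by their content hashes and by the source files recorded going into each one at build time. All names, bytes, and file lists below are invented.

```python
import hashlib

def lib_identity(name: str, version: str, contents: bytes, inputs: list) -> dict:
    """Identify a built library by its bytes and its recorded build inputs,
    not just by name and version, which may be identical across chipsets."""
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(contents).hexdigest(),
        "built_from": sorted(inputs),
    }

arm = lib_identity("libcrypto.a", "1.0", b"arm build output", ["sha_armv8.c", "common.c"])
x86 = lib_identity("libcrypto.a", "1.0", b"x86 build output", ["sha_x86_64.c", "common.c"])
print(arm["sha256"] != x86["sha256"])  # True: same name and version, different components
```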

[Paul] (16:31)

So the deal there is that by monitoring what happens at build time, you can not only tie binary objects or libraries back to specific source projects or sub-projects, you can also record the actual build parameters that were used, which is good for repeatability. But I guess it also means that if there’s a CVE for that particular library, and you often do find CVEs that say this applies to the ARM version only or to Intel x64 only, you can be much more precise about assessing vulnerabilities and vulnerability disclosure. So you save yourself panic and false positives as well.

[Kelli] (17:13)

Yes.

That notion of trying to pare down which vulnerabilities actually apply is called triaging: triaging these false positives that alert you to these CVEs or vulnerabilities. So being able to filter out the different chipset dependencies is a great way to triage and to devote resources to the other vulnerabilities that do apply and that you do need to mitigate.

[Paul] (17:43)

So what are some of the things that you can do to solve these problems? What actually went into the cake and what effect did that have on the cake that we didn’t think of before?

[Kelli] (17:54)

So my favorite approach for embedded is to look at the files that are being accessed. The files tell us a lot. They tell us about the source code. So for example, many CVE descriptions will contain something along the lines of, this vulnerability is present in these files of this package. We know which files are coming in, so we can again apply that filtering mechanism if we report by file. And we really operate for the most part on file-based operating systems. Builds are being done on systems where everything is treated as a file, more than you would ever anticipate. That includes libraries, that includes applications, that includes the actual source files, and that includes the artifacts that go from the source file to the compiled output. So treating things on a by-file basis really gives you all of the information you could want about what goes into that final resulting binary.
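
A hedged sketch of that file-level filtering: if an advisory names the affected source files and the build-time SBOM records which files actually went into the binary, the intersection tells you whether the advisory applies. The file names and the CVE placeholders here are invented for illustration.

```python
# Invented data: in practice the first set comes from build-time monitoring and the
# second from CVE descriptions or vendor advisories.
files_in_build = {"inflate.c", "inftrees.c", "adler32.c"}
cve_affected_files = {"CVE-XXXX-0001": {"deflate.c"}, "CVE-XXXX-0002": {"inflate.c"}}

applicable = [cve for cve, affected in cve_affected_files.items()
              if affected & files_in_build]
print(applicable)  # Only the entry that touches a file we actually compiled in.
```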

[Paul] (18:54)

And presumably also gives you intimate insight into files that are generated temporarily and used, for example, as scripts that affect the build process, even if they’re then removed afterwards. Yes. Because my understanding is a lot of supply chain attacks these days involve targeting the build process with the intention of corrupting the build environment while nevertheless building a correct binary at the end in the hope that nobody will notice.

[Kelli] (19:24)

Yes. And like I said before, embedded is the wild west of compilation. People will actually generate a library for one chipset, generate it for another chipset over and over again, all in the same build process. Is that particularly hygienic for a build process? No, it is not. Well, if you do a file-based SBOM, you can see.

[Paul] (19:48)

How do you prevent that?

[Kelli] (19:53)

Every file that’s coming in, you can see; you can say, well, we referenced those 73 chipsets before we got to this one. Or through dependency mapping, which is that dependency relationship mentioned in the minimum requirements, we can say, this binary actually depends on this file that’s really suspect. That’s weird. Where is that coming from? Another compelling alternative to build-time SBOM generation, especially in the embedded world, is doing these hashes not just of components, but of different code snippets, and trying to identify them that way, which is very useful with open source, where you can gather all of these code snippets and recognize where they’re coming from. So the GCCs of the world, you can still recognize. But in embedded, so many things are proprietary, and those code snippets aren’t accessible. So that becomes a much more complex problem to handle.

[Paul] (20:50)

Do you think that means that vendors may need to be pushed to provide, perhaps under special licenses or NDAs, access to their source code? Or do you think we’re stuck with the fact that some of the things that we compile into our software are just going to be intractable blobs that we have to take on trust?

[Kelli] (21:10)

I think if we lived in a software utopia, it would be great to have access to these code snippets to properly identify. However, we don’t. So I do think it is trying to push the market into a place that will never exist. So we do need other ways to identify components that go into an SBOM that aren’t just code snippets.

[Paul] (21:33)

We may have made it sound more difficult than it really is, even though the whole idea is that huge amounts of this can be reliably automated. So if a developer or product security manager listens to this episode and thinks, nah, it’s too hard, I’m just going to stick to checkbox compliance, Joe and Kelli, what would either or both of you say to try and help them get over that hurdle?

[Kelli] (22:01)

Personally, I would say to just jump into the community. There are a lot of people very passionate about educating everyone on what’s available, the complexities that occur, how to overcome those complexities. And people are very passionate about cybersecurity in this community. So just go to the conferences, look on LinkedIn, see who’s talking about it and ask.

[Joe] (22:27)

And from my perspective, and I’ve heard Kelli say this in the past, one of the great capabilities of RunSafe Security is to make it very simple to integrate and to access a lot of information, a lot of intelligence, a lot of insight at that moment of build, for improving your overall security posture and making it easy to deploy. It really is a core competency of RunSafe to integrate at build time and make that very simple for people.

So you can extract all that insight that we’ve been talking about, and all the complexity that we’ve heard about is stuff that can be automated, as you say, Paul, and really streamlined, given the standard formats and even the identification of these minimum elements that are needed.

[Paul] (23:10)

And vulnerability disclosures aren’t a sign of weakness, are they? Trying to sweep all this stuff under the carpet, like many vendors used to do in the olden days, just doesn’t work. Because if you don’t find the problem, the cybercriminals or the state-sponsored actors are going to find it for you.

[Kelli] (23:29)

And I do feel like there may be no better feeling than finding a bug before it’s ever encountered in the wild and fixing it and knowing that you just were able to resolve it. And everything’s good in the world.

[Paul] (23:41)

Do you think that there are still people who think, I’m just going to stick my head in the sand and I’m just going to try and do it the old way, because it’s too hard to change?

[Kelli] (23:51)

It’s more likely that they are concerned about what might be revealed about their software through an SBOM.

[Paul] (24:00)

You mean they’ll give away some information about the secret source?

[Kelli] (24:02)

Correct. But generally the software is patented in some way and really the payout of being able to quickly mitigate any vulnerabilities before they are ever exploited is way higher than the risk of giving away too much information.

[Paul] (24:22)

Joe, I think you’ve said in previous podcasts that more or less 80% of the software modules that go into modern embedded software tools will be open source. If that’s the case, you’re not really going to lose much with your SBOM.

[Joe] (24:41)

I think the percentage of open source across embedded systems probably varies tremendously. And some people are shifting, perhaps, from real-time operating systems to embedded Linux. As a result of all of that, I do think the shifting landscape matters. And, dare I bring up generative AI, but I do think that in some cases you could create a close version of an open source component and make it your own using generative AI. And that, from an outsider’s view, would look like a brand new component altogether; it wouldn’t match the same open source file that’s out there. I do think that could create some complexity as well. And so the true counts of open source may not be well known, because there could be copycats or near versions. Part of my thinking on that is, again, you want to have cyber defenses that work even if your component contains the same vulnerability as the near clone it came from.

You want to make sure you can either identify those components or prevent the exploitation of them, regardless of its source.

[Paul] (25:43)

Even though it’s free and open source, the licensing requirements can get surprisingly complicated, can’t they? You might have a license that says you can do what you want with this; others say you can use this without sharing your source code, but you have to put this copyright notice in; and then there’s the GPL copyleft, where if you use my code, you have to show everybody yours. My understanding is that these new rules for SBOMs require people to get serious about providing genuine licence information so that these problems don’t arise.

[Joe] (26:17)

Well, I think part of the reason why it’s important is that you certainly don’t want to ship or distribute a product that contains a restrictively licensed component if you are not, in fact, complying with that license. Having checks and balances in your build process, in your software development process, to ensure that you do catch those more restrictive open source licenses becomes an important thing, to ensure that your product itself, beyond just the open source component that you included, the full package, the full product, doesn’t itself become subject to those restrictive terms.

So you want to be very careful about how you distribute software when it does involve open source licensed components.
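
Joe’s point about checks and balances can be automated against the SBOM itself. A minimal sketch, assuming a CycloneDX document and an invented deny-list of SPDX license identifiers that your own legal policy treats as restrictive:

```python
import json
import sys

# Policy-specific and illustrative: adjust to whatever your legal review requires.
RESTRICTED = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}

def restricted_components(sbom_path: str) -> list:
    """List SBOM components whose declared license is on the deny-list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    for comp in sbom.get("components", []):
        for lic in comp.get("licenses", []):
            if lic.get("license", {}).get("id") in RESTRICTED:
                hits.append(f'{comp.get("name")} ({lic["license"]["id"]})')
    return hits

if __name__ == "__main__":
    hits = restricted_components("sbom.json")  # hypothetical file name
    if hits:
        print("Restricted licenses found:", ", ".join(hits))
        sys.exit(1)  # fail the build or CI check
```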

[Paul] (26:54)

That’s not just for moral and ethical reasons, is it? It can also have severe legal consequences, up to and including the position where you could be required to stop selling your product and even to remove existing products from the market.

[Joe] (27:09)

Yeah, and I think it also plays out if your company gets acquired, there’ll be a full review of those license checks as well. So it’s just a good practice to have a discipline internally to ensure you’re not violating those terms.

[Paul] (27:22)

So if we can finish up: my understanding is that the current draft of the SBOM minimum elements document is still open for feedback. Joe and Kelli, what things do you think aren’t in the minimum elements standards that really ought to be there? And if you had to push for one or two things to be added, what would they be and why?

[Kelli] (27:44)

I still think these minimum requirements don’t adequately address embedded SBOM generation. For the most part, SBOMs are assumed to reference languages like Python or Rust, which have this notion of a package manager, so you can refer to a manifest that already exists.

[Paul] (28:04)

In other words, they’re relying on metadata that’s already well established in the industry. Like you say, in the embedded market where you’ve got perhaps legacy C code, it is the wild west, east, north and south, isn’t it? What do you do about that? What could be added to the minimum elements that would help iron out that problem?

[Kelli] (28:11)

Correct.

I think really it just needs to be a perspective shift. So a lot of the descriptions, they’ve tried to make them more general, but components can be a file, can be a library, can be an application. And yet few entities really understand that a file is valid and that a file can contain a lot of very helpful information. So for example, we’ve been toying with the idea of mapping files to a package and figuring out how we can relate them; the schemas don’t really address that scenario, because in most cases you would just report the packages.

[Paul] (29:03)

So you just say this particular tarball, for example, and you wouldn’t concern yourself with the 10, 50, a thousand individual files and scripts that might be inside it.

[Kelli] (29:15)

Yes, and those files can contain valuable information about licensing. They’re very important to include, because they validate the reference that we make when we report a license or an author or a copyright, all things that are referenced in the minimum elements in some capacity.
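
One way to picture that file-to-package mapping: CycloneDX allows components to nest, so file-level components, each with its own hash or license evidence, can sit under the package they belong to. The structure below is a hand-written illustration with placeholder values, not RunSafe’s actual output.

```python
package = {
    "type": "library",
    "name": "zlib",
    "version": "1.3.1",
    "components": [  # nested file-level components belonging to the package
        {"type": "file", "name": "inflate.c",
         "hashes": [{"alg": "SHA-256", "content": "<digest goes here>"}]},
        {"type": "file", "name": "LICENSE",
         "licenses": [{"license": {"id": "Zlib"}}]},
    ],
}
print(package["name"], "contains", len(package["components"]), "file components")
```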

[Paul] (29:20)

Absolutely.

If the whole idea is to know all the ingredients that went into the cake, then just saying, I took a package that said flavor enhancer on it and threw in some of that, doesn’t sound like enough to me. But do you think that if you push for that, you will meet stiff resistance from the package-manager community, the Node.js people and the Python people and so on? Do you think they’ll go, oh, no, no, you’re going to make it too hard, even though you would undoubtedly make it better and safer?

[Kelli] (30:05)

No, I think people are moving towards that, wanting to recognize all scenarios. SBOMs are a fairly modern concept, where we had to hit the most value the fastest. That meant relying on those manifest files and getting the bare minimum information, which is why we had the 2021 minimum elements, I think it was, and that did a great job of priming the community to start down this process. But now we need to expand it to more cases, and really understand that a lot of critical infrastructure relies on embedded software. And so if that embedded software is vulnerable, we have a problem.

[Paul] (30:49)

Absolutely.

[Kelli] (30:51)

Having gone to VulnCon, for example, in North America, a lot of people are very interested in solving that problem. It’s just that not a lot of people know how yet.

[Paul] (31:01)

Joe, you’ve made the point in previous podcasts that if you want to attack America’s AI capability, then the obvious way is to say, let’s break into the heavily protected, super-secure servers and let’s try and affect the software. But if you can’t do that, maybe just shutting down the air-con would be more than enough.

[Joe] (31:24)

Absolutely. And I think the resilience of the infrastructure is very important, whether that’s the cooling system inside the data center or the reliability of the energy grid itself. I do like to say without infrastructure resilience, there can be no AI dominance in the U.S.

[Paul] (31:33)

Yes.

So, a final thought for our listeners, if I might ask: what’s the one thing that security leaders and software developers can take on board today so that they’re ready for these new SBOM rules and regulations?

[Kelli] (31:57)

I would say just dive in. There’s a wealth of resources out there, a wealth of software that they can play around with to start generating SBOMs and see what comes of it.

[Joe] (32:09)

And I would just add, consistent with what we said around the analogy of the baked cake, that you can have your cake and eat it too: when you generate a Software Bill of Materials to offer insight, you not only get to be transparent with your customers, but you also boost your security operations, without slowing down your development process.

[Paul] (32:32)

Excellent. I think that’s a fantastic point on which to end. It’s not as hard as you think. You’re going to have to do it. So you might as well get started anyway. It will be good for everybody. So don’t delay. Do it today. That is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. Thanks especially to Joe and Kelli for their valuable and thoughtful insights.

If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like us and share us on social media as well. Please also don’t forget to share us with everyone in your team. And remember, stay ahead of the threat. See you next time.
