2026 ICS Security Predictions: What’s Next for Critical Infrastructure

December 30, 2025

 

ICS security is approaching a turning point.

In this predictions episode of Exploited: The Cyber Truth, RunSafe CTO Shane Fry and Founder & CEO Joseph M. Saunders join host Paul Ducklin to examine the forces shaping industrial cybersecurity in 2026.

The discussion looks at the rise of active exploitation in ICS environments, the growing convergence of IT and OT attack paths, and why many industrial systems remain difficult to patch or update. Joe and Shane explain why organizations must move beyond detection-centric models and focus on reducing exploitability through Secure-by-Design engineering and memory-safe software.

The episode also explores how AI-assisted development, legacy codebases, and inaccurate SBOMs are changing the risk equation—making resilience the defining security metric for critical infrastructure.

Key topics include:

  • Active exploitation trends in ICS and critical infrastructure
  • Web-facing vulnerabilities in embedded and OT systems
  • Secure-by-Design as a practical defense strategy
  • The role of memory safety and runtime protections
  • Why resilience will matter more than response speed in 2026

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Guest Speaker – Shane Fry, CTO, RunSafe Security

Shane Fry is the Chief Technology Officer at RunSafe Security, Inc., with over a decade of cybersecurity experience on both offensive and defensive sides. He began his career conducting vulnerability assessments on platforms like Unix/Linux, Mac OS X, Android, and cloud systems. His research covers hardware and software security, focusing on secure boot, memory corruption, and web vulnerabilities. Shane consults on secure system design for private industry and government. Active in the Huntsville, AL startup scene, he co-taught a course on product investment frameworks and led a team to first place in an automotive hacking competition with Intel/McAfee, contributing to a public report on Automotive Security Best Practices.

 

LinkedIn

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security.

[Paul] (00:01)

Welcome back everybody to this episode of Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:20)

Greetings, Paul. Exciting conversation today.

[Paul] (00:22)

I’m pretty sure that you have predicted correctly, Joe. So let me introduce our guest for this week, a close colleague of yours, Joe: Shane Fry, who is Chief Technology Officer at RunSafe. Hello, Shane.

[Shane] (00:38)

Hello, thanks for having me on the podcast.

[Paul] (00:39)

Our title for this week is Security Predictions for Industrial Control Systems in 2026. For our listeners: if you hear us say ICS, that’s just to avoid spelling out industrial control systems in full every single time. Shane, what do you think the biggest shifts in what you might call the threat landscape have been during 2025, and therefore what we need to cater for in 2026, in industrial control systems, critical infrastructure, and those systems that are not part of a traditional IT network and cannot be managed in the same way?

[Shane] (01:22)

What we saw in 2025 was an increase in active exploitation. We saw a lot of that with Volt Typhoon and Salt Typhoon with foreign actors placing capability into ICS systems and networks. It was really alarming, the breadth and the depth of those intrusions. And I think we’re going to continue to see more of that in 2026, not just in the U.S., but globally as well. The war in Ukraine has shown the power of attacking energy grids and other critical infrastructure systems and networks. I think we’re going to see a shift in that as that gets leveraged for geopolitical reasons and also for kind of advanced placement of cyber capabilities. And if I can add a bold prediction, I think we’re going to see a major ICS network intrusion through a web application vulnerability.

[Paul] (02:17)

In other words, even if the ICS system itself were perfectly secure, which it might well not be, it could be infiltrated by good old traditional IT insecurity, the same sort of problem that gives us ransomware attacks and data breaches and all that kind of stuff that is all over the news almost all the time it seems.

[Shane] (02:38)

We’re seeing more and more full-fledged Linux operating systems being deployed into ICS and OT devices. And with that come web servers running on those ICS devices.

[Paul] (02:54)

I see what you mean. In other words, instead of having, say, FreeRTOS, where you build your software and the operating system into essentially one tiny compact binary and it stands or falls on its own, ICS systems are becoming more like general-purpose, laptop-type operating systems that just happen to be used for one application, and maybe they also have a load of other ancillary stuff that wasn’t there before.

[Shane] (03:23)

Absolutely. And that shift has already started. We talked to a lot of folks that are using Yocto and Buildroot in their embedded ICS and OT devices. With this latest React bug, there’s going to be someone that has a Node.js web application out there on a control system that can’t patch or there’s no patch available or there won’t be one deployed. This CVSS 10 remote code execution vulnerability is gonna get used in an ICS context somewhere.

[Paul] (03:54)

And to be clear, those CVSS scores combine several factors into a single number, things like how easy the bug is to exploit and how severe the consequences are if it is. So 10 out of 10 is basically as bad as it gets.

[Shane] (04:13)

There is a factor for the complexity of the attack that’s needed to exploit that vulnerability. There’s also an impact. You traditionally see memory corruption, memory safety vulnerabilities like buffer overflows, scoring high, 10 out of 10 or 9.8 out of 10, because of the severity of what could happen if an attacker exploits that vulnerability. In this case, it’s a web application vulnerability with a remote code execution issue. And so there was a lot of IT work and a lot of software development work at the tail end of December to fix that bug.

The fix is easy: just update your software. But when we see that in ICS environments, that is a really difficult problem. As we see more and more of these ICS devices getting connected to networks and providing web applications, there may not be a website for you to go to, but they may be getting polled by some other component in the network for, say, a sensor value or a report about your operational environment. It feels bold, but I think we’re going to see a major network breach through a web application vulnerability in an ICS device.
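For readers curious how the factors Shane mentions combine into that single number, here is a rough sketch of the published CVSS v3.1 base-score arithmetic, covering the scope-unchanged case only. The metric weights are taken from the public specification; the example vector is the classic unauthenticated network RCE.

```python
import math

# Metric weights from the CVSS v3.1 specification (scope-unchanged case only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact levels

def roundup(x: float) -> float:
    """The spec's 'round up to one decimal place' rule."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A classic unauthenticated network RCE: AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

The same arithmetic with Attack Complexity set to High yields 8.1, which is why harder-to-exploit bugs score slightly lower even with identical impact.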

[Paul] (05:25)

Now we shouldn’t be surprised at that news, should we? Think of the routers that connect home networks to the internet. Although those aren’t technically industrial control systems, people normally refer to them as IoT, the Internet of Things, it’s the same sort of idea, isn’t it? And web server bugs in those types of device have been around for years and years and years, and indeed have formed the heart of some of the biggest zombie networks in history, like the Mirai botnet and other stuff, going back nearly 10 years. What are we doing wrong that prevents us from doing it right? If that’s not too cynical of me.

[Shane] (06:00)

Absolutely.

We’re humans. It’s really interesting, because for the typical web application vulnerabilities that you see, frameworks have existed for all the major web stacks, from Apache to PHP to Node, to mitigate those vulnerabilities automatically. All you had to do was use them, and humans don’t. And for the longest time, that was because you had a lot of examples on Stack Overflow, or on random blogs that people were maintaining, that had bad code. That’s evolved with artificial intelligence and the rise of LLMs and vibe coding, oh dear. A lot of those LLMs have been trained on some of this older, incorrect, and vulnerable code.

[Paul] (06:50)

Yes, and there’s more likely to be code that’s bad for LLMs to feast on than there are posts about the fixes to that code.

[Shane] (07:00)

Absolutely. You know, we are seeing rapid evolution there. As you look at the output from LLM-enabled coding assistants, there’s been a lot of change in the last six months. I was at Embedded World North America last month, and there were a handful of companies actually that were doing LLM-assisted embedded firmware development. Not just your application that sits on top, but the actual board support package that brings your ICS or OT device up, the kernel drivers, and kind of the super-low-level stuff. There are whole companies standing up that are just focused in that area. One question, for everyone that we talked to, was: how do we make this code secure?

[Paul] (07:40)

I imagine so, especially if you’re worried about a web server, because if there’s a vulnerability on an embedded device, you typically don’t end up in some special “nobody” web account, you end up as root. And if that’s bad, it’s much worse if you end up as kernel.

[Shane] (07:55)

Absolutely. In some of those embedded devices, you might have a bug in a bootloader where the bootloader cannot be patched. That’s there so that you can always have a good restore point if the software update on your kernel and your operating system goes sideways; those bootloaders are typically not writable. And so if you get vulnerabilities there, it’s really great for attackers, but really bad for defenders. A lot of companies get really focused on static code scanning and dynamic code scanning for their application, but then they don’t have that visibility into the supply chain. There’s a lot to do in the ICS space that isn’t just, hey, we need an SBOM; you need an accurate software bill of materials. You need an accurate view all the way through your vendors and suppliers. You might know everything that goes into this one piece that runs on this processor, but you may not know the code that’s provided by a vendor for your wireless driver.

[Paul] (08:54)

Now those bills of materials, software bills of materials, we’ve spoken about them a few times before on this podcast. And that in itself is a fascinating field, isn’t it? Because some people go, well, all you need to do is to know the maximum set of things you could pick from. So they go to the larder and they open the cupboard doors and they look at all the ingredients that could possibly go into a recipe, even if they didn’t use them, and even if somebody who is actually doing the cooking thought, we’ve run out of that brand of mustard, I’ll use this other brand instead that I just happened to have brought with me. Or you basically simplify things by waiting until the cake’s made and then trying to figure out what’s in it, but maybe one taste masks another.

To create a software bill of materials, you need great visibility in the middle, notably at the very time when you actually choose the ingredients, actually put them into the mix, and actually bake the cake.

[Shane] (09:53)

Yeah, absolutely. I like your cake analogy. I usually use pizza, but cake is good. It’s the holiday season. We should have some sweets in our vernacular.

[Paul] (10:01)

Actually, I hadn’t thought of pizzas. Everyone talks about layers in a software stack and pizzas generally come in layers. So maybe I’ll switch to pizzas.

[Shane] (10:11)

Pizza works great too. Another prediction I have for 2026 is that we’re going to see a drastic increase in people doing build-time software bills of materials. That’s right in the middle, as you were saying: starting early, when the build happens, and generating the SBOM as you do your software build.

[Paul] (10:28)

Now that doesn’t exonerate you from knowing what your larder contains, does it? You know what you’re supposed to have, but you can also then check that you actually picked from the list of ingredients that are authorised in your organisation, and that you didn’t get some bait-and-switch that happened along the way.

[Shane] (10:46)

We’re already seeing demand from customers trying to move away from the post-build, binary-only SBOMs that are just kind of guessing, because when it comes time to attest to the FAA, or attest to the EU, that this is what you shipped in your software, it’s got to be correct and it’s got to be complete and it’s got to be accurate. There are a lot of accuracy issues if you’re not doing it right at software build time.
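The build-time approach Shane describes can be sketched in a few lines: record each component at the moment the build consumes it, rather than guessing from the finished binary afterwards. This is a hypothetical helper, not a conformant SBOM generator; the field names are loosely modelled on the public CycloneDX JSON layout.

```python
import hashlib
import json
from datetime import datetime, timezone

def component_record(name: str, version: str, source_bytes: bytes) -> dict:
    """Record one ingredient as it is actually linked into the build."""
    return {
        "type": "library",
        "name": name,
        "version": version,
        "hashes": [{"alg": "SHA-256",
                    "content": hashlib.sha256(source_bytes).hexdigest()}],
    }

def build_time_sbom(components: list) -> dict:
    """Assemble a CycloneDX-style document from build-time records."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
        "components": components,
    }

# In a real build system you would call component_record() as each
# ingredient goes into the mix -- the "middle of the recipe" Paul describes.
sbom = build_time_sbom([
    component_record("openssl", "3.0.13", b"...archive bytes..."),
    component_record("zlib", "1.3.1", b"...archive bytes..."),
])
print(json.dumps(sbom, indent=2))
```

Because the hash is taken from the bytes the build actually consumed, a bait-and-switch substitution shows up as a hash mismatch rather than going unnoticed.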

[Paul] (11:11)

What the rest of the world needs to know, particularly for embedded systems, is: which of my devices are in fact vulnerable? Which ones do I need to focus on, and which ones can I reassure my customers are okay?

[Shane] (11:24)

People have been doing software composition analysis for a while, and some have been trying to answer that question, but it’s one of the big problems with CPE, the Common Platform Enumeration methodology for looking up vulnerabilities today. You may not know that the router that you bought, or that PLC or other ICS device, has a vulnerability.

And the vendor may not know they have a vulnerability unless they have a good SBOM for it. We see this all the time, where someone says, well, I don’t have XYZ vulnerability, because this device isn’t listed as affected by this vulnerability. But the people that are assigning the vulnerabilities to the products don’t know the whole universe of all the software and all the components that go into them.
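The matching gap Shane describes can be shown with a toy example using simplified CPE 2.3 strings. The device vendor and product names below are made up for illustration; real CPE names have 13 colon-separated fields, of which only three are compared here.

```python
# Advisories list affected products as CPE strings, but an ICS device that
# *embeds* a vulnerable library under its own product name never matches.

def parse_cpe(cpe: str) -> dict:
    # cpe:2.3:part:vendor:product:version:... (simplified for illustration)
    fields = cpe.split(":")
    return {"vendor": fields[3], "product": fields[4], "version": fields[5]}

def affected(device_cpe: str, advisory_cpes: list) -> bool:
    """Naive device-level lookup: exact vendor/product/version match."""
    dev = parse_cpe(device_cpe)
    return any(parse_cpe(a) == dev for a in advisory_cpes)

# The advisory names the vulnerable library itself:
advisory = ["cpe:2.3:a:openssl:openssl:1.0.2:*:*:*:*:*:*:*"]

# A hypothetical PLC ships that exact OpenSSL inside its firmware, but is
# catalogued under the device maker's own CPE, so the lookup says "not affected":
device = "cpe:2.3:h:acme_plc:controller_9000:2.4:*:*:*:*:*:*:*"
print(affected(device, advisory))  # → False, though the library is inside

# A build-time SBOM closes the gap by matching at the component level:
sbom_components = [{"vendor": "openssl", "product": "openssl", "version": "1.0.2"}]
print(any(parse_cpe(a) == c for a in advisory for c in sbom_components))  # → True
```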

[Paul] (12:13)

It’s impossible for anybody who is just publishing information about a vulnerability to have an exhaustive list of everybody who might be affected because that’s beyond their control. That would be unreasonable to expect, wouldn’t it?

[Shane] (12:26)

Exactly. And that’s one of the really interesting things with the EU CRA. There’s a requirement for anyone selling software, which includes software that runs on devices that they’re selling, to provide those SBOMs to the EU for cataloging. And so there is a path. The EU is in a really interesting spot, because they value privacy, but they are also big on collecting data and being able to act on it in the interest of the public. It’ll be interesting to see how that plays out in, say, 2027, once ICS device manufacturers have had to report vulnerabilities from 2026 onward. Does the EU come out with some better way of matching software and notifying users of vulnerable software? This is going to impact everybody. It’s not just going to be ICS device manufacturers; it’s going to be video game companies and Microsoft Office.

Once ICS folks are fully onboarded by the end of 2026, I think we’ll see a significant uptick in accurate reporting of vulnerability data assigned to proprietary products.

[Paul] (13:38)

And this whole thrust towards accurate bills of materials in software, you’re not doing it entirely for altruistic reasons, are you? It absolutely benefits the community at large if you’re honest and accurate about what you’ve baked into your cake, just as you would be if you were trying to warn people with food allergies about what you put in a food product. But it also helps you, doesn’t it? It means that if there is a vulnerability, you know which bits of which products need urgent attention, which bits can wait, and which bits are actually completely unaffected.

[Shane] (14:13)

If the asset owners get those SBOMs delivered to them as part of their procurement process, which we’re seeing more and more requests for, then they can help hold the OEMs, the people that are actually making and selling the devices, accountable for securing their software. Because they know: hey, where’s the security alert? Because you have this vulnerable version of OpenSSL or libc or whatever it happens to be. You’re supposed to tell us and have a patch for us, and you haven’t.

[Paul] (14:44)

Shane, this is something we’ve touched upon in the podcast before, and it’s quite fascinating to see that people still think along these lines. What would you say to encourage those people who are in some way resistant to full and complete software bills of materials, because they somehow think that by enumerating exactly what’s in the product, they’re giving away their secret sauce, or throwing away their proprietary advantage or their unique selling point?

[Shane] (15:13)

You know, security through obscurity doesn’t provide as much value as you would think. 

[Paul] (15:22)

Security by obscurity, I think in cryptography that’s related to what’s known as Kerckhoffs’s principle: the key is what you keep secret; you don’t keep the lock or the algorithm secret, because eventually someone will figure it out, and when one person figures it out, everybody knows it.

[Shane] (15:40)

Right. And from my perspective, if you’re more open and transparent about what’s in your software, that can drive adoption of better software update processes, better patch cycles. It can get your customers to deploy updates faster.

[Paul] (15:56)

Amen, brother. I agree. It’s definitely a marketing advantage as well as being moral and ethical and let’s face it, a technical advantage for your business and for your developers.

[Shane] (16:08)

Absolutely. And it might actually drive your customers to update software, to get new features that you’ve been trying to get them to deploy for six years, right?

[Paul] (16:18)

Yes, that’s a lot better than trying to trick them into buying new features by saying they have to pay full whack for a security update that they wouldn’t otherwise apply. And I believe that the EU CRA is going to prohibit that. When you put a device on the market, you’re going to have to state in advance, quite clearly and explicitly, how long you will support it for. And those are minimums: you can exceed them, but you have to tell people, so they can make an informed decision before they buy.

[Shane] (16:48)

That’s driving a lot of development cycles, a lot of marketing cycles in the ICS space. I’ll add another prediction here. I think the companies that embrace that the fastest and with the most openness and transparency are going to see revenue increases as a result. They have to comply because the EU regulations compel them to, and they don’t want to close out their business accounts in the EU. But I think it’s something that everyone’s going to want to use as a marketing advantage. And so the faster they can get there, the better.

[Paul] (17:23)

It seems kind of obvious when you put it like that. If we take a regular-world example: you’re looking for a new credit card, all things being equal, and you have two businesses to choose from. One’s been breached three times in the last two years and lost 17 million records’ worth of data, and the other hasn’t been breached at all. Which one are you going to choose?

[Shane] (17:43)

I mean, the one with security, hopefully. But there are definitely still people that are gonna pick the cheapest option. That’s unfortunately the sad reality of the world we live in.

[Paul] (17:49)

Yes.

So Shane, if cheapness comes into it, and unfortunately it does, what does that say about the thrust by regulators in the United States and in the European Union for Secure by Design? You build your products so that they do not always require papering over the cracks when there are problems. You actually invest strongly in security right from the start, before you even cut the first line of code.

[Shane] (18:23)

Well, in theory, if everyone has to do secure by design, everyone’s costs should roughly rise by the same percentage, right? Or the same amount. Now in practice, that’s not really gonna work that way.

[Paul] (18:35)

Can’t secure by design, done well, actually cut your development costs in the long run?

[Shane] (18:41)

Development costs in the long run, absolutely. It’s not necessarily going to decrease your hardware costs. In some cases, you’re going to have to buy nicer processors because you need security features that aren’t on the cheapest version.

[Paul] (18:56)

Certainly things like memory protection, secure enclaves and trusted platform modules and things like that.

[Shane] (19:01)

Yep, exactly. The ability to support secure boot and things like that, right. Secure by design can absolutely, over the long term, decrease your costs, because you don’t have to spend as many developer cycles fixing vulnerabilities: you squashed them before they got released, or you had protections in place before the vulnerabilities were found, and therefore when they’re found, they’re not exploitable. You can reduce the number of CVEs that you have to triage every month, every quarter; there are definitely some benefits there.

[Paul] (19:34)

So those are things like runtime protections that you build into the software, that do not significantly change its behaviour or its memory usage or its time to complete critical operations, but do make it a much harder target for a cybercriminal. It adds just enough non-determinism, if you like, that the core software still behaves as the regulator would like, but the software itself is more robust…

[Shane] (19:52)

Absolutely.

[Paul] (20:03)

Against an attack that somebody’s crafted in a lab. When they go and try and run their exploit in the field, it simply won’t work.
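The idea Paul sketches can be shown with a toy model, purely illustrative and not any vendor’s actual technique: if each run of a process places functions at randomized addresses, an exploit with a hard-coded address that works against the attacker’s lab copy almost certainly misfires in the field.

```python
import random

# Three functions an attacker might want to redirect execution into.
FUNCTIONS = ["safe_logger", "spawn_shell", "update_firmware"]

def load_process(seed: int) -> dict:
    """Map each function to a distinct randomized address for this run."""
    rng = random.Random(seed)
    addresses = rng.sample(range(0x1000, 0x10000, 0x100), k=len(FUNCTIONS))
    return dict(zip(FUNCTIONS, addresses))

def exploit(layout: dict, jump_to: int) -> str:
    """Simulate overwriting a return address with a hard-coded value."""
    by_addr = {addr: name for name, addr in layout.items()}
    return by_addr.get(jump_to, "CRASH")  # a wrong guess crashes, no shell

lab = load_process(seed=1)       # the attacker studies their own copy
payload = lab["spawn_shell"]     # address hard-coded into the exploit
field = load_process(seed=2)     # the victim's device randomized differently

print(exploit(lab, payload))     # → spawn_shell (works in the lab)
print(exploit(field, payload))   # almost certainly CRASH in the field
```

Real moving-target defenses operate on binaries at build or load time rather than on a Python dictionary, but the underlying economics are the same: a single crafted exploit no longer works across every deployed copy.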

[Shane] (20:10)

We’re definitely seeing more requests from customers and people in the industry to do more security and to bring that device security to the last mile. That’s a challenge that ICS and OT device manufacturers have: they’re not the systems integrator. They don’t own everything that goes into a deployment of their devices. And so to some extent they are largely limited in what they can actually do to secure their devices, because they can’t deploy a network intrusion detection system, or a firewall on a router; they’re just a device on the net.

[Paul] (20:46)

Or they have 4 megabytes of RAM, not 16 gigabytes.

[Shane] (20:51)

Exactly, exactly. So device manufacturers, through nudging from the NSA and from the EU with the CRA and their secure-by-design initiatives, are starting to realize: hey, we’ve got to have some ownership of the security of our device, and can’t just rely on whoever’s deploying this to do security properly. That’s a big piece of the EU CRA as well: making sure that you’re doing everything you can, that you’re deploying runtime application self-protection on your device, and things like that.

And the nice thing about that for the asset owners, the people deploying these devices, is that it should make their devices and their networks and their systems more secure, which does help with operational efficiency, because they don’t have to worry about patching as often. That helps their bottom line as well when the devices that they’re deploying are more secure, to say nothing of the case where they get hacked, with JLR as maybe a fresh example.

[Paul] (21:49)

I was waiting for you to mention that particular one. For those of you not familiar with those brand initials, Juliet Lima Romeo, it’s short for Jaguar Land Rover, which is an automotive manufacturer owned by a massive multinational Indian company, with huge manufacturing concerns in the United Kingdom, and their ransomware shutdown is currently being talked of as the biggest and costliest cyber intrusion ever in the UK. It was an IT-based intrusion, but the side effect was that their production line stopped. And my understanding is they couldn’t just restart it, because they didn’t know what condition all the vehicles at various places on the line were in. So it was like turning off the current to a blast furnace: the steel coagulated, and they were stuck for weeks and weeks and weeks.

[Shane] (22:20)

Absolutely.

I was talking to somebody last month about that particular incident. And one of the things that is a forever question that there’s never going to be a definitive answer to is when there’s a cyber incident, do you pull the plug or do you leave everything up and try and see what’s going on? And so in the case of JLR, it sounds like they detected the intrusion and then shut everything off to try and mitigate how much damage that intrusion could cause.

[Paul] (22:56)

Yeah.

[Shane] (23:13)

Maybe if they left the system up, they could have contained the intrusion, had everything else still working and operating.

[Paul] (23:20)

However, who would have trusted the cars that were on the production line during that uncertain period? I don’t think I would have gone out and bought one from that particular time range. Maybe it was more like they didn’t have a choice.

[Shane] (23:34)

Yeah, for sure. It’s not an easy question. I don’t envy the folks that had to make that decision at all, but they are slowly getting back to operational status and that’s going to be a case study. That’s going to be in a cybersecurity book 10 years from now.

[Paul] (23:49)

It certainly is. Now, Shane, at the outset, you talked about some of the risks of what is often the elephant in the room these days in cybersecurity discussions, namely AI, in particular generative AI used to produce code. There’s also a lot of fear around cyber attackers using AI to generate code that doesn’t have to be perfect; it just has to be good enough to mount attacks.

But what about the third part of that trinity, if you like, namely the use of AI to detect anomalies, so that you can focus on the ones that really matter?

[Shane] (24:30)

There’s a lot that’s happening in that space. If you look over the last 10 years, there’s been a lot of machine learning for doing some of that intrusion detection, anomaly detection, and things like that, and we’re now calling that AI. There’s definitely some risk to fully automating it, and I think the sweet spot is giving good summaries and good data to a human analyst to then confirm or reject that information. And it can accelerate that process, especially when you have a lot of data, a lot of events to go through.

Then there’s also a growing security space around using AI to find vulnerabilities before you ship software. There’s a lot of really cool stuff happening there, and time will tell if it’s better than the standard static code scanning tools that we deploy today, which miss 97 and a half percent of your vulnerabilities. There are actual vulnerabilities being discovered in real software, with exploits being developed alongside them, using fully automated tooling today. And so that’s kind of neat from a developer perspective and a defender perspective, because it can open the world of cyber to folks that may traditionally just be IT folks. That’s really fascinating as a way to bring more people into the cybersecurity fold, which is desperately in need of more people.

And even more so when you look at the kind of constrained space of ICS and OT, where it’s even harder to gain that knowledge and really build expertise.

[Paul] (26:04)

So in other words, we shouldn’t be looking at AI as a way to be cheap, and so that we can fire all of our developers or all of our security analysts. We should really be looking at it as a way to do the drudge work, the repetitive work that nevertheless requires painstaking attention to detail, so that those humans can spend more of their valuable time looking at the things that really matter and will give the biggest security return on investment.

[Shane] (26:33)

Absolutely.

[Paul] (26:34)

Because that’s certainly how I think about it. As a consultant and a contractor, maybe I’m duty-bound to say that, but I feel that way because I think it’s true.

[Shane] (26:42)

Yeah, I felt that way since the first machine learning based anomaly detection tools came out forever ago. We’ve gotten better, but we’ve also gotten way more data and way more incidents happening. Human analysis on its own doesn’t scale and fully automated solutions run the risk of going completely off the rails. There’s a lot of really great stuff coming out of the AI space that if you embrace it can make you more efficient and better at your job. But if you just let it run wild and don’t ever check the results, you’re gonna get some really bad results.

[Paul] (27:15)

Yes, and that doesn’t just hurt you. Some fault, flaw or misdiagnosis that you’ve baked into embedded software could be affecting hundreds of millions of people. And you can’t just push out a fix next Tuesday because you feel like it.

[Shane] (27:30)

Right, right, for sure. And as the LLM coding assistants get better, there’s even coding assistants now that are just focused on code review, which is really fascinating. And so that brings up a question of if you use a coding assistant or an LLM model to write code, should you use a different model to review that code that you wrote? I like to think yes.

[Paul] (27:52)

Yes, I’m hearing the words confirmation bias ringing around in my brain as you’re saying that.

[Shane] (27:58)

Yes. Although we’ve started adopting coding assistants to help accelerate, and, you know, we’ve got a heavy emphasis on code review and securing the code that we’re shipping, and we’re putting extra emphasis on the reviews that are aided by a coding assistant. What’s interesting is some of them have a local code review, so you can ask the code review tool to do a review before you open up a merge request, or before you try and ship code to production.

They’re inconsistent. So sometimes it’ll say, hey, there’s no problem, and then you open up a merge request and get a review from another human, and the coding assistant comes in and says, hey, you’ve got to fix this thing. And it’s like, well, hold on now, you just said this was fine, and now you’re saying it’s not. So it can definitely accelerate, but you need to do that with caution and with intentionality in how you’re reviewing that code that’s coming in. I think the best developers over the next five to ten years, or an even shorter timeframe, one to two years, are going to be those skilled programmers at any level that understand programming and understand security, and are then able to let that LLM do work to accelerate, while they make sure that it’s still producing good code, safe code, secure code.

[Paul] (29:14)

Gentlemen, I’m conscious of time. And Joseph Saunders, you’ve been quiet this episode, perhaps because Shane’s passion just took over. It’s fantastic. I wish we could just carry on and on; I feel we’ve just scratched the surface. But Joe, maybe I can finish up by asking you to make one prediction for things people can do better in ICS security in 2026.

[Joe] (29:38)

As we’ve talked about software development processes and software bills of materials, and the types of potential effects that Shane boldly predicted early on here, it sort of begs the question to me whether ICS vendors will also embrace this newfound level of transparency and insight into software risk, factoring in and enhancing their ability to assess both exploitability and reachability. I think those two aspects become key in the process, and my hope is that people leverage the advantage that they have. Maybe that’s not any different from a lot of the principles, ideas, and predictions that Shane has, but I just wanted to highlight that angle as a potential way to see how security postures will improve in 2026.

[Paul] (30:26)

I agree; I think Shane said it well when he said security by obscurity does not work as well as you might think. And history suggests to us that that is perfectly true. Joe, thank you so much for rising to the challenge of rounding us out with such encouragement to embrace transparency in respect of security risk. And Shane, thank you so much for your passion. This has been a fascinating episode, and I wish it didn’t have to end. But this is a wrap for this episode of Exploited: The Cyber Truth. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media, and don’t forget to share us with all of your team as well; I’m sure they will benefit greatly from Shane’s insights and passion.

Thanks to everybody who tuned in and listened. And remember, stay ahead of the threat, see you next time.