When Machines Get Hacked: A Manufacturer’s Guide to Embedded Threats

October 02, 2025
 

Cyber adversaries are exploiting the weakest points in manufacturing—embedded devices, PLCs, and legacy systems that keep industries running. In this episode, RunSafe Security Founder and CEO Joseph M. Saunders joins host Paul Ducklin to reveal how attackers infiltrate operational technology, gain access to system calls, and even turn software supply chain components into weapons.

Drawing on lessons from recent attacks and U.S. government red team exercises, Joe explains why memory safety matters, how Secure by Design practices reduce risk, and why runtime protections can neutralize exploits before they succeed. With the rise of AI and increasingly connected systems, the conversation underscores why manufacturers can no longer afford to treat cybersecurity as an afterthought.

Key topics include:

  • How adversaries infiltrate embedded and industrial devices
  • The role of nation-state motivations, economic espionage, and insider threats
  • Why memory-unsafe languages remain a root cause of critical vulnerabilities
  • How Secure by Design practices and runtime protections can harden devices without disrupting operations
  • What manufacturers must watch as AI-driven attack paths begin to emerge

 

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and  helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Episode Transcript

Exploited: The Cyber Truth,  a podcast by RunSafe Security. 

[Paul] (00:06)

Welcome back to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

[Joe] (00:19)

Greetings, Paul. Great to be back.

[Paul] (00:22)

Today’s topic is very simple to say but difficult to deal with, and that is “When Machines Get Hacked.” And our subtitle is “A Manufacturer’s Guide to Embedded Threats.” To put it simply, Joe, we’re going to be focusing today on what you might call the lower levels of the Purdue model: the bits that affect the parts of the system that aren’t traditionally associated with IT, and are correspondingly difficult to look after because they could be anywhere, like buried inside a lathe or a pump house or a ship. What motivates today’s attackers to go after embedded systems?

[Joe] (01:07)

The answer to that question is the same thing that motivates attackers in other systems. But the OT networks, the operational technology that helps critical infrastructure operate, whether it’s water systems or the energy grid or other areas, have maybe a special reason on top of ransom and financial gain, or just ideology, trying to do something to a nation state to disrupt its operations.

But I do think it carries an extra level of attention because if you are a nation state targeting another country’s infrastructure, there’s probably a motivation that at some point in the future, if there’s some kind of kinetic warfare or some kind of geopolitical tension and you can detonate, let’s call it a cyber bomb out of convenience, some kind of cyber exploit at a time of your choosing in the future, then you can really disrupt how citizens perceive their own government. And I think it’s as much about the strategic statecraft as it is about the financial motivations, maybe ideology in general that you would otherwise traditionally have.

[Paul] (02:21)

I guess you have the problem that even if an attacker turns out not to have precise control over every valve in a system, if they’re able to mess with one or two, that’s rather unsettling for anybody who’s worried about an attack that might unfold. And even if there isn’t what you might call kinetic warfare going on at the time, it’s still pretty unnerving, isn’t it, to think that attackers, whether they’re nation states or, let’s call them, old-school cybercriminals, might be wandering around in your water system, in your wastewater pumping system, in your port, in your manufacturing plants. That’s not particularly cheery news to get, is it?

[Joe] (03:07)

Our society relies on a well-functioning set of utilities and public goods like water or energy or other things. So it is unsettling to know that some kind of cyber attacker, some kind of bad actor, could manipulate those systems, could taint those systems, could disrupt those systems. And of course, if you took that to its furthest degree, what would really be unsettling is a mass unintended migration of people, because there’s no potable water, there’s no energy, and there are no working systems. I don’t anticipate that day anytime soon, but I do think people are pre-positioning, and when I say people, I mean nation-state actors are pre-positioning, as we’ve seen in the US in critical infrastructure. And so they have the means to cause some level of disruption at this point.

[Paul] (03:57)

And worse than that, even if they’re not able to get a sufficient level of control to interfere actively with the systems, or even if they don’t intend to, just by snooping around, they get an awful lot of information, don’t they? Not only about how society and its systems function, but also you could say up to and including intellectual property about the technology that makes those systems work.

So it’s the worst of both worlds, isn’t it?

[Joe] (04:29)

It is the worst of both worlds. And China, for example, has its own massive domestic economic footprint. And so if China can steal intellectual property, use it for its own domestic needs, and export that newfound technology to extend its economic influence in other countries, then you can see how simply the theft of intellectual property is a form of economic espionage and sabotage.

I say that just to paint the picture of how the theft of intellectual property has propagated into more of a technology offensive to influence other countries. As China has caught up technologically, they have certainly invested more and more in R&D over the past 10 to 15 years. But let’s face it, if they can get an advantage over a US company by exploiting or leveraging US technology and then repurposing it for their own gain with their own companies, that’s every bit a part of China’s strategy.

 

[Paul] (05:30)

Yes, I guess I underspoke when I mentioned the theft of intellectual property, because whether it’s state-sponsored actors or cybercriminals out to make millions of dollars and leech it from our economy, personally identifiable information has huge value and we’ve seen terrible blunders lately, haven’t we? A recent example, not in embedded systems, being Allianz Life in the United States, who had to admit that the majority of their 1.4 million customers had their data stolen. You can’t really mitigate that once it’s out of the door.

[Joe] (06:06)

You can’t mitigate it. And imagine artificial intelligence, good solid surveillance technology, and analytical methods combining. Let’s pick social media for a second, with Instagram and TikTok and all the information that you can gather about subscribers. That kind of information can be exploited, but I’ll take it a step further, Paul: the OPM hack in the United States in 2014/2015.

[Paul] (06:38)

Yes, huge amounts of data about most government employees, right down to the kind of information you’d need for identity theft.

[Joe] (06:47)

Exactly. It leads to identity theft, but it happens to be all the personnel that have security clearance in the United States. And guess what’s in that information? All that personally identifiable information contains birth dates and identifiers of children of security clearance holders. And so as those kids, let’s say they were born in 2010, they’re now 15 years old today. Their identifiers are out there. Maybe if they were born in 2006, they’d be 18, 19 years old.

You can combine their social media behavior with connection to that security clearance data pretty easily using analytic methods. And so I think there is a long-term warfare around identity theft and cyber attack, and generally the theft of intellectual property, that all amounts to a form of economic warfare to undermine U.S. corporations and undermine U.S. systems. If I can anticipate what people’s needs are, I know what systems they’re using, I know what their consumer trends are, and I know which water systems affect certain groups, then I can execute a long-term strategy to disrupt an otherwise well-functioning society and start to influence how they perceive their own government.

[Paul] (07:59)

So, Joe, we’re supposed to be focusing on embedded systems, but the whole picture matters, because after all, if you’ve got somebody’s personally identifiable information, you can maybe figure out their password, or you can masquerade as them and use social engineering skills to trick some IT person into giving you access to a network, and that can further your ability to dig deeper and deeper and go down right to the lowest level.

Can you share a recent example of an attack on embedded systems or what you might call industrial control systems rather than just an IT based attack?

[Joe] (08:34)

We’ve talked about some of these in the past on different podcast episodes, but I do think the Cyber Avengers, affiliated with Iran’s IRGC, are a really good example of targeting programmable logic controllers inside water systems. And we saw that in 2024. We saw attacks in Texas on water systems where attackers were able to control and maybe even overflow certain water tanks and water systems to disrupt service.

And then, separately from those two, we also see PRC-sponsored attacks, through actors like Volt Typhoon and even Salt Typhoon, targeting different elements of critical infrastructure. And in a lot of cases, the methods are meant to gain access to these systems and then find ways to control them remotely, or to run arbitrary code to do something that the system wasn’t originally intended to do.

And so those are three good examples. I think the state of Texas had an attack in 2024. I think generally we know what the Cyber Avengers, affiliated with the IRGC, have been doing. And we have seen over the past couple of years Volt Typhoon and Salt Typhoon in particular, with Salt Typhoon disrupting the telecommunications infrastructure.

[Paul] (09:51)

Now, Joe, in the Cyber Avengers case, it may not be that they actually set out to target wastewater systems. They could simply have said, let’s wander around. Once we’re into the IT part of the system, let’s see what parts of the OT or the industrial control network have additional vulnerabilities that let us percolate, if you like, lower and lower down. So do you want to say something about how some of these attacks were actually pulled off? Like, what did they do, and what blunders, let’s be blunt, did we make on our side, making it easy for them to get in?

[Joe] (10:29)

So, a couple of things: there are access methods that nation states can use if they can compromise the human machine interface, which is that level two of the Purdue model that you’re referring to, to then access the PLCs, the programmable logic controllers. That is one method to gain access.

[Paul] (10:48)

Now, generally that HMI, the human machine interface, loosely speaking, those are supposed to be things like control panels that are in a pump room, or that are attached to a lathe, that have things like open valve, close valve, emergency stop. And they’re supposed to be operated, again loosely speaking, by someone who’s actually standing there. They’re not machine-to-machine interfaces. They’re just like the buttons you have on your TV: on, off. And yet, if they have flaws in them, that means attackers can sort of skip over all the other layers. They can sit in a country far, far away and pretend that they’ve actually driven out into the middle of nowhere in Texas with a physical key for a pump room, opened it up, gone in, and pressed the actual button themselves. That’s quite a dizzying amount of power to hand over, isn’t it? So what do you do about that?

[Joe] (11:42)

Well, it’s not like there’s a physical lever that you’re using to open and close the valve these days. It’s all digital controls, obviously, and these systems are digitally connected. And they’re not just digitally connected; they have communication ports and connectivity to the broader ecosystem. And that, in fact, is what offers the potential for attackers to gain access through the HMI to these PLCs. And so you can obviously segment your networks more. You can build security into all those devices to try to restrict the ability to compromise those systems. And then, at the end of the day, you also need to look at what else can be done with those PLCs that are connected to, say, the robotics on the manufacturing plant floor. All of those are attack vectors. 

In one sense, what you need to do is work with all your suppliers. If you’re managing a facility that’s now connected to the internet and you have a bunch of different vendors providing core components of your infrastructure, of your operational technology network, you need to understand the risk posture of those devices. You also need to then segment that network, like I said, and make sure that you have sort of the defense in depth. But I think the most important thing is that relationship with your vendors, because if the supply chain creates vulnerabilities that you don’t know about, then you don’t have a great chance of defending it. Operators and asset owners really need to understand the security posture of all their assets in their infrastructure.

[Paul] (13:13)

You get that in spades with embedded systems, because they’re often using much older hardware, maybe with very limited memory, maybe with very limited CPU power. So while they might not be at risk from a “let’s put this giant untested library in” decision, they are at risk from the fact that it’s harder to design them to be secure in the first place. And many of them date from an era when people just weren’t doing that.

So in particular, what makes what are called these days memory unsafe languages a particular concern as you go down the Purdue stack, as you get to the smaller and more specialized devices at the bottom levels?

[Joe] (14:00)

The memory-unsafe languages, languages like C and C++, are widely adopted across critical infrastructure. The underlying problem is that they have inherent weaknesses: even the best programmers will leave certain parameters or entry points exposed, allowing a well-informed attacker to find a weakness in the underlying software and exploit it.

[Paul] (14:23)

Because those languages come from an age when people expected to be allowed direct access to memory, to go back to the old BASIC language days, to PEEK and POKE memory at will, in order to achieve things that were otherwise impossible because there was no library code or well-designed interface to do it safely. And of course, if a crook can stick the knitting needle into the wrong hole, all sorts of trouble emerges, doesn’t it?

[Joe] (14:54)

All sorts of trouble emerges. And we did a recent analysis on an embedded system, and by we, I mean RunSafe Security. What we looked for were those underlying gadgets that are reachable by an attacker by virtue of these memory-based vulnerabilities. In general, you call those return-oriented programming gadgets, or ROP gadgets. And when you string a couple of gadgets together, they become a chain. So you have ROP chains.

And essentially, what those gadgets and those chains allow an attacker to do, if he or she finds one, is to gain control of, for example, syscalls, or system calls, inside the software. By virtue of gaining access to syscalls, you can achieve actions that weren’t otherwise available to you through the standard functions that were written into the program in the first place.
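The gadget-counting idea Joe describes can be sketched in a few lines. This is a deliberately simplified illustration, not RunSafe’s tooling: real gadget finders such as ROPgadget also disassemble the instructions, whereas this sketch only scans a raw, fabricated byte image for x86 RET opcodes (0xC3) and treats the bytes just before each one as a gadget candidate.

```python
# Simplified sketch of how ROP-gadget scanners work: a gadget is a short
# instruction sequence ending in RET (0xC3 on x86), so we scan the raw code
# image for RET bytes and report the preceding bytes as candidates.

RET = 0xC3
MAX_GADGET_LEN = 5  # look back at most this many bytes before each RET

def find_gadget_candidates(code: bytes) -> list[tuple[int, bytes]]:
    """Return (offset, byte sequence) pairs for each RET-terminated candidate."""
    gadgets = []
    for i, b in enumerate(code):
        if b == RET:
            start = max(0, i - MAX_GADGET_LEN)
            gadgets.append((start, code[start:i + 1]))
    return gadgets

# A fabricated 'firmware image': two RET bytes embedded among other code bytes.
image = bytes([0x48, 0x89, 0xC7, 0xC3, 0x90, 0x58, 0x5F, 0xC3, 0x90])
candidates = find_gadget_candidates(image)
print(f"{len(candidates)} gadget candidates found")  # -> 2 gadget candidates found
```

A count like the 14,500 figure Joe mentions later would come from a scan of roughly this shape run over a real binary, plus instruction decoding to keep only sequences that are valid, useful code.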

[Paul] (15:46)

So you might turn, say, a command to read from a file, maybe even by flipping one bit, into the equivalent write command, and now you’ve got control over the file. Maybe you tell it to add a password that lets you in later. Maybe you say, hey, here’s some new code I want you to run, and don’t ask anybody. Think of Pwn2Own and similar hacking competitions. (And that’s responsible disclosure, so I’m in favour of that. That’s great.)

And they turn up and I think they get exactly 30 minutes to pull off their attack and there’s a timer that’s shown on video. But what you don’t see is that they may have spent a whole year practicing and preparing that attack so they can pull it off in the 30 minute period. And some of the top attackers, they only need seconds because they’ve practiced well enough that they just know the attack is going to work. The complexity of finding the attack is very different from the complexity of pulling it off.

Once you’ve figured it out, you can either sell it on to somebody else who wants to use it or hand it to a team of attackers who can use it at will. They don’t have to do that three, six, nine months of research for every valve or for every lathe or for every water drainage pump.

[Joe] (17:02)

And that, actually, is the economic equation on which RunSafe was founded and launched: if we could find a way to disrupt attackers, even if they know these blueprints, even if they know these methods, then we’re achieving something that costs them money and time and ideally forces them to look elsewhere. That is, if you apply the RunSafe techniques to relocate where all those functions are, so you can no longer find the underlying weakness, the gadget, the ROP gadget, the ROP chain that I’m talking about, then the attacker will be disrupted, because they can no longer find those gadgets that they can grab onto to manipulate the system calls into doing something different. A recent study a red team did, I should say a US government red team, was super interesting, and I’d like to share it, Paul. One of the more devastating attacks historically, or at least exploits that was out there, was called Urgent 11.

[Paul] (17:57)

That sounds like a film, but I really shouldn’t laugh. It’s a nice-sounding name, but with rather devastating potential impact.

[Joe] (18:07)

Yes, and it did affect products in the energy arena. But the idea there is that with Urgent 11, there happen to be 11 underlying vulnerabilities that are accessible to attackers. And at least six, possibly seven, of them are memory-based vulnerabilities that exist in the underlying operating system. I think in that particular case it was VxWorks. It tells the story of the supply chain.

You’re not aware of some of the underlying communication ports and TCP/IP components in VxWorks that are accessible, where there are dependencies that allow an attacker to grab onto a system. And so what the red team found was that in a certain system built on a real-time operating system, with an application on top of it, there were, brace yourself, Paul, 14,500 ROP gadgets found in that software.

[Paul] (19:04)

So those are little fragments of code that can be stitched together in arbitrary ways, although they don’t look like the kind of code a human would write. They can do things like: add 6 to this number, subtract 5, access this memory address, jump to somewhere that I’ve chosen earlier. And by stitching them together in the right order, you can basically build any old program. It may look like a mess, and if you were a human who wrote that kind of code, you’d get told off in a code review, but if you’re an attacker, you don’t care about the quality of your code. You just care: will it work 999 times out of 1000?
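Paul’s point, that trivial fragments become a full program when stitched together in the right order, can be mimicked with a toy model. The “gadgets” here are hypothetical Python stand-ins for machine-code fragments, not real gadgets:

```python
# Toy model of gadget chaining: each 'gadget' does one trivial thing, yet
# composing them in the right order computes whatever the attacker wants,
# much like a ROP chain walking addresses off the stack.

gadgets = {
    "add6":   lambda x: x + 6,   # stand-in for an 'add 6' fragment
    "sub5":   lambda x: x - 5,   # stand-in for a 'subtract 5' fragment
    "double": lambda x: x * 2,   # stand-in for a 'shift left by 1' fragment
}

def run_chain(chain, value=0):
    """Apply each named gadget in sequence and return the final value."""
    for name in chain:
        value = gadgets[name](value)
    return value

# Stitch useless-looking fragments into a useful computation: (0 + 6) * 2 - 5
result = run_chain(["add6", "double", "sub5"])
print(result)  # -> 7
```

Swap the order of the same three gadgets and you compute something entirely different, which is exactly why the attacker cares about sequencing, not code quality.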

[Joe] (19:39)

And in this case with Urgent 11, with those 14,500 gadgets accessible to the attacker, they only need to find 11 gadgets to do what they want to do. Out of 14,500, find 11. And guess what? Those 11 exist multiple times. I bet you can’t guess how many gadgets were remaining after RunSafe was applied, Paul. But I’m going to tell you.

[Paul] (20:02)

I’ve guessed in my mind, so you tell me and I’ll tell you whether I was right. How’s that for a casino bet, Joe? Yes, I’ve placed my chip, Joe. I’m not showing you my cards. Go on.

[Joe] (20:09)

That’s great, you can’t lose. Trust me.

OK, so I think you should hold up your number, just to help verify. So, with 14,500 gadgets accessible, after RunSafe was applied, the number went down to zero gadgets available to the attacker. And so this is a monumental feat in computer science, in my book, and the RunSafe team accomplished that. Imagine what that can do if you can virtually eliminate, or in this case reduce to zero, the gadgets accessible to an attacker. That means that the vulnerabilities you do know about, and the ones you don’t know about, are no longer accessible to the attacker. And that’s why I feel so strongly, so passionately, as you’ve pointed out, Paul.

[Paul] (21:00)

I’m hearing it now, Joe. Our listeners obviously can’t see it, but I can see you on video: you’re getting closer and closer to the microphone, and your smile is getting bigger and bigger and bigger. Which is great, because I guessed that there would be three ROP gadgets left.

[Joe] (21:15)

I’m really curious why you picked three. I guess maybe there are three favorite ones that you always know exist that no one else knows about, but the red team certainly didn’t find those. But there are tools to count gadgets, and I think it’s a wonderful metric: gadget counts before and after. From that perspective, you can eliminate the risk of these memory vulnerabilities. And we got onto all of this because of memory-unsafe languages.

The issue, of course, is that memory-unsafe languages, what I like to consider really efficient code, C and C++, are everywhere in critical infrastructure. It’s in medical devices, it’s in the energy grid, it’s in automobiles, it’s in aviation systems. And certainly the memory safety set of vulnerabilities implied by that needs to be addressed. In our own little way, RunSafe is trying to help make these unsafe systems safe by preventing these memory-based vulnerabilities from being exploited in the first place.

[Paul] (22:16)

And your skeptics might say, well, what’s the big deal? If you’re Windows 11 or a recent version of Linux, you’ve got address space layout randomization, you’ve got all kinds of kernel options you can set that will load security modules, you can add all these flags to the compiler that add all these runtime checks, and that fixes the problem. But you often don’t have that luxury on an embedded device, do you? If you were to wrap it in this protective cocoon, it might seem to work okay, but in an emergency, you couldn’t certify that it would close the valve in time or meet its specifications, or it might fail for other reasons, like suddenly running out of memory. 

So I guess what you’re alluding to here is the concept of Secure by Design, where you try to make sure that, as far as you can, you think about security before you start, while you’re developing, and afterwards while you’re supporting. But you don’t leave everything until the back end, where you say, well, we’ll just add patch on patch on patch, until you’ve got car bodywork so patched up that when you take a magnet out, it doesn’t stick anywhere. You have that complexity in embedded systems, don’t you? You can’t just make the changes that you want anytime you want to.

[Joe] (23:36)

And these systems last for a long time in the infrastructure. Yes, the compute resources and the power resources may be limited. I like what you’re saying: you can’t just patch and patch and patch. I would argue, you know, why patch if we’ve got bubble gum and band-aids? We can put things on these systems and prevent attacks that way, and infrastructure should simply work. Obviously I’m joking, but the idea is that you want efficient systems. You don’t want patched systems. And if you can eliminate the risk of exploitation, you should do it.

But your point about all the compiler settings and flags and the different ways you can build your system, and build security into it with tools available from the operating system or otherwise, is that it also comes at a cost. That cost could be increasing the dependencies, or increasing the size of the binaries that ultimately get produced. And bringing in dependencies that have vulnerabilities is probably one of the biggest things that happens. Developers need to reduce dependencies, reduce vulnerabilities, reduce footprint, and have the most efficient code out there. So you could potentially add all sorts of settings and flags, but that comes at a cost that requires further and further use of tools to perform hygiene and analyze where the next attack is going to come from. My mind always goes towards keeping it as clean, simple, and efficient as possible, while still having a way to mitigate against entire classes of vulnerability.

[Paul] (25:07)

So maybe we can finish up if I ask you in the future, what are the emerging threats and trends that you think manufacturers of embedded devices should be looking out for?

[Joe] (25:19)

Well, I hate to say it, because this is going to sound like the core topic of the day, but I do think it’s the emerging vulnerabilities around AI systems and generative AI systems. The more we see AI systems interacting with each other, the more chance there is for attackers to exploit the inputs of those AI systems, I guess is a way to see it.

[Paul] (25:41)

Yes, it’s sort of like ROP gadgets for the AI processing engine. It’s supposed to detect that you’re asking it a question it’s not supposed to answer, but you word it in such a way that you bypass its protections, and it goes and generates code that does something bad, or suggests an action that is unsuitable. That’s always going to be a risk, isn’t it?

[Joe] (26:00)

Exactly.

It is a risk, but we see the advent and really fast adoption of model context protocol, MCP, and two systems interacting. And if an attacker sits in between those systems and figures out how to manipulate the messaging, you can certainly see how that would disrupt critical infrastructure. So I think that’s a way in the future for some of these systems, because safety is front of mind and maybe AI is not going to be used immediately.

But the world is changing fast, and the way business is being done is changing fast. You said forward-thinking; I’m looking at all the aspects of AI that could be exploited. With that said, the U.S. government, the Trump administration, just recently put out America’s AI Action Plan. And what’s interesting for me about it is: how is the U.S. going to win the artificial intelligence race when China is such a formidable competitor in this arena?

And part of the aspect that really struck me was the emphasis on secure, resilient data centers, and a secure, resilient energy grid that’s able to manage interoperability with distributed energy sources to adapt to the needs of the moment. And let’s face it, large language models consume energy, and that’s part of the driver for the data center build-out. So it is natural to see the importance of a resilient data center and a resilient energy grid. And from a RunSafe perspective, we’ve been protecting components of data centers and the energy grid from the get-go. 

And so, for me, I think that demand is only going to increase for us. And it’s potentially a warning call to everybody else to look out for what’s going to spawn from the artificial intelligence race, and what the attack methods are. I gave one example, the equivalent of a man-in-the-middle attack between two MCP servers; but there’s also the importance of the underlying critical infrastructure, to avoid disruption that stops large language models from processing in the first place, because in the future, business and operations and critical infrastructure will depend on some form of artificial intelligence.

[Paul] (28:18)

Joe, that’s very well put and I think it emphasizes as much as we possibly can that cybersecurity is a journey, it is not a destination. It’s something that we all need to be thinking about. And if I can refer back to the podcast where we had Leslie Grandy as a guest, she said, you don’t have to use AI yourself. You may decide that it’s not for you and you don’t need it in your systems.

But you need to think like a red team person, you need to know what your attackers would find out about your system if they used it. Simply put, we can’t close our eyes to anything and I guess the price of freedom is eternal vigilance. I’m smiling but I’m not laughing when I’m saying that. So, Joe, thank you so much once again for your passion. It really makes me feel good about the future of cybersecurity to have people like you in the industry.

So thanks so much for your time and thanks to everybody who tuned in and listened. If you found this podcast insightful, please don’t forget to subscribe, please like and share us on social media and please recommend us to all of your team. And don’t forget, stay ahead of the threat. See you next time.

 
