In this episode of Exploited: The Cyber Truth, Paul Ducklin sits down with RunSafe Security CEO Joseph M. Saunders and MITRE experts Mario Zuniga and Matt Janson to explore why embedded systems create a fundamentally different and often invisible attack surface.
Unlike traditional IT environments, these systems are built to last for decades, can’t always be patched, and must remain continuously operational. That reality changes everything about how they’re secured. Attackers exploit firmware, hardware interfaces, and even air-gapped environments, while defenders are forced to rethink assumptions about visibility, control, and response.
The conversation dives into:
- Why embedded systems are often excluded from traditional cybersecurity thinking
- How attackers gain access through non-traditional vectors, even in isolated environments
- The limits of IT-centric threat models in cyber-physical systems
- Why patching isn’t always possible and what to do instead
- How MITRE’s Embedded Systems Threat Matrix (ESTM) helps map adversary behavior
- The shift from prevention to resilience in long-lived systems
The key takeaway: embedded security requires a different mindset. Organizations that build resilience into system design and rethink how they model risk will be better positioned to defend the technologies that quietly run the modern world.
Speakers:
Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.
His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.
Joseph M. Saunders: Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.
Guest Speaker – Mario Zuniga, Cyber Operations Engineer at MITRE
Mario is the Capability Area Lead for Weapon Systems and Critical Infrastructure at MITRE’s Cyber Infrastructure Protection Innovation Center (CIPIC). He spent 20 years in the U.S. Air Force, flying in the USAF E-3 AWACS as an Air Battle Manager and serving in various staff officer roles at U.S. Cyber Command. After retiring from the Air Force, Mario joined MITRE in 2019, where he has led efforts to integrate cybersecurity principles into weapon systems and enhance the resilience and security of cyber-physical systems.
Guest Speaker – Matt Janson, Cyber Architecture and Resiliency Lead at MITRE
Matthew Janson is a Cyber Resiliency Lead at The MITRE Corporation, where he supports Air Force organizations in securing complex systems and addressing cybersecurity risks in operational environments. His work focuses on improving cyber resiliency, advancing continuous monitoring, and strengthening the security of mission-critical platforms.
He brings over 20 years of experience in military communications, cyberspace systems, and cybersecurity, including service as an Air Force Major in the Ohio Air National Guard.
Matthew holds a Master’s degree in Aviation Cybersecurity from Embry-Riddle Aeronautical University and multiple industry certifications, including CISSP.
Episode Transcript
Exploited: The Cyber Truth, a podcast by RunSafe Security.
[Paul Ducklin] (00:04)
Welcome back, everybody, to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello, Joe.
[Joe Saunders] (00:20)
Greetings, Paul, great to be here and really look forward to today’s discussion.
[Paul Ducklin] (00:24)
Yes, we have not one but two special guests for this episode. We have Mario Zuniga and Matt Janson of MITRE. Now, Mario and Matt, you have quite complicated job titles, so I’ll allow you to introduce yourselves. And perhaps either or both of you would like to kick us off once I’ve given the title for this episode by just letting everybody know what MITRE is and does.
Certainly, some of our listeners outside the United States might not be entirely familiar with that. So the topic up for discussion today is a really fascinating one: the invisible attack surface, cybersecurity for embedded systems. Mario and Matt, over to you to tell us a little bit about MITRE and what it is doing about embedded systems these days.
[Mario Zuniga] (01:21)
Thank you for that wonderful introduction. My name is Mario Zuniga. My role at MITRE is very lengthy. It’s called Capability Area Lead for Weapon Systems and Defense Critical Infrastructure.
[Paul Ducklin] (01:33)
Rolls off the tongue, Mario.
[Mario Zuniga] (01:35)
That’s right, it does. On the commercial side, that translates to what you would think of as aircraft, ships, maybe power plant facilities and things of that nature. But I’ll let Matt introduce himself. And Matt, can you explain a little bit about MITRE as an FFRDC?
[Matt Janson] (01:51)
Yes. Hello everyone. My name is Matt Janson with the MITRE Corporation. My job title is Cyber Resiliency Lead. A little bit easier to say. For folks that don’t know, the MITRE Corporation is a not-for-profit organization that operates multiple federally funded research and development centers. You might be able to Google that as FFRDCs. So think MIT Lincoln Labs, the Aerospace Corporation. Those are just other examples of FFRDCs, and you know, we’re really here for the US government to solve those wicked-hard problems. I’ll just leave it at that.
[Paul Ducklin] (02:27)
Excellent. And Mario, it sounds from what you’re saying that you are really focused on the kind of systems that are a little bit different from the ones that we have on our desk like our laptops or in our pocket like our mobile phones. Embedded systems these days do power everything from things like airplanes, ships and so forth, industrial controls in factories and manufacturing plants, all the way up to defense.
[Mario Zuniga] (02:41)
Yes.
[Paul Ducklin] (02:55)
Indeed. But somehow they seem to get less attention publicly in the media from a cybersecurity point of view than those traditional IT systems that we’re all already quite familiar with keeping up to date and patched. So why is that? Why aren’t we as passionate about embedded systems, perhaps, as we are about our own mobile phones?
[Mario Zuniga] (03:17)
That is an excellent question. One that I’ve been focused on for about five years now, since I joined MITRE. I would say, by and large, a big problem is tied to mindset. And I think you just hit on it that when people think of cyberspace, they often think of your laptops, your desktops, your Dell servers, your Cisco routers.
My understanding, which predates me, is that when cyberspace was first created decades ago, security wasn’t designed into it. But when it was quickly adopted by industry, the banking industry, maybe the communications industry, very creative individuals started hacking these systems. So security became a necessity to protect the services that industry was developing or providing. I think it’s a mindset problem that goes back decades: we as a people, or a society, tend to think of cyberspace as the technology that we’re using to have this conversation today, and not Teslas, as an example, or maybe your HVAC unit, or Nest devices you can remotely control through your phone. All this connectivity that does have a foundation in the IT side is largely overlooked. This operational technology, these embedded systems, are not thought of as extensions of cyberspace. It’s a very long way to say, I think it’s a mindset problem.
[Paul Ducklin] (04:46)
It’s almost as though there’s a sense of out of sight, out of mind. If you receive some spammy or phishy message in your email, that affects you personally; it’s right in your face. But when you think about things like pump rooms or factory welding machines, they don’t feel like they’re part of the internet. But these days they very much and very often are, aren’t they? So what makes those systems, if you like, technically and perhaps intellectually different from traditional IT environments, apart from that out of sight, out of mind? What other significant differences are there in how you design, build, install and manage them compared to a traditional IT network?
[Matt Janson] (05:38)
As you said, out of sight, out of mind. In some cases, you could have organizations that are acquiring these embedded systems and maybe not have a cyber person involved in those acquisition decisions. They’re not looked at as traditional IT. And since they don’t look like traditional IT, they don’t follow the same rubric when you’re acquiring and managing these systems from a cybersecurity standpoint. I think that’s huge.
[Joe Saunders] (06:07)
Yeah. And I’d just add to that, there are a couple of significant differences, especially in the industries we talked about, whether it’s communications or transportation, even defense. It’s also very, very hard to update these systems. There ends up being a backlog of patches for vulnerabilities that might develop over time. These systems are built to last 10, 20, 30 years, and they’re not necessarily operating on, shall I say, an old term, internet time, where you can do patches and updates multiple times a day. Fundamentally, we’re not looking at Ethernet-based networks using the Nessus scans that might be used in traditional enterprise IT infrastructure. And there’s no credential stuffing. The entire framework for thinking about cybersecurity is different: the timelines, the tools you have accessible.
And as Mario had said, even kind of the culture and the mindset, those all factor in to make these embedded systems far different.
[Paul Ducklin] (07:14)
And Joe, there’s a very different set of, if you like, social and legal and even political machinations involved with these systems, relating to things like regulation and liability, isn’t there? If you think of safety of flight, if you think of autonomous driving regulations, biomedical devices: even if you wanted to update them six times a day, you wouldn’t be allowed to, for safety reasons, which are at least as important as cybersecurity, aren’t they?
[Joe Saunders] (07:46)
Yeah, the safety concern, obviously very, very important, but even more generally, just across critical infrastructure, you can’t afford any downtime. Even taking systems out of operation for a brief time to do updates is very, very challenging. You may not notice it like you would on your phone on a day-to-day basis, but we all know when we lose power, we lose water. These are major significant high consequence kinds of events that we take for granted but are very vital to a well-functioning society.
[Matt Janson] (08:17)
And with that in mind, right, as you look at that CIA triad, confidentiality, integrity, and availability: availability is king. And to Joe’s point, you may not have the funding for something like a production environment to run certain patches and do X, Y, and Z to ensure all these things are safe. That comes at a cost, and unfortunately, the cost is high.
[Paul Ducklin] (08:40)
And I guess there’s also an issue that, for most people who are involved in, I’ll just say, IT, it’s become a world where you’ve got to know everything about Microsoft Active Directory, and you need to know about Linux configuration and virtual machine management. That knowledge very often does not transfer directly, or even indirectly, to the way that embedded systems were originally designed and have evolved.
[Matt Janson] (09:07)
It’s funny you bring that up, because Mario and I were chatting about training. When we talk about people, if you want to get into this business, you have to have a lot of specialized training, and sometimes that doesn’t exist out there. I guess here’s the MITRE Corporation plug, but one thing that I thought was interesting when I came to MITRE is they have this thing called the Embedded Systems Capture the Flag, the eCTF. It’s pretty new, but they basically work with different colleges and universities, across the globe now actually, to really bring in that embedded systems mindset and secure some of these systems. Training, specialized training, and just awareness are crucial. And I’m sure Mario has some things to add to that.
[Mario Zuniga] (09:47)
IT, like your laptop, is general purpose; it does a lot of different things. Whereas an embedded system or an operational technology component is not typically general purpose. You have things that are designed to do one, maybe a couple of different functions, but they do them really well, repetitively, and it’s got to be done in real time. Just the design and its utility and its function are going to be very different. So we treat it differently. And I think that’s where it started to diverge, from a cybersecurity perspective, in how we protect it. Taking the best practices of cybersecurity today is not a one-for-one from the IT world to the embedded or OT side. What we’re describing between IT and embedded is going to be similar, but how they’re used, how they’re operated, how they’re maintained, and how they function are different enough that it’s just not a one-for-one.
[Joe Saunders] (10:43)
It’s almost as if in IT we have the risk management framework. It’s almost as if we need a framework for OT. And I don’t know if the folks from MITRE have thoughts on that.
[Mario Zuniga] (10:54)
We do. We have a lot of thoughts on that. When it comes to embedded systems in particular, one of the things that I think, Joe, you were hinting at was the Embedded Systems Threat Matrix that we just made public a couple of months ago. We pronounce it “esteem.” We were asked to create something very similar to MITRE’s ATT&CK® framework, which describes adversary behaviors, or tactics and techniques, in cyberspace. The original ATT&CK framework’s name stood for Adversarial Tactics, Techniques, and Common Knowledge. That framework got its start on what we describe as the enterprise side, which translates to essentially IT: Windows, Apple devices, and Linux.
[Paul Ducklin] (11:44)
And Mario, a lot of the terminology in ATT&CK was very much Windowsy and Linuxy, wasn’t it? Yes. And that doesn’t map very cleanly at all onto embedded systems. You don’t want to have 20 different users competing for a pump room switch. It’s quite a different challenge to map out the kind of threats that exist in a world like that.
[Mario Zuniga] (12:00)
No. Correct. We’ve been working on it for a number of years, and we finally made it public. As the title describes, it’s focused on an embedded environment. We took a very broad perspective on that: we wanted to be inclusive of embedded systems, operational technology, basically non-IT, where we saw that there were gaps in the MITRE ATT&CK® family, if you will, at the time. And so we created this additional threat framework, a threat matrix that is meant to help with standardizing the language. When you say spearphishing (the term came up before MITRE, but we adopted and incorporated it), we describe the activities that you’re going to see in cyberspace when an adversary is doing spearphishing. Well, we did the same thing for embedded systems. How do we monitor? How do we detect? How do we protect these environments?
But we didn’t have a standardized industry language. And that’s one of the aspects that we’re trying to achieve with this.
[Matt Janson] (13:09)
It’s different technology stacks, and then it’s just a different attack surface. And so what we tried to do was put a threat matrix out there to help blue teams and purple teams talk about those different technology stacks and that different attack surface.
[Paul Ducklin] (13:23)
Matt, do you want to give some examples of things that come into the ESTM threat matrix that don’t really have any counterparts in the ATT&CK framework? Things that people probably wouldn’t even think of as a threat vector in their regular day-to-day IT life?
[Matt Janson] (13:42)
Mario can definitely give you better examples, but I will say that when we talk about that embedded environment, the adversary is going to have to adapt to have some sort of impact. When you look at the threat matrix, it’s very important to look at those initial access vectors, because you might find yourself looking at an air-gapped type of system. Most people think the bad guy, in air quotes, can’t get to my system because there’s an air gap.
That’s not completely true. So knowing where the bad guy can potentially get initial access is important. And then, once they’re in that embedded environment, how do they potentially move laterally using certain tactics and techniques? And I think this is my ploy for Mario to come into play and talk about those descriptions within ESTM.
[Paul Ducklin] (14:26)
Unfortunately, he must have panicked. He was so worried about OT that he suffered an IT disaster and dropped out of the call. Joe, I know you have some good insights and knowledge about how embedded systems and the cyber-physical systems are oftentimes semi-invisible.
[Joe Saunders] (14:44)
Well, I want to point out maybe a difference in what’s available from an exploit prevention perspective. It may be true that in some operating systems and some compilers, you have access to things like address space layout randomization. However, in real-time operating systems and certain compilers, you don’t necessarily have the ability to add in defenses for what are fairly common vulnerabilities in embedded systems. And those are memory-based vulnerabilities that, if targeted, could of course lead to taking down systems, or to using a device for a different purpose than originally intended. And so there is a tremendous need, not only for the safety reasons and the security reasons, but really just to maintain a well-functioning set of systems, to protect these devices at runtime. And I do think the considerations for how you do that in embedded systems are unique compared to traditional IT as well.
And in particular, having defenses built in to prevent exploitation at runtime is one of those key areas because of the quantity of memory-based vulnerabilities that exist in these kinds of systems, given the nature of the languages, the compiled code that’s used. And yes, we may be using some compiled code in enterprise IT, but the languages are often scripting languages that don’t necessarily have that same attack vector. So I think the general principle I see is how can you reduce the attack surface on these embedded systems in these networks, in these OT environments? And one way is to eliminate the memory-based attacks in these systems.
[Paul Ducklin] (16:24)
So that’s what’s known as RASP, isn’t it? R-A-S-P, runtime application self-protection. And that’s where you actually build self-protection into the program itself without making significant changes to it. Whereas in IT, you might just wrap it in a layer of EDR software, or in some behaviour-blocking software. But in real-time systems, A, you probably don’t have the memory to do that, and B, you probably won’t get regulatory approval, because of the possible impact that could have on the real-time correctness of the system. So your proposal is that this can, and should, often go into the software itself, so that it performs in the same way that it used to, with the same sort of runtime guarantees, but in a way that makes one attack against one device unlikely to work against any other, thereby out-foxing the attackers, if you like.
[Joe Saunders] (17:18)
I do think out-foxing the attackers is a great step. And the issue is we do need to look for ways to eliminate entire classes of vulnerabilities, because without an asymmetric shift in cyber defense, we just keep on chasing one vulnerability or one exploit after the next. And that’s an exhausting thing that doesn’t really work in an environment where you can’t always patch. So I do think building protections in is a key way to go.
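The out-foxing idea that Paul and Joe describe, making an exploit built against one device unlikely to work against any other, can be sketched as a toy model. Everything below (the function names, offsets, and the simple rotation scheme) is invented purely for illustration; real load-time or build-time code randomization is far more involved than this.

```python
# Toy model of software diversification ("moving target defense").
# Real protections randomize function addresses at build or load time;
# here we just rotate a fixed function list so each firmware "image"
# gets a different memory layout for the same code.
def build_image(rotation):
    functions = ["read_sensor", "open_valve", "log_event", "update_firmware"]
    layout = functions[rotation:] + functions[:rotation]
    # Assign each function a start address in a toy address space.
    return {name: 0x1000 + i * 0x40 for i, name in enumerate(layout)}

def exploit(image, hardcoded_addr):
    # A memory-corruption exploit typically redirects control flow to an
    # address the attacker learned from one specific firmware image.
    for name, addr in image.items():
        if addr == hardcoded_addr:
            return name
    return None

image_a = build_image(0)
stolen_addr = image_a["open_valve"]    # attacker studies image A
hit_a = exploit(image_a, stolen_addr)  # lands on the intended function
image_b = build_image(1)               # same code, diversified layout
hit_b = exploit(image_b, stolen_addr)  # same address now hits something else
print(hit_a, hit_b)                    # open_valve log_event
```

Because every deployed image has a different layout, one reverse-engineered exploit stops generalizing across the fleet, which is the asymmetric shift in attacker economics being discussed here.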
[Paul Ducklin] (17:48)
And when you say can’t patch, sometimes that’s because, by design, the thing does not take over-the-air updates, and somebody has to go there, and it’s 275 miles away as the crow flies on the top of a very high mountain. It isn’t like patching a web app or updating your mobile phone, is it? We’re absolutely stuck with trying to improve the self-resilience of the software that’s there in the first place. The sort of strength to resist attacks not just for days or weeks but for months or years or even a decade. Is that achievable in real life?
[Joe Saunders] (18:25)
I think it’s achievable. And I would just point out the problem that I’ve seen with some of our customers at RunSafe, which is that they simply have a baseline software image that they are going to support for a period of time. It could be one year or two years or three years, and they are backing up all the new features that they’re going to release for two or three years. That includes patches; that also includes any new features. And so once they go through that three-year life cycle, and they ensure that the safety requirements are met and the security requirements are met and the features work and do what they’re supposed to do, it finally does get released. But what that does is leave a window of opportunity for exploitation. And then you add on top of that that some users of those releases, those software updates, may not actually apply the patch for another series of months, or what have you. So there’s just a tremendous window of exposure that puts these systems at risk.
[Matt Janson] (19:23)
Joe, you make a good point. This is always our bumper-sticker statement, but you can’t patch everything. You can’t detect everything or monitor everything. And so you really have to make those risk-informed decisions. Some of these systems are mission-critical or safety-reliant.
[Paul Ducklin] (19:40)
So Matt, do you want to say a little bit about getting into cybersecurity resilience, rather than just depending on prevention? So much of cybersecurity does seem to be about, “Buy my technology and just stop the attacks, and that will be great.” How do you adopt an attitude of resilience, rather than just, let’s hold out our hand and try to stop everything?
[Matt Janson] (20:05)
When you talk about resilience, you start building it into your discussions about cybersecurity. You start to say, well, I want to do a better job of anticipating a threat. That comes down to better cyber threat intelligence and being prepared for bad cyber risk scenarios. So let’s build some cyber-resilient attack scenarios, and actually, let’s get our users or stakeholders involved from the very beginning, because they may see something that you don’t see.
I think resiliency comes with interacting with those users and that stakeholder community first; then you can get into more secure-by-design. Look at some of the new NIST special publications: 800-160 Volume 2 talks about anticipating, withstanding, responding, and recovering, and about how you engineer those things into systems. My simple answer is it’s not always about us cyber people. I really think talking to those engineers, those end users that operate these systems, is, in my opinion, step one.
[Paul Ducklin] (21:06)
If you’ve ever been to an Air Force base where the transport pilots are practicing landings with one or more engines out, the fact that they practice that over and over until they have it down perfectly, in case they need it in real life, is a good reminder that A, practice makes perfect, and B, you have to assume that bad things will happen, but they don’t have to automatically be instant disasters.
And that is very much a lesson that we can learn in cyber security, isn’t it?
[Matt Janson] (21:37)
Well, I’m sure you could talk to Maersk and all these other big companies that have had these cyber attacks happen. Yes. The issue I think we also get into is that some real-world impacts come down to a potential life-or-death situation. When we talk about ESTM, there have been some thoughts of us doing some use cases related to boats or cars or other things. There have been news articles about pirate hackers that have taken over ships remotely and actually gotten into their embedded systems in some shape or form by pivoting from the IT to the OT environment. Things like that just fascinate me. And I think that if we do a better job of understanding not only our IT side but also our attack surface on the OT side, we can start to better identify where those seams exist and where we as defenders can harden our systems, not only right at that embedded system level, but maybe on the network side of the house, or even at the user level.
Maybe there’s a user defense that needs to be put into place. I think it’s fascinating to think about those things, because it looked like the hackers were just taking the ship wherever they wanted, and the crews had no way to override it.
[Paul Ducklin] (22:45)
Particularly when you think that their goal is to take over the ship physically and make off with all the stuff, or to demand some enormous ransom, at the potential cost of human lives. So Matt, there’s a big buzz phrase at the moment, which is talking about structured threat models. In other words, you’re trying to understand the way an attack might unfold in a better-described fashion.
[Matt Janson] (23:13)
I’ll probably just talk about threat models in general. There have been several threat models out there, and they’re all beneficial. You have the traditional one, STRIDE, which came out of Microsoft, I think. There are other FFRDCs like Carnegie Mellon that have introduced systems engineering risk analysis, threat archetypes, the SERA threat framework, and a few others. They do a good job of describing, hey, GPS spoofing; but how does the bad guy or bad girl do the GPS spoofing, or the attack? You really need to think about how, versus what, an attack could be. And so, to your point about the structured threat taxonomy, what’s been great about ATT&CK is that it’s all formatted in a STIX 2.1-compliant format. STIX is basically the way that cyber threat intelligence is shared electronically around the globe. And what we’ve been able to do with ESTM is model it after ATT&CK®, to use that same exchange of information with organizations to help them model potential threat behavior within their community. You can model the traditional IT attack paths and vectors, but now you can go one step further and do that from a cyber threat intelligence standpoint too. Like, hey, are the bad guys or bad girls trying to attack my embedded systems? And if so, what indicators through STIX am I looking for? And then I can also start mapping out those attack paths. So you have more of a holistic view of your IT/OT space.
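For readers who haven’t met STIX before, the exchange format Matt mentions is just structured JSON, so a minimal sketch is easy to show. The snippet below builds a STIX 2.1 bundle containing one attack-pattern object using only the standard library; the technique name, description, and UUIDs are invented placeholders, not real ESTM or ATT&CK entries.

```python
import json

# Minimal STIX 2.1 bundle containing one attack-pattern object, the shape
# in which ATT&CK (and, per the discussion, ESTM) content is exchanged.
# STIX IDs take the form "<type>--<UUIDv4>"; these UUIDs are made up.
attack_pattern = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": "attack-pattern--1b2c3d4e-5f60-4a4d-aa9d-feb398cd0061",
    "created": "2024-01-01T00:00:00.000Z",
    "modified": "2024-01-01T00:00:00.000Z",
    "name": "Exposed Debug Interface Access",  # hypothetical technique
    "description": "Adversary gains initial access to an embedded device "
                   "through an unsecured hardware debug port.",
}

bundle = {
    "type": "bundle",
    "id": "bundle--9e8d7c6b-5a40-4b30-8c20-1f0e0d0c0b0a",
    "objects": [attack_pattern],
}

# Serialized like this, the object can be handed between tools the same
# way mainstream cyber threat intelligence is exchanged.
print(json.dumps(bundle, indent=2))
```

Because both ATT&CK and ESTM content share this machine-readable shape, the same tooling that maps IT threat behavior can ingest embedded-system techniques, which is the holistic IT/OT view Matt describes.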
[Paul Ducklin] (24:41)
So presumably the idea is not to just have a good idea of what could go wrong, but also to have a consistent vocabulary with which to speak about it, so that when you take a part of the problem and then discuss it with someone else, you basically start out on the same page.
[Matt Janson] (24:53)
Definitely.
Before I knew ATT&CK, when you would talk about an attack life cycle, I wouldn’t talk in terms of initial access, lateral movement, persistence, impact. That just wasn’t in my IT vernacular. So through using a cyber threat matrix and this common taxonomy, it’s been extremely helpful going into rooms with both non-IT people and IT people and having these discussions. Let’s open up the description: this is what I’m talking about; this is how the bad guy would do this tactic and technique. It does go a long way.
[Paul Ducklin] (25:33)
That doesn’t stop you producing well-crafted plain English prose that gives a description of what you’re trying to do, but it does give you some, if you like, specific points that you can call out at the end of your description. Those allow people to know not only what kinds of attack you’ve been talking about, but also to chase down examples where they’ve happened before, so learning from history, and also techniques that you can use to actually prevent or remediate them in future.
[Matt Janson] (26:03)
Definitely. I think most folks right now, if you look at the ATT&CK framework and then look at those ATT&CK groups, they have case studies and papers based on TTPs. Organizations like CISA right now will put out advisories talking about a certain cyber threat group and the TTPs they’re known to use. You can do the same thing using ESTM. I’d say we don’t yet see as many of those embedded-system cyber threat advisories, probably for multiple reasons.
But you do start to see some of those popping up now. If you go to CISA’s webpage and you look through the threat advisories, I think just last year there was one related to the aviation system, and they were talking more OT things versus IT. I think this is just one of those cultural things: as we do a better job of training our folks and then giving knowledge bases like ESTM to others, we’re just going to continue to improve the way we defend our systems.
[Paul Ducklin] (26:57)
So gentlemen, I’m conscious of time, so I’d like to finish up by asking a question to either or both of you, and that is looking to the future and what we need to do next. If you want to think ahead, say for the next 10 years of how we develop our cyber security resilience in embedded systems, what are the things that people should start doing today if they haven’t already? And what do you think are going to be the most important things that we can do over that next decade or so?
Stuff that we haven’t done before, but probably should have.
[Joe Saunders] (27:30)
One key thing is to think about our software development differently: building security in as we build and compile code. One of the lessons I’m taking away from this discussion today is that even with ESTM, there’s a framework for us to step back and think about it holistically. To Matt’s point earlier, it’s not simply the cyber defense, but how you recover from that and how you become resilient. And to your point, Paul, around safety earlier, I think all those are very, very important. And as I look down the road, that is going to be different. We’re already starting to see this, and we can’t go a podcast without talking about AI, but I’m very, very concerned about the ability of AI to find vulnerabilities faster than we can patch, and to develop new exploits faster than we can patch.
When there is a well-funded adversary of whatever nature looking to compromise systems, it turns out that the costs of finding those vulnerabilities and developing those exploits are going down. So I do think we need to think about the five- and ten-year horizon much differently, even though we’re thinking about it today. Part of that is the framework that we talked about, and part of that is the culture that we talked about, but it’s kind of an urgent call to do something today. And my view is to focus in on your development practices most urgently, so that we’re producing the highest possible quality of code and incorporating security into our processes, so that we’re not trying to chase AI exploits the way we’re chasing patches today.
[Matt Janson] (29:05)
Joe, you're spot on. Because we live in the cyber realm, do we ever have a day where we can just take a breath? I don't know if that's the case, because, like you just said, we're always chasing something. The less chasing we do, and the more focused time we spend improving in certain areas, the better. So I completely agree with you there. And AI is a huge problem in itself. As defenders, we need to think that way now, and to some degree we need to up our game.
We need to truly know our systems, both IT and OT, using the things that we've talked about today. We need to do a better job of attack path analysis. What are the most likely, and maybe most dangerous, attack paths that we really need to focus our time and effort on? And then just continue to reduce the risk. The risk is never going to go away completely, but you're going to have to put in that focused time to do the things that make a difference.
[Paul Ducklin] (29:59)
Do either or both of you think that laws relating to liability, such as some of the clauses in the EU Cyber Resilience Act, might change the game and act as a bit more of a stick than a carrot to get us to take cybersecurity more seriously from the start?
[Matt Janson] (30:20)
You know, money talks, right? Money is business. I just hope it doesn't become one of those things where there's enough money to simply pay off the consequences when something happens, and then we go back to a point where we just don't care about cyber. You know what I mean?
[Paul Ducklin] (30:33)
Absolutely, because that's kind of what happened for a while with ransomware, isn't it? People were going, "You know what, I could just restore my backups, but that might take three days. Maybe I'll just pay them $400,000 in bitcoins and try to keep it quiet." And that really set very low standards for the world, didn't it? We need to get out of that.
[Matt Janson] (30:37)
It did. I'm not saying liability is a bad thing. I would probably be on the side of yes, that sounds good, because money talks and we want to put some sort of financial motive behind doing better cyber. But I think it's going to take more than that.
[Paul Ducklin] (31:07)
Yes, I'd agree with that, because if you rely on liability alone, then you might still get people who go, "You know what, if it all goes up in flames, I'm going to be liable for so much that I'm going under anyway, so I'll roll the dice," instead of saying upfront, "I want to do this properly regardless." Joe, as you like to say, we want to elevate the game, not merely improve it. Well, gentlemen, that was a fascinating discussion. It's lovely to hear from MITRE, a group where the federal government of the United States puts in money for the greater good of all, to fight against attackers whether they're cybercriminals or state-sponsored.
[Matt Janson] (31:49)
Yeah, I just want to say thanks for having us on. I really appreciate you doing this, Joe, and reaching out to help get this awareness out there. No one's going to use it if they don't know about it. So thank you so much for having us.
[Paul Ducklin] (32:00)
Absolutely. If you haven't looked into ESTM yet, especially if you're already familiar with the ATT&CK® framework, then please go and take a look today. So Matt, where should people start?
[Matt Janson] (32:18) All you have to do is type in estm.mitre.org and you will get to the ESTM website.
[Paul Ducklin] (32:22)
ESTM, I see what you did there. That's "esteem." Yeah.
[Matt Janson] (32:26)
Now I owe the ESTM gods $5, because I'm not supposed to spell out E-S-T-M. I'm supposed to say "esteem."
[Paul Ducklin] (32:35)
Thank you so much, and thanks to everybody who tuned in and listened. That is a wrap for this episode of Exploited: The Cyber Truth. If you enjoy this podcast and find it useful, please subscribe so you know when each new episode drops. Please like and share us on social media, and give us a good review on your favorite podcast feed. That really helps us get the message out to everybody. So once again, thanks to everybody who tuned in, and remember: stay ahead of the threat. See you next time.