In this episode of Exploited: The Cyber Truth, host Paul Ducklin and RunSafe Security CEO Joe Saunders explore a critical question: should we keep chasing patches or stop attackers before code ships?
Joe draws on decades of experience in cybersecurity and national security to show how build-time protections—like automated memory safety, Software Bills of Materials (SBOMs), and code-hardening—shift the balance in favor of defenders. From aerospace to energy grids, patching isn’t always an option, and waiting on post-production fixes can leave life-critical systems exposed.
Listeners will learn how proactive defense strategies:
- Eliminate the “whack-a-mole” patching cycle
- Reduce the costs and risks of delayed software updates
- Improve resilience for embedded and operational technology (OT) systems
Tune in for a clear-eyed discussion on what it really means to build secure software and why patching after the fact is no longer enough.
Speakers:
Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.
His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.
Joe Saunders: Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.
Episode Transcript
Exploited: The Cyber Truth, a podcast by RunSafe Security.
[Paul] (00:03)
Welcome back everybody to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello there, Joe.
[Joe] (00:19)
Greetings, Paul. How are you?
[Paul] (00:21)
I am very well, thank you, and I’m particularly looking forward to this episode. Now, I think both of us tend to shy away from doing a podcast that’s a sales spiel. In this case, there’s no reason why we should avoid talking about RunSafe’s products and services because they’re the background to all of this. So let me throw you in at the deep end. Our title is Build-Time Protection versus Post-Production Panic.
With everyday operating systems, like say Windows 11 at home, there is a bit of a post-production panic on the second Tuesday of every month, when you get the notification that updates are available, and then you sit back and think, golly, will my computer reboot? Will it start up again? Will my scanner still work afterwards? Now, in the embedded software market, things like industrial control systems and operational technology, OT, you don’t always have that luxury of monthly patches even if you want them, do you? So you have to find another approach.
[Joe] (01:24)
You do have to find another approach. In the embedded software market, oftentimes that kind of software, that kind of code, is embedded in cyber-physical systems deployed across critical infrastructure. And I think we all want infrastructure to be available, so that the services we have come to rely on in a well-functioning society still operate. Part of that is software updates, to keep software secure, to keep it up to date and the like, but you don’t want these systems to be unavailable, and oftentimes they’re hard to reach and hard to update.
What that means is we need greater and greater resilience in the software that we build and deploy in the first place, so we’re not chasing vulnerabilities after the fact, because, as we know, that’s what ultimately can put infrastructure at risk.
[Paul] (02:14)
And indeed, the very name embedded software kind of says it all, doesn’t it? On a regular Windows laptop, there are probably 10 different operating systems you could choose to use, and then on top of those there are dozens, hundreds, thousands of different software apps you can have or not have. In an embedded system, the hardware and the software come almost as a sealed unit, don’t they? Because often they’re designed to do exactly one thing perfectly, over and over again, possibly for decades. So they don’t have to be a spreadsheet today and a word processor tomorrow, but they do have to operate that valve exactly correctly, to specification, for years and years and years, come hell or high water.
[Joe] (03:00)
Come hell or high water, indeed. So what that means is the software needs to be reliable. It needs to be secure. It needs to be safe. And there’s a lot of discipline that goes into making software trustworthy in that sense. If you look at the aviation industry, flight control software and airworthiness is a huge barrier in software development to get right. Because let’s face it, if you’re building a car, or an airplane, or something of that nature, you want to ensure that safety is realized.
And so that’s why there are pretty robust milestones that software developers seek to meet in order to ensure that their software remains safe and reliable. And part of that is determinism: knowing that everything that’s going to happen is going to happen, as you say, in the fashion you expect and within the time horizons and specifications that you anticipate.
[Paul] (03:53)
Yes, and people forget that, don’t they, as we’ve said before, when they hear the term real-time software. They imagine it’s all about speed and performance and frames per second and no lag when you’re listening to music or watching a video online. But it’s a different kind of performance. A computer might by design be very slow and use very little power, but to fulfil its remit, it must perform specific tasks within specific hard limits every time. Even if those limits are a minute or an hour or a day, it can’t be a day and a second. And therefore every time you patch it, every time you try to plaster over any cracks in it and change the software even a bit, you run the risk that it will no longer comply with the requirements that were there at the beginning.
[Joe] (04:43)
100%. Look at the software development practices involved for some of these mission-critical applications or systems or devices deployed in critical infrastructure. There’s a reason you have a robust methodology: to ensure that there’s determinism, that there’s reliability, that there’s a ruggedness to the software itself, so that there is a high expectation of little or no downtime, and a high expectation that few or no bugs get found that could put the system in jeopardy.
And so there’s a lot of testing, there are a lot of processes involved, and lo and behold, if you issue an update, well, you still have to go through all that same testing anyway. What that means is there are probably fewer updates. It’s a highly fragmented market, and it’s fragmented in the sense that there are many different use cases. We have safety of flight. We have autonomous driving.
We also have industrial automation and manufacturing plants that are…
[Paul] (05:27)
Yes.
[Joe] (05:42)
moving around robots and blades and different things that also have safety requirements.
[Paul] (05:48)
Yes.
That’s a little different from, “Oh dear, my Zoom call crashed and I had to join again.” Yes. “The sword-waving robot went nuts.” That’s a very different proposition, isn’t it?
[Joe] (05:59)
Yeah, a sentient forklift is a scary thought.
[Paul] (06:03)
Yes, I was interested, just as an aside, to see that there’s recently been a lot of publicity about someone who hacked a product called BellaBot, which is a very, very basic food delivery robot that comes out of China and basically just brings your plates of food to your table. And they figured out, hey, I could eat your meal; I can have it redirected to me. And you think, if that’s a risk with something as simple as a wheeled robot that isn’t sentient or AI-driven in any way, how much more concerned do we have to be about safety of flight, safety of driving? One rotten apple really would spoil the barrel.
[Joe] (06:42)
Yeah. And since we’re on the topic of artificial intelligence and robotics and things like that, I do want to bring up a pressing point here in the States, and that is that the U.S. government published a policy statement, America’s AI Action Plan. It really has three pillars, and one of them gets to the security and safety that we’re talking about in the software development process. Pillar one is that the U.S. needs to win the large language model race, the AI race itself.
Pillar two, though, gets to what we’re talking about: the infrastructure, the energy grid, the data centers, and how those need to be reliable and secure. Because, as they say, there really is no chance of AI dominance in the world if you don’t have energy and data center resilience. Even in those industries, you need very reliable systems, like cooling systems and other things, that help the data infrastructure run and the energy get delivered.
[Paul] (07:41)
The idea that we’ve come to accept, at least for things like laptops and servers, is that, well, if something goes wrong, we can just wrap another layer of runtime protection around it. We can stick in some antivirus. We can stick in some threat protection. We can stick in some address space layout randomization, and all of this sort of stuff. You don’t quite get that luxury with embedded systems, do you? Because every time you wrap it in another layer of cotton wool, you’re taking up more memory.
You’re taking up more time. You can’t go, well, we’ll just add another 32 gig of RAM, another 64 gig of RAM, when you might be talking about systems that only have 64 kilobytes of RAM because they were built to be tiny, to be low power, and they were built 25 years ago to last indefinitely.
[Joe] (08:28)
Exactly. I think the cost of patching, the burden or the economic hurdle in these industrial use cases, is high. And what that means is we want to put more and more at the front of the equation, at build time. It’s the whole analogy of playing whack-a-mole: a lot of times we just find a vulnerability, patch it, and issue an update. And you don’t quite have that luxury, for the reasons we’re discussing. Absolutely, there is a better way. There’s a way to build security into the process in the first place.
And it isn’t strictly limited to having the most sophisticated, advanced software development processes. You also need to supplement that with good techniques that can help prevent exploitation even when a patch is not available; that, I think, is ultimately part of the point. And that really gets to how you do things at build time.
[Paul] (09:22)
So Joe, there’s a sort of trinity of parts to the RunSafe security platform. If I can mention products by name, I don’t see any reason why I shouldn’t. And I’ll go for them in reverse order if I may. There’s RunSafe Monitor, which I hope we’ll have time to get onto later, but that’s more about watching software while it’s running. Before that, there’s RunSafe Protect, which aims to build protection into the software without wrapping it in so many layers of extra stuff that it performs differently. And even before that, there’s RunSafe Identify, which makes sure that you are building the right stuff to start with. Let’s start at the beginning with RunSafe Identify, which is a kind of super special version of Software Bill of Materials control, isn’t it?
[Joe] (10:12)
It is. What we try to do at RunSafe is identify risk, protect code to prevent exploitation, and then, as you say, monitor software. If you think about identifying risk, you need to take a holistic view when you’re looking at your Software Bill of Materials in these embedded systems. If you don’t know everything that’s in your software, and you can’t anticipate the extent of the vulnerabilities before you ship, then you’re going to have a hard time proving that the software is reliable in the first place.
So you identify and enumerate all the precise ingredients. Although it sounds simple on the surface, what happens in these embedded systems is that when you compile source code into software binaries, you are pulling in different packages that might have certain kinds of dependencies in them that you weren’t even aware of, and those get pulled into your final finished product. Build time, when you can see all of that stuff going into your binary, is the best time to create a Software Bill of Materials.
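To make the build-time idea concrete, here is a minimal, hypothetical sketch, not RunSafe’s implementation, of how a build can capture its own ground truth: a wrapper installed in place of the linker that logs every object file and library the build actually hands to the link step, then passes control to the real linker. The REAL_LINKER path and the log file name are invented for the example.

```c
/* sbom-ld.c: hypothetical build-time SBOM hook, for illustration only.
 * Installed in place of the real linker, it records every object file
 * and library the build actually hands to the link step, then execs
 * the real linker so the build proceeds unchanged. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define REAL_LINKER "/usr/bin/ld"   /* assumed path to the real linker */

static int has_suffix(const char *s, const char *suf)
{
    size_t n = strlen(s), m = strlen(suf);
    return n >= m && strcmp(s + n - m, suf) == 0;
}

int main(int argc, char *argv[])
{
    FILE *log = fopen("sbom-ingredients.txt", "a");
    if (log != NULL) {
        for (int i = 1; i < argc; i++) {
            /* Objects, archives, and -l flags: the ground-truth
             * ingredients of the binary being linked right now. */
            if (has_suffix(argv[i], ".o") || has_suffix(argv[i], ".a") ||
                strncmp(argv[i], "-l", 2) == 0) {
                fprintf(log, "%s\n", argv[i]);
            }
        }
        fclose(log);
    }

    argv[0] = REAL_LINKER;          /* hand over to the real linker */
    execv(REAL_LINKER, argv);
    perror("execv");                /* reached only if the exec fails */
    return 1;
}
```

In a Makefile you might point LD at the wrapper, so sbom-ingredients.txt accumulates exactly what each link pulled in. A real build-time SBOM tool records far more, such as versions, hashes, and transitive dependencies, but the principle of capturing ground truth at link time is the same.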
[Paul] (11:16)
You have some fascinating statistics that you’ve shared with us before, maybe you want to do it again, about the number of ingredients that you get in a typical modern software recipe. We’re talking about not just tens or hundreds, but maybe thousands of different ingredients, and each of those ingredients could have its own chain of ingredients that it just brings in without you even realizing.
[Joe] (11:41)
Exactly right. There are things that the compiler will pull in. There are things based on the operating system settings that might change the output of what goes into the overall package that you’re shipping.
[Paul] (11:52)
Absolutely. Anyone who’s a programmer will know that sinking feeling when you look at a C source file and you see #ifdef this, #else, #ifdef that, #ifdef the other, and you think, oh dear. Like: you’ve got an old ARM processor that doesn’t have floating point? I know, I’ll use this whole new library… that you never even knew was there before. That can be quite an unpleasant surprise when it comes to post-production, and hence potential pain.
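As a contrived illustration of the trap Paul describes, here is a compilable C fragment in which one configuration macro, NO_HARD_FLOAT (made up for this example), silently swaps in a different code path, standing in for an entire soft-float library that a glance at main() would never reveal.

```c
/* A contrived example: flip one configuration macro and the binary
 * quietly gains a different code path, standing in here for an entire
 * soft-float library. NO_HARD_FLOAT is invented for this example. */
#include <stdio.h>

#ifdef NO_HARD_FLOAT
/* In a real port this routine would come from a separate soft-float
 * library, adding components to the shipped binary that a reading of
 * main() alone would never reveal. */
static double target_mul(double a, double b) { return a * b; }
#else
/* Hardware FPU path: no extra dependency pulled in. */
static double target_mul(double a, double b) { return a * b; }
#endif

int main(void)
{
    printf("2.5 * 4.0 = %.1f\n", target_mul(2.5, 4.0));
    return 0;
}
```

Build with -DNO_HARD_FLOAT and the shipped binary contains code, and therefore SBOM entries, that the default build does not, even though the source file looks identical either way; a build-time SBOM sees the difference.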
[Joe] (12:19)
100% right. And with that, there are other folks who will derive a software bill of materials from the binary. That’s really trying to taste the food after it’s been cooked, tasting the cake and trying to derive exactly, precisely, all the ingredients that went into it. And, you know, there are some good taste testers, I think, that can identify a good number of them. But in the software world, you get about 80% right.
[Paul] (12:41)
So does that work by looking for things like strings that might have a version number? You find the text “OpenSSL 3.5.1”, and you kind of guess that that’s probably the version that was compiled in.
[Joe] (12:54)
Yeah, exactly. And you might ultimately rely on past experience, or other code and heuristics that suggest, well, if I see these components, then I likely have this other package. And in the end, what you end up with is a software bill of materials, but probably one with a lot of false positives.
[Paul] (13:11)
And a lot of false negatives. If you don’t even know what to look for in the first place, you’re never going to find it.
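For a feel of how that “tasting the cake” approach works, and why it misses things, here is a toy scanner, illustrative only and not any real SBOM tool, that hunts a binary for text of the form “OpenSSL x.y.z” and reports what it finds.

```c
/* strsniff.c: a toy "taste the cake" scanner. It searches a binary for
 * printable text of the form "OpenSSL x.y.z" and guesses the component
 * version from it. Illustrative only; real binary-SBOM tools use far
 * richer heuristics and still face the false positives and negatives
 * discussed above. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    static const char needle[] = "OpenSSL ";
    char buf[4096];
    size_t n, keep = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) { perror(argv[1]); return 1; }

    while ((n = fread(buf + keep, 1, sizeof buf - keep, f)) > 0) {
        size_t len = keep + n;
        for (size_t i = 0; i + sizeof needle < len; i++) {
            if (memcmp(buf + i, needle, sizeof needle - 1) == 0 &&
                isdigit((unsigned char)buf[i + sizeof needle - 1])) {
                /* Print the version-looking text that follows. */
                size_t j = i + sizeof needle - 1;
                while (j < len &&
                       (isdigit((unsigned char)buf[j]) || buf[j] == '.'))
                    j++;
                printf("possible component: %.*s\n", (int)(j - i), buf + i);
            }
        }
        /* Keep a small tail so a match split across two reads is seen. */
        keep = len < sizeof needle ? len : sizeof needle;
        memmove(buf, buf + len - keep, keep);
    }
    fclose(f);
    return 0;
}
```

A leftover banner from a library that was configured out still matches (a false positive), while a statically linked component whose strings were stripped never will (a false negative), which is exactly the roughly-80% problem Joe describes.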
[Joe] (13:14)
Exactly.
And so this is why I think a lot of people in the industry will say the best Software Bill of Materials gets created as you’re building the binary itself, at least in these embedded systems. When you stitch together those object files into a binary, you want to have a robust, complete Software Bill of Materials that says: yes, this is exactly what went into my binary. Why? Because with all those false positives and false negatives that you highlight, the problem just compounds, given that what you want to do next is associate the vulnerabilities with the components you actually have in your binary, in your end product.
If you don’t have the bill of materials right, you’re relying on imperfect information to find vulnerabilities, and you won’t find the whole story in a complex software supply chain where you have open source software and you have third-party developers.
You have contractors, you have your in-house developers. And as you said, as a lead-up to this whole discussion, you end up having hundreds or thousands of components for a small little piece of firmware that gets shipped, and 80% of them are coming from the outside. So how do you identify, then, where your risk is? How do you build stable code? How do you ship reliable software that’s deterministic and meets those performance metrics? Part of the discipline is knowing exactly what goes in. And as we have said a couple of times on this podcast, Paul, and we’ve said it today: it’s hard to know unless you really capture that data at the ground-truth moment, as you’re implementing the software itself.
[Paul] (15:02)
It’s easy to make an honest mistake when you’ve written #include some-file-name and you kind of assume that the file you’re going to pull in is a library that you’ve been using for years. But today, because something else changed in the system, or some other developer decided to upgrade something, or some automatic security fix happened, suddenly what you’re building into the software may not be exactly what you expect. And that’s the goal of RunSafe Identify, isn’t it?
[Joe] (15:32)
Yeah, that’s part of it. It’s to identify the exact components that are in your software, so that you can link all those individual components to their vulnerabilities and understand what kind of risk, what set of vulnerabilities, you are looking at, and then devise a way to go about ensuring that you build a very reliable set of software.
In addition to enumerating what’s in there, there is an opportunity to add in security, but there’s also an opportunity to enforce good discipline, good policy, corporate governance in the build process as well, for exactly the reasons you say. You want to be able to enumerate exactly what’s in that software, and then use that to help identify risk and enforce policy, whether that’s license checks for open source license violations, or vulnerabilities that need to be resolved or mitigated or somehow addressed to prevent the exploitation thereof.
[Paul] (16:27)
So Joe, let’s move forward to the next stage in the equation, which is RunSafe Protect. Now, technically, I suppose you can say that’s runtime protection, because it guards against things like potential vulnerabilities or potential exploits or software misbehaviors at runtime, but it is not injected into the software at runtime like antivirus and process monitoring and memory spying and all of those things that we’re used to in laptop EDR software. Do you want to say something about how that works, and why?
[Joe] (17:05)
Absolutely. So what we do at RunSafe is we monitor what gets built by collecting some metadata about all the functions that are created and used in the software binary that gets produced.
[Paul] (17:21)
And even for a tiny, tiny program, say a valve actuator, there might be hundreds or even thousands of functions, even in a modestly sized C program that compiles to a few tens or hundreds of kilobytes at the most.
[Joe] (17:35)
I think the average C or C++ application will have 217 functions. Many will have a lot more and some will have fewer, but the average, I think, is 217.
[Paul] (17:48)
And that’s before it calls all the other components that went in there that each have their 217 functions.
[Joe] (17:54)
Exactly. It compounds, as you say. And so what we do is we identify all those individual functions at build time so that our process can then relocate where those functions get loaded uniquely every time the software loads out in the field. And the benefit of that is to then provide runtime protection by preventing exploitation of weaknesses or vulnerabilities that could in fact still exist despite your testing.
And despite all your best efforts, we’re preventing exploitation at runtime, as you say. The key, for those who are fans of the compilation and linking process, is that we intercept the link step. We measure and collect information about all those functions so that at load time, when the software loads on a device out in the field, we can uniquely relocate just those functions, and not the data, inside that binary itself.
When you have fine-grained randomization at the function level, like what RunSafe does, even if you find one card in the deck, you still don’t know the order of all the other cards in that deck. And so what that means is you are in fact denying the attacker the determinism they need for their exploit to work in the first place. And that’s what’s kind of so beautiful about this.
I like to say the software world is built on determinism. If you produce one copy of software, or firmware, or what have you, and you stamp that out a million times, then prior to RunSafe, prior to address space layout randomization, identical memory layouts would exist in all one million copies. The functions would all load in the exact same spots. With address space layout randomization, all you’re doing is shifting the memory by a little bit, and everything else remains in the same order. With RunSafe, all the functions are relocated uniquely every time the software loads. Even though the software remains functionally identical, the determinism is broken from the attacker’s point of view. It’s logically unique. So the exploit that works in the lab no longer works on the device out in the field. You’ve denied the attacker the determinism they need for their exploit to work.
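Joe’s card-deck analogy can be modelled in a few lines of C. The sketch below only shuffles a table of function pointers at startup, a toy stand-in rather than the relocation of function code that Joe describes, but it shows the key property: the program behaves identically on every run, while the layout an exploit would need to predict changes on every load.

```c
/* Toy model of the card-deck analogy: the same "functions" exist in
 * every copy, but each run deals them into fresh slots. Real
 * function-level load-time randomization relocates the code itself;
 * this sketch only shuffles a pointer table, for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void open_valve(void)  { puts("valve opened"); }
static void close_valve(void) { puts("valve closed"); }
static void read_sensor(void) { puts("sensor read");  }

enum { OPEN, CLOSE, SENSE, NFUNCS };

int main(void)
{
    void (*fns[NFUNCS])(void) = { open_valve, close_valve, read_sensor };
    void (*deck[NFUNCS])(void);     /* the shuffled "memory layout"    */
    int slot[NFUNCS];               /* where each function ended up    */
    int order[NFUNCS] = { OPEN, CLOSE, SENSE };

    /* Fisher-Yates shuffle, reseeded on every load: the same cards in
     * a fresh order each time the program starts. */
    srand((unsigned)time(NULL));
    for (int i = NFUNCS - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (int i = 0; i < NFUNCS; i++) {
        deck[i] = fns[order[i]];
        slot[order[i]] = i;
    }

    /* Functionally identical on every run: each call still reaches the
     * right routine, but the layout an exploit must guess has changed. */
    printf("open_valve landed in slot %d this run\n", slot[OPEN]);
    deck[slot[OPEN]]();
    deck[slot[CLOSE]]();
    deck[slot[SENSE]]();
    return 0;
}
```

Each run prints a different slot for open_valve while producing the same valve and sensor behaviour; scaled up to every function in a real binary, that is the determinism an exploit writer loses.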
[Paul] (20:15)
And this reshuffling of the deck, but with the same 52 cards in it, will happen every time you actually load that particular binary.
[Joe] (20:24)
Yeah, every time the software loads. And the beauty is we’re not adding more cards. We’re not taking 52 cards and making it 78 cards.
[Paul] (20:33)
And I think what a lot of people forget about ASLR on something like, say, Windows, as useful as it is and as necessary as it is, is that once you’ve loaded a program once, say Notepad, or more importantly a DLL that every program in the system uses, like kernel32.dll, then unless and until you reboot the system, once you know where it loads today, that’s where it will load tomorrow, and a week later, and a month later if you haven’t rebooted. And people forget that. They think, when I exit my browser I flush all the cookies, that’s a safety thing, so presumably when I load it again, I’ll also get new ASLR. Well, generally speaking, you won’t.
[Joe] (21:15)
Going back to the fundamental principle we had when we started RunSafe: it was to think about an asymmetric way to change cyber defense for critical infrastructure and for the embedded software that gets deployed there. So think about it this way. You can make a simple change in your build process that doesn’t cost you build time, and that is reliable in the sense that you’re not changing the functionality of the software and you’re not adding new software onto the device. And with it you can prevent an entire class of attacks, an entire class of vulnerabilities, that represents the lion’s share, maybe 70%, of the vulnerabilities in critical infrastructure, in compiled code, in these embedded systems. If you can do that, then you are fundamentally shifting the economic equation of cyber defense. That is the principle on which RunSafe was founded.
We wanted an asymmetric shift in cyber defense: denying attackers the determinism they need to exploit your devices, so that your long-lasting assets that get shipped out into infrastructure can last five, 10, 15, 20 years and be immune from cyberattack even when a patch is not available.
[Paul] (22:30)
And Joe, these RunSafe products, Identify and Protect, they don’t require a huge culture or technology shift inside your development team or your organization, do they? It’s not like you’re asking people to rewrite their software, go out and learn a whole new language, or throw out the only compiler they’ve got for a particular device and try to knit another one. So it is comparatively easy to integrate this into your continuous integration/continuous delivery process, if you already have one, compared to something like saying: right, no more C, everything’s in Rust, even though we don’t have a Rust compiler for all the embedded devices that we support.
[Joe] (23:16)
Exactly right. If you can make an incremental change to your build process and have an asymmetric change in your cyber defense, and all the while have a more complete understanding of all the individual components that go into your software, you have made a dramatic shift in your security posture by simply adding a couple of steps to your build process to take advantage of this kind of tooling. Like I said, the premise was to have an asymmetric shift in cyber defense.
But doing it in a way that makes it very simple to adopt, very easy to implement, and without the downstream effects on system performance on the devices out in the field. The alternative people face today is the challenge of: do I rewrite all my software in a different language? Should I rewrite it in Rust instead of C++? And the answer is, you don’t have to. And that is a massive shift.
Because if you’re changing the language, then you’re thinking about the compiler, you’re thinking about the test harness, you’re thinking about the hardware, you’re thinking about the compute that you need. With all of those things, you’re almost re-architecting your products in the first place. That’s not really viable in some of these industries. It’s not viable for airworthiness. You can’t just bring in new hardware and ship it out next month. You can’t do that in the auto industry. You certainly can’t do that in manufacturing plants, where we make long-term investments to derive tremendous output from our manufacturing production.
[Paul] (24:45)
And in some of those environments, Joe, if I’m not wrong, there are regulatory problems: even if you wanted to rewrite some of your software in, say, Rust, the compilers for those languages might not yet be ratified. So even if you think they produce better, safer, cleaner, more memory-safe code, you may simply not be allowed to use them, because, understandably, the regulators figure: we want to go with what we know, rather than potentially introducing so many changes that things actually get worse rather than better.
[Joe] (25:20)
Exactly. Again, our take at RunSafe has been that you won’t solve every single vulnerability, but you can prevent the exploitation thereof, thereby maintaining the determinism that the safety standards, the security standards, and the compliance standards require to ensure that these devices act as you expect out in the field.
[Paul] (25:41)
And you’re not being like the notorious three monkeys, are you? See no evil, hear no evil, speak no evil. You’re not producing these products so that people can go, you know what, I’m just going to carry on with all the bad habits of the past and RunSafe will somehow fix it for me. What you’re doing is saying: if you’re going to spend time rewriting what code you can, or adopting Secure by Design practices, why don’t we make it easy for you to do so?
So that you have more time to get a patch ready and pushed out, so that you don’t have this post-production panic. You don’t have the SharePoint situation, where the fix had to be fixed again the next month.
[Joe] (26:22)
It would be a great world if our mistakes…
[Paul] (26:25)
Ha! Sorry, that’s not really funny, but I know what you mean.
[Joe] (26:31)
Well, it would be a great world if we could just cover up our mistakes, cover up our vulnerabilities, and no one ever did anything with them. But unfortunately, there are vulnerability researchers, there are red teams, there are customers, there are nation states, there are hacktivist groups, there are cyber attackers of different flavors. There are those that do ransomware. There are those that seek money for their findings in bug bounty programs and whatnot.
[Paul] (27:03)
But apart from those few opponents that we have…
[Joe] (27:09)
Someone’s gonna find the vulnerability, that’s the point.
[Paul] (27:12)
Exactly. And they’re not necessarily going to tell you, especially if they’re a cybercriminal who thinks they can make a million dollars out of it. Or, even more importantly, if they’re a state-sponsored actor who thinks, you know what, in nine months’ time, 12 months’ time, 18 months’ time, this could come in very handy.
[Joe] (27:33)
And of course, we’ve talked about the prowess of China’s cyber research arm, if you will, and the attackers that come out of the IRGC in Iran, and North Korea, and even Russia. These are formidable research teams. They’re looking to exact some kind of outcome, some kind of effect, at some point in the future, at a time of their choosing, because it aligns with their interests, their ideology, their nation-state plans or what have you.
At RunSafe, we want to make critical infrastructure safe so that the economy can thrive and we don’t give the upper hand to China or other potential adversaries of the United States or Western-aligned countries. That mission is something that keeps us going, and it’s true across RunSafe and our team. And the idea that you can add in security at build time at relatively low cost, in both performance and economic terms, and have an asymmetric shift in cyber defense, is a great equation for everybody, including for the national security reasons that exist to prevent exploitation of critical infrastructure.
[Paul] (28:48)
And ironically, paradoxically, I don’t know what the right word is, astonishingly, amazingly, brilliantly, in fact: adopting technologies like RunSafe Identify and RunSafe Protect could actually give you the extra time you need to make the long-term changes that you want. So that instead of going, golly, I have to rewrite this whole thing in Rust, it’s going to take me forever, I’m never going to get it finished, you can actually bring some new culture into your organization without affecting your business, without giving your customers cause for alarm, and, as we said in the title at the beginning, without any post-production panic. So Joe, do you want to finish off? Strictly speaking, it’s not build-time protection, but it integrates with RunSafe Protect in such a way that I like to think of it that way. Do you want to say something about RunSafe Monitor?
[Joe] (29:45)
If RunSafe Identify helps you identify risk within your software code and your software supply chain, and RunSafe Protect allows you to prevent exploitation of software at runtime, what RunSafe Monitor is designed to do is help you identify indicators of compromise and potential bugs in your software by collecting information about a software crash. And this is passive monitoring that doesn’t cause runtime overhead or slowdown of any sort.
[Paul] (30:17)
So this isn’t like those Windows solutions that poke instrumentation instructions into your code at load time so they can call some hundred megs’ worth of antivirus or whatever. This is a monitoring system that can tell you either, well, you’ve had a problem and this is what we learned about it, which means you can fix it more quickly; or perhaps even say, we’ve seen some anomalies, maybe not an exploitable vulnerability, but…
[Joe] (30:29)
Exactly it.
[Paul] (30:47)
…if you want somewhere to look next, this is a good place to start. So it’s sort of preventative as well as merely detective, if that’s a word. I don’t think it is, but it is now.
[Joe] (30:57)
And the original vision for RunSafe Monitor was much like a signal flare: just send up a message that says, wow, we just captured something you need to be aware of. We sit there passively until a software crash happens. When a crash happens, we collect about 20 state variables at the point of the crash, something far short of a core dump, and you can tell a lot from that. You can start to get a good indication of what exactly went wrong at the moment of the crash, and whether it should be looked at by your security operations team, because it might be an indication of compromise, or whether it looks more like a bug that needs to go back to the development team to see if something needs to be fixed.
Incidentally, what we found is that it’s not just in production that RunSafe Monitor is useful. It also comes in handy during testing. RunSafe Monitor is meant not just for runtime production monitoring, but also for those simulations where you’re testing things out in your test environment. The idea is to give feedback to the development team or the security operations team, and give them a heads-up that there could be an indication of compromise, or an underlying weakness or bug in the software.
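As a rough sketch of that signal-flare idea, here is a minimal POSIX C program, not RunSafe Monitor itself, in which a handler sits passively until a crash and then emits a tiny report, a few state variables rather than a full core dump, before exiting. The fields captured here are chosen for illustration; the product’s roughly 20 state variables aren’t documented in this episode.

```c
/* A minimal "signal flare": sit passively until a crash, then emit a
 * tiny report, far short of a core dump, and exit. The fields captured
 * here are invented for illustration. */
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void crash_flare(int sig, siginfo_t *info, void *ctx)
{
    (void)ctx;
    /* A production handler would restrict itself to async-signal-safe
     * calls; this sketch simply formats a small report and writes it. */
    char msg[128];
    int n = snprintf(msg, sizeof msg,
                     "FLARE: signal=%d fault_addr=%p pid=%d\n",
                     sig, info ? info->si_addr : NULL, (int)getpid());
    if (n > 0)
        write(STDERR_FILENO, msg, (size_t)n);
    _exit(1);   /* report sent; don't keep running in an undefined state */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = crash_flare;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);  /* arm the flare */

    volatile int *bad = NULL;
    *bad = 42;                      /* deliberate crash to trigger the demo */
    return 0;
}
```

On a crash the process emits one short line, the signal number and faulting address here, which is the kind of compact telemetry that can be shipped to a security team without the size, or the secrets, of a full core dump.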
[Paul] (32:11)
Because core dumps come with fascinating challenges all of their own, don’t they? They’re great for debugging, but in the real world they can actually be a bit of a cybersecurity panic situation themselves. Because the idea is, hey, let’s capture everything so that they can completely reconstruct this back at HQ, including passwords, authentication tokens, all sorts of stuff that was only ever supposed to be in memory and was never supposed to be saved. And I guess the other problem, too: anyone who’s had Windows tell them, your system has blue-screened and now we’re going to prepare the crash dump, knows they can be absolutely enormous, and good luck, A, fitting that in, and B, being able to download it from some of the embedded devices out there.
[Joe] (33:00)
Yeah. And being able to store some of that information, or send it off to your SIEM, without all the extra cost or risk associated with a full core dump itself. We’re talking really small bits of data here, you know, some limited number of state variables that can really give you good insight into what happened. A core dump, obviously, can be a massive file and can cost a lot in terms of data upload or download, depending on how you look at it. If you can collect this information at runtime and use it to inform people, then you’re ahead of the curve in that regard as well.
[Paul] (33:37)
If you can get 98% of the knowledge with 2% of the intervention, the size, and the risk of exposing information you shouldn’t, what a great thing that is. Joe, I’m conscious of time, so I think we’d better wrap up, because I could easily just listen to you for another 30 minutes on this. And I really just want to say thank you: you have once again shown yourself to be the kind of person who says, you know what, I have these products to sell you because I think the problems they solve are really important, and this is a good way to solve them; rather than saying, hey, I think these problems are important because I just happen to have the products to sell you. So if any of our listeners are interested in knowing more, they can just head to runsafesecurity.com.
[Joe] (34:25)
Yeah, come to RunSafeSecurity.com and we can help you identify risk, protect code, and monitor software.
[Paul] (34:31)
Well said, Joe. Thank you so much. That is a wrap for this episode of Exploited: The Cyber Truth. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like us and share us on social media as well. Please also don’t forget to share us with everyone in your team so they can hear Joe’s words of wisdom. Thanks for listening, everybody. And remember, stay ahead of the threat.


