In this episode of Exploited: The Cyber Truth, Paul Ducklin sits down with RunSafe Security Founder and CEO Joseph M. Saunders and Brownstone Consulting CEO Cordell Robinson to explore how compliance frameworks like NIST are evolving from checkbox exercises into true security enablers.
Cordell explains why compliance should extend beyond paperwork to cover people, processes, and technology, while Joe highlights the role of secure-by-design development, vendor accountability, and Software Bills of Materials (SBOMs) in strengthening embedded and operational systems. Together, they examine how continuous monitoring, vulnerability transparency, and supply chain visibility are reshaping trust across industries.
The conversation highlights practical steps organizations can take now:
- Conducting meaningful gap analyses
- Embedding continuous monitoring into daily operations
- Strengthening vendor due diligence and SBOM practices
- Investing in role-specific security training
- Preparing for increased regulatory accountability
If you build embedded systems, support critical infrastructure, or supply software to government agencies, this episode offers a clear framework for turning compliance into operational resilience and building trust in an era of rising regulatory and nation-state pressure.
Speakers:
Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.
His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.
Joseph M. Saunders: Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.
Guest Speaker – Cordell Robinson, CEO, Brownstone Consulting
Mr. Robinson is a decorated U.S. Navy veteran and former Senior Intelligence Analyst who holds degrees in Computer Science, Electrical Engineering, and a Juris Doctor from Georgetown Law. Now a cybersecurity executive and CEO of Brownstone Consulting Firm, he specializes in compliance, governance, and regulatory frameworks, having led major initiatives across the Department of Defense and the Department of Commerce aligned with NIST, FISMA, and OMB standards. He is an ISSA member with multiple industry certifications and is passionate about advancing cybersecurity through automation and leadership.
Episode Transcript
Exploited: The Cyber Truth, a podcast by RunSafe Security.
[Paul] (00:07)
Welcome back, everybody, to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Welcome, Joe.
[Joe] (00:20)
Hello, Paul. Great to be here.
[Paul] (00:22)
And welcome to our guest for this episode, Cordell Robinson, CEO of Brownstone Consulting Firm. Welcome, Cordell.
[Cordell] (00:31)
Thank you. Thank you so much for having me.
[Paul] (00:33)
It’s a great pleasure. Now, Cordell, our title for this episode is, “From NIST to Nation State: Securing Embedded Systems Through Compliance and Trust.” So I’d like to kick off with what to you is probably an annoying question, and that is to look at what we really mean when we say compliance. Because compliance has a bit of a bad name with some people, doesn’t it? They think it’s just something that people do to get the certificate so they can put it on the wall. They’re just checking a box. But if you’re doing it that way, you’re doing it all wrong, aren’t you? Compliance should not be about the certificate and the paperwork only.
[Cordell] (01:15)
Exactly. People get compliance misconstrued, especially a lot of tech developers and engineers, because of the way that compliance has been implemented just to get their certificate. But in reality, if you look at the true definition of compliance and what it is, compliance is the umbrella to all security. Compliance covers people, processes, and technologies.
If you go through the entire gamut of compliance, you have to make sure that people are properly trained and aware of the risk. You have to make sure that processes are in place and that people are actually following those processes. And then technologies: ensuring that you have the right technologies in your environment, that you do not have overlap of technologies, and that your technologies are configured in a secure manner so that you lower your risk.
[Paul] (02:09)
Now, when you say there’s no overlap of technology, I guess one important thing that covers, which can be hard for people to find, is what we call shadow IT, where you tell someone they can’t or shouldn’t do something because it’s insecure, so they go out and put nine dollars on their credit card to do it via some public cloud service anyway.
[Cordell] (02:31)
Overlap is usually several tools that do the same thing. Right? You’re spending a hundred thousand here, 50,000 here, 25,000 here on all of these different tools. But if you had done your compliance correctly, you would have had someone come in and look at your tech stack to see what tools work together and what tools actually overlap. You’re saving money, and you’re lowering your risk, because now you have fewer tools that you have to maintain to ensure that they’re secure.
[Paul] (03:01)
Yeah, so not only is it cheaper, it’s safer, because there are fewer vulnerabilities that you have to chase down if one of those tools has a problem. And it also means, presumably, that the results of things like software builds and software tests will tend to be more believable and more manageable, because you won’t get 72 opinions on the same thing.
[Cordell] (03:09)
Exactly. Especially when you do penetration tests and vulnerability scanning, sometimes you get a lot of repeat vulnerabilities, or the same vulnerability over and over, just because it’s going across different platforms or different technologies. Well, now you have to patch each of those, and sometimes they take different types of patches because the technologies are different.
[Paul] (03:40)
So as the old saying goes, if you can’t measure it, you can’t manage it. And the fewer things you need to measure, the easier that management of those things is in the end.
[Cordell] (03:52)
Exactly. I remember, many years ago, a lot of organizations would have an issue, and so they would just buy a firewall. And then they’d have another issue, and they’d buy another firewall.
[Paul] (04:03)
Heck! A firewall! Doesn’t matter how it’s set up, it just has to be there.
[Cordell] (04:10)
Right. And so now you have three, four, five firewalls, but someone’s still penetrating your network, and they’re inside your environment. You didn’t find out what the actual problem was. What do your security stack and your tech stack look like? How are they getting in? That’s what you should be looking for. Not, OK, well, this firewall isn’t that strong, let me buy another one because it’s going to provide an extra layer of protection. No, if you configure it properly, you’re not going to need all of those layers.
[Paul] (04:37)
Now, certainly in the United States, my understanding is that a lot of the advisory slash regulatory drivers for cybersecurity compliance come from NIST, the National Institute of Standards and Technology. Now, in the past, traditionally, they had what you might call compliance checklists, which did allow people to do checkbox compliance: tweak everything so we just get through the things we know we’re going to be tested on. They’re now moving those frameworks to something a bit more real-world and a bit more of a security enabler, aren’t they? So, do you want to say something about that?
[Cordell] (05:08)
Uh-huh.
Yes, so the security frameworks have evolved over the years. So they have the security technical implementation guides. NIST 800-53 is one of them. There are several others, but 800-53 is the big one. And so what they have are controls, control families. And each of these controls covers something. There’s a technical set that covers your technologies. And then there’s a management set that covers your management. And then there’s operational, to cover operations.
Over the years, they’ve made these controls more robust and more detailed, so that you are able to identify what your issues are. At first, it was just checking the box; it was a checklist. But now with these controls, if you make sure that they’re implemented and operating as intended, then you’re going to have a more secure environment.
One of the big issues is there are a lot of people that don’t know how to properly assess these controls to ensure that they are implemented. When you test a control, you’re not just looking for a document to say, “since you have this procedure, you’re fine, you’re good to go.” No: are you actively practicing these procedures? If you have a policy, is everyone abiding by the policy? If you have these settings in your technology, are these settings factual? Assessors ask for screenshots, but those can be doctored. I say, ask for screenshots, but also do some shoulder surfing and say, pull up this configuration. Let me see it live.
[Paul] (06:48)
Yes, a screenshot is very much, we were compliant at three minutes past 15 o’clock on the third of the month. Phew, the assessors have gone, okay, let your guard down now, folks. That should definitely not be the way you work. You should aim to build a process that is manageable and secure, so that at any moment you’re very likely to comply just as a matter of the way you operate. That’s a much better way to do it, isn’t it?
[Cordell] (06:58)
Exactly.
It sure is a better way. So what NIST has done also is they’ve introduced what’s called continuous monitoring. Continuous monitoring is continuously testing these controls. The workaround that some have done is, OK, we’re going to do continuous monitoring, like, annually. Well, you should be doing continuous monitoring every single day.
[Paul] (07:36)
Yes, annual continuous monitoring sounds like a bit of a contradiction in terms.
[Cordell] (07:41)
It’s very expensive, or very costly, until you begin to implement processes and tools and different frameworks so that you can do proper continuous monitoring. And then you’re not going to have a problem. When auditors come in, you’re not preparing for the audit. You’re ready for the audit. When it’s time for your assessment, you’re not preparing for the assessment. You’re ready for the assessment. When I first started in corporate, I remember when the VP or CEO was coming, and so everybody was dressed up and minding their Ps and Qs and all these different things.
I said, shouldn’t we be acting this way every single day, and not just when an executive is coming through? That doesn’t make any sense. If you’re going to be doing something with due diligence and doing it properly, that should be your practice every single day. And it’s the same thing in cybersecurity, same thing in compliance.
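[Editor’s note] As a rough sketch, the every-single-day continuous monitoring Cordell describes often looks like a small scheduled job that tests each control and records timestamped evidence. The control IDs and checks below are hypothetical stand-ins for real configuration probes, not anything prescribed by NIST:

```python
"""Minimal sketch of daily continuous monitoring (hypothetical checks)."""
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List


@dataclass
class ControlCheck:
    control_id: str            # e.g. a NIST 800-53 control identifier
    description: str
    check: Callable[[], bool]  # returns True when the control is operating


def run_daily_checks(checks: List[ControlCheck]) -> list:
    """Run every check and return timestamped evidence records."""
    results = []
    for c in checks:
        results.append({
            "control": c.control_id,
            "description": c.description,
            "passed": bool(c.check()),
            "checked_at": datetime.now(timezone.utc).isoformat(),
        })
    return results


# Hypothetical checks standing in for real configuration probes.
checks = [
    ControlCheck("AC-2", "No stale user accounts", lambda: True),
    ControlCheck("SI-2", "All patches applied within SLA", lambda: False),
]

for r in run_daily_checks(checks):
    status = "PASS" if r["passed"] else "FAIL"
    print(f"{r['control']}: {status} ({r['description']})")
```

Run on a daily schedule, the evidence records accumulate into exactly the audit trail that means you are ready for the assessment rather than preparing for it.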
[Paul] (08:30)
Joe, do you want to say something at this point about secure by design, which I know you’re very keen on, which is essentially a way of saying, let’s do this from the start. So we’re not trying to retrofit checkpoints onto something that we’ve just been doing for years and don’t want to change. Because I suspect, particularly in embedded development, where there are all sorts of esoteric complexities, that a lot of engineers go, no, this is just going to get in the way.
And it will if you’re continually having to take a day off to sweep things under the carpet to look good when the executives arrive. But it needn’t be difficult if you get things right, or righter, from the very outset.
[Joe] (09:12)
We’ve been talking about those who are managing the assets inside their enterprise, and certainly they have a duty to stay in compliance. And with that, I believe that their suppliers play a role as well. For folks who build systems or deliver software, especially software used by the U.S. government, having security built into the software helps everybody and raises all boats, if you will.
Secure by design is a good development practice, and IT managers and OT managers can ask their suppliers what they have done. One of the challenges, and the importance of having continuous evaluations and monitoring and updates even annually, is that sometimes the controls change. So for example, with NIST 800-53, I think it was with revision 5 that some new controls were introduced around application self-protection. And so there are ways to offer those protections even when the security team itself doesn’t have a compensating control; it may be that the vendors have provided it. So I do think it’s a holistic view. Certainly, the internal teams need a good discipline, and part of that discipline is asking vendors what their security policies are and what they’ve built into the software and the systems they deliver.
[Paul] (10:33)
And that’s RASP, isn’t it? R-A-S-P, runtime application self-protection. The idea of an application that has some protections built into it. Now, in a Windows environment, that might be ASLR plus an EDR program that you’ve injected, megabytes or gigabytes’ worth of protective cocoon. You don’t have the luxury of that in the embedded world, but there are still some tools and techniques you can use. But those are not a replacement for doing things right in the first place.
The idea is that these go together, so that it’s less likely that your application self-protection will be tested. But if you do overlook something, you might have a vulnerability that is not exploitable, which makes it much easier to respond in a calmer and more protective fashion. Would you agree with that?
[Joe] (11:22)
I would agree, and that’s part of the reason why I think the relationship between vendors and their customers matters a lot. We’re seeing an increase in vulnerability disclosure rates. If you look at how active Cisco is in sharing and disclosing vulnerabilities, it always presents a challenge. If I disclose this vulnerability, what happens to my customers? My customers are going to have to patch, and are they going to be able to schedule patches and stay in compliance and follow the expectations of their internal governance as well?
[Paul] (11:51)
But the alternative is to say our customers will be really annoyed, so let’s leave them skating on thin ice. As long as they never fall through, they might never realize. And that’s exactly the wrong approach, isn’t it?
[Joe] (12:04)
And that’s why I brought up the transparencies. I think those that are most transparent end up being the most secure and they have the most trust between vendors and customers. I’m a proponent of disclosing and everyone having similar information and being as transparent as possible.
[Paul] (12:21)
Now, Cordell, if we look at vulnerability disclosure, where you’re honest about things that may have gone wrong with your software and you’re able to explain what you’re going to do to fix them in the future, not only is it an important part of forthcoming liability regulatory changes, particularly in Europe with the Cyber Resilience Act, but it’s also a sign of maturity in your own software development processes, isn’t it?
I think a lot of people look at software vulnerability disclosures these days and go, oh look, there were 745 more vulnerabilities disclosed last year than the year before, and so on, therefore software is getting worse. But what’s probably happening is that we’re just collectively becoming more concerned with, and more honest about, fixing problems that may have been there for years anyway.
[Cordell] (13:13)
Exactly. I think the mindset needs to change when it comes to vulnerabilities and disclosure. People want to look at the negative and say, well, there are so many of them, this is horrible. No, it’s not horrible. They’re disclosing, and that’s what you want. You want them to disclose, so that you know what to fix.
[Paul] (13:33)
Yes, because if a company has zero vulnerabilities disclosed, you don’t know whether they’re the best software engineers in the entire universe, or whether problems are swept under the carpet by compliance types who just don’t tell you and hope you never notice.
[Cordell] (13:48)
Exactly. If a company has zero, or very, very few, I would be extremely alarmed. What are you hiding? There is no such thing as a perfect system. It does not exist on this planet, ever. And a lot of people get their systems assessed and they get what’s called a plan of action and milestones. Some people get really upset about it. Why are you getting upset about that? You know now what’s wrong in your environment. You know that this software has these vulnerabilities. It’s better to know than not to know.
[Paul] (14:17)
It’s like saying, I’ve got a bald tire on my car; if I pay too much attention to it, I’ll have to go into the shop and pay money to have it fixed. If I just back off a little bit in the corners, and I hope it doesn’t rain or snow, I’ll probably be okay. Well, you probably won’t be, and if things do go wrong, you may take other people with you. Right? And that’s a particular issue in software engineering these days, isn’t it? And increasingly so in embedded development, because of the degree to which open source is being embraced in all walks of life. Your software is probably not all written by you.
In fact, Joe, in previous podcasts, has suggested that a lot of US government embedded software projects these days have up to 80% of code that came in from open source projects written by people possibly all over the globe. So you have to be aware of that supply chain, and of vulnerabilities disclosed in that supply chain, so that you can vouch for the security and the safety of the software that you’re shipping onwards. No one exists in a vacuum anymore in software engineering, do they?
[Cordell] (15:27)
No, not at all. They don’t exist in a vacuum. And if they did continue to exist in a vacuum, that would be very dangerous, so why even do that? The more transparent we are, the better and more comfortable I think everybody will be, because you can react faster. You can clean everything up faster, and you understand and know what’s going on immediately. It’s just like anything else: I want to know, and I want to be able to fix it. And a lot of times it also comes down to dollars and cents.
Now I can plan for it. Now I can set my budgets properly, because I know these things are happening, and not get surprised. Because no one likes a security surprise; it ends up being extremely expensive.
[Paul] (16:09)
Yes, if you know how much it generally costs to fix vulnerabilities in advance, and you’re on top of what vulnerabilities are coming down your supply chain, then you can actually do something about not only making better software in the first place, but also making things more competitive and profitable for you, by not having to run around so often trying to fix problems that should never have made it into the wild in the first place.
Now, Cordell, you’ve done a bunch of work with NOAA, the National Oceanic and Atmospheric Administration, and the National Weather Service. Some of it you may be able to talk about. So, if you can: what lessons from that particular environment can you give us about working with embedded systems and what we call OT, operational technology? The systems that sit out there in the field and are not managed by traditional IT teams that set the rules for everybody, like they do with Zoom and Teams and Outlook and so on.
[Cordell] (17:14)
It’s such a diverse type of environment because you have scientists, engineers of all sorts, business analysts, and different levels of executives. And then of course you have the security personnel, then you have the network personnel. So it’s such a range of different technical personnel. It’s a very unique and interesting environment, but it’s an amazing environment because the way that they run their compliance program is from top to bottom.
They ensure that they’re not just checking the box. They have a process where it takes quite some time. So they’re not rushing through anything, which I think is great. Because of that, everyone has really taken a fine-tooth comb to every single thing to ensure that there is due diligence and there is safety when it comes to the operations of NOAA. I can’t talk about the details of it, but those operations are done in such a robust manner that it’s really refreshing.
If there is an incident, are we prepared for changes without disrupting operations? Because you’re in such a high operational environment; their availability, I believe, is 99.5 with nines behind the decimal point, which means that they basically cannot be down. And when you have your operations that high, you need to ensure that your compliance house is in order from top to bottom.
[Paul] (18:38)
What sounds really, really refreshing here is this. If I were a meteorologist, I would want to be studying weather systems, and I would want to be modelling things for the greater good of all. And if someone came to me and said, now you have to concern yourself with cybersecurity as well, I’d go, but I’m a meteorologist! And yet, if you can get to a world where meteorologists and, as you say, the business leaders and the compliance folks all see cybersecurity as an integral part of everything that they do, so it’s important to them although it’s not the primary thing they have to do, you’re in a very strong place indeed, aren’t you?
[Cordell] (19:17)
Yeah, definitely, because it creates operational resilience. You know and understand that your operations are going to continue on and you’re not going to have any major hiccups or major downtimes if something were to occur because you have those processes in place to pivot, to move forward, to get things fixed in a very fast manner because you have embedded it into all of your cycles.
[Paul] (19:42)
So what’s your advice for business leaders, particularly in the embedded and OT space, for taking these policy frameworks and security frameworks, which can be quite intimidating because they’re not three pages long, and they sound as though they’re going to cost an awful lot of money, and translating them into possibly modest technical changes inside the organization that deliver this secure-by-design approach?
Where you actually start being secure, to the point that when the assessment comes, at any time, even if the assessors just roll up when you didn’t expect them, they’ll be very pleasantly surprised, rather than have you running for cover.
[Cordell] (20:24)
I’d say the first thing for business leaders, even looking in the mirror themselves, is to do a gap analysis and see where your issues are, and just be very open to heavy scrutiny of your organization. Do that gap analysis and see what the issues are.
[Paul] (20:41)
So if you’re an engineer and someone says, you’re going to have to change your programming practices a little bit, they’re not telling you that you’re no good. They’re just saying, hey, here’s a way you can be even better.
[Cordell] (20:54)
The biggest issue is always going to start with the people. Get the training done. Not just the overall annual cybersecurity awareness training; that’s very generic. It’s a starting point.
[Paul] (21:05)
Yes! Don’t click weird links, who knew? I mean, that’s important, but it’s not really enough when you’re building software that’s going to be put into an embedded device that’s going to be buried in a satellite and sent into orbit, where it might stay for 25 years. Once-a-year checkbox training is not going to fix security issues in that kind of environment, is it?
[Cordell] (21:09)
Right.
Exactly.
It really isn’t. And so I say, look at the roles and find robust training for each role, and ensure that people understand their role and what they need to do. Also, to security personnel: be very collaborative with these leaders on how to make their environment better and stronger, instead of being combative. A lot of times things get a little combative, because people get into their feelings and personalities. The most important part is the mission.
What is the mission? What is the operational purpose of your organization, and how do you ensure that it is not just running smoothly, but secure? Because if it’s running smoothly but it’s not secure, then it’s not going to run smoothly forever, and everyone’s going to be screaming and pointing fingers. If you explain to them why it’s important, they’re more amenable to having that conversation and actually making those changes. But if you just tell them, you have to do this because it’s not secure and you’re in violation…
[Paul] (22:16)
Absolutely.
[Cordell] (22:29)
They’re not going to want to do it. But if you say you’re actually learning more of a skill set on securing these devices, it’s a different conversation.
[Paul] (22:37)
I’m reminded of something that always makes me laugh when I see it, and I don’t know whether you get this in the United States. You’re buying a used car, and you get adverts where the seller is saying, lovely condition before an accident. And of course they put a picture of what the car looked like before it was wrecked. Now, Joe, Cordell mentioned the idea of collaboration, and we’ve talked about the supply chain already.
It’s well known in the automotive industry that you have to account for that up to four levels away. And you’ve also got the collaboration with the people downstream from you, to let them know that you’ve done the right thing; that it’s not a question of lovely condition before an accident. This makes me think of Software Bills of Materials, which are an increasingly important part of security frameworks, and indeed of compliance, in the US and in Europe.
Do you want to say a little bit about that?
[Joe] (23:36)
Sure. We all know that Software Bills of Materials have been around for quite some time, but in the past three or four years, we’ve seen through executive orders that if you’re shipping software to the U.S. government, they are encouraged, if not required, and certainly many organizations are adopting them at this point. The challenge, of course, is that even scanning tools and penetration testing don’t find all the vulnerabilities, in part because they can’t see all the software.
And so having a robust Software Bill of Materials that enumerates all the software you have deployed in your infrastructure saves everybody a lot of time. Having a Software Bill of Materials is a good way to share and create that transparency I talked about earlier, but it’s also a good practice from a software support perspective.
When there is a question of whether you have underlying software that has a critical vulnerability in it, oftentimes the Software Bill of Materials can be a place to turn to make that communication more efficient and clearer. So it’s about adopting a good robust process to receive Software Bills of Materials, asking your suppliers to deliver them, and having a means to check them so you can understand your overall security posture. We all appreciate the benefit of penetration testing, and we appreciate the benefit of code scanning, but understanding a Software Bill of Materials, to really get an understanding of whether you have a component in your infrastructure in the first place, can save a lot of time and certainly reduce risk.
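[Editor’s note] As an illustration of the lookup Joe describes, here is a minimal sketch of asking an SBOM, “do we ship this component at all?” It assumes a CycloneDX-style JSON document with a top-level `components` list; the component names and versions here are invented for the example:

```python
"""Sketch: answering 'do we ship this component?' from an SBOM."""
import json

# Illustrative CycloneDX-style SBOM; not a real product inventory.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl",    "version": "3.0.7"},
    {"name": "zlib",       "version": "1.2.13"},
    {"name": "log4j-core", "version": "2.14.1"}
  ]
}
"""


def find_component(sbom: dict, name: str) -> list:
    """Return every version of `name` listed in the SBOM's components."""
    return [c["version"] for c in sbom.get("components", [])
            if c.get("name") == name]


sbom = json.loads(sbom_json)

# When a critical vulnerability lands, the SBOM answers the first
# question immediately: is the affected component in our product at all?
print(find_component(sbom, "log4j-core"))
```

With `find_component(sbom, "log4j-core")` returning `['2.14.1']`, the triage conversation with a supplier or customer starts from shared facts rather than guesswork.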
[Paul] (25:05)
Gentlemen, I’m conscious of time, and I have something that could take us a whole nother podcast to get through. You might think of it as the elephant in the cybersecurity room these days, so I’m sure you know what’s coming. It starts with A and it ends with I. AI is seen as both a great enabler for cybersecurity, because it lets us look for problems in an automated way, much faster and more easily than getting humans to do these repetitive tasks. But we also hear it pitched as a great disaster, because cybercriminals and state-sponsored actors can use it to automate their attacks and automate finding vulnerabilities. What do you think are the biggest opportunities and threats AI brings to cybersecurity, particularly in the embedded market, where patching is often harder?
[Cordell] (25:57)
I would say the biggest opportunity it brings is going to be time saving and efficiency. And if the automation is done properly, with humans running it, it’s going to take the stress off of them as well, because they’re going to allow the AI to do it.
Now, the danger is that the race to AI is a little bit fast, and I think people need to slow down, because a lot of people need to actually take some training and learn how it actually works. Learn how LLMs work. Learn how agents work. Learn how all of these different things work with AI. And then, when you’re using a tool, figure out and learn how to actually use the tool, and not use it as something to do your job for you.
[Paul] (26:42)
I guess the big risk there is that instead of freeing up your scientists and engineers to do higher-order tasks, you end up thinking, who needs scientists and engineers when I can just have a data center churning out answers for me? So you end up without the expertise, the sensitivities, and the innovation that humans are so good at. You throw out not only the baby and the bath water, but also the bath and even the bathroom it came in.
So Joe, how do you see AI fitting into cybersecurity, particularly in terms of freeing up humans to be innovative in a safe and secure way?
[Joe] (27:20)
Well, at RunSafe, we did a couple of things in 2025 that I think have been really profound for our company. One is that our development team has adopted AI into the development process in a pretty major way. And what we’ve seen is a tremendous productivity gain from our employees, and also the ability to enhance how we support our customers, including documenting artifacts related to certifications and requirements for safety-critical systems.
We also embarked on a pretty deep survey around the adoption of AI across embedded systems, and it’s clear that it’s increasing almost everywhere. So I’m a big fan of the productivity gains, and I certainly believe it’s transformative. With that said, I do see a couple of areas I am concerned about. One is that, just as we gain productivity in-house at RunSafe with the code that we produce, we know that attackers can produce exploits faster than we can patch.
And so we do need asymmetric ways to defend systems, to add in security. We need to find ways to counter the proliferation, and the speed with which exploits can be administered or implemented. And then the final point goes back to code quality and software quality, and what goes into your software when you deliver it.
One of the areas of concern I do have is when software teams look to rewrite systems, and they rewrite an open source component into a near-lookalike version that doesn’t have quite the same attributes, if you will, and certainly not the same signature or hash value for that software component. And so what happens is we lose some ability to link and share vulnerabilities, because we don’t have a standardized way to report a near-lookalike software component.
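[Editor’s note] Joe’s point about hash values can be seen in a tiny sketch: even a one-byte rewrite of a component produces a completely different SHA-256 digest, so hash-based SBOM matching no longer links the lookalike to advisories published for the original (the byte strings below are stand-ins for real component artifacts):

```python
"""Sketch: why a near-lookalike rewrite breaks vulnerability matching."""
import hashlib

# Stand-ins for an upstream component and a trivially modified rewrite.
original = b"upstream component source ..."
lookalike = b"upstream component source ...."   # one extra byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(lookalike).hexdigest()

# False: the lookalike no longer matches the published hash,
# even if it inherits the very same bug as the original.
print(h1 == h2)
```

That mismatch is exactly the lost linkage Joe describes: without a standardized way to report the lookalike, the rewritten component silently falls out of advisory matching.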
[Paul] (29:05)
When someone announces a fix in some well-known product, let’s remember Log4j, then if you’re using that product, and not some weirdly modified version of it, you know what the implications are for you, and you can go and fix it very quickly, instead of going, I wonder if that applies to this minor modification that I made, because it seemed like a good idea at the time. To finish up, can I ask you both what you think
[Joe] (29:30)
Exactly right.
[Paul] (29:35)
will be the biggest changes in what security frameworks will require us to do, rather than just what they advise us to do, that both manufacturers and consumers in defense, government, and critical infrastructure should be thinking about over the next, say, three to five years?
[Cordell] (29:56)
Regulators are going to require it of all of us, no matter what your company type is. And it’s actually starting now. They’re going to require due diligence across the board, because there have been so many breaches. A lot of major breaches have happened not through the actual company, but through a vendor that got breached. So requiring due diligence, and requiring more accountability through regulation, is one of the things being pushed across all industries right now. DoD is pushing it, but I think it’s going to go across all industries in the next two to five years.
[Paul] (30:32)
So, Joe, it sounds like what Cordell is saying is that we’ve got quite a big basket of carrots at the moment, but what’s coming is maybe a batch of pointier sticks, to make sure we all do go in the right direction. So even if you don’t want to do it, you’re going to have to anyway.
[Joe] (30:53)
And I think the carrots and the sticks are just as important, as Cordell identified up front. The tools, the technology, the people, the training: having a culture built around security is vitally important. In the EU, with the EU Cyber Resilience Act, there’s certainly liability for product manufacturers if there’s a cyberattack and there was a vulnerability in one of their devices. And that is motivating, to avoid any penalties, any fines, any
[Paul] (31:07)
Absolutely.
[Joe] (31:23)
loss of money and loss of income. The carrots and the sticks are helpful. These are inducements to help change our culture, change our process, change our technology, change our training for the better, to improve not only software quality, but also to reduce the vulnerabilities and the bugs and the weaknesses that could lead to exploitation.
[Paul] (31:43)
Joe, I think that’s a fantastic way to wrap up. If something is coming that you are going to have to do, it’s much easier and much more fun if you want to do it in the first place. And cybersecurity is certainly one of those things. So Joe and Cordell, thank you so much for your thoughtfulness, and for your clear future vision that says we can want to do this, rather than just waiting to be told to do it.
So that is a wrap for this episode of Exploited: The Cyber Truth. If you find this podcast insightful, please don’t forget to subscribe so you know when each new episode drops. Please like and share us on social media, and don’t forget to tell everyone else in your team about this podcast so they too can benefit from Joe and Cordell’s experience, knowledge, and passion. Thanks to everybody who tuned in and listened. And don’t forget, stay ahead of the threat. See you next time!