Volt Typhoon and the Risk to Critical Infrastructure

April 10, 2025

 

The first episode of “Exploited: The Cyber Truth,” a podcast brought to you by RunSafe Security, features an engaging conversation between host Paul Ducklin and Joe Saunders, CEO and founder of RunSafe Security. The discussion focuses on Volt Typhoon—a nation-state threat group attributed to China—and the severe risks it poses to critical infrastructure worldwide.

Learn about the advanced techniques used by Volt Typhoon to exploit vulnerabilities in systems like routers, firewalls, and VPNs, and how they plant persistent backdoors to potentially disrupt transportation, communication, financial services, and energy grids. Joe Saunders shares his deep experience in cybersecurity, highlighting the challenges defenders face, from memory-based vulnerabilities to the complexities of securing legacy systems.

This episode also dives into the importance of adopting Secure by Design principles, rewriting software in memory-safe languages like Rust, and leveraging advanced memory protection techniques. With insights into the geopolitical motivations of nation-state actors and actionable advice for infrastructure owners and product manufacturers, this episode is a must-listen for anyone invested in cybersecurity.

Stay ahead of the threat and gain the knowledge needed to protect critical systems by tuning into “Exploited: The Cyber Truth.”

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirrors hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.


Joe Saunders: Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.


Key topics discussed: 

  • How Volt Typhoon is targeting critical infrastructure in sectors like transportation, communications, and financial services to create potential footholds for future conflicts
  • Why memory-based vulnerabilities are a major risk, enabling attacks that are hard to detect and trace
  • Practical ways to secure systems—like rewriting code in memory-safe languages or using advanced memory protection techniques like load-time function randomization
  • How Secure by Design and Secure by Demand initiatives can drive adoption of practices to improve cybersecurity in critical infrastructure
Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security. 

[Paul] Welcome, everybody, to the very first episode of Exploited: The Cyber Truth. I am Paul Ducklin. And today, I’m joined by Joe Saunders, CEO and founder of RunSafe Security. And this week, our topic is Volt Typhoon and the risks to critical infrastructure.

[Paul] And we’re going to be diving into the thorny problem of state sponsored cyber threats, and what you can and indeed should be doing about them. Joe, let’s get to it. But perhaps before we get stuck into Volt Typhoon and the bad guys, why don’t you tell us a bit about yourself and how you came to be so active and so passionate in the field of cybersecurity? 

[Joe] Well, thank you, Paul. It’s great to be here.

[Joe] Great to kick off the first episode of the podcast and certainly talk about things that are near and dear to my heart in terms of protecting critical infrastructure from cyberattacks. Thinking about my journey in the cybersecurity space, I’ve always gravitated towards risk-oriented solutions, helping organizations solve for risk. About fifteen years ago, I started working on ways to help law enforcement agencies and other government agencies prevent the theft of intellectual property from the US. As you know, that is a form of economic espionage, because if a nation state is successful at it, it can certainly undermine the ability of organizations to compete effectively. And that really set me on a journey, one that today is represented not just by the theft of intellectual property, but by the protection of infrastructure and critical assets across all sectors.

[Joe]  And so that’s what ultimately, through a few steps along the way, led me to founding RunSafe Security, which is a software company designed to protect software itself deployed across critical infrastructure. And as we like to say at RunSafe, we make critical infrastructure safe so that the economy can thrive and the US doesn’t give the upper hand to China or other nation states or adversaries. 

[Paul] Let’s kick off by talking about Volt Typhoon. For those listeners who are wondering, that is Microsoft’s threat group name. Microsoft uses meteorological themes.

[Paul] When I first heard Volt Typhoon, I thought, well, typhoon, that’s reserved for groups that are associated with China. And I assumed Volt meant it was electric companies, but their interests are much, much, much wider than that. And they’ve been variously known, and I’ll read this list out as dev o three nine one, storm o three nine one, Bronze Silhouette, Insidious Taurus, Redfly, UNC3236, Vanguard Panda, and Vault Sight. They’re all the same group of people. Volt Typhoon, I think, is an easier to remember name, and it doesn’t cast them in any kind of fantastical light like Insidious Taurus. Maybe talks them up a bit much. But what is Volt Typhoon and why are they such a problem? 

[Joe] Well, in the industry, we like to talk about threat actors, or advanced persistent threats, that target certain kinds of vulnerabilities, perpetuate certain kinds of attacks, or ultimately create persistence on a device. But as you say, Volt Typhoon is not just about electricity or utilities. It also affects transportation systems, communications, and certainly financial services.

So it’s really a threat actor group that is targeting certain kinds of cyberattacks against critical infrastructure, certainly in the sectors I mentioned and beyond. Ultimately, what they’re trying to do is demonstrate, or perhaps, at a time of their choosing, administer a payload or some other way to disrupt the normal operations of society, the normal operations of critical infrastructure. For example, disrupting transportation or rail systems, disrupting communication systems, or even the ability to get money out of an ATM, or financial services in general. So Volt Typhoon is a threat actor group. I would attribute it to China.

I think it’s safe to say that. As you and I have discussed in the past, it may not be only actors originating from China, but certainly the spirit of what Volt Typhoon is about is state-sponsored, nation-state types of attacks, or research on vulnerabilities that can lead to attacks down the road. And it’s certainly China-based threat actors that are steering these attacks and identifying those areas of vulnerability.

These threat actor groups can do some amazing things. They are well funded. They do research, and ultimately they’re looking for vulnerabilities against which they can build exploits that can deliver certain kinds of effects at some point down the road. So Volt Typhoon is a well-known one, and I think everyone’s grabbed onto the name from Microsoft because they got a lot of attention and media coverage in 2024, given the extent to which they’ve immersed themselves and pre-planted exploits inside critical infrastructure.

[Paul] Now it’s important to remember the big difference between a group like this and, I hesitate to say, your everyday cyber criminals. Ransomware crooks are pretty troublesome in their own right, with companies being hit with ransoms running to seven digits in US dollars. But there’s a very big difference here: your typical ransomware criminals are driven by the fact that with each attack there’s a result they wish to achieve, which is to make a bunch of money. And if they’re not going to make the money, they might drop that victim and move on.

Those sorts of economic rules just don’t apply to state-sponsored actors, do they? They can afford to take the time to find all sorts of things that they can abuse in the future, and put them in that cupboard under the stairs, from which they can get them out when they decide they need them.

[Joe] Well, it is true that you need to really understand the motivation behind certain threat actors, and certainly their techniques and what they’re striving for. It helps give you an indication of what’s going on. But I would say there are some nation states that do carry out ransomware attacks.

Iran, for example, does look for ways to fund the different operations it likes to sponsor. It is well known for its proxy relationships with the Houthis and others, and part of the way it raises money for that is through activities like this. And we’ve seen money find its way to Hezbollah or Hamas or others in general. So these attacks can be funding sources. But when it comes to Russia, you might think more about influence operations and the psychological effects of a cyberattack, and even disinformation or misinformation aimed at influencing how people perceive things, and ways to interfere with society that way.

But with Volt Typhoon, from China, I think part of the game isn’t necessarily ransomware and the financial angle, as you suggest, and it’s certainly not exactly what Russia does either. In the case of China, I think what they’re trying to do is demonstrate a couple of things. One is that they have a 50-to-one manpower advantage over the United States when it comes to cyber warfare, and so they wanna project power in the cyber sphere just as someone might wanna project power in the Indo-Pacific through naval strength. But two, perhaps even more importantly, they do spend a lot of time and research finding ways to get into critical infrastructure, because that might be valuable down the road. It might be valuable in a negotiation.

It might be valuable in a trade war. If there is, let’s say, a cyber bomb inside critical infrastructure that could go off at a time of China’s choosing, one that would disrupt the water supply, or the energy grid, or the financial systems, then that can wreak a lot of havoc. And if they attack Taiwan, it might affect the ability of the Taiwanese government to provide basic government services. That could lead to a loss of confidence in the Taiwanese government, and that might be valuable to China in the future. So in the case of the United States, or Western Europe and the UK, having at its disposal the ability to disrupt things means China can add another chip to a negotiation, or wreak havoc when something else is happening.

In fact, in 2024, we learned that part of the motivation for China is certainly to have those cyber bombs available, ready to detonate at a time of their choosing, if and when they launch some kind of military strike on Taiwan. If the US is disrupted, will the US come to help Taiwan if it is itself having to deal with cyberattacks on critical infrastructure? And it might just raise alarm among American citizens: why are we supporting Taiwan if it means we’re gonna get attacked in our critical infrastructure? So I think there are complex geopolitical motivations for these nation states.

In some cases there are money reasons, and there are certainly influence operations. But here, I think it’s an asymmetric tool that really helps influence outcomes and ultimately becomes a chip at the negotiating table.

[Paul] Some interesting recent information that has come out about a vulnerability in Ivanti products exploited by Volt Typhoon is a good reminder of the important tools and techniques these groups use. They’re actually looking for vulnerabilities in the very devices on your network that are supposed to improve your security: firewalls, routers, VPNs.

In other words, devices that, (a) you’re supposed to have confidence in, and (b) tend to concentrate traffic and logging, and, in the case of a firewall, may even be preprogrammed to decrypt data, analyze it, and package up some of it to record for later analysis. And the attackers are using bugs in those products to implant themselves in a way that is very different from a ransomware attack. There are no flaming skulls on the wallpaper of your desktop.

[Joe] Well, there are all sorts of vulnerabilities implied in all of that. But as we’ve seen over time, bad actors find their way in. They might come in as an insider. They might breach the network. They might breach the very defenses you’re talking about. So you might be able to plug one hole, but you still have to look elsewhere. And there’s another part of the attack that’s also important.

Let’s assume somebody gets in. What can they then do to compromise a device and listen, or exfiltrate data, or shut things off, or hold them for ransom, or lock things up in general? Not only do you need to prevent access, but you want to actually stop those attackers from moving once they’re inside the network. In many, many cases in the cyber kill chain, there are memory-based vulnerabilities along the way, and you need to shore up all the devices along the kill chain, if you will, in order to prevent real damage. And the golden ticket for these guys is to get root access, or at least to gain access to memory on a system and bring in arbitrary code or achieve remote code execution on a device.

We spend a lot of time ensuring that cyber attackers won’t get in, but they almost always do. So a missing ingredient in all of this is preventing them from doing damage once they are inside the network. And that’s where memory-based vulnerabilities are a sort of hidden crown jewel for cyber attackers when they’re going after critical infrastructure. They’re the most common, most devastating set of vulnerabilities in these systems, and that’s what Volt Typhoon targets. MITRE has published a ranking of the software weaknesses posing the greatest danger to infrastructure, and memory-based vulnerabilities lead the way. I think something like eight of the top 25 software weaknesses are memory-based vulnerabilities that allow for remote code execution.

[Paul] And the complexity there for defenders is that if you can exploit something like a buffer overflow or a use-after-free, one of those jargon terms, and you can inject code of your choice into memory that will execute on somebody else’s server, the server itself still works as usual. It still produces its normal logs. It still processes user requests normally. It still lets people log in if they have a password and rejects them if they don’t, etcetera.

But it doesn’t necessarily leave any trace behind on the disk. And I guess the flip side of that is that if you can wander around in memory on somebody else’s device, there are an awful lot of secrets you can extract and use again and again: authentication tokens, usernames, passwords. So there’s a lot at stake there, isn’t there?
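To make that concrete, here is a toy Python sketch of why arbitrary memory reads are so valuable to an attacker. The “memory dump” bytes and the credential patterns below are invented for illustration; the grep-the-heap idea, though, is essentially what happens once a memory-disclosure bug lets someone read another process’s memory.

```python
import re

# Toy "memory dump": in a real attack this would be bytes read from a live
# process via a memory-disclosure bug, not a hand-crafted buffer like this.
memory_dump = (
    b"\x00\x17GET /login\x00user=alice\x00"
    b"password=hunter2\x00"
    b"Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.demo\x00"
    b"\xde\xad\xbe\xef"
)

# Patterns an attacker might grep a leaked heap for (illustrative only).
PATTERNS = {
    "password": re.compile(rb"password=([^\x00]+)"),
    "bearer_token": re.compile(rb"Bearer ([A-Za-z0-9._-]+)"),
}

def scan_for_secrets(dump: bytes) -> dict:
    """Return any credential-like strings found in a memory image."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(dump)
        if match:
            found[name] = match.group(1).decode()
    return found

# Both the session token and the password fall straight out of the dump,
# even though nothing credential-shaped was ever written to disk.
secrets = scan_for_secrets(memory_dump)
```

The point of the sketch: secrets that never touch the filesystem still sit in RAM in recoverable form, which is why memory disclosure is so prized.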

[Joe] There is a lot at stake. These so-called fileless attacks are quite bothersome for organizations. In some cases they existed for fifteen or eighteen years before anyone even found them, while adversaries were referencing or using them. They’re not as traceable. They’re not as obvious.

And I think one of the key things about that is to think about software at a conceptual level. I would say the great promise of software, in some ways, is that you can build one piece of software code, stamp it out on a million devices, and then those million devices always behave the exact same way. They’re deterministic. So if you have a hundred drones in a fleet, or 10,000 routers in a telecom or communications network, and they all have the same software, they should behave the exact same way. So software is very, very powerful.

It leads to efficient, deterministic outcomes. The problem is that the same determinism works in favor of the attacker. If they find a weakness in one router, or one drone, or one HMI device in an energy grid or manufacturing plant, then they can build an exploit that works on all million routers, or all hundred drones in a fleet. That determinism works in exactly the same way. And so if you find a weakness at runtime by targeting memory vulnerabilities, that’s how you get remote code execution and introduce arbitrary code: by exploiting those memory weaknesses.

And so if an attacker has the same determinism, if they can find the vulnerability in the exact same spot every single time, that works in their favor. 

[Paul] It’s very much a question of an injury to one is an injury to all, isn’t it, in that context? You get it working in your lab, and you practice hard enough, and you know that when you unleash this exploit on the world, it might work 999 times out of a thousand. And the one in a thousand that it doesn’t work will probably just be written off as, oh, something went wrong. Reboot the device. Oh, look. The problem’s fixed itself. 

Now, as I understand it with Volt Typhoon, one of their key ways of ensuring long-term access to a network is to leave remote backdoors behind, not just on tens or hundreds, but possibly even thousands or tens of thousands of routers. So what do you do about that?

[Joe] Well, there’s lots you can do. I think what we saw over the past couple of years was CISA leading the way, talking about Secure by Design and all the mature software development practices that fundamentally get rid of vulnerabilities in software. Good software development practice is a key thing, and I think that’s the heart of the Secure by Design program; and if you’re a utility, there’s Secure by Demand, looking at ways to work with your suppliers to reduce the attack surface.

[Paul] So Secure by Demand is where you don’t make the devices yourself, but you go to your vendor, you go to your supplier and say, I demand that you are Secure by Design, effectively. 

[Joe] Effectively right. And so one of the key things then in Secure by Design is to target memory based vulnerabilities.

And so one way to address that in the Secure by Design program is simply to rewrite all the software on all your devices in what’s called a memory-safe language. That’s a newer set of skills and languages for folks out there, but it can eliminate the risk of a bug allowing an attacker to take over a device at run time.

[Paul] So those would be languages like Go, which can be used on embedded devices, and perhaps most notably Rust, which has a lower overhead than Go. The problem is, it’s not a trivial undertaking, is it? You don’t just take your C code and feed it into some engine that magically converts it into Rust. There’s rather a lot more to it than that.

Particularly, if the thing you’re trying to build must perform in exactly the same way when it’s running correctly as the one you had before. 

[Joe] Exactly right. Yeah. So rewriting code, of course, is good for new devices. But in critical infrastructure, these devices last five, ten, twenty, thirty years, and organizations have made a capital purchase that they’re amortizing over a long period of time.

They’re not so excited about just buying a new version of a device because someone decided to write it in a new programming language. So there have to be other ways. But you’re 100% right, and there are other issues as well. There’s the skill set of developers: do they know the new programming languages? Folks in the utility industry may have existing software code that they want to maintain for a long time. So one set of issues is simply skill set and training.

Another one, though, is that more and more of these devices leverage open source software, and they also leverage real-time operating systems produced by commercial software companies. Those systems may not be written in a memory-safe language either. The underlying operating system could still be vulnerable if it’s not rewritten too.

Then the last thing is compatibility. I think you were partly getting at that: compatibility between a component written in one language, like Rust, and another, like C or C++, does in fact present some support issues. If you can’t rewrite the entire system, you might want to rewrite maybe the most mission-critical components, but then you have to consider those compatibility issues.

At RunSafe, we’ve taken the Secure by Design principles to heart, and we’ve rewritten some of our components in Rust to be a good steward for our customers, because we don’t want to be the weak link. With that said, for 30,000 lines of code, it probably took us four or five months, maybe even a little longer.

There were many different functions we had to look at closely. Of course, the issue is that you don’t get everything ported over to the new programming language exactly the way it was before. And so there are still other methods; this is maybe the third recommendation for folks. The NSA issued guidance that said: yes, please rewrite your software in memory-safe languages if that’s viable, but also look for ways to mitigate memory attacks using software memory protection methods, where you don’t have to rewrite software. That’s a really compelling option, especially for existing code across critical infrastructure.

[Paul] So that’s along the lines of things like data execution prevention and address space layout randomization, or ASLR (much easier to say if you don’t have to read it out in full), which is built into modern operating systems. But for embedded devices, it’s not quite that easy, is it? Because there isn’t room for a full-blown operating system. So it’s not just a question of saying, oh well, the underlying software is now magically more secure and provides a little bit of extra protection. You kind of have to knit it for yourself, don’t you?

[Joe] Definitely right. I think the real time operating systems play a really, really important role for these embedded devices. You know, they’re purpose built. They’re not generalized operating systems. They’re purpose built to do focused functions to enable these applications.

As a result, there’s high predictability and high reliability that a task will get done when it’s supposed to get done, and you can get high performance from these devices. You’re right: not all operating systems have the compiler settings to enable some of these advanced features. And address space layout randomization in particular is tough, because it’s been around for about twenty years, and it took only about nine or twelve months before attackers figured out how to beat it. The issue is that all it does is move functions in memory in a relative way.

It’s an offset, if you will. And the idea is that if I move it, then the attacker will look in the wrong spot or the exploit that’s looking to exploit a function in location a b c, it’s no longer there. And so it’s easily defeated because with one little information leak, you can then recreate the entire memory layout and then just adjust your own exploit to target where the weakness has moved. 
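The information-leak attack Joe describes can be sketched in a few lines. This is a simplified Python model, not real loader behavior, and the function names and addresses are invented; the point is that one module-wide slide means a single leak exposes everything.

```python
import random

# Hypothetical link-time addresses of functions inside one module.
STATIC_LAYOUT = {
    "parse_request": 0x1000,
    "log_message": 0x2400,
    "system_exec": 0x5E00,  # the function an exploit wants to reach
}

def aslr_load(seed: int) -> dict:
    """Classic ASLR: slide the whole module by ONE random page-aligned offset."""
    rng = random.Random(seed)
    slide = rng.randrange(0, 2**20) * 0x1000  # a single slide for everything
    return {name: addr + slide for name, addr in STATIC_LAYOUT.items()}

# The defender loads the module with a secret random slide...
runtime = aslr_load(seed=42)

# ...but one leaked address (say, log_message shows up in a log file)
# lets the attacker recover the slide and re-aim at ANY other function.
leaked = runtime["log_message"]
slide = leaked - STATIC_LAYOUT["log_message"]
assert STATIC_LAYOUT["system_exec"] + slide == runtime["system_exec"]
```

Because every function moves by the same amount, subtracting the known static address from one leaked runtime address recreates the entire memory layout, exactly the "one little information leak" failure mode Joe describes.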

[Paul] Joe, if I can just interject at this point to make the observation that that’s why vulnerabilities in mainstream operating systems are often exploited in pairs. In other words, with address space layout randomization, you don’t quite know where this DLL or shared library is going to be, but you know it’s going to be somewhere. All you need to do is wait for the log file that accidentally includes information along the lines of “I saved this at memory location so-and-so,” and bingo. Once you’ve got that, if you’ve got patience, like Volt Typhoon, then you can take that information, and unless and until that device is rebooted, you’re in forever.

[Joe] The more modern protection methods, like the ones RunSafe has pursued, operate at a far more granular level. And with that, often what we see is that you need two or three functions in order for an exploit to really work.

And so if you randomize at a more granular level, even if attackers find one function, they can’t find the other two they may need. The reason is that the average program has something like 240 functions, and if you think of all the permutations of where those functions can go, then even if you find one, you can’t find the others. With that said, I think things like load-time function randomization are far superior to address space layout randomization. Load-time function randomization relocates every function uniquely every time the software loads, in a way that’s a lot more granular than a single offset.
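By contrast, here is a toy model of load-time function randomization. Again this is a simulation with invented slot addresses, not RunSafe’s actual implementation; it just shows why per-function shuffling defeats the single-leak trick that breaks plain ASLR.

```python
import math
import random

NUM_FUNCTIONS = 240  # roughly the function count Joe cites for an average program

def lfr_load(seed: int) -> dict:
    """Toy load-time function randomization: every function lands in an
    independently shuffled slot on each load, not behind one shared offset."""
    rng = random.Random(seed)
    slots = list(range(NUM_FUNCTIONS))
    rng.shuffle(slots)  # a fresh permutation of all functions per load
    return {f"fn_{i}": 0x400000 + slot * 0x100 for i, slot in enumerate(slots)}

# Under plain ASLR, one leaked address reveals every other address, because
# the whole module slides together. Here, the relative distance between two
# functions changes from load to load, so a single leak pins down nothing:
offsets = {lfr_load(s)["fn_1"] - lfr_load(s)["fn_0"] for s in range(50)}
assert len(offsets) > 1  # no stable offset for an exploit to rely on

# The attacker's search space is the number of permutations of the functions:
assert math.factorial(NUM_FUNCTIONS) > 10**400  # astronomically large
```

The final assertion is the heart of Joe’s permutation argument: 240 independently placed functions give 240! possible layouts, so leaking one function’s address no longer locates the other two or three an exploit chain needs.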

[Paul] And I presume that also gives you an opportunity: although the software may fail, which strictly means the attacker could mount what’s called a denial-of-service attack by deliberately crashing it, the nature of that crash would probably alert you to the fact that this wasn’t just cosmic rays, or a power glitch, or water getting in and shorting something out. This is probably something that needs deeper investigation, particularly if it happens on several devices at the same time.

[Joe] 100%. If you go really deep into the operating system, there’s something called the interrupt handler, and the interrupt handler kicks in when a process aborts or the software crashes.

There are little cookie crumbs inside the interrupt handler that give you an indication of exactly what was going on at the moment of the crash. We’ve done some research where we look at that and ask: is this crash the result of a bug, or does it look like someone was doing something that suggests compromise or an ongoing attack? Imagine being able to collect twenty or twenty-five state variables at the moment of the crash, interpret them, and know: hey, this is a bug, go tell the dev team; or, hey, this is an attack, go tell the security operations team.
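A triage rule of the kind Joe describes might be sketched like this. The field names, address ranges, and thresholds are invented for illustration and are not RunSafe’s actual heuristics; a real system would read this state out of the RTOS interrupt handler.

```python
# Hypothetical crash-state fields; real "cookie crumbs" would be registers,
# the faulting address, and the memory map captured at the moment of the crash.

def triage_crash(state: dict) -> str:
    """Classify a crash as a likely ordinary bug or a likely attack indicator."""
    pc = state["program_counter"]
    # The CPU executing out of a writable region (stack/heap) suggests
    # injected code rather than an honest mistake: route to security ops.
    if any(lo <= pc < hi for lo, hi in state["writable_regions"]):
        return "security-ops: possible exploit attempt"
    # A fault on a near-zero address is the classic null-dereference bug.
    if state["fault_address"] < 0x1000:
        return "dev-team: likely null-pointer bug"
    return "manual review"

# Program counter landed on the stack, a writable region: looks like an attack.
suspicious = {
    "program_counter": 0x7FFC00001234,
    "fault_address": 0x7FFC00001234,
    "writable_regions": [(0x7FFC00000000, 0x7FFC00100000)],
}

# An ordinary crash: executing from the code segment, faulting near NULL.
ordinary = {
    "program_counter": 0x401000,
    "fault_address": 0x8,
    "writable_regions": [(0x7FFC00000000, 0x7FFC00100000)],
}
```

The design point is the routing Joe mentions: the same crash telemetry fans out to two different teams depending on whether the state variables look like a bug or like compromise.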

And that kind of insight can be gleaned really fast. And so that is important. I think if you think about resilience in general, you know, yes, you want to prevent. A lot of what we’re talking about is prevention. You want to be able to recover and then diagnose what happened as well.

Prevention is certainly a key part because it costs the least, but being able to recover and having an indication of what happened so you can look into it further is a key aspect as well to resilience overall of these systems. 

[Paul] So, Joe, I’m conscious of time, so I’d like now to focus on what I think many people see as the big problem behind all this, even when they want to do all of these things. And that is the question of who pays? We have organizations like NIST and CISA giving strong advice.

There are huge amounts of material you can download, and if you digest just a tiny bit of it, it will help any well-meaning software development team build better and better software without having to rewrite all of it in Rust. That raises the question that in many Western countries these days, including the US, the government doesn’t necessarily control, have authority over, or have financial involvement in utilities. They’ve either always been private, or some of them have been privatized. Who pays? And how do you get them to pay more than lip service to just signing the Secure by Design pledge?

How do you see that follow through so that the code does actually improve for the greater good of all? 

[Joe] Well, I do think it’ll be interesting to watch the EU and the Cyber Resilience Act and some of the responsibilities there. If you produce devices that have bugs, we’ll be curious to see what happens if those bugs lead to cyberattacks, based on what the CRA does in fact say. So that’s something for everyone to watch; the CRA is set to be implemented come January 2027. We still have just under two years to get there, but I know a lot of people are preparing for it.

What it means ultimately is getting to high quality code and trying to eliminate bugs in your code and ultimately improving your software development practices. And I think that’s part of the spirit of secure by design, so I think CISA has done a good job of encouraging Secure by Design. And yes, it is a pledge, but there’s solid guidance behind it and actionable things there. I do think that with that, there’s also other low hanging fruit that can be done, but the question of who pays is fundamentally the critical thing. 

What I’ve seen from product manufacturers is that what drives their security adoption is, first, internal governance policy: is it standard for the company, across all products, to meet a certain benchmark of quality? So, their governance requirements and policies.

Then industry standards help, if they’re actually required. And then, when there are known advanced persistent threats, if we’re talking about Volt Typhoon and these other various threats, those are things product manufacturers want to go after, because if their devices are constantly compromised or targeted, that leads to operational costs, support, fixes, and bugs that they have to resolve, and it can wreak havoc. There are two other things that really drive security adoption. The next one is customer demand, and that’s where Secure by Demand comes in. If Duke Energy is asking Schneider Electric, hey, what’s your Secure by Design posture? Or, how are you solving the software memory protection issues that lead to the compromise of our systems? Then that has an effect on people.

And then, lastly, there’s product manufacturers looking for differentiation. So my point is that there are real reasons why product manufacturers will pay for enhanced security that benefits their products, their operations, their infrastructure, their support, and their ability to disclose vulnerabilities when they know they can prevent attacks. The flip side is the asset owners, the infrastructure providers: they bear a lot of risk if an attack happens.

And so what they’re trying to do is prevent the attack so they don’t have to go into incident response mode, which is seriously costly for them. So somewhere in there, there’s a benefit for both parties to pay some of the cost of cybersecurity, even where they don’t yet have robust security programs.

[Paul] Yes. It’s as though Secure by Design, if you like, is the carrot, and perhaps Secure by Demand is the stick to go with it. And between those two things, let’s hope that we can quickly and continually find and fix the weaknesses in our critical infrastructure, so that we’re not at risk from somebody who feels that they want to steam in in the future.

[Joe] And I will say, real quick, I do hope there’s a path for cyber insurance in these areas that helps change the economics and brings buyer and seller, or producer and asset owner, together on these topics. So cyber insurance is one lever. And as we’re seeing in the EU with the CRA, shifting liability from the asset owner to the product manufacturer will change things as well. Those are things to watch for, but I think the five drivers I referenced are what move product manufacturers today.

[Paul] And if you are a software developer and you haven’t signed the pledge yet, maybe today is a good day to start thinking about it.

It could change all of our lives for the better. Joe, thank you so much for your time. Your passion is unbridled, and it’s lovely to talk to you. To everybody who tuned in, thank you so much for listening. That’s a wrap for this episode of Exploited: The Cyber Truth.

If you found this insightful, please be sure to subscribe and share this podcast with your team. Stay ahead of the threat. See you next time.
