AI vs. Vulnerabilities: Who Really Wins?

March 26, 2026
 
 

Artificial intelligence is rapidly changing cybersecurity, but not necessarily in the way headlines suggest.

In this episode of Exploited: The Cyber Truth, Paul Ducklin sits down with RunSafe Security CEO Joseph M. Saunders and Dataminr’s Joe Slowik to explore how AI is reshaping the vulnerability landscape.

Rather than a clear win for defenders, AI is accelerating both sides. Attackers are using it to automate reconnaissance, generate exploit variants, and scale operations, while defenders are leveraging it to process massive datasets, improve triage, and enhance decision-making. The result? Faster cycles of discovery, exploitation, and response.

The conversation dives into:

  • Why AI favors attackers in some scenarios—especially when “approximate” answers are good enough
  • How defenders can use AI to augment (not replace) human expertise
  • The explosion of CVEs and what it actually signals about security maturity
  • The growing risks in OT, IoT, and under-managed infrastructure
  • Why most organizations still aren’t ready for fully autonomous AI-driven defense
  • The importance of maintaining human oversight in AI-assisted workflows

The key takeaway: AI isn’t a silver bullet—it’s an amplifier. Organizations that understand how to integrate it into disciplined security processes will gain an advantage.

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirrors hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joseph M. Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Guest Speaker – Joe Slowik, Director of Cybersecurity Alerting Strategy, Dataminr

Joe Slowik has over 15 years of experience across multiple cyber domains, including offensive and defensive operations with an emphasis on adversary understanding. Joe currently directs cybersecurity alerting strategy at Dataminr, leveraging AI and related technologies to identify signals of interest in various telemetry sources. Previously, Joe has had various roles at organizations such as the MITRE Corporation, DomainTools, Los Alamos National Laboratory, and the US Navy.

LinkedIn

 

Watch the Full Episode

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security.

Paul Ducklin (00:07)

Welcome back, everybody, to Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and Founder of RunSafe Security. Hello, Joe.

Joe Saunders (00:20)

Hey, Paul, great to be here and great to have our guest today.

Paul Ducklin (00:24)

Yes, you’re both called Joe, but I’ll just have to deal with that. Joe Slowik is the Director of Cybersecurity Alerting Strategy at Dataminr, and Dataminr is heavily involved in using AI for intelligence gathering. Our AI-related topic today is AI versus vulnerabilities: who really wins?

So Joe Slowik, Joseph, is it helping attackers more, or is it helping defenders more?

Joe Slowik (01:01)

I don’t think we’ve settled the matter even remotely at this time. The main element to AI is that when we’re talking AI, we’re talking large language models primarily. Artificial general intelligence and similar sorts of items are still moonshots as far as the technical development is concerned. So from that perspective, we’re still talking about taking lots of available data, lots of available inputs, and being able to rapidly come up with assessments and predictions on them in large-language land. With that said, from both an offensive and defensive perspective, what we’re really seeing is a boost in velocity around the ability to handle large datasets, with a certain degree of tolerance for, maybe not errors, but at least questionable guidance within those datasets.

The models have gotten significantly better. I’ve been playing around with a couple of new iterations from a few different providers, and I’m really impressed with what I’m getting out: fewer of the hallucinations, and fewer of the “what the hell are you talking about?” moments. They’re definitely really good at taking known data and distilling it into a general prediction or a general assessment. Having said that, from an offense-versus-defense perspective, I think right now, given that defense has to assess and concern itself with all possible ingress vectors into an environment, whereas offense only needs to figure out the routes to its target, offense benefits from a greater degree of specificity in asking questions of models. For example: what would be the most common items used for remote access, maintenance, and monitoring in North American electric sector operations over the last five years, and what are common factory-set credentials associated with these devices? That’s a problematic query to run, and it will return some results, which unfortunately may have some actual impact behind them.

Paul Ducklin (03:00)

Yeah. The annoying thing about that as well is that if that’s 76% accurate, attackers could, in theory, loosely speaking, get into three-quarters of the default devices they try, and the ones that don’t work really don’t matter. From a defensive point of view, looking then to go out and find which devices have these usernames and passwords, if 25% of them aren’t on their list, then they’re actually going to miss out, aren’t they? It almost sounds as though the inaccuracy, or the apparently casual nature, of some AI responses could be said to favor attackers rather than defenders, because they don’t have to get it right every single time. Is there truth in that?

Joe Slowik (03:54)

Yes and no. I mean, I really like to push against the whole idea of like, attackers only need to be right once, defenders need to be right all the time.

Paul Ducklin (04:01)

I agree, I think that’s well said.

Joe Slowik (04:03)

But having heard that, in leveraging AI, I think there is greater scope for adversaries, especially adversaries that are just trying to do something, as opposed to targeting a specific organization or entity, to derive a lot of value in interacting with an LLM to find something of interest. Whereas from the defensive standpoint, it’s more difficult, not impossible, just more difficult, to ask the questions and come up with reasonable responses for “what is a detection signature for this exploit?” For example, you ask Gemini or Claude or whatever this question, and you’ll get an answer, but it’s an answer that requires significant investment in terms of refinement and improvement, whereas from an adversary standpoint, I can probably get something I can try out of the box. It might fail, but if it fails, so what? I will try the next thing. So there is a tension between the two sides with the models, especially if you’re using any sort of capability to pipeline agents: OpenClaw and stuff like that is now the rage, although very scary, and I recommend not using it right now for various reasons. But we’re seeing really interesting developments in terms of accelerating the pipelines for agents, for AI in general, just running through elements and coming up with interesting answers quite quickly, and potentially all independently of user input.

Paul Ducklin (05:25)

So when it comes to something like an AI agent writing code, clearly if that code is approximately correct, then an expert human could use that to accelerate their coding time, provided that they have the discipline and the intent to remove any bugs and to filter out obvious defects and to get it reviewed.

Joe Slowik (05:47)

Potentially, yes. I’ll fully admit I use AI to help me code because I’m a terrible coder and I need the help.

Paul Ducklin (05:54)

But you still need to get the code right in the end, don’t you? Whereas an attacker could take code that more or less works, say to target a particular known vulnerability, and just use AI to create 4, 5, 20, 500 different variants, perhaps much more quickly than our existing processes and procedures are able to react. Do you think that’s correct or have I exaggerated a little bit there?

Joe Slowik (06:23)

That’s correct, but I think it’s trivially correct, because if you start thinking about this sort of iterative development: talk to someone like Dave Aitel, who is very much involved in the AI space now but has done lots of work on offensive security and vulnerability discovery historically. People have been doing fuzzing for decades when it comes to software inputs and similar. So it’s not that AI changes the game in terms of “now I can do this.” No, we’ve always been able to do this.

Where AI really steps in is the ability to do these sorts of actions with a greater amount of latitude and greater velocity. I can start doing things which may have taken a day in an hour, which may have taken a couple of hours, and do it in a couple of minutes, which really ramps up the life cycle for exploit discovery. 

Now, you still need to be able to write code to exploit something. Just because I’ve found a bug, like a memory leak or something like that, whether I can take advantage of it, and take advantage of it consistently, is another open question as well. But poking holes in these sorts of things is becoming easier. Which, to go topic-adjacent for a second, is really curious if you look at the vulnerability landscape and the number of CVEs being issued: we’ve seen quite the acceleration in CVE issuance over the last couple of years, partially driven by this phenomenon. And yet a lot of the CVEs in question are quite trivial in nature. Certainly still valid vulnerabilities, but at the end of the day it’s like, are we really issuing a CVE for this? It’s expected functionality in a Linux operating system or something similar.
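
For readers who have never seen it, the fuzzing Joe mentions is, at its core, just throwing semi-random inputs at a parser and watching for crashes; AI raises the velocity, not the concept. Here is a minimal sketch in Python, where parse_record is a hypothetical stand-in for a real target (real campaigns use coverage-guided tools such as AFL or libFuzzer rather than pure random bytes):

    import random

    def parse_record(data: bytes) -> None:
        # Hypothetical stand-in for a real file or protocol parser: a
        # "bug" that only triggers on a particular byte plus a long input.
        if len(data) > 64 and data[0] == 0x7F:
            raise ValueError("simulated crash on oversized record")

    random.seed(1)  # reproducible demo run
    crashes = []
    for _ in range(10_000):
        blob = bytes(random.randrange(256) for _ in range(random.randint(0, 128)))
        try:
            parse_record(blob)
        except Exception as exc:
            crashes.append((blob, exc))  # each crash is a lead worth triaging

    print(f"{len(crashes)} crashing inputs found in 10,000 tries")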

Paul Ducklin (07:56)

More vulnerabilities does not necessarily mean worse code. It might mean that our ecosystem, and our society as defenders, are actually in a better position.

Joe Slowik (08:07)

It’s just a hallmark of being an active and capable researcher. They’re like, hey, I’ve got a CVE to my name. What’s the CVE for? It doesn’t matter; I have a CVE. There’s strong incentive on the researcher side to find these things, combined with the technology platforms making them easier to find, thus the explosion. That doesn’t mean that we’re necessarily sitting on all sorts of potentially unexploited zero-days that we don’t even know about right now.

I mean, we are sitting on some, that goes without saying, but it doesn’t mean that things have necessarily gotten worse over time. It just means that our ability to identify these things has significantly improved in terms of speed. Trace a line from about 2019 to the present. I remember during the pandemic thinking, oh, you had all these researchers with nothing else to do while sitting at home, just pounding away at code, trying to find stuff. And so you saw all sorts of interesting things come out from late 2019 through late 2020 in terms of load balancers, VPN concentrators, and similar. That same sort of insight has since been accelerated technologically through the AI platforms, but the overall desire and interest hasn’t gone away.

Joe Saunders (09:20)

And it does feel like that, if you put these points together. When we founded RunSafe, it was assumed that nation states are well-funded and have a lot of time on their hands, and we were always looking for ways to create an asymmetric shift in economics that would benefit the defenders. But if you look at the macro trends here, accelerating the ability to develop exploits, or to find vulnerabilities, or to work on the offensive side to find things to exploit, the economics are starting to accelerate from the attacker side too. And I think that macro-level shift is profound, because you can imagine a well-funded operation that is willing to invest the time can now employ a lot more agents and scale exponentially in leveraging AI for these purposes.

Joe Slowik (10:14)

That’s true, but I would say that cuts both ways. If you think about last year’s DEF CON, DARPA had an AI pavilion. It was actually really impressive. It had all sorts of crazy lights and…

Paul Ducklin (10:27)

Ah, blinking lights. They make a trade show, don’t they?

Joe Slowik (10:30)

They do. On a serious note, the AI challenge that they put forth, in terms of teams being able not just to identify but then automatically patch or fix bugs identified in given software, was pretty amazing. I believe it was a joint team from Georgia Tech and a South Korean university whose name escapes me right now that ended up winning the challenge. The idea is that, yes, this does hand a really significant capability to threat actors, to adversaries, to offensive security researchers, to find bugs and exploit them. But the same tools are available to developers to leverage the same capabilities against their own code base, to try to identify these things preemptively and address them.

So we’re really talking about dual use in a good way here. We like to talk about the doom-and-gloom scenario of, oh my goodness, all the threat actors are going to be able to find the vulnerabilities in our software and we won’t be able to do anything about them. Well, there’s nothing stopping those same entities that own the source code from saying, hey, identify potential memory leaks or potential lack of input sanitization in these 1,000 lines of C# or whatever it is they’re working on. I wouldn’t say that the answer we get back is authoritative, but it certainly helps accelerate some of that process of identifying potential issues.

So if we take a step back and view artificial intelligence mechanisms, and certainly what’s prevalent right now in LLM space, as a tool that can be used for indeterminate purposes, we should be thinking about defensive applications in terms of code security right along with offensive applications in terms of vulnerability identification and exploit development.
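
As a rough illustration of the defensive query Joe describes, here is a minimal sketch that points an LLM at a source file and asks it to flag memory leaks and missing input sanitization. It assumes the OpenAI Python client; the model name, prompt wording, and file name are placeholders, not a prescription:

    # Sketch: LLM-assisted review of your own code, along the lines Joe
    # describes. Requires: pip install openai, and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def review_source(path: str) -> str:
        with open(path, encoding="utf-8") as f:
            code = f.read()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whichever model you prefer
            messages=[
                {"role": "system",
                 "content": ("You are a security code reviewer. Flag potential "
                             "memory leaks, missing input sanitization, and "
                             "other defects, citing line numbers.")},
                {"role": "user", "content": code},
            ],
        )
        # As Joe notes, the answer is not authoritative: treat it as a
        # triage aid for a human reviewer, not a finding in itself.
        return response.choices[0].message.content

    print(review_source("handlers.cs"))  # hypothetical file name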

Joe Saunders (12:09)

My point isn’t that it’s one-sided. My point is that it does accelerate on the offensive side, which only obliges defenders to find equivalent asymmetric responses. And certainly, well-motivated defenders of various types will invest in those things. And certainly, the software manufacturers or producers should be looking at all of those things; they need those incentives. And if there is a sufficient threat against a target device or system or software, then I do think they’ll be motivated. But I do think defense is lagging. It will maybe fall a little behind the offensive side, even though defenders can leverage these tools and speed up too.

Joe Slowik (12:54)

No, I completely agree. We’re on the same page.

Paul Ducklin (12:56)

Finding vulnerabilities is one thing, but turning them into reliable working exploits is very often another, especially in operating systems like Windows or macOS or Linux, where there are lots of protections in place. How does this affect embedded systems, OT or ICS systems, that may still have undiscovered low-hanging fruit that’s easy for unsophisticated AI-using attackers to find, but hard to patch because the devices are literally, physically hard to patch?

Joe Slowik (13:36)

You know, we like to hammer on the OT, the industrial control side of things. And my first response to that is that sane deployments mean you’re digging through multiple layers of network identification, defense, and similar before you even hit that device. Having said that, there are non-sane deployments, and we’ve seen threat actors take advantage of “I have my HMI sitting right there on the internet and you can find it on Shodan.”

Paul Ducklin (14:02)

I shouldn’t laugh, but I hear you.

Joe Slowik (14:05)

People need to eat their cyber vegetables. We talk about all these sorts of things as far as new and emerging attack types and similar, and it’s like, why the hell is this even internet-accessible? So I have a very amazing resource, I should start a company around it and get my Series A, in order to solve the security issue. It’s a website called getyourshitofftheinternet.com. That’s all it is: get your stuff off the internet. Having devices immediately and unfiltered exposed to the broader internet is honestly the biggest issue with a lot of OT equipment, as opposed to having lots of forever-days, things you can’t patch but where sane deployment would mean an attacker could never reach them without compromising or migrating through a bunch of other items first. So I think we have solutions to those issues in the critical infrastructure space that we’re just not applying as rigorously as we should. What really bothers me, and you were hinting at this, is your smart TV, my internet doorbell, and similar items.

Consumer devices are really what worry me, because, you know, my mom doesn’t know how to patch these things. She doesn’t even know that she has to. And while they might not be immediately network-accessible in terms of being directly internet-exposed, they have other items that allow adversaries to discover them, or potentially they are directly exposed in a small business environment. And that leads to the creation of things like proxy networks, operational relay box networks, and similar, which we’ve seen weaponized by advanced persistent threats and by e-crime groups.
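
The “get your stuff off the internet” point lends itself to a concrete check, and it anticipates the weekly self-search Paul raises below. Here is a minimal sketch, assuming the official shodan Python library (pip install shodan), an API plan that permits the net: filter, and a placeholder key and netblock:

    # Sketch: a periodic "what of mine is internet-exposed?" check.
    import shodan

    api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

    try:
        # 203.0.113.0/24 is a documentation netblock; substitute your own ranges.
        results = api.search("net:203.0.113.0/24")
        for match in results["matches"]:
            # Anything listed here is visible to the whole internet,
            # and therefore to attackers running the same query.
            print(match["ip_str"], match["port"], match.get("product", "unknown"))
        print(f"{results['total']} exposed services found")
    except shodan.APIError as err:
        print(f"Shodan query failed: {err}")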

Paul Ducklin (15:36)

You’re thinking of malware like Mirai and its many successors. Brian Krebs has just written about an even bigger network that’s out there right now. And in many of those cases, it’s just sitting on things like routers that people have at home, or maybe even in a small business where that router was actually provided by the ISP. It can’t be changed. It kind of never gets considered again. That’s not an AI problem.

Joe Slowik (16:02)

Exactly.

Well, there is an AI problem related to this, because I’m thinking about the industrial space, or even the enterprise network space: a Schneider Electric, a Rockwell Automation, a General Electric from an OT asset vendor perspective, or an investor-owned power company, or similar. They have the resources to do code reviews. They could leverage AI defensively and similar to try to keep pace with adversaries. But your local print shop or Indian restaurant around the corner, with a Linksys, Asus, or whatever router from seven years ago that they plugged in once? It runs, and they don’t touch it anymore because it works.

They’re not using these sorts of tools, but threat actors are certainly probing these sorts of devices, and doing so in quite a rapid fashion, to weaponize them. What’s interesting about this, and this is actually the research I’m presenting at the RSA Conference in a couple of weeks, is that there is a disconnect between the entities that get compromised and roped into these networks, essentially a botnet at the end of the day, albeit a little more complex in scope, that then allows for proxying traffic for really worrying sorts of intrusions, thinking about Volt Typhoon or Sandworm or similar sorts of activity. The weak link in that chain is oftentimes the corner shop. It’s a really interesting disconnect we’re seeing: all of these intermediaries that have no value whatsoever to any adversary, other than ankle-biting ransomware actors or something, being leveraged by pretty concerning state-sponsored threats to then impact other organizations that are pretty important, like water utilities or power companies.

And figuring that out, with the AI question accelerating these items, when the people in the middle have no inclination, no awareness whatsoever that they’re even involved, is really curious.

Joe Saunders (18:01)

And along those lines, those entities are not necessarily well-funded and exploring AI and using AI for defense. They’re really vulnerable. It opens the door, then, for third parties to help these organizations. Certainly, since you brought up water, there have been efforts around how partnerships, government, or private organizations can help less-funded local utilities with their cyber defense tools and the like. So I do agree that they can be exploited. And I would also recognize that they don’t necessarily spend their time thinking about cyber defense, let alone knowing that they are vulnerable in this way, as they’re focused on other things to execute their business or their mission in general.

Joe Slowik (18:47)

Yeah, exactly.

Paul Ducklin (18:49)

I think, Joseph, you touched on that when you were talking about whether it’s a corner shop, or whether it’s just somebody who needs an internet connection, perhaps because they are a utility company struggling to get some particular pump room or something online, and they think, well, I’ll just go to the local ISP. When you’re provided with, say, a router that came out 12 years ago that you just plug in and use, it feels like the plug on the end of an electricity cord that you push into the power socket. So you’re inclined to trust it, because it doesn’t feel like part of your IT or your OT network. When it comes to something like a security operations center, do you think that AI can really help a SOC in finding and triaging those potential problems? Or is it enough just to say, what I’ll do is run a Shodan search of my own network once a week?

Joe Slowik (19:46)

I think there’s room for all of those solutions simultaneously. I will be very blunt and very honest in saying: are we going to completely rip out existing SOC processes and replace them with an AI workflow? No, we can’t do that right now, especially in environments with a high-uptime, high-availability requirement, because when a decision gets made incorrectly, it’s going to have significant repercussions. Having said that, people make mistakes too. It’s not just AI that is fallible; people are very fallible as well. Striving for a process that blends each world is probably the way forward for the next several years at least, until AI puts us all out of a job. And thinking about a modern network environment: first off, the number of organizations that could actually even have a SOC to begin with, we’re talking about 5%, probably even fewer, of organizations and network owners in the world.

And so, for that global elite that have had that capability over the last 15, 20 years, as we’ve seen the internet and internet-connected activity expand precipitously, there’s a lot to look at right now. Thinking about where AI steps in as a function, as a tool, we’re really talking about its ability to accelerate and improve upon existing human workflows, by processing larger amounts of data more quickly, to allow a human in the loop to make a decision more quickly. Even from a Dataminr perspective, I do a lot of AI stuff, but at the end of the day it’s about generating some sort of alert that goes in front of a person, who then figures out: what do I do with this? It is possible, if you are so inclined, to start automating even the further right-hand portion of that workflow: I get some sort of notification or alert, and I’m going to automate my response to that as well. I don’t know too many people who are comfortable with that yet.

We’ve been dealing with SIEMs for over a decade at this point, and SOAR solutions for almost as long, although those have certainly matured more significantly over the last five years. But even then, people are still very hesitant to turn over automated remediation, automated blocking, and similar, other than very trivial use cases, to an algorithm or something similar. There’s definitely lots of space, lots of good space, for artificial intelligence, or even more traditional approaches like machine learning, to highlight things that a human needs to pay attention to and to enhance human decision-making. But for complete autonomous action, we’re still not there yet, and I don’t think a lot of people are comfortable with that yet. To take a sidetrack for a second: someone posted on social media a couple of days ago that they set up an OpenClaw workflow and, for whatever reason, it decided to rm -rf their entire computer.

People are still worried that if the proverbial stuff hits the fan, how do I stop this, if it’s part of a workflow from which I don’t have an easy escape valve? We’ll eventually get there, but when remains an open question.
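
A minimal sketch of the blended, human-in-the-loop workflow Joe describes might look like the following; the Alert type, the scoring model, and the thresholds are all illustrative stand-ins for whatever classifier and SOAR tooling an organization actually runs:

    # Sketch: AI scores alerts, trivial ones are automated, and
    # everything consequential stays with a human.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        source: str
        summary: str
        score: float  # model-assigned severity, 0.0 to 1.0

    def triage(alert: Alert) -> str:
        if alert.score < 0.2:
            return "auto-close"         # trivial case: safe to automate
        if alert.score < 0.7:
            return "queue-for-analyst"  # AI prioritizes, a human decides
        return "page-on-call"           # high severity: a human acts now

    # Note what is deliberately absent: no automated blocking or
    # remediation. The model accelerates the left-hand side of the
    # workflow; the right-hand side (the response) stays with a person.
    for a in [Alert("ids", "port scan from known scanner", 0.1),
              Alert("edr", "unsigned binary spawned a shell", 0.8)]:
        print(f"{a.summary} -> {triage(a)}")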

Paul Ducklin (22:50)

You don’t have to throw out the baby, the bath water, the bathroom, the bath, and everything else when you adopt AI. You could, as you say, use it more in its traditional machine-learning guise of helping you classify threats into low, medium, or high, or into “possible malware” versus “probably not malware,” just to free up your human experts so they don’t have to do those by hand. That would really help, even if the AI is somewhat inaccurate.

You’re not entirely relying on its output. You’re just using it to free yourself up in the same way that electronic digital calculators freed us from having to go to log tables every time we wanted to multiply two numbers together.

Joe Slowik (23:35)

You said log tables. I’m guessing that 30 if not 60 percent of the audience listening to this, if they’re under the age of 40, have no idea what the heck that even means right now.

Paul Ducklin (23:45)

I’m thinking it’s probably more like 98%. Go look them up, they’re really fun. And with seven-digit log tables, you’re getting better than one-part-in-a-million accuracy. If you think back to the days when pocket calculators were coming in, people were understandably worried about them, because they figured that if people who didn’t really understand things like orders of magnitude just relied on them, they’d get the right answer but with the decimal point seven places off. There’s a similar line of reasoning you can make with AI. If you blindly trust everything that it advises you, then you’re not really using it wisely, because you’re not using it to free yourself up to do smart things faster.

Joe Slowik (24:33)

Right, it’s like expecting the tool to do the work for you. The analogy I like to use goes back 50 years or so: the hand drill, where someone had to actually crank the thing to drive a screw or a drill bit. More efficient than doing so completely by hand, but it still required an awful lot of work. Now we have cordless drills, very, very good ones, actually. I have a really good one sitting on the other side of this monitor right now.

It doesn’t mean that we no longer have carpenters; it just means that we have carpenters and similar trades that work far more efficiently. They do strip more screws than they maybe did back in the day, because we’re talking about working at a higher speed and higher power than what was done by hand 30, 40, 50 years ago. But no one would go back to using a hand drill anymore, because we’ve seen and acknowledged how much more effective my Makita, DeWalt, or whatever it is you’re using makes my ability to drill a hole or set a screw than how things worked previously. So yes, there are costs associated with it, but the benefits, as long as we’re using this as a tool to guide known processes, so far outweigh those costs that we wouldn’t even dream of going back.

Paul Ducklin (25:49)

As long as those tools don’t replace the old motto, measure twice, cut once, they actually put you in a better place, don’t they? When we come back to cybersecurity, to cyberattack and cyber defense: if you’re using AI constructively, but not carelessly, as a defender, you should do better than an attacker who’s just relying on AI because they don’t have a better way to do it.

Joe Slowik (25:58)

Agreed.

Paul Ducklin (26:17)

You can use the AI to speed yourself up, but you can still bring that human intelligence, that human intent, that human oversight to the final decisions that you make. So in that sense, maybe it does benefit defenders more than attackers.

Joe Slowik (26:33)

Once we start integrating this more tightly into workflows and understanding what those value propositions are, I completely agree. We’re still in the early days. The models are still pretty immature. We’re talking about a technology that has only existed in broad public availability for a couple of years now, within a field that has only existed for maybe 20 years in any broad sense of the term, for organizations that in some cases have been operating for hundreds of years.

So we’re still very early on in the maturation process. But as we start understanding more and more of this, like you were saying, the opportunities start presenting themselves quite quickly to flip the script from “the attackers have all the advantage” to “no, you’re operating in my network; I control how this network operates and what’s available to you. So good luck, now that I can accelerate my own decision-making workflows to match or exceed what you are doing or trying to achieve.”

Paul Ducklin (27:24)

Now, Joe Saunders, you’ve made a point in previous podcasts about the kind of people who might consider using AI to cut corners. An example you’ve given is: hey, let’s take some code that’s under an open source license I don’t like, and let’s get AI to rewrite it so that it looks different enough that I can pretend it’s my own code, and therefore also different enough that when there’s a vulnerability alert, that particular source you’ve used in your embedded devices doesn’t come up on the radar as needing attention. Is that something we’re seeing already?

Joe Saunders (28:01)

I do think the point overall here is that the fundamental methodology, process, and approach, how you incorporate AI tools into your processes, will never change. You need to incorporate the tools, in this case maybe the GenAI tools, to help your process, but your process has to be well thought through, and you do need to exercise good discipline. Can that problem occur? Yes.

I do believe some people have rewritten some libraries already.

Paul Ducklin (28:31)

And do you think they’ve done that just because they thought they’d make it better, or were there maybe slightly more dubious motives, like, hey, I don’t like that license, I want this to be “mine,” air quotes?

Joe Saunders (28:42)

I tend to side with people having positive intent, but it could still result in some of that. We do see organizations trying to enforce development practices using AI, and noting deviations from standards that may not be caught by a human manager reviewing code later, who might not be looking at it as carefully.

Paul Ducklin (28:45)

Yeah, fair enough. Is that because they go, “this was generated automatically, so it won’t have human-type errors,” or do you think it’s just sufficiently different that it doesn’t tickle the right nerve centers?

Joe Saunders (29:18)

Even managers, when they have a stockpile of merge requests to review, sometimes cut corners. That’s my suspicion. There are always these build-ups of backlog; there are all these places where human practice may not be perfect.

Paul Ducklin (29:36)

That’s sort of like saying, hey, I’ve got the power screwdriver, so I’m not going to drill a pilot hole, I’m just going to cheat and try to jam the thing in, and most of the time it’ll be okay.

Joe Saunders (29:46)

I might have it in my heuristic as a manager that Joe Slowik is a good coder. Or maybe I assume he’s a bad coder; he admitted as much earlier. Either way, I may have some biases as a manager. And my point is, I think there’s an opportunity to enhance methodology and approach by using AI to reinforce what the standards are. And that can ultimately lead to higher-quality code.

Of course, there’s downsides to it as well, but I tend to think that there will be motivated managers, teams, developers who find productivity gain, who also have to realize they need to take a step back, revisit their processes, and enhance them in light of new tools.

Paul Ducklin (30:32)

So, gentlemen, I’m conscious of time, so I’d like to put a question to either or both of you to finish up. And that is: if there’s one misconception that people have about AI and vulnerabilities, stroke exploits, stroke cyber insecurity, one thing that they need to get out of their minds, what would that be, and what should they be thinking about instead?

Joe Slowik (30:59)

AI is not a magic bullet. It is a tool, a function that can enhance, improve, and accelerate human-led processes. But we’re not at a point where artificial intelligence is stepping in, spotting zero-days, and writing exploits for them, except maybe in some very trivial cases. At the end of the day, this is still a human-driven operation. And I think we need to fix the misconception that AI has been so game-changing as to make these processes completely autonomous.

Joe Saunders (31:28)

Joe Slowik took the words right out of my mouth. AI is not a silver bullet or a magic wand of any type. It is a very, very powerful tool. And I do think that the world has changed because of it. I do think teams are more productive: offensive teams are more productive, and I think defensive teams can be more productive too. And I would encourage everybody not to wait, but to explore and to push the envelope a bit.

Paul Ducklin (31:34)

Exactly.

Joe Saunders (31:56)

Yes, you may make some mistakes. Yes, something might slip through, but the productivity gains and the other benefits that result from embracing the technology, I think, will far outweigh any of the negative consequences.

Paul Ducklin (32:11)

And Joe, would you agree with what we spoke about several episodes ago with Leslie Grandy, when she said: even if you decide AI is not for you, and you don’t need or want to use it in your business, you should see what results your adversaries are going to get if they try it, because they surely will. It’s a question of being prepared, at the very least, provided, as you say, you don’t treat it as a magic wand.

Joe Saunders (32:40)

The behavioral psychology of ensuring that you understand, with full appreciation, what all the factors are, I think, is very important. I think that’s a lesson from Leslie Grandy that’s very powerful. You should know what you might be stepping away from, and have a good perspective on it, so you can make your own decisions about what’s best for your particular situation.

Paul Ducklin (33:03)

Joe, I think that’s a great way to end. Joe and Joseph, thank you so much for your passion and insight. I rather wish we could just keep going for another half an hour, because I feel we’ve only just got started. So thank you so much for your time and for your expertise. Thanks to everybody who tuned in and listened. If you find this podcast insightful, please subscribe so you know when each new episode drops.

Please like and share us on social media as well. We really love it when you do that. Once again, thanks to everybody who tuned in and listened. Remember, stay ahead of the threat. See you next time.
