Creative Resilience in Cybersecurity & AI: A Conversation with Joe Saunders and Leslie Grandy

July 10, 2025
 

What happens when you blend creative thinking with technical strategy? In this episode, Joe Saunders and Leslie Grandy uncover how tools like inversion thinking, “Premeditation of Evils,” and generative AI can reshape the future of cybersecurity. 

They discuss how to move beyond expert bias, prepare for unlikely threats, and why resilient cybersecurity starts with imagination. A must-listen for leaders navigating AI, risk, and the evolving digital threat landscape.

Speakers: 

Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.

His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.

LinkedIn 


Joe Saunders:
Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.

LinkedIn

Guest Speaker - Leslie Grandy, Lead Executive in Residence at the University of Washington

Leslie Grandy is a global, first-to-market product executive, startup advisor, and author with over 25 years of experience innovating and delivering game-changing products in publicly traded Fortune 500 companies, including T-Mobile, Apple, Best Buy, and Amazon. She has launched multiple first-to-market products, such as the first Android phone and the earliest digital media subscription service from major content brands like MLB, NASCAR, and CNN, for which she co-authored a patent acquired by Intel.

Leslie advises startups and consults with large publicly traded companies, including Oracle, Starbucks, and Red Robin Gourmet Burgers, to empower individuals at all levels to identify innovative solutions and think expansively through her company, The Product Guild.

She is the Lead Executive in Residence for the Product Management Leadership Accelerator, a program she co-created with the University of Washington Foster School of Business Executive Education team. Her book, Creative Velocity: Propelling Breakthrough Ideas in the Age of Generative AI, was released by Wiley in May 2025.

LinkedIn

 

Key topics discussed: 

  • The role of creative thinking in modern cybersecurity
  • Inversion thinking as a planning tool for resilience
  • How attackers—and defenders—are using generative AI
  • Why AI can’t replace human judgment in risk strategy
  • The critical need for Secure by Design software development

Episode Transcript

Exploited: The Cyber Truth, a podcast by RunSafe Security.

[Paul] Welcome back to Exploited: The Cyber Truth. I am Paul Ducklin, joined today as usual by Joe Saunders, CEO and Founder of RunSafe Security.

[Paul] Hello, Joe.

[Joe] Hey, Paul. Great to be back.

[Paul] And we have a very special guest today, and that is Leslie Grandy, whose job title is Lead Executive in Residence at the University of Washington.

[Paul] Hello, Leslie.

[Leslie] Hi there. Nice to meet you, Paul. Thanks for having me on, Joe.

[Paul] Leslie, maybe the best place for us to start is tell us how you came to be so passionate about cybersecurity.

[Leslie] Over the course of my career in technology, I’ve worked with a lot of cybersecurity teams, and I think they are passionate about what they do, but they’re also sometimes a little subject to expert-think or authority-type biases.

[Paul] I always laugh and then always say to Joe, I shouldn’t laugh. I shouldn’t laugh. I really hear you.

[Leslie] And they often are so narrow in their focus, because of that expert bias or that authority bias, that they’re not willing to look at the whole landscape of how things could negatively impact what they’ve conceived or the security measures they believe are foolproof.

[Leslie] And so when something unexpected does happen, they’re less prepared. My passion has been the idea that creative thinking is everyone’s job, whether you’re a financial person or a risk management person. It isn’t just the job of a designer or a marketer.

[Leslie] To be a creative thinker means you have to be open to diverse perspectives and challenging your own way of thinking, and that’s just not necessarily a common trait in the cybersecurity teams. And so when I’ve introduced this, they’ve actually recognized that this is an opportunity to expand their knowledge and reach as opposed to challenge their knowledge and reach.

[Paul] Absolutely. Because I think a lot of people get the idea that when you say creatives, you think about, oh, well, they’re good at art, and they can write poetry, not just prose, and they’re great at acting, and they make fantastic short videos.

[Paul] But that’s the result of creative thinking. It’s not the thinking process itself, and you can think creatively and should be doing so in any field from digging ditches to building secure systems.

[Leslie] Absolutely. And I think people discount the notion that a creative mindset is a mindset anyone in the enterprise can have. And by that, I mean, they’re not just curious and flexible and open-minded. They’re willing to actively engage in discerning what’s valuable and meaningful.

[Leslie] They’re also, I think, a little bit more likely to be balanced in their emotions, less defensive or less reactive. And in that mindset, you’re much more open to a perspective that isn’t yours.

[Paul] Now, Leslie, you like to talk about various concepts such as the premeditation of evils, which I believe is a Stoic philosophy thing. But it’s not really good and evil in the modern Christian sense, is it? It’s really just being rational, confronting everything that could happen, good and bad, and not ignoring the bad stuff because it is inconvenient to you right now.

[Leslie] Absolutely. I think there’s also another dimension of it, which is ensuring success by imagining failure. That’s the way I look at it. I think that you can actually make more robust and resilient plans if you imagine the things that could torpedo your goals or what assumptions you might have made that could be faulty.

[Leslie] I’ll just give a quick personal story. Before I got into technology, I was in the film industry. Right out of college, I moved to Los Angeles to work in film, and my parents thought it was a crummy idea. They couldn’t think of a worse idea than for me to move to Hollywood to do this, and to do it with literally no connections.

[Leslie] I had no knowledge, no connections. But my whole idea was that the way I would fail and prove them right would be not to take work and not to meet people. Because the whole industry was built on my capacity to get known, to get referred to jobs, and to get experience so that people could make that referral. And so I worked, taking jobs that were really crummy.

[Leslie] And if I had been discerning in my job search, I probably wouldn’t have ended up making it into the Directors Guild of America four hundred days later. And I think by thinking through what it took to succeed, I thought about what failure would look like. Right? It becomes motivational when you know those failure cases and have a plan to deal with them. Because having that plan gives you more confidence you’re gonna succeed.

[Paul] So in the sort of militaristic terminology that cybersecurity embraces these days, it’s sort of what people undergoing military training might hear from their sergeant. Train hard, fight easy. You prepare for the worst in the hope that you will achieve the best.

[Leslie] My favorite story on inversion thinking is from Charlie Munger, who was the vice chairman of Berkshire Hathaway alongside Warren Buffett. And he tells the story of how he was a meteorologist in World War Two, though he wasn’t really trained as a meteorologist. And he didn’t think he could learn everything fast enough to be a good meteorologist.

[Leslie] So he actually went and interviewed a bunch of fighter pilots and said, what conditions will kill you? What are the things you fear the most? Because that’s what I wanna pay attention to, and that’s what I wanna help you avoid.

[Leslie] And by thinking through the worst, he was actually able to plan for their success, because he knew what to look for and he knew how to be prepared if it happened. And I think that military readiness state really does apply very well to cybersecurity.

[Paul] Yes. It’s a sort of sense that if you fail to plan, you may as well be planning to fail.

[Leslie] Absolutely.

[Paul] I guess it makes you, in the end, much more resilient.

[Paul] So, Joe, maybe I can bring you in here by asking what does resilience mean in cybersecurity today? Because when I think back to my very, very first days as a student and in my first job, the IBM PC had just come out. It was the nineteen eighties.

[Paul] And a resilient, I’m making big air quotes with my fingers here, program was one that maybe crashed and took all your data with it no more than once or twice a week. But it has a very, very different sense today, doesn’t it?

[Joe] Yeah. I think it does. And I think a lot of people think about resilience, in the cyber sense or even in a broader sense, as the ability to react, respond, and recover. But what I like about inversion thinking is that, through the inversion process, it gives you the tools to anticipate what you might have to react to in the first place.

[Joe] And I just wanna draw upon a couple of experiences inside RunSafe Security where we apply those principles. And Leslie knows my co-founder Doug Britton from past work together. And we’ve applied that kind of thinking in a couple of different ways.

[Joe] And one of those ways is to think like the attacker. We hire a lot of people who would be really good security researchers, and we ask them the question: if you were the attacker, what would stop you? And then let’s think about that backwards and think about what kinds of cyber defenses would disrupt you.

[Joe] And then we also apply that same kind of thinking in our own product development in a different way, which was simply to say, if we were put out of business a year from now, what would be the cause? And we worked backwards from that question, and that led to a dramatic shift in our product strategy going back about four or five years ago.

[Joe] And what we thought was, in order for our company to be resilient, in order for us to build systems that get adopted by the market, by industry, by manufacturers producing embedded software, we needed to make sure that our technology worked in conjunction with the product manufacturers, not despite them.

[Joe] And we wanted their cooperation. And so we ended up, in effect, changing our whole strategy in terms of where we integrate our technology and how we develop our product. And so I just wanted to highlight two examples of thinking through what ultimately could be a really negative consequence and show how we adapted to it and how that ties to broader resilience because it’s that anticipation of problems that I think helps you think about how to be resilient.

[Leslie] I love that story. I think more founders should do that, Joe. What could create a situation where we are either no longer relevant or no longer capable of running this business? And so even the failure case is something you’re not afraid to discuss.

[Leslie] That, I think, is another thing: a lot of founders try to avoid the discussion of failure almost like it’s a jinx. And I think if you embrace that discussion, it really does create greater strength and resilience in your plan.

[Joe] And I recently had one-on-one sessions with everybody in RunSafe Security and asked them different questions about the state of RunSafe and where we’re going and things like that. And a couple of our engineers brought up that exact exercise as one of the best experiences they had to help, you know, with product development and really anticipate things that could change, and they saw an inflection point in the company as a result of that thinking.

[Paul] Now, Leslie, something that you might consider the elephant in the room in cybersecurity these days, even though the cybersecurity industry has been using it in various ways for years, so it’s not really new, but from a marketing point of view it feels new, is AI, artificial intelligence, and in particular, generative AI.

[Paul] Now, there seems to be a bit of a tension here, at least in the media.

[Paul] Sometimes even in the same publication, you’ll see articles where one is proclaiming that AI is the arch villain of the piece, because the cyber criminals and state-sponsored actors are just going to use it to walk all over us, so we should just pretend we can regulate it away and not go there.

[Paul] And you’ll find another article saying, we don’t need humans in cybersecurity anymore. That is so nineteen nineties. You just throw it in this magic differential calculus machine, and all the answers come out automatically.

[Paul] How do you navigate your way between those two extremes if you’re trying to either understand cybersecurity for your business or build a business that provides cybersecurity products and services?

[Leslie] It’s a great question because I think it’s true not just in cybersecurity. I do think it’s true in all sorts of industries, health care, finance, where there’s a lot of risk if people are bringing AI in and AI isn’t delivering the value.

[Paul] Yes.

[Leslie] The question of what the human’s role is in that debate is actually the answer: the human must be in that debate to discern what’s meaningful and valuable, and to decide what’s worth putting effort into or investing in.

[Leslie] A really great example of this is you can see that sometimes people wanna put more friction in the way of a consumer accessing something in the cloud under the guise of security.

[Leslie] But what you end up seeing is the consumer doing workarounds and behaviors that actually make them share logins and passwords and do things that actually undermine security because you’ve created such an airtight seal around the product they wanna use.

[Paul] Yes. Nothing quite inspires shadow IT like pettifogging rules and regulations that somebody or some computer program thought was a good idea at some point

[Leslie] Exactly.

[Paul] Without thinking through what the consequences of doing that would be or whether it actually had any real benefit at all.

[Leslie] Exactly. And there’s that tension between what our customers expect from us in terms of a secure and private interaction, and what they expect from an easy-to-use, customer-friendly user experience when they engage with it.

[Leslie] How do I make something easy to use and, at the same time, put enough security in that people aren’t fearful they’re at risk? And that paradoxical thinking is another one of those contradictory thinking methods that really can help a human navigate the tensions of the business.

[Leslie] And so when you’re not really sure yourself that you’ve got great answers, generative AI is a great partner to prompt for a conversation about this. Imagine a scenario where something is both secure and user-friendly, and instead of having the things that drive shadow IT or password sharing, we create a user environment where people aren’t inclined to do that, without risking our security.

[Leslie] That’s a complicated question. And the patterns around that are something that AI is great at assessing.

[Leslie] Patterns of consumer behavior, patterns that they see in the cybersecurity world where people are hacked and what typically can happen. And then the human has to discern what actually makes sense, both from a brand and value perspective.

[Leslie] Is that good for our customers? Will they like us if we do that? But also from a technology dependability and reliability perspective. So the consumer of the data from generative AI, the human in that mix, has to really balance cost and benefit, risk and reward, but also values, which are really hard for AI to know. How does AI know what’s in brand and what’s not in brand easily?

[Leslie] And whether or not your customers will run away if all of a sudden you make a left turn to make something super secure, and now they can’t use it without five minutes of multifactor authentication conversations.

[Leslie] The idea is it’s both of those things, and it’s up to the human to make sure that they’re balanced. When I wrote the book, I put my exercises into AI because I figured people who read the book would do that. AI got many of them wrong because the AI bias is speed to answer, not evaluating every data point. And so I had to say, well, why did you ignore these three facts in the puzzle?

[Leslie] All the tools I put this in ran it again and decided other facts weren’t relevant. And so unless I was leaning in to ask why it made that choice, I wouldn’t understand what the flaw in the AI’s answer would be for my particular business. And so getting engaged and leaning into the output and really challenging the output to understand it, I think, makes for a much more robust plan.

[Paul] Joe, this idea of having a robust plan is something that I know you feel strongly about. And if you think about this praemeditatio malorum, thinking about things that could go wrong beforehand, you’re very, very keen on that, aren’t you?

[Paul] Because the traditional response in the cybersecurity industry, notably in the embedded software market, where the shiny product is the thing that developers and engineers want to create and where fixing things afterwards can be quite hard. You’re very much against the patch-and-pray approach.

[Paul] Let’s wait till it all goes wrong and then try and fix it. And you’re very keen on something which is more ethically oriented, more community oriented, more socially oriented, which goes by the general term Secure by Design.

[Joe] Exactly right. I think the notion that we should just keep running fast forever is an exhausting thought process in the first place and not a winning strategy.

[Paul] Yeah. Yep.

[Joe] And so one of the foundation elements that we thought about in forming RunSafe is to try to have an asymmetric shift in cyber defense in a way that eliminates an entire class of vulnerabilities and helps you remain resilient, protects your systems, protects your devices even when a patch is not available.

[Joe] And if you can do that for the majority of the vulnerabilities and the majority of the exploits, then you certainly save a lot of time, a lot of heartache, a lot of pain. And that in itself takes some extensive thought about how to do it well.

[Joe] When I think about generative AI and how that can enhance cyber defense, I think of it across maybe three dimensions. One: how can it be used to help the customer experience in support? The second: how can it enhance your existing products themselves?

[Joe] And then a third one is how perhaps you can create additional value add through new solutions, leveraging generative AI for cyber defense. And I think in all of those areas, they’re actually quite different questions. What I don’t like is unbounded questions. How are we gonna use generative AI? It’s more like, how would we use generative AI to enhance the customer experience? Or how would we use generative AI to eliminate an entire class of vulnerabilities?

[Paul] Absolutely.

[Joe] It’s a little more precise and focused. And so that’s how I think about tying it all together, all with the philosophical premise that RunSafe drives an economic shift back to the defenders, so they’re able to protect their software without working harder and harder and harder, or continuing to run on that treadmill.

[Leslie] One of the things I would add to that, Joe, is I think that sometimes people get so wrapped up in engineering a really tight prompt to ask that question that they actually undermine AI’s capacity to look at a broad playing field of opportunities for discovery or exploration.

[Leslie] These structured frameworks, whether it’s paradoxical thinking, inversion thinking, or opposite thinking, intentionally expect you to work at the abstraction layer before you go into the detail layer, to make sure you’re gathering as many views of the situation as possible, relevant or not, whether from other domains or other circumstances. So there’s an opportunity to actually expand your field of vision if you don’t jump right to the prompt engineering approach, and to open the door to maybe looking in other places for solutions.

[Joe] Yeah. My example isn’t meant to be too specific too soon. I think about it in the same way, and I know you have a film background, Leslie. So if I may: you need to know which genre you’re writing a script for. Right?

[Joe] And if you do and you stick to kind of the rules of that genre, you can exercise a lot of creativity in telling a compelling story. And so my point isn’t to limit the prompt itself, but at least define what you’re trying to do in a sense and have that well understood so that you can apply the creative juices towards the solution itself.

[Leslie] I hear that. I think one of my concerns, though, is I see that a lot of people want me to give them the prompt to use.

[Leslie] Right?

[Paul] Do not take up any more time. I want the answer. Nah. It’s not gonna work, is it?

[Leslie] Right. And I think that’s more what I’m responding to. I don’t want people to hear you, not that I thought that was your intention, but I don’t want people to hear you and say, oh, he just gave me the question I should use, and so now I’m going to go put that in my roster of questions that I can flip through to say, alright.

[Leslie] This is the one I’m supposed to pull. I think what we really wanna do is expand people’s willingness to look at a broader perspective on the situation initially. I was more responding to how I see the audience just completely gobbling up these help sheets that you see all over LinkedIn.

[Paul] Seven ways to revolutionize your life with AI.

[Leslie] Right. Here’s how to do your LinkedIn post. Here’s the prompt for all these things.

[Paul] Be the same as everyone else.

[Leslie] Right. To me, I think in cybersecurity even more than so many other spaces, you gotta start with an open playing field. Right? You gotta start wider because that’s where your adversaries are. 

[Joe] Indeed.

[Leslie] And so unless you’re willing to broaden that look and not just start with a specific engineered prompt, you’re gonna end up with a very narrow answer that is sort of a self-fulfilling prophecy. It gives you the answer to your question, but it doesn’t provide you additional answers that may also be relevant.

[Paul] Joe, that metaphor of wolves in sheep’s clothing applies very strongly to the modern software supply chain, doesn’t it?

[Paul] Where development teams tend to be quite small and they go, oh, I need to find the edges in an image. So I’m not going to write edge detection. I’m just going to go and find a project. Oh, look.

[Paul] There’s one on GitHub. I’ll just grab it. But you really may have very little idea about who are actually the personalities behind it. There may be people who no longer care about the product, or they could actually be impostors who’ve joined the project, got trust, and are deliberately using the project to inject malevolence. So there are a lot of balls to juggle.

[Joe] There are. And I think the obvious example, based on the description you gave, is that of SolarWinds, when, you know, seemingly legitimate change requests happened in the software development process.

[Paul] I laughed last time and then I regretted it.

[Paul] I’m trying to be very somber this time. Apparently legitimate changes in the source code. Oh, look. One of our guys did it. Must be legit. Approve, build, ship, game over.

[Joe] Absolutely. And so those legitimate-looking changes are very scary, because it’s very hard to figure out whether one of them is the source of a vulnerability or who introduced it in the first place.

[Joe] If you look broader than just that example, there are ways to look at your software supply chain and at all the software libraries and packages and components that go into the product you ship to your customer, and you find that a lot of those components come from third parties and from open source software. So I do think there is a level of screening and a level of evaluation and a level of assessment that allows you to look for potential areas of risk in an otherwise seemingly clean supply chain, and we do need to be pretty diligent.

[Joe] That’s why I advocate for a Software Bill of Materials being generated, doing that across your supply chain, asking your suppliers to share a complete and correct Software Bill of Materials. If you have that, then you have a chance to really evaluate all the underlying components as much as the end product.
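To make that idea concrete: a complete SBOM is just structured data, so the first pass of that screening can be automated. Below is a minimal sketch, assuming a supplier delivers a CycloneDX-format JSON SBOM; the file name and the watchlist are hypothetical, and this illustrates the general idea rather than RunSafe’s own tooling.

```python
import json

# Hypothetical watchlist of component names that warrant extra review.
WATCHLIST = {"openssl", "log4j-core", "xz"}

def triage_sbom(path: str) -> None:
    """List every component in a CycloneDX JSON SBOM and flag risky entries."""
    with open(path) as f:
        bom = json.load(f)

    # CycloneDX JSON keeps the inventory in a top-level "components" array.
    for comp in bom.get("components", []):
        name = comp.get("name", "<unnamed>")
        version = comp.get("version")
        flags = []
        if version is None:
            # Without a version, the component cannot be matched against CVE data.
            flags.append("missing version")
        if name.lower() in WATCHLIST:
            flags.append("on watchlist")
        marker = "  <-- " + "; ".join(flags) if flags else ""
        print(f"{name} {version or '?'}{marker}")

if __name__ == "__main__":
    triage_sbom("supplier-sbom.json")  # hypothetical SBOM file from a supplier
```

In practice the flagged names would feed a vulnerability lookup rather than a hard-coded set, but even this much shows why Joe asks for complete and correct SBOMs: the check is only as good as the inventory behind it.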

[Paul] And automated tools can help enormously there, can’t they? They can’t make the final decisions on which code you really should use and which project is going to suit your software culture or your company or your customers best.

[Paul] But it can help you find out the devil that’s lurking among the details as it were.

[Joe] Exactly right. And I think when it comes to risk, you do need to prioritize and look at all the angles you can. But leveraging generative AI in that process can certainly help. As Leslie said upfront, I don’t think it necessarily replaces the ultimate human decision. In fact, I believe that’s true when you’re doing vulnerability fixes, when you’re doing bug fixes on code.

[Joe] You can’t simply let AI take over, especially in embedded systems. You would never realize the safety that we’ve come to expect in critical software deployed in critical infrastructure.

[Paul] So, Leslie, how do you avoid going down that rabbit hole where you just end up finding AI so useful that you start letting it trample on your own sensitivities, your own sense of ethics, your own sense of what’s right for your company, for your customers, for your community, for your society even?

[Paul] How do you keep the balance?

[Leslie] To me, that’s not an AI problem. That’s a human problem.

[Paul] Yep.

[Leslie] In that scenario, one of the things that I’ve been preaching is that humans have to understand these frameworks and how they work, and have to be an active part in that partnership, primarily, like I say, to separate out the gunk, the noise, the goop that comes back in some of those answers.

[Leslie] You’ll get a lot of content. It won’t always be accurate, and it won’t always be on brand or appropriate for the values of the company.

[Paul] Exactly.

[Leslie] And that creates a situation where employees are no longer trusted, because people believe AI. And so what you need to do at the corporate level is not so much constrain the use of it, but train people on their responsible behavior when using it.

[Leslie] As a consumer, I get fatigued. And so AI doesn’t understand that because it never gets fatigued.

[Paul] No. It just keeps producing the same advice and assuming that there might be a different response.

[Leslie] Right. And it doesn’t understand that human reaction. It doesn’t understand what humans might do under that circumstance.

[Paul] It seems, let’s say, that a good example might be that it would be reasonable, or at least not incorrect, reasonable is perhaps the wrong word, for an AI to suggest that if you work in a call center and you wish to improve your call-rate KPIs, you could do so by just randomly hanging up calls every now and then without telling people.

[Paul] And it would work, but it might not be very good for the company or not what the spirit or the ethics of your company wants to achieve.

[Leslie] A hundred percent. I love that example because I think it’s actionable. That feedback came back and it’s actionable. It seems practical.

[Paul] And it’s cheap and easy, isn’t it?

[Leslie] Right.

[Paul] Oh, dear. Did I hang up? What a mistake.

[Leslie] It’s programmable. It’s achievable. We could do that. But somebody has to recognize a consequence of it.

[Paul] Absolutely.

[Leslie] Corporations need to train employees on their responsibility in the mix. Employees get used to seeing AI as an automation tool or efficiency tool. Because once AI is automating, or once AI is picking up some of my grunt work and I’m okay with it, why do I think I have to behave differently when I’m opening up an exploration in the premeditation of evils?

[Leslie] Why does my behavior have to change?

[Paul] So Joe, with all that in mind, if we just sort of put AI slightly to one side for a bit, well, let’s just assume that anyone who’s trying to keep up in cybersecurity is using AI anyway. What’s the most exciting and forward thinking and beneficial change that you’ve seen a cybersecurity team make lately?

[Joe] I’m very encouraged by some progress that Carnegie Mellon University has made. And this may sound a little bit different than what we were saying earlier. There is this notion that the output of a static analysis tool can become an input to a generative AI tool that would automate the vulnerability and bug fixing.

[Joe] And for me, that’s very, very positive. And at least where I’m at today, I’m not an advocate of fully automating everything, because I do think you still need to review things. But to the extent that you can eliminate some of the minutiae that drives people’s daily work, you free up some resources to maybe even think more creatively or come up with additional ways to get things done.

[Joe] I do think that can be motivating for people, and I do think that if it’s done in the right context, if it’s done with a cultural mindset to embrace the change, if it’s done with the right method to ensure that people have a chance to innovate and know they’re not gonna lose their jobs, and that we’re looking for ways to improve the company’s performance overall, then I think it can be very productive.

[Joe] And so I go back to that example. I find Carnegie Mellon’s research on fixing vulnerabilities and bugs compelling, using it that way instead of feeding the entire system into it and saying, hey, find all my vulnerabilities and fix them. If you can use something that identifies the ones that are known and can automate those fixes, then we can spend more time analyzing the software as a whole, testing the software differently, and not spending as much time on patching itself, but reapplying those resources to think creatively about how attackers might compromise the software in a way that’s not known to us today.
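A rough sketch of the pipeline Joe describes, under stated assumptions: the `Finding` class mirrors the typical shape of a static analyzer report, and `llm` stands in for whatever model call a team actually uses; both names are hypothetical, and this is not the Carnegie Mellon tooling itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    """One finding from a static analysis run (shape is illustrative)."""
    file: str
    line: int
    rule: str     # e.g. the analyzer's rule or CWE identifier
    message: str  # the analyzer's description of the defect
    snippet: str  # the offending code, with a little context

def build_fix_prompt(finding: Finding) -> str:
    """Turn one known, well-scoped finding into a bounded prompt,
    rather than asking the model to 'find all my vulnerabilities'."""
    return (
        f"A static analyzer reported {finding.rule} at "
        f"{finding.file}:{finding.line}: {finding.message}\n\n"
        f"{finding.snippet}\n\n"
        "Propose a minimal patch that fixes only this issue "
        "and does not change behavior elsewhere."
    )

def propose_fix(finding: Finding, llm: Callable[[str], str]) -> str:
    """llm is a hypothetical callable (prompt -> text). The returned patch
    is a suggestion only: a human still reviews and tests it before merging."""
    return llm(build_fix_prompt(finding))
```

The structure captures the point made in the conversation: the analyzer bounds the question, the model drafts the fix, and human review and testing stay in the loop.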

[Paul] Leslie, what keeps you awake at night when you think about the risks that are coming in cybersecurity? What’s something that worries you the most, and how do you think we can get over it?

[Leslie] So one of the great things and also one of the risky things, I think, that is emerging in generative AI is vibe coding, where anybody thinks they can create code, they can engineer code, and if

[Paul] Now I am laughing. Move fast and break things again.

[Leslie] I think code that comes from an employee’s AI session could get deployed into a company system to do some work on behalf of that employee, or to provide a capability to a team internally, and that could be an easy risk for a cybersecurity team to miss.

[Leslie] There aren’t necessarily the systems and processes to ingest that effectively, and so it worries me a little bit that you either encourage people to do it but don’t have a system to secure it, or you just tell people not to do it, and then they’re undervaluing a tool they have that could advance the company, the job, the customer experience, an outcome somewhere.

[Paul] Or they use the tool anyway and just don’t tell anybody.

[Leslie] Right. And so how do you enable it, but in a healthy way, in a secure way, in a way that allows people to be creative and advance the overall agenda of the business, in a way that’s also not risky?

[Paul] So, Joe, that sounds like Secure by Design is important. You’re actually thinking about security at the beginning, the middle, and the end of your product development life cycle, and during the however many years you intend to support it, which is likely, if you do business in Europe, to get longer than you might have expected, thanks to the Cyber Resilience Act.

[Paul] You have to think about the sins of the past because you may have to live with them, not merely your customers.

[Joe] Secure by Design does give us a good framework to encourage thinking through how to incorporate good practices into the software development process. Just as Leslie described, we don’t know all the consequences of some of these automation steps that we take, or what will happen.

[Joe] And so adversaries are well funded. They’re highly motivated. And then guess what?

[Joe] As a result of that, it’s not simply brute force they use, but they do apply their own form of creative thinking. And they will use these tools, and we need to be adaptive, and we need to think through our own defenses.

[Joe] I certainly think adversaries will try to use Gen AI to accelerate the ways they can get things done as well.

[Paul] Leslie, perhaps we can finish up with me asking you what may be a slightly tricky question. It’s a little bit of a double negative.

[Paul] What do you think is the biggest risk for companies that try not to use generative AI to improve their security strategy? Is it something that you should be adopting, or could you just do without it altogether if you really wanted to?

[Leslie] I actually think a lot of what Joe said captures it. The risk is that you’re not willing to look beyond what your expertise is, that you’re not willing to consider that the hackers are looking in places like this.

[Leslie] And so if you’re not at least willing to learn what they might learn by going there, it’s just the same as, like, when I wrote the book. I had the exercises, but I knew someone was gonna put them in GenAI, so I had to see what the answer was that way. It’s the same thing here.

[Leslie] If your hacker is gonna do it, you should know what answer they’re gonna get. You should know what they might be pointed to as an opportunity. You should see what is exposed in an answer to an audience that isn’t internal.

[Leslie] That’s number one. I think that’s a huge risk. Right? If you don’t use it, you’re not thinking like your hackers are. You’re not using the tools they are. Number two, if you don’t use it, you’re just fooling yourself that your employees aren’t using it surreptitiously.

[Paul] Yeah. That in itself is another form of shadow IT, isn’t it?

[Leslie] It is. Absolutely.

[Leslie] I think about how, over the last five or ten years at companies, when more and more collaboration tools were coming to light, IT departments weren’t keeping up with them.

[Leslie] Well, it’s gonna happen with GenAI. I’m gonna use it whether they tell me I can or not because I can still use it on my phone. I can still use it on other devices I own. And if I’m doing that, then I’m also introducing risk.

[Leslie] So why not embrace it and give me the protocol for how to use it effectively without putting the company or the business at risk? Those two things are really the most important things for a business to realize: employees are gonna do it anyway, and your adversaries will too. And if you don’t at least know what they’re doing and stay aware of what they could learn from it, then you’re definitely falling behind.

[Paul] So very simply put, don’t be afraid of what you don’t know, but don’t stick your head in the sand either and say, I don’t want to know what I don’t know. Because sometimes it could give you a fantastic hint on how to build a safer, more secure product and service in the future.

[Leslie] Absolutely.

[Paul] Well, Leslie and Joe, thank you so much for that. I could’ve just carried on for ages because we’ve only really just scratched the surface. So thank you very much for your time.

[Paul] That is a wrap for this episode of Exploited: The Cyber Truth. Thanks to everybody who tuned in and listened. If you enjoyed this podcast, please don’t forget to subscribe so you can keep up with each week’s episode. Please like us, share us, link to us on social media, and be sure to tell your whole team about us. And remember, stay ahead of the threat. See you next time.
