In today’s fast-paced defense environment, speed and intelligence win battles before they begin. In this episode of Exploited: The Cyber Truth, Joseph M. Saunders of RunSafe Security and Arthur Reyenger of Ask Sage explore how generative AI is revolutionizing military operations—from accelerating acquisition and mission planning to enabling predictive analytics and secure collaboration.
They share powerful insights on:
- The evolution of AI in defense and why “do more with less” is mission-critical
- Real examples of AI accelerating approval processes by 95%
- How digital twins and synthetic data enhance readiness without risk
- Why COTS AI outperforms custom-built systems in agility and cost
- The importance of responsible, human-in-the-loop AI for national security
Tune in to hear how generative AI is reshaping decision-making, reducing cognitive load, and empowering the next generation of warfighters.
Speakers:
Paul Ducklin: Paul Ducklin is a computer scientist who has been in cybersecurity since the early days of computer viruses, always at the pointy end, variously working as a specialist programmer, malware reverse-engineer, threat researcher, public speaker, and community educator.
His special skill is explaining even the most complex technical matters in plain English, blasting through the smoke-and-mirror hype that often surrounds cybersecurity topics, and helping all of us to raise the bar collectively against cyberattackers.
Joseph M. Saunders: Joe Saunders is the founder and CEO of RunSafe Security, a pioneer in cyberhardening technology for embedded systems and industrial control systems, currently leading a team of former U.S. government cybersecurity specialists with deep knowledge of how attackers operate. With 25 years of experience in national security and cybersecurity, Joe aims to transform the field by challenging outdated assumptions and disrupting hacker economics. He has built and scaled technology for both private and public sector security needs. Joe has advised and supported multiple security companies, including Kaprica Security, Sovereign Intelligence, Distil Networks, and Analyze Corp. He founded Children’s Voice International, a non-profit aiding displaced, abandoned, and trafficked children.
Special Guest: Arthur Reyenger, Generative AI Strategy Executive for Channels and Commercial, Ask Sage, Inc.
Arthur is a Generative AI Strategy Executive with Ask Sage, where he leads engagement development with customers across a wide range of use cases. Prior to joining Ask Sage, Arthur was a founding member of CloudInsyte, where he built a successful cloud consulting practice specializing in big data solutions for the gaming and hospitality verticals. Before CloudInsyte, Arthur led the cloud practice for International Integrated Solutions (IIS), where he developed sales training and enablement programs and designed, implemented, and brought to market hybrid cloud managed services. Going further back, Arthur was a Hybrid Cloud Architect in NetApp’s Emerging Products Group and a Strategic Consultant to the Enterprise at Verizon covering the Northeast.
Episode Transcript
Exploited: The Cyber Truth, a podcast by RunSafe Security.
[Paul] (00:07)
Welcome back everybody to this episode of Exploited: The Cyber Truth. I am Paul Ducklin, joined as usual by Joe Saunders, CEO and founder of RunSafe Security. Hello Joe.
[Joe] (00:20)
Greetings, Paul. Look forward to the discussion.
[Paul] (00:23)
Now we’re joined in this episode, Joe, by someone who is not only a good friend of yours, but who is also in effect a colleague, because you just happen to be the chairman of the board at the company where he works. So please welcome our guest Arthur Reyenger, who is Generative AI Strategy Executive at Ask Sage. Welcome Arthur.
[Art] (00:45)
Paul, really nice to work with you.
[Paul] (00:48)
Right, we have a fascinating-sounding title, “How Generative AI Is Addressing Warfighter Challenges.” Now Arthur, when I first saw the word warfighter I thought, that’ll be the marines who are parachuted behind enemy lines at the very start of a very special operation. But we’re talking about a much broader picture. Do you want to say something about the kind of challenges that we are trying to address?
[Art] (01:14)
Sure, it certainly does include those individuals at the tip of the spear who are actually enacting those missions at the front lines, but it’s also all the supporting teams behind them that are ultimately helping them to plan, prepare, and execute that mission and get the right technology in their hands. So it’s a much broader user community that we’re talking about here.
[Paul] (01:33)
So this is not just supporting some individuals in some specific missions, it’s also about enabling a larger community with intelligence, with information, with operational improvements across the board.
[Art] (01:49)
Absolutely, it’s all the same things. Do more with less, be able to differentiate, and have a tactical advantage when you’re going to execute that mission.
[Paul] (01:57)
Do more with less is a bit of a mantra for everybody these days, isn’t it? It is. What makes the challenge different now than it was even just five years ago, let alone ten, fifteen, twenty years ago?
[Art] (02:12)
I think it used to be a much more straightforward approach. You had your domains well covered. It was land, sea, or air. Today, there are numerous fronts that warfighters need to be aware of and need to incorporate. We have disinformation campaigns. We have an electromagnetic spectrum that has to be considered. The speed of decision-making needs to be increased. And they’re falling under the same issues that others have with analysis paralysis. We almost have too much data coming from too many sources.
And that’s why you need to have tools that can ultimately help you sort through that and make the correct decision.
[Paul] (02:44)
That’s a similar sort of challenge that traditional cybersecurity folks face with, say, malware. Forty years ago, when a new virus came out, everyone got excited and they spent a month analyzing it. Today, we’re talking about hundreds of thousands of new malware samples per day. You can’t use the old methods to solve new problems. The volume is simply too high.
[Art] (03:08)
Now you have to interrogate those things and determine where you want to spend your time, where it’s going to be the most applicable, where you’re the most vulnerable. It’s very similar to the same challenges that cybersecurity faces.
[Paul] (03:19)
And I presume you also have a significant problem with misinformation that’s deliberately disseminated by your adversaries in order to try and enlarge your analysis paralysis as you might say.
[Art] (03:33)
Absolutely. That is a major concern, and I would actually say that cybersecurity and disinformation are probably where things are starting more frequently.
[Paul] (03:41)
The idea of generative AI sounds very futuristic, but it’s not quite as new as people think, is it? And it’s something that has been of great value in the defense community for many years already.
[Art] (03:56)
AI is in no way, shape, or form a new concept. It’s something that’s been leveraged within the DoD for a very long time. It goes back to Turing’s 1950 paper asking, “Can machines think?” And since then, we’ve seen adoption through the use of both computer vision and autonomous systems within the DoD and the federal government. As a matter of fact, DoD Directive 3000.09 was recently amended, last revised in 2023, to allow for lethal autonomous weapon systems to ultimately execute without human oversight.
So again, these are not new concepts or new systems or a new way to leverage technology. I think the difference here is the technology has evolved to the point where it’s now in the hands of every warfighter. Everyone can use it rather than it being very specialized for autonomous systems and computer vision applications.
[Paul] (04:51)
And those warfighters increasingly are not behind enemy lines or even on enemy lines. They may be in some secure bunker or office far, far away, setting the parameters for things like drones to do their work and not necessarily needing to pilot or control those drones for their entire mission. So can you give us an example of how generative AI is already being used to support mission outcomes?
[Art] (05:20)
At Ask Sage, we were fortunate enough to work with a large combatant command that was really working to get new technology from the commercial segment tested, authorized, and then into the hands of the warfighter. Being able to use generative AI to speed up that authority-to-operate process and to test and vet that technology is really helping accelerate getting the specific things that those warfighters need in order to make a difference and have an advantage. I believe they said that we saved them 95% of the time and the cost of going through those processes.
[Paul] (05:52)
That’s not just using generative AI in a mission, it’s using generative AI to improve the speed at which you can produce deliverables of any sort. Software, firmware, hardware, whatever it might be.
[Art] (06:04)
Absolutely.
Then there are aspects to red teaming and planning where this is invaluable: being able to run more scenarios, to look at those different vectors, and to provide more contingency planning. Then there’s something that we do here in the U.S. that a lot of our adversaries don’t, and that’s really test weapon systems. Being able to have generative AI help you generate synthetic data to test and vet the sensors on a particular system before it ever winds up in a plane, before it ever winds up in the field, puts us at a strategic advantage. And I know those things are being done today.
[Paul] (06:39)
Is that the idea of a digital twin? Yep. You can actually use synthetic tools to create test results that are considered equivalent in practice, not merely in theory, to crashing the real thing.
[Art] (06:54)
You nailed it. Instead of crashing 20 planes to determine what that maximum load is, now we can do that synthetically, ultimately validate that system, compare it against another, before it ever gets rolled out into production. Pretty incredible.
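For readers who want to picture how a synthetic “crash test” might replace a destructive one, here is a minimal sketch in Python. The wing-load model, structural limit, trial count, and failure tolerance below are entirely made up for illustration; a real digital twin would be a validated physics-based model, not a noisy threshold.

```python
import random

def simulated_wing_load_failure(load_factor: float) -> bool:
    """Hypothetical digital-twin stand-in: returns True if the airframe
    'fails' at the given load factor. A real twin would be a physics-based
    model; this toy just draws a noisy structural limit for illustration."""
    failure_point = random.gauss(9.0, 0.4)   # made-up structural limit, in g
    return load_factor >= failure_point

def estimate_max_safe_load(trials: int = 10_000) -> float:
    """Run synthetic 'crash tests' instead of destroying real airframes."""
    step, load = 0.1, 1.0
    while load < 20.0:
        failures = sum(simulated_wing_load_failure(load) for _ in range(trials))
        if failures / trials > 0.01:          # tolerate <1% simulated failures
            return round(load - step, 1)      # last load that stayed under the limit
        load += step
    return load

if __name__ == "__main__":
    print(f"Estimated maximum safe load factor: {estimate_max_safe_load():.1f} g")
```

The point is not the numbers, which are invented, but the workflow: validate a system against thousands of synthetic trials, compare it against alternatives, and only then field it.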
[Paul] (07:06)
Joe, you want to say at this point something about how generative AI can help the process of software engineering for weapon systems?
[Joe] (07:18)
The starting point is that the productivity gain accelerates execution of any of these programs, whether that’s software development or cybersecurity compliance or other initiatives. Generative AI can be super helpful. And I know folks at Ask Sage see 35x productivity gains in some cases. I’m sure Art has even higher productivity gains because he’s leveraging Ask Sage in his day-to-day work. Development teams can benefit if you can imagine a workflow where not only can you help shape and refine customer feedback, but you can define requirements. You can build some initial code.
You can really enhance development of basic components. So developers can focus in on the real creative aspects of what they might want to build. I think today in embedded systems, generative AI and the developer need to work together, creating artifacts, testing, developing some initial code, identifying some bugs, fixing some bugs, fixing some vulnerabilities. There’s a major impact on software development nonetheless, and we’re just in the early days. I mean, this generative AI stuff, despite AI being around forever, 50, 70, 80 years or what have you, the generative AI impact has been significant only in the last couple of years. And with that, I think we’re gonna make dramatic improvements in software development. And the bottom line is that all of this gets to driving innovation faster.
So an organization like the US Department of War can remain competitive or be the leader. With that in mind, there’s a massive impact that generative AI has today, but it’ll even increase going forward.
[Paul] (09:00)
Joe, you have in previous podcasts identified what you might call a feedback loop between today’s processing needs for AI, to do the kind of stuff that wasn’t possible five years ago, and legacy embedded systems: the biggest, bestest, fanciest data centers with the best-protected servers in the world are no good to you if somebody can jump in and shut off the air conditioning.
[Joe] (09:29)
Yeah, we’ve seen in America’s AI Action Plan how energy and data centers are essential for the US winning the AI competition, if you will, or the AI race. You need a flexible grid that’s adaptive, and you need resilient data centers so the large language models can process. We all get excited to look at the next large language model, but with applications like Ask Sage and infrastructure that’s resilient, the idea of putting productivity tools in people’s hands to make a big difference today has a huge effect.
And if the data center goes down or the large language models don’t process or the energy consumption is outpacing what can be delivered to a base or a user that just undermines the ability and the progress that people want to make. Really, it will start to affect their day-to-day lives because people are becoming more and more dependent on generative AI solutions. I’ll say it for Art, there’s hundreds of thousands of users in the Department of Defense or Department of War in the US, and that’s a testament to the need to accelerate innovation for the warfighter.
[Paul] (10:37)
Arthur, do you want to say something at this point about AI in general, or generative AI specifically, in the defense world? Because I think if you talk to the average person, when they think about AI in the military, they imagine conventional old-school warfare conducted by autonomous robots. That’s not really what it’s all about, is it? As Joe said, there’s this slew of departments and people and organizations and private partners in the background who rely on generative AI just to make everything work.
[Art] (11:10)
I’ll give you a story first. We had a very interesting use case that came up with the Navy. The Navy has 3D printers on ships. They ultimately help them be able to machine parts that they need at will. But prior to this, they would have to go get all the documentation, the schematics. They would have to phone home through geosynchronous satellites and a network in order to get all that information. So now we’re able to provide lightweight, finely tuned models to deploy those on those ships, to be able to interface with those technicians and those warfighter supporting teams.
They can load those schematics and machine those parts without ever having to leave the local network on the ship itself. It’s not a very sexy use case, but it’s, again, something that’s ultimately driving a lot of efficiency and providing a lot of tactical advantage. If you’re not leveraging AI to help, then you’re going to be left behind. That’s really what we’re doing here: we’re enabling folks to be strategic and more creative in the things that they’re doing, rather than having to get mired down in the minutiae. You have that engineer, the human in the loop, who can ultimately figure out exactly what’s wrong, take that back to the 3D printer, get the parts you need, and you’re back on mission without having to stall out for standard maintenance on parts.
[Paul] (12:20)
Yes, it’s not just being able to make the part, it’s also knowing which part to make at exactly what time.
[Art] (12:28)
And is that part going to have a cascading effect on all the other parts that were ultimately put in place? Yes. Are there things that I need to anticipate that are further on down the chain? Is this a symptom of a larger problem? You get to leverage that incredible power of generative AI to be able to support those things while focused on, again, the very tactical need of I have to fix XYZ.
[Paul] (12:33)
Of course. It’s also, if you like, a way of reducing the cognitive load for everyone in the process, so that instead of fixing the part and then having to have a series of lengthy meetings to decide whether that means there are going to be problems elsewhere in the system, all of that can be taken care of proactively.
[Art] (13:11)
It’s providing those strategic personnel with an army of AI assistants for whatever discipline they need in order to get their work done.
[Paul] (13:19)
So this is very, very different from The Terminator and stuff like that. What’s happened is that the backroom boys and girls have become many times more effective. The actual humans involved have been freed up to do what you might call higher-order tasks.
[Art] (13:38)
Absolutely. If you just look at Ask Sage ourselves as our own use case, as Joe alluded to earlier, our founder saw a 35x and then actually a 50x velocity increase in the way that he was developing the platform. The way we like to say it is that Ask Sage wrote 90% of itself. Instead of being that developer, you become the orchestrator of dev AI assistants in order to complete your mission.
[Paul] (14:03)
Most code only has, what, 1 to 10% clever bits in it. Yep. The rest is just needed to support all of that.
[Art] (14:11)
If you’re trying to put a well architected framework in place and you have the blueprint for the application that you’re looking to deploy, to your point, I really only need to focus on the very innovative functions that I need. And then I can be very prescriptive with the AI. Let the AI assistant do what’s embarrassingly parallel and allow you to focus on what hasn’t been done yet.
[Paul] (14:31)
One thing that strikes me at this point is that defense contractors and the military are notoriously fussy about dotting the i’s and crossing the t’s with very, very good reason. So they have very well established processes and procedures that have evolved over decades. How do you take comparatively new technology like generative AI and add it into that system without forcing anybody to break the safeguards that they think are important?
[Art] (15:03)
That’s more of the art than the science, I would say. The way that we like to approach that problem is we’ll analyze the existing workflow that’s manual. We’ll start with that one step that would add a lot of value, and then we’ll bookend it from there, looking at the other places throughout that process where generative AI can be inserted. It’s my personal belief that technology should not be dictating the way that organizations define their workflows. It should be supporting them. If you’re doing it a certain way, it was because it was right at the time.
So let’s analyze what value that was providing you and make sure that that doesn’t get lost in leveraging this new technology to drive that out.
[Paul] (15:39)
So it’s almost finding a way to help the system evolve so everybody is satisfied with it, rather than insisting on a revolution that may lose some of the received wisdom along the way.
[Art] (15:51)
Yeah, not interested in throwing the baby out with the bath water. We want to have those humans in the loop. You want to have those subject matter experts and you want to have the transparency and what the generative AI is doing through the process so that you can essentially be George Jetson. You can hit stop at the Spacely Sprocket factory when you’re ready to come in and make a change.
[Paul] (15:54)
Excellent.
A good analogy might be, if you think back to the Second World War and the First World War and military calculations then, like producing artillery tables, the word “computer” described a job performed by people who were really good at doing arithmetic calculations accurately. And those people now don’t have to do that anymore. They can go and actually design the systems and assume that someone can compute the necessary sines and cosines automatically and precisely every time. It’s the same sort of process, isn’t it?
[Art] (16:42)
Transistors were invented, but the technology wasn’t there where you could really use them. So you still needed that human calculation aspect. And then as the technology evolved and got smaller, then you could build more autonomous systems that became the computers that we know today for those calculations. Similarly with generative AI, it was the evolution of the GPU that really took this thing that’s existed for a while and really made it accessible.
[Paul] (17:05)
So what makes the defense community want to do this via public-private partnerships, rather than building their own completely separate AI engine and keeping it secret from everybody else?
[Art] (17:20)
Well, those are two very conflicting schools of thought. So that’s the standard GOTS, or government-built, systems versus COTS systems that are commercially owned and operated and ultimately presented back.
[Paul] (17:32)
That’s COTS, commercial off the shelf. And I presume there that the fear that somebody in the military might have is, well, if they sold it to us, what if they sell it to somebody else?
[Art] (17:42)
That is a concern, but with any of those solutions coming from the defense industrial base and the supply web that exists, there’s a series of vetting that has to take place, between FOCI (foreign ownership, control, or influence) reviews and other things, to understand where the investment and the loyalty lie on the cap table of that particular company and how it’s going to ultimately be consumed. So there is that fear, but there are also safeguards in place to ensure that that organization is aligning with the mission.
[Paul] (18:11)
Now, I can imagine why a long-serving general or admiral might be concerned about outsourcing the AI aspects of what they’re doing, and it’s something that’s concerning, say, copyright holders in the civilian world. What happens if the AI absorbs some information, but in such a way that it accidentally, unintentionally, emerges at a later stage in some other project when it wasn’t supposed to?
[Art] (18:40)
It comes down to, again, how you’re going to architect the system and the checks and balances that you want to have in place. On the one hand, you could have very lightweight models, either completely developed by the government or a third party, or just open-source models that are fine-tuned and trained, that can be completely segmented off. But then you have to worry about drift, and you have to make sure that you’re maintaining that model that hasn’t been trained on a huge corpus of data. On the other side of that, you could leverage the much larger commercial models that companies are putting out, and then you have to trust that they are configuring those models with the right indemnities and the right controls in place. We’re agnostic.
We try to enable both of those strategies.
[Paul] (19:19)
So do you think if the Department of Defense tried to do this all on its own that would be a little bit of a fool’s errand because they would probably need an infrastructure as big as the one we already have for all the other uses of AI? Would that be an unachievable goal?
[Art] (19:36)
Iron sharpens iron. If you build it all yourself, you don’t have anyone competing in that space. And although I do agree that it provides additional security advantages, without that innovation and without competition you’re losing out on the innovative capabilities that ultimately leapfrog ahead. That’s why, from an industry standpoint, they look at things that were built for government use or built by the government as ultimately being an inferior tool much of the time.
So the approach instead is really to take the best and brightest within the commercial space and what they’re doing, put a well-architected framework around that technology so you have those checks and balances, and be able to leverage those innovative capabilities to provide that tactical advantage that we’re looking for.
[Joe] (20:14)
Folks like Ask Sage really embrace the benefits of the COTS approach. The government having to build something on its own is really a waste of taxpayer dollars when a platform like Ask Sage exists. Ask Sage also has a fundamental philosophy, as Art was referring to, of putting the choice of which large language model and which hosting in the hands of the users themselves, so they can always take advantage of best-in-class models for the tasks that they want to get done.
And if those are predetermined and built into some kind of solution that limits the user’s choice, it will ultimately result in inferior productivity and will reduce innovation overall. The approach that the department has taken is to accelerate the adoption of COTS software, and the fact that Ask Sage is agnostic to large language models and agnostic to cloud hosting helps avoid that kind of lock-in and that degradation of advantage over time by empowering users to make the best choices. And that’s a huge advantage, not only for the users, but for the US.
[Paul] (21:24)
I already mentioned one aspect that concerns people in the civilian world about the data it’s training itself on, and that is what happens if somebody feeds in private or personal data by mistake and it later emerges, so the data leakage side. What about, however, AI models that are relied upon for critical scenarios and that have been poisoned by what you might call adversarial data, deliberately injected by people with your worst interests at heart?
[Art] (21:56)
You really want to be able to build the necessary safeguards and do quality-control checks against the outputs. You want to have benchmarks in place so that you can ultimately train for that. It’s knowing where the model came from and what data it had been trained on before it was released. A lot of these things are stuck in time. So if the indemnities for that model have already been changed so that it no longer trains on data after it’s released, then there shouldn’t be a way for you to ultimately feed in additional malicious data or change the way that that model is going to leverage its parameters to put something malicious in a response.
[Joe] (22:29)
The Ask Sage platform really in some ways abstracts some of the security concerns that you bring up. There’s really advanced zero trust built around label-based access controls. And there’s also the context of Ask Sage being kind of a pioneer around fire and forget, so that the data and its context don’t reside on the model side but on the platform side, which ultimately prevents a lot of what you’re talking about.
[Paul] (22:57)
Right, actually if you really wanted to, you could regenerate the model, leaving out the data that you’ve now decided that you distrust, and everybody else’s use of that model or that service will have no effect on you at all.
[Joe] (23:11)
Yeah, so fire and forget means you submit your prompt in that query, and then that information is not retained on the model side, it’s retained on the platform side. And that platform then is in a highly secure environment to protect the users. You won’t leak information, and you certainly will be under less influence from any kind of hallucination or tainting of the models themselves.
[Paul] (23:35)
So I guess an analogy for that might be what we did with passwords in the late 1970s, early 1980s, when we decided that instead of letting the server store the actual password, we would store a cryptographic hash of the password so we can validate it, but the server never needs to remember or record what the actual password was. And if it doesn’t do that, then it’s impossible for it to leak it by mistake. Same sort of idea.
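To make Paul’s analogy concrete, here is a minimal Python sketch of that password idea using only the standard library: the server stores a random salt and a slow salted hash, never the plaintext, so the password cannot leak from storage. The passphrases and iteration count are illustrative choices, not a recommendation from the episode.

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """The server keeps only a random salt and a salted hash,
    never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(attempt: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash from the login attempt; the original plaintext
    is never needed, so it can never be leaked from storage."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```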
[Art] (24:02)
That is a great analogy. Yes. All of our communications with those models through that fire-and-forget API are ephemeral, so that none of that data is ultimately retained and then trained into that model or into a future model. The fire-and-forget API that we’re leveraging is stateless in the way that it’s communicating with those models.
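As a rough illustration of that stateless pattern, here is a hedged Python sketch. The endpoint, request fields, and response shape are hypothetical placeholders, not the actual Ask Sage API; the point is simply that the conversation history lives on the platform (client) side and is resent with each ephemeral call, so the model service has nothing to retain between requests.

```python
import requests  # third-party HTTP client; pip install requests

class StatelessChat:
    """Toy sketch of a 'fire and forget' pattern: conversation history
    lives on the platform side and is shipped with every call, so the
    model service never needs to store anything between requests."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint            # hypothetical model-gateway URL
        self.api_key = api_key
        self.history: list[dict] = []       # context stays on this side

    def ask(self, prompt: str) -> str:
        self.history.append({"role": "user", "content": prompt})
        resp = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"messages": self.history},   # full context sent each time
            timeout=30,
        )
        resp.raise_for_status()
        answer = resp.json()["answer"]          # assumed response field
        self.history.append({"role": "assistant", "content": answer})
        return answer
```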
[Paul] (24:18)
What does the idea of responsible AI mean in a military context, particularly when you’re talking about autonomous weapons? A vessel that might be running around at sea might itself decide to fire up some other aerial drone or something like that. How do you go about building in safeguards?
[Art] (24:38)
That goes back to the red teaming and having the human that ultimately decides that this is the plan, that this is the mission that we’re ultimately going to undertake. Right. It’s up to that individual to determine what those autonomous systems can actually do. And then once they’ve decided that that’s necessary to complete whatever the mission is, there’s no real need to necessarily have to follow the bullet once it’s shot from the gun. So I think it’s measuring twice and cutting once. You’re going through all the planning steps necessary.
You’re understanding what those contingencies and what the outcomes are if you were ultimately to take that action. And then after that, you’re ultimately letting the chessboard evolve based on that directive.
[Paul] (25:16)
I love that saying, measure twice, cut once. It’s easy to remember. And if only we did a little bit more of it when it came to making cybersecurity decisions.
[Art] (25:29)
Absolutely. Responsible AI, and I’m really saying this as Arthur Reyenger’s opinion and not necessarily as an Ask Sage representative, is a term that’s kind of been co-opted, like a lot of other terms that become very, very important or hold meaning within the culture. For me, it refers to the way that you provide for the ethical design, development, deployment, and then use of that system. And I think that the United States, and specifically the Department of War, does a really good job of testing the systems that they’re going to use, making sure that they’re not putting bad weapon systems or capabilities on the front lines, and then knowing what those outcomes are going to be. So as long as you’re doing that, you can feel more confident that you’re leveraging these tools responsibly.
[Paul] (26:15)
And importantly, you can use AI to help you test those very tools more thoroughly without actually having to, as you say, build 20 planes and crash them deliberately.
[Art] (26:28)
Absolutely. And without having to waste taxpayer money or loss of life in order to do that, we’re really putting the measure twice, cut once into practice.
[Paul] (26:37)
And I guess there’s a strong element of deterrence in all of this as well, which may make your adversaries think twice or thrice before doing something from their side.
[Art] (26:48)
It could be as simple as realizing, based on the analysis and the planning of a particular scenario, that all you really need to do is jam comms at a particular port, and you don’t actually have to deploy any human capital or resources in order to have a show of force or power or disrupt the adversary. I think this also gives you the ability to run more creative planning, more creative scenarios, things that save time, money, and, hopefully, human life. So, absolutely.
[Paul] (27:14)
Sort of a less-is-more approach.
Do you want to say something about the kind of collaboration between research institutions and national labs?
[Art] (27:24)
We’ve had a lot of success in the national laboratory space. The one I can think of off the top of my head is Argonne National Laboratory, or ANL. They had some homegrown tools that they had developed, like a lot of other folks, but they also had researchers working on best-in-breed and next-generation applications for generative AI. And then they had all of their supporting teams within that same national lab that needed to be able to support those researchers in their efforts. So being able to come in and provide a single well-architected platform that can take in all of the different generative AI technologies they have, make sure they have that siloed capability to do the work from a research standpoint, and also support them with some of the commercial models that those supporting teams would need, fostered collaboration. It allowed them to standardize on security. It really allowed them to gain a lot of the benefits of generative AI without having to build these things from scratch. We’d love to use that as the micro example that we could apply to the Department of War at scale.
[Paul] (28:22)
I’m conscious of time, so to finish up, maybe I’ll put a question to you, Joe, specifically, and that is, if you could challenge one entrenched assumption, let’s say in the Pentagon, about the adoption of AI, what would that be?
[Joe] (28:38)
There’s been so much investment in generative AI and in large language models, and a lot of the mindshare and attention of folks gets them to a spot where they think the real value is in the foundation models themselves. That leads to vendor lock-in. We saw that in the cloud services world. I believe a key assumption in the department, the assumption that the value is in the foundation models, has to be challenged. We want the users to have a choice, and we certainly don’t want to hurt the warfighter. We want to drive innovation to the warfighter. I think the key thing for everybody to look out for is not just what is the best model today for the task, but what is the best path to drive innovation for the future.
[Paul] (29:27)
I think that’s a fantastic way to finish up, Joe. It’s sort of a warning that it’s okay to choose one path, but actually in the future you may find that you want to have different parts of your organization on different paths. And if you’re locked in, you can’t do that, can you? So a little bit of liberty goes an awful long way. So that is a wrap for this episode of Exploited: The Cyber Truth.
Thanks to everybody who tuned in and listened, and a very special thanks to Joe and Arthur for their very passionate and reasoned responses to my questions. If you liked this podcast, please don’t forget to subscribe so you know when each new episode drops. Please like and share on social media as well. Please share us with all of your team so they too can benefit from Joe and Arthur’s wisdom, insight and passion.
And don’t forget, stay ahead of the threat, see you next time!


