RunSafe’s 2025 AI in Embedded Systems Report

Download the Report

This year’s report makes one thing clear: AI is here to stay, but security still has some catching up to do.

AI has moved from experimentation to everyday use in embedded systems. Teams are now relying on AI-generated code in products that run medical devices, industrial equipment, vehicles, and energy systems. But the security practices surrounding that code need to evolve to keep pace.

This report shares insights from more than 200 embedded systems professionals working across critical infrastructure sectors. It examines how AI is being used in embedded development today, the risks teams are seeing, and the security gaps that remain as AI-written code moves into production.

The 2025 AI in Embedded Systems Report highlights the challenges, trade-offs, and priorities shaping the next phase of embedded security, and shows where organizations need to focus to keep critical systems safe.

Key Findings at a Glance

  • 80.5% currently use AI tools in embedded development
  • 83.5% have already deployed AI-generated code to production systems
  • 53% cite security as their top concern with AI-generated code
  • 73% rate the cybersecurity risk of AI-generated code as moderate or higher
  • 33.5% experienced a cyber incident involving embedded software in the past year
  • 93.5% plan to increase AI usage over the next two years
  • 91% plan to increase investment in embedded software security

What’s Inside the Report

  • The state of AI adoption: How teams across medical, automotive, industrial, and energy sectors are integrating AI into development workflows
  • Security concerns and confidence gaps: Why professionals worry about AI-generated code even as they deploy it at scale
  • Runtime resilience as critical defense: How 60% of teams are using runtime protections to address vulnerabilities AI tools may introduce
  • Current security practices: What’s working, what’s missing, and where traditional tools fall short in the AI era
  • Investment priorities: What teams want next, from code analysis automation to AI-assisted threat modeling
  • A security playbook for AI-era embedded systems: Four principles for managing AI-generated code in critical infrastructure

Download the report to access all the findings and recommendations.

Check Out Our Latest Blog Posts

The Top 6 Risks of AI-Generated Code in Embedded Systems

AI is now woven into the everyday workflows of embedded engineers. It writes code, generates tests, reviews logs, and scans for vulnerabilities. But the same tools that speed up development are introducing new risks—many of which can compromise the stability of...
