Even with decades of hard-earned security wisdom and modern verification tools, the same kinds of bugs keep turning up in embedded software. Why do these mistakes persist in code written by seasoned engineers? How do you write software that’s both secure and shippable, especially when staring down a deadline?
To dig into these questions, we spoke with Rolland Dudemaine, Director of Field Engineering at TrustInSoft. Rolland has spent more than 25 years in the embedded software world, working on the design and development of safety-critical and security-sensitive systems. He’s a regular open-source and AUTOSAR contributor, and he’s seen the industry’s best practices evolve alongside the pitfalls that stubbornly refuse to disappear.
In this Q&A, Rolland offers a straight-from-the-trenches look at secure coding, from the easy-to-miss mistakes that cause the biggest headaches, to the right way to layer security tools, to what “memory safety” really means in practice. Whether you’re writing firmware every day or steering an organization’s embedded security strategy, you’ll find insights here you can put to work.
1. What are the most common coding mistakes that introduce vulnerabilities in embedded software and why do they persist despite decades of security guidance?
Rolland Dudemaine: In general, the remaining coding mistakes relate to the corner cases of the software. These often lead to runtime errors that can cause crashes, or worse, silent data corruption that may be exploitable.
Among those that remain, off-by-one errors (leading to buffer overflows or underflows) and arithmetic overflow/underflow are the most typical, because they are not necessarily easy to reach during functional testing. When using a programming language that requires manual memory management, use-after-free remains a very visible cause of trouble. MITRE’s CWE list does a good job of cataloging such issues.
One of the reasons why these issues remain is that it is not possible to functionally test for these corner cases: there is an almost infinite number of ways to corrupt data. Instead, using appropriate tools is the only way to detect these kinds of issues.
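To make that concrete, here is a minimal, hypothetical C sketch (the function and buffer names are invented for illustration, not taken from the interview or any real project) of the kind of off-by-one Rolland describes: the code behaves correctly on typical inputs and only overflows at the exact boundary, which is why functional testing tends to miss it.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LEN 8

/* Hypothetical helper: copies a device name into a fixed-size buffer.
 * The length check uses '<=' instead of '<', so an input of exactly
 * NAME_LEN characters writes the terminating '\0' one byte past the
 * end of the buffer. Shorter, "typical" inputs behave perfectly, so
 * functional tests are unlikely to ever hit the overflow. */
static void set_device_name(char dst[NAME_LEN], const char *src)
{
    size_t len = strlen(src);
    if (len <= NAME_LEN) {           /* BUG: should be len < NAME_LEN */
        memcpy(dst, src, len);
        dst[len] = '\0';             /* writes dst[NAME_LEN] when len == NAME_LEN */
    }
}

int main(void)
{
    char name[NAME_LEN];
    set_device_name(name, "sensor1");   /* 7 chars: every functional test passes   */
    set_device_name(name, "sensor12");  /* 8 chars: silent one-byte stack overflow */
    printf("%s\n", name);
    return 0;
}
```

On ordinary inputs the code appears to work, so a test suite will usually pass; an AddressSanitizer build (-fsanitize=address) or an exhaustive analyzer is the kind of tool that surfaces the out-of-bounds write.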
2. In your experience, what’s the most overlooked aspect of secure coding in embedded C/C++?
Rolland: When it comes to reusing code, projects always underestimate the cost of ownership of open-source libraries. It’s not that these libraries are inherently lower quality; rather, the way a project uses such a library may differ from the originally intended usage, and projects often end up hitting buggy corner cases. If you use third-party code, you become responsible for it.
For project-specific software, processes often focus on form over function. While coding rules are the best way to keep maintenance costs down, and well-tested, consistent code is likely to have fewer bugs, that doesn’t mean such bugs are eliminated! It’s only risk reduction, without any guarantee.
During testing and code audits, using appropriate tools to check for mistakes is important. This includes static analysis, coverage tools, and sanitizers. TrustInSoft Analyzer is one such tool that covers all of this in one go, but using separate tools is already a start.
3. Static analysis, formal verification, fuzzing, runtime protections—there are lots of tools out there. How should teams think about layering these techniques to get the best coverage?
Rolland: Security is all about layering. Much as serious network security advises applying multiple layers of encryption, code security means examining the code from different angles and applying several levels of protection.
“Security is all about layering. … Code security means examining the code from different angles and applying several levels of protection.”
That said, good security (and safety!) planning tries to avoid failures, and also plans for a swift reaction when an error does occur after deployment. Along those lines, static analysis, formal verification, and fuzzing are great examples of tools to use during development, while runtime protection ensures that any remaining failures are still caught and handled gracefully in the field. RunSafe’s runtime protection is a state-of-the-art example of such a scheme that will detect and report any failure observed in production.
4. When evaluating code quality and security, how do you balance “perfect” code with what’s practical in the context of tight deadlines and limited resources?
Rolland: Perfection doesn’t exist in this world. What remains is how close the project needs to approach perfection. From there, decisions can be made to focus on the real pain points and on the field failures with the direst consequences, and the effort spent avoiding them adapted accordingly.
“Perfection doesn’t exist in this world. What remains is how close the project needs to approach perfection.”
A similar set of decisions is make-or-buy: Do you reuse third-party code? Open-source code? Do you rely on free or on commercial tools?
When you start to get serious about the job, it quickly becomes clear that a thorough, reviewed, and if possible exhaustive approach is needed. Again, this can range from simply following coding guidelines and using systematic, formal verification tools to conducting an independent vulnerability analysis.
A heavy but interesting worldwide reference is the Common Criteria specification. Unless you have an extremely critical asset to protect (think nation-level top secret), the full list is too extreme to apply as is. However, it is a fantastic description of methods for developing and verifying software: selecting the right level for your needs and constraints will always push things in the right direction.
5. Can you share an example (anonymized or general) of how rigorous software verification caught a potentially serious bug before it made it to production?
Rolland: Based on feedback from our customers, the most common “potentially serious bugs” are accesses to uninitialized variables, along with off-by-one errors. Since they are caught before production, the true damage they could have caused is hard to predict, but it can range from a mere malfunction to a security gate that can be bypassed, with potentially devastating consequences.
Another example, which TrustInSoft recently presented at the CYSAT Paris event, is a series of bugs we found in NASA’s cFE (core Flight Executive). That open-source component has been used in many space systems in production (the James Webb Space Telescope, among others), yet we recently managed to find several runtime errors that could be damaging, including accesses to uninitialized variables.
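To illustrate the first class of defect mentioned above, here is a minimal, hypothetical C sketch (invented for illustration, not drawn from the cFE findings or any customer code) of an uninitialized read acting as a bypassable security gate:

```c
#include <stdio.h>

/* Hypothetical access check: 'granted' is only assigned on the success
 * path. When the credential is wrong, the function returns an
 * uninitialized value, so whatever happens to be on the stack decides
 * whether access is granted -- a silent, bypassable security gate
 * rather than a crash. */
static int check_access(int credential)
{
    int granted;                  /* BUG: never initialized */
    if (credential == 0x5AFE) {
        granted = 1;
    }
    /* missing: else granted = 0; */
    return granted;               /* undefined value on the failure path */
}

int main(void)
{
    printf("access: %d\n", check_access(0x0BAD));
    return 0;
}
```

Because the return value depends on whatever happens to be in memory, such a check can pass every functional test and still be bypassable in the field; compiler warnings, MemorySanitizer, or exhaustive analysis are the kinds of tools that would typically catch it.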
6. What role does memory safety play in modern embedded security, and how are you seeing teams address these risks differently today than five years ago?
Rolland: The adoption of systematic security audits, sanitizers, and other formal verification tools, such as TrustInSoft Analyzer, has helped raise the bar and limit the number and types of bugs that slip through.
That said, everyone working on C or C++ language code has started to look at Rust and other “memory-safe” languages. We’re actually adding Rust support to TrustInSoft Analyzer and it will ship in our next release.
However, early analyses on customer projects using Rust show that, in fact, runtime errors persist, at a lower but visible level. One of the reasons is that the definition of memory safety in Rust isn’t that there is no risk anymore; rather, if a failure is detected at runtime, the code may deterministically panic instead of doing something unpredictable. It’s much better, but it will not prevent a DoS (Denial-of-Service), for instance.
All in all, adopting a new language is an opportunity to transition to more modern practices: code is rewritten with greater experience, a better design, refined coding practices, more thorough testing, and a more precise specification. The new language itself isn’t the sole cause of improvement, but it’s an occasion to change for the better, and a chance to adopt efficient verification tools.
7. Software supply chain risks have become top-of-mind for embedded teams. How important is it to have visibility into all software components, and how do you view SBOMs (software bills of materials) within the embedded security picture?
Rolland: The European CRA (Cyber Resilience Act), the US Cyber Trust Mark, and the Chinese CSL (Cybersecurity Law) all mandate SBOMs for a reason: listing what you are using and shipping is minimum security hygiene. If you don’t even know what you’re shipping, how could you begin to evaluate the risks?
Once such a list is established and mapped to the system and software architecture, it becomes possible to determine which items are risky in terms of attack surface, and consequently where verification effort should be focused.
An SBOM does not make the system inherently secure: it allows projects to gauge their risks. So it definitely goes in the right direction, even if it looks like mere paperwork at first.
8. Looking ahead, what trends or technologies do you think will reshape how we write secure embedded code over the next 3–5 years?
Rolland: It’s all too easy to answer “AI” to this question, since AI is seen as the answer to everything these days. In this specific case, however, AI is a risk: humans using AI for coding are no longer the designers, logicians, and artists of the code, but merely its reviewers. There is consequently a higher risk of subtle security implications when adopting AI-generated code, so it becomes truly important to use formal verification tools, more thorough human reviews, or both.
On a more positive note, the move to memory-safe programming languages is opening the eyes of many developers and managers to the fact that good practices lead to much better code quality. We see more interest in formal verification tools, and TrustInSoft Analyzer is trusted more than ever to verify critical code, regardless of its origin.
“Good practices lead to much better code quality.”
Building Resilient Embedded Software with the Right Tools and Partners
Securing embedded systems is an ongoing process that demands rigor, the right tools, and a willingness to adapt as threats evolve. As Rolland Dudemaine’s insights show, achieving meaningful improvements in software security requires both technical discipline and strategic planning, from catching elusive corner-case bugs to layering defenses that protect systems long after deployment.
If your team is looking to strengthen its approach to embedded security, RunSafe Security offers solutions designed to neutralize the entire class of memory safety vulnerabilities and protect against runtime exploits without disrupting your development workflow.
Learn how our runtime code hardening eliminates memory corruption risks and see how we help organizations in critical industries ship safer, more resilient systems.
Explore RunSafe Protect.