How to Validate SBOM Accuracy for Embedded C/C++ Projects

Posted on March 18, 2026
Author: Kelli Schwalm

If you’ve ever run an SBOM tool on a C/C++ codebase and gotten results that felt wrong, you’re not imagining it. Teams evaluating tools like Black Duck, Syft, Trivy, and FOSSA on embedded projects routinely find that outputs are incomplete, inconsistent, or so noisy that they’re not actionable. One common experience is running two different tools on the same build and getting back two completely different component lists. Another: getting back nothing at all.

The problem isn’t that your build is unusual. It’s that most SBOM tools weren’t built for embedded C/C++, and the results they produce can’t be validated against anything concrete. That’s the gap this post addresses—not why tools fail, but how to tell whether your SBOM is actually accurate, and what to do when it isn’t.

The Core Problem: Most SBOMs Can’t Prove What They Contain

An SBOM that can’t be verified is worse than no SBOM at all. It creates a false sense of coverage, fails when a CVE lands, and falls apart under customer or regulator scrutiny.

For embedded C/C++ specifically, the accuracy problem is structural. Unlike Python or JavaScript projects—where a lockfile provides a ground-truth dependency list—C/C++ builds are assembled at compile time from a mix of static libraries, vendored code, and conditionally-compiled components. There’s no manifest to check against. The only reliable source of truth is the build itself.

This is why “I ran a scan and got a list” is not the same as “I have an accurate SBOM.” The question is how that list was produced, and whether it reflects what actually shipped.

How to Tell If Your SBOM Is Accurate

1. Check how it was generated

The generation method is the single biggest predictor of SBOM quality for C/C++. There are three approaches:

Repo or source scanning reads files in the repository and infers dependencies. It’s fast, but it reports what could be built, not what was built. Conditional compilation, architecture flags, and #ifdef blocks mean two builds of the same repo can produce different software. A repo scan can’t tell the difference.

Binary analysis works backward from the compiled artifact. It’s useful when source code isn’t available, but it struggles with static linkage, where library boundaries disappear into the final binary, and frequently produces false positives from heuristic fingerprinting.

Build-time instrumentation observes the actual build as it happens, recording which files are compiled, which libraries are linked, and which flags are active. It generates the SBOM from the same process that produced the artifact. For embedded C/C++, this is the only approach that reliably handles static linkage, vendored code, and conditional compilation.

If you don’t know which method your tool is using, that’s the first thing to find out. If it’s not build-time, your SBOM may be incomplete in ways that aren’t visible in the output.
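One quick, if partial, check: many SBOM formats record the generating tool in the document itself. A minimal sketch in Python, assuming CycloneDX JSON, which stores this under metadata.tools (the vendor/tool names in the sample fragment are purely illustrative):

```python
def generator_info(sbom: dict) -> list[str]:
    """Extract the generating tool(s) recorded in a CycloneDX SBOM's metadata."""
    tools = sbom.get("metadata", {}).get("tools", [])
    # CycloneDX 1.5+ may nest tools under a "components" key instead of a flat list
    if isinstance(tools, dict):
        tools = tools.get("components", [])
    return [f'{t.get("vendor", "?")}/{t.get("name", "?")} {t.get("version", "?")}'
            for t in tools]

# Hypothetical SBOM fragment for illustration
# (in practice: sbom = json.load(open("sbom.cdx.json")))
sbom = {"metadata": {"tools": [{"vendor": "anchore", "name": "syft", "version": "1.0.0"}]}}
print(generator_info(sbom))  # ['anchore/syft 1.0.0']
```

Knowing the tool name doesn't tell you the method by itself, but it gives you the right question to take to the vendor's documentation.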

2. Test for what’s missing, not just what’s there

The hardest part of validating an SBOM is that incomplete outputs can look like complete ones. A list of 200 components doesn’t tell you whether the list should have 300.

The most reliable validation approach is a ground-truth test: take a controlled build with known dependencies—a project that includes several well-known static libraries, at least one vendored component, and conditional compilation—and verify that every component appears in the SBOM with precise version information. If anything is missing, the tool has a gap. If versions are approximate or marked UNKNOWN, vulnerability matching against that component is unreliable.
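The ground-truth check is mechanical enough to script. A minimal sketch in Python, assuming CycloneDX JSON; the component names and versions below are a hypothetical controlled build, not real output:

```python
def check_ground_truth(sbom: dict, expected: dict[str, str]) -> tuple[list[str], list[str]]:
    """Compare a CycloneDX component list against known dependencies.

    `expected` maps component name -> the exact version the controlled
    build is known to contain.
    """
    found = {c.get("name"): c.get("version") for c in sbom.get("components", [])}
    missing = [name for name in expected if name not in found]
    imprecise = [name for name, ver in expected.items()
                 if name in found and found[name] != ver]
    return missing, imprecise

# Hypothetical controlled build: we know exactly what went into it
expected = {"zlib": "1.3.1", "openssl": "3.0.13", "freertos": "10.6.2"}
sbom = {"components": [
    {"name": "zlib", "version": "1.3.1"},
    {"name": "openssl", "version": "UNKNOWN"},
]}
missing, imprecise = check_ground_truth(sbom, expected)
print(missing)    # ['freertos'] -> a silent gap
print(imprecise)  # ['openssl'] -> unreliable CVE matching
```

Anything in the `missing` list is a silent coverage gap; anything in `imprecise` will produce unreliable vulnerability matches even though it appears to be covered.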

Specific things to test for:

Static libraries (.a). This is where most tools fail silently. Statically-linked dependencies get compiled directly into the final binary with no separate package entry. Tools that don’t instrument the build frequently miss them entirely. If your SBOM doesn’t include static libraries, it’s missing a significant portion of your attack surface.

Vendored and in-tree third-party code. If your build includes open-source libraries copied into a /third_party/ directory—such as OpenSSL, zlib, or FreeRTOS—those libraries need to appear in the SBOM with version information. To a repo scanner, vendored code looks like your own source. To a build-time tool, it’s a dependency like any other.

Conditional compilation. Run the same tool on two builds of the same codebase with different architecture flags or #ifdef configurations. If the outputs are identical, the tool isn’t reflecting actual build differences—it’s scanning the repo.

All build outputs, not just the main binary. Firmware images, bootloaders, shared libraries, and auxiliary binaries all need to be represented. If your tool only generates an SBOM for the primary executable, significant parts of what you shipped aren’t covered.
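The conditional-compilation test above reduces to a set comparison. A sketch in Python, again assuming CycloneDX JSON; the two SBOM dicts stand in for outputs from the same tool on two differently-configured builds:

```python
def component_set(sbom: dict) -> set[tuple[str, str]]:
    """Reduce a CycloneDX SBOM to a comparable set of (name, version) pairs."""
    return {(c.get("name"), c.get("version")) for c in sbom.get("components", [])}

# Hypothetical outputs: same codebase, two build configurations
sbom_arm = {"components": [{"name": "zlib", "version": "1.3.1"},
                           {"name": "mbedtls", "version": "3.5.2"}]}
sbom_x86 = {"components": [{"name": "zlib", "version": "1.3.1"}]}

if component_set(sbom_arm) == component_set(sbom_x86):
    print("identical output: tool is likely repo-scanning, not observing the build")
else:
    # symmetric difference: components present in one build but not the other
    print("outputs differ:", component_set(sbom_arm) ^ component_set(sbom_x86))
```

If the two configurations genuinely compile different code and the symmetric difference is empty, the tool is not reflecting the build.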

3. Verify the SBOM is tied to a specific artifact

An SBOM that can’t be traced to the specific artifact that shipped is not useful for incident response or customer attestation. If a vulnerability is discovered six months after release, you need to be able to say with confidence whether that component was in that specific build.

This requires the SBOM to be cryptographically tied to the build output: a hash of the artifact, generated at build time, that can be verified against what was released. A repo-level SBOM or a “snapshot” SBOM associated with a branch is not sufficient for this purpose.
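The verification itself is straightforward once the hash exists. A sketch in Python, assuming the CycloneDX convention of recording the artifact hash under metadata.component.hashes; the firmware bytes here are a stand-in, not a real image:

```python
import hashlib

def artifact_hash_matches(sbom: dict, artifact_bytes: bytes) -> bool:
    """Check that a released artifact matches the SHA-256 recorded in the
    SBOM's metadata.component.hashes (CycloneDX convention)."""
    recorded = {h["alg"]: h["content"].lower()
                for h in sbom.get("metadata", {})
                             .get("component", {})
                             .get("hashes", [])}
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    return recorded.get("SHA-256") == actual

# Hypothetical firmware image and the SBOM generated at build time
firmware = b"\x7fELF...firmware image bytes..."
sbom = {"metadata": {"component": {"name": "fw.bin", "hashes": [
    {"alg": "SHA-256", "content": hashlib.sha256(firmware).hexdigest()}]}}}

print(artifact_hash_matches(sbom, firmware))         # True
print(artifact_hash_matches(sbom, firmware + b"x"))  # False
```

If this check fails for a shipped release, you cannot claim the SBOM describes that release, no matter how complete its component list looks.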

If you need to prove what components were in a specific firmware release from eight months ago, can your current SBOM tooling answer that question?

4. Run the same build through multiple tools and compare

If you’re mid-evaluation or suspect your current SBOM is incomplete but can’t prove it, one of the fastest diagnostic approaches is running multiple tools on the same build and comparing their outputs. Significant differences between outputs tell you that at least one tool is wrong. Digging into why they differ usually reveals a lot about each tool’s underlying approach.

This is also a useful exercise for internal validation: if your SBOM tool produces a component list and a second tool on the same build identifies components that weren’t in the first output, that’s a concrete gap, not a theoretical one.
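One practical wrinkle when diffing two tools' outputs is that they rarely name the same library the same way, so a naive comparison overstates the differences. A sketch in Python with crude name normalization; the two tool outputs are hypothetical:

```python
def normalized_names(sbom: dict) -> set[str]:
    """Normalize component names so the same library reported by two tools
    compares equal (tools disagree on case, hyphens, and underscores)."""
    return {c.get("name", "").lower().replace("-", "").replace("_", "")
            for c in sbom.get("components", [])}

# Hypothetical outputs from two different tools on the same build
tool_a = {"components": [{"name": "zlib"}, {"name": "OpenSSL"}]}
tool_b = {"components": [{"name": "openssl"}, {"name": "FreeRTOS"}]}

only_a = normalized_names(tool_a) - normalized_names(tool_b)
only_b = normalized_names(tool_b) - normalized_names(tool_a)
print("only tool A saw:", only_a)  # {'zlib'}
print("only tool B saw:", only_b)  # {'freertos'}
```

Everything left in `only_a` or `only_b` after normalization is a real disagreement worth investigating, and each one points at a gap or a false positive in one of the tools.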

What Accurate Looks Like

A reliable SBOM for embedded C/C++ checks all of these:

  • Generated from build-time instrumentation, not repo scanning or binary analysis alone
  • Includes static libraries, dynamic libraries, and vendored components
  • Covers all build outputs—firmware, bootloaders, shared libraries, auxiliary binaries
  • Every component has a precise version—no approximate ranges, no UNKNOWN where versions are detectable
  • Strong component identifiers present: purl, CPE, or cryptographic hash
  • SBOM is cryptographically tied to the specific build artifact
  • Output is deterministic: the same build inputs produce the same SBOM
  • Format is CycloneDX or SPDX, compatible with NVD/CVE matching workflows
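The determinism item on the checklist is also testable. Two runs over identical build inputs should produce equivalent SBOMs once you strip the fields that legitimately vary per run. A sketch in Python, assuming CycloneDX JSON, where serialNumber and metadata.timestamp are per-run values:

```python
import hashlib
import json

def sbom_fingerprint(sbom: dict) -> str:
    """Hash an SBOM after stripping per-run fields (serialNumber, timestamp),
    so two generation runs over the same build inputs can be compared."""
    clean = json.loads(json.dumps(sbom))  # cheap deep copy
    clean.pop("serialNumber", None)
    clean.get("metadata", {}).pop("timestamp", None)
    canonical = json.dumps(clean, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical: two runs of the same build, differing only in run metadata
run1 = {"serialNumber": "urn:uuid:aaa",
        "metadata": {"timestamp": "2026-03-01T00:00:00Z"},
        "components": [{"name": "zlib", "version": "1.3.1"}]}
run2 = {"serialNumber": "urn:uuid:bbb",
        "metadata": {"timestamp": "2026-03-02T00:00:00Z"},
        "components": [{"name": "zlib", "version": "1.3.1"}]}
print(sbom_fingerprint(run1) == sbom_fingerprint(run2))  # True
```

If the fingerprints of two identical-input runs differ after this normalization, the generator is injecting nondeterminism, which makes drift between releases impossible to audit.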

The Manual Generation Problem

One pattern worth naming directly: teams that have tried multiple tools and found them all inadequate sometimes fall back to maintaining SBOMs manually, tracking dependencies in spreadsheets, updating them when engineers remember to, and hoping the output is close enough to pass a customer review. This is both painful and fragile. Manual SBOMs can't keep pace with real builds, can't be tied to specific artifacts, and almost always have gaps that only surface at the worst possible time.

If your team is in this position, the solution is a generation approach that can actually handle your build environment automatically.

RunSafe’s C/C++ SBOM Generator

RunSafe’s SBOM generator is built around build-time instrumentation for embedded C/C++. It handles the environments where other tools fall short: complex build systems, static linkage, vendored dependencies, and multi-target builds. The output is tied to the artifact, not the repo, and designed for the vulnerability management and customer attestation workflows that product security teams actually need to support.

Want to put your SBOM to the test? If your assessment concludes that your current tooling isn't up to the job, the next step is finding a tool that is. Read our follow-up post: Questions to Ask When Evaluating SBOM Tools for Embedded C/C++.
