If you’re running a proof of concept on Software Bill of Materials (SBOM) tooling for C/C++, you’ve probably already discovered that vendor demos don’t tell you much. Tools that look capable in a sales presentation frequently fall apart when pointed at a real embedded C/C++ codebase, returning empty results, missing static dependencies, or producing component lists that can’t be tied to what actually shipped.
The problem is that most evaluation frameworks weren’t built for this environment. Generic SBOM evaluation criteria assume package managers, accessible source repositories, and clean dependency graphs. Embedded C/C++ lacks those things, and the questions you ask vendors need to reflect that.
This is a practical guide for product security leaders running SBOM tool evaluations. Use these questions in vendor conversations, POC scoping, or internal assessments of your current tooling.
Start Here: How Is the SBOM Generated?
The most important question you can ask is also the simplest: how is the SBOM generated? Most evaluations gloss over it, partly because vendors answer it vaguely and the vague answer goes unchallenged.
Ask: Does your tool generate SBOMs from source scanning, binary analysis, build-time instrumentation, or some combination?
The answer tells you almost everything about what the tool can and cannot do. Source scanning reads repository contents and infers dependencies. It’s fast to set up, but it has no knowledge of what was actually compiled. Binary analysis works backward from build artifacts, which is useful when source code isn’t available, but unreliable for statically-linked C/C++ at scale. Build-time instrumentation observes the build as it happens, capturing the actual dependency graph from ground truth.
For embedded C/C++, build-time instrumentation is the approach best suited to the full range of complexity: static linkage, conditional compilation, vendored code, and multi-target builds. If a vendor can’t clearly explain their generation method, or describes it as “proprietary AI-based analysis” without specifics, treat that as a red flag.
Follow-up: What are the limitations of your generation approach?
Every method has tradeoffs. A vendor who can clearly articulate what their tool won’t catch is more trustworthy than one who claims complete coverage. If they can’t answer this question, they either don’t know their tool’s limitations or aren’t willing to tell you.
C/C++-Specific SBOM Capability Questions
Does the tool support your build systems?
Ask: Which build systems does your tool natively support?
Most SBOM tools were built around package managers. For C/C++, the relevant question is build system support: Make, Autotools, CMake, Ninja, Bazel, Yocto/BitBake, vcpkg. If a vendor lists npm and pip alongside CMake as equivalent capabilities, dig deeper. Package manager parsing and build system instrumentation are fundamentally different problems.
Create a list of build systems your team actually uses and confirm support for each one explicitly. Don’t accept “we support most C/C++ environments” as an answer.
How does the tool handle static linking?
Ask: How does your tool identify components that are statically linked into the final binary?
This is where many tools fail. Static libraries get compiled directly into the build output. There’s no separate package entry, no runtime trace, no obvious marker. Tools that rely on binary scanning often miss statically-linked components entirely or misidentify them through heuristic fingerprinting.
Ask the vendor to walk you through specifically how static dependencies are captured. If the answer is “we scan the binary for known signatures,” press on false positive and false negative rates. If the answer is “we instrument the build process,” ask what that instrumentation looks like and whether it requires changes to your build pipeline.
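To make the false-negative risk of signature scanning concrete, here is a minimal sketch of how it works. The two-entry pattern table is illustrative (real tools carry far larger signature databases), but the failure mode is the same at any scale: anything built without recognizable version strings simply isn’t found.

```python
import re

# Illustrative signature patterns for libraries that embed version
# strings in their compiled output. Real tools use far larger (and
# still incomplete) signature databases.
SIGNATURES = {
    "openssl": rb"OpenSSL (\d+\.\d+\.\d+[a-z]?)",
    "zlib": rb"deflate (\d+\.\d+\.\d+) Copyright",
}

def scan_binary(blob: bytes) -> dict:
    """Return {library: version} for every signature found in the blob.

    Misses anything compiled without version strings (stripped builds,
    aggressive LTO), which is the core false-negative risk of
    signature-based scanning.
    """
    found = {}
    for name, pattern in SIGNATURES.items():
        match = re.search(pattern, blob)
        if match:
            found[name] = match.group(1).decode()
    return found
```

A stripped or optimized binary that contains no version strings returns an empty result, and nothing in the output distinguishes “nothing linked” from “nothing detected.”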
How does the tool handle vendored and in-tree third-party code?
Ask: If we copy an open-source library directly into our repository and build it as part of our project, will your tool detect it?
Vendored code—OpenSSL, zlib, FreeRTOS, and similar libraries copied into a /third_party/ directory—is common in embedded development and represents significant vulnerability exposure. A tool that only tracks package-managed dependencies will miss it entirely.
Ask for a demonstration using a project that includes vendored dependencies. This is a straightforward test that reveals a lot about how the tool actually works versus how it’s described.
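You can establish a baseline for that test yourself by checking what a competent tool should find at minimum: well-known marker headers sitting in the tree. A sketch, with a deliberately tiny marker table (the `ZLIB_VERSION` and `LUA_VERSION_MAJOR` macro names are real; the table itself is illustrative, not exhaustive):

```python
import os
import re

# Marker headers and the version macros they define, for libraries
# commonly vendored into embedded trees. Illustrative, not exhaustive.
MARKERS = {
    "zlib": ("zlib.h", re.compile(r'#define\s+ZLIB_VERSION\s+"([^"]+)"')),
    "lua": ("lua.h", re.compile(r'#define\s+LUA_VERSION_MAJOR\s+"([^"]+)"')),
}

def find_vendored(root: str) -> dict:
    """Walk a source tree and report vendored libraries by marker header."""
    found = {}
    for dirpath, _dirs, files in os.walk(root):
        for lib, (marker, version_re) in MARKERS.items():
            if marker in files:
                path = os.path.join(dirpath, marker)
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    m = version_re.search(fh.read())
                found[lib] = m.group(1) if m else "UNKNOWN"
    return found
```

If a fifteen-line walk over your tree finds vendored OpenSSL and the vendor’s tool doesn’t, the conversation should get more specific quickly.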
How does the tool handle conditional compilation?
Ask: If our build uses #ifdef blocks or architecture-specific flags, does your SBOM reflect what was actually compiled into a specific build or what could potentially be compiled?
This matters for teams targeting multiple hardware platforms or maintaining builds with significant conditional logic. A repo-level scan reports what’s available in the codebase. A build-time SBOM reports what went into a specific artifact. These can be meaningfully different, and the distinction matters for vulnerability management and customer attestation.
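A quick way to check which behavior you’re getting is to generate SBOMs for two build targets of the same codebase and diff them. A sketch, assuming parsed CycloneDX-style dictionaries with a `components` list:

```python
def component_set(sbom: dict) -> set:
    """Extract (name, version) pairs from a CycloneDX-style component list."""
    return {(c["name"], c["version"]) for c in sbom.get("components", [])}

def target_diff(sbom_a: dict, sbom_b: dict) -> dict:
    """Compare SBOMs for two build targets of the same codebase.

    Non-empty differences mean the targets really compiled different
    code, which a repo-level scan cannot express: it would report the
    union for both.
    """
    a, b = component_set(sbom_a), component_set(sbom_b)
    return {"only_in_a": a - b, "only_in_b": b - a, "shared": a & b}
```

If the tool emits identical component lists for an ARM build using mbedTLS and an x86 build using OpenSSL, it is scanning the repo, not the build.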
Completeness and Accuracy Questions
How do you validate that the SBOM is complete?
Ask: How would we know if your tool missed something?
This question is deliberately hard to answer, and that’s the point. A vendor with a strong answer will describe testing methodology, known limitation categories, or comparison approaches. A vendor without one will pivot to marketing language. The absence of a clear answer is itself informative.
For your own POC, test against a build where you know the dependency tree. Start with a controlled project that includes a few well-known static libraries, vendored components, and conditional builds. Verify that every component appears in the SBOM output with precise version information.
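That verification step is easy to automate. A sketch, assuming CycloneDX-style output parsed into a dict and a hand-maintained map of the dependencies you planted in the test build:

```python
def completeness_report(expected: dict, sbom: dict) -> dict:
    """Check a known dependency tree against SBOM output.

    `expected` maps component name to the exact version planted in the
    test build; `sbom` is a parsed CycloneDX-style document.
    """
    observed = {c["name"]: c.get("version") for c in sbom.get("components", [])}
    missing = [n for n in expected if n not in observed]
    wrong_version = [n for n, v in expected.items()
                     if n in observed and observed[n] != v]
    return {"missing": missing, "wrong_version": wrong_version,
            "complete": not missing and not wrong_version}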
How precise is the version identification?
Ask: How does your tool distinguish between OpenSSL 1.1.1k and 1.1.1w?
This is not a trivial question. The difference between those two versions is a set of critical CVEs. SBOM tools that return approximate version ranges, generic version labels, or frequent UNKNOWN entries are not useful for vulnerability management. They generate noise, not signal.
Ask what percentage of components in a typical C/C++ build return with precise, confirmed version information versus estimated or unknown values.
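You can compute that percentage directly from the tool’s output rather than taking the vendor’s word for it. A sketch over CycloneDX-style component lists; the version regex is a deliberately strict illustration of “exact release string”:

```python
import re

# Matches exact release strings like 1.1.1w or 2.28.0, but not ranges,
# "UNKNOWN", or empty fields.
EXACT_VERSION = re.compile(r"^\d+(\.\d+){1,3}[a-z]?$")

def precision_rate(sbom: dict) -> float:
    """Fraction of components whose version field is an exact release
    string rather than a range, 'UNKNOWN', or empty."""
    components = sbom.get("components", [])
    if not components:
        return 0.0
    exact = sum(1 for c in components
                if EXACT_VERSION.match(c.get("version") or ""))
    return exact / len(components)
```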
What component identifiers does the SBOM include?
Ask: Does the SBOM output include Package URLs (purls), CPEs, or cryptographic hashes for each component?
Without stable, globally recognizable identifiers, downstream vulnerability matching is unreliable. A component listed as “openssl, version 1.1.1” with no purl or CPE requires manual resolution before it can be matched against NVD or a CVE database. At scale, this becomes an operational burden that defeats the purpose of having an SBOM.
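Auditing for identifier gaps is mechanical. A sketch using the CycloneDX field names (`purl`, `cpe`, `hashes`); the components it flags are the ones your team will be resolving by hand:

```python
def identifier_gaps(sbom: dict) -> list:
    """List components lacking a purl, CPE, or hash: the entries that
    will need manual resolution before CVE matching. Uses CycloneDX
    component field names (purl, cpe, hashes)."""
    gaps = []
    for c in sbom.get("components", []):
        if not (c.get("purl") or c.get("cpe") or c.get("hashes")):
            gaps.append(c.get("name", "<unnamed>"))
    return gaps
```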
Vulnerability and Compliance Readiness Questions
Can the SBOM support your vulnerability workflow?
Ask: What formats does your SBOM output support, and how does it integrate with CVE matching workflows?
CycloneDX and SPDX are the formats most commonly required by regulators and enterprise customers. Confirm that the tool outputs one or both, and that the component identifiers in the output are compatible with NVD/CVE matching. An SBOM in a proprietary format, or one that requires manual transformation before vulnerability matching, adds friction that compounds over time.
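As a POC sanity check, confirm the output actually parses as the format it claims and that the CVE-matching keys are populated. A sketch for CycloneDX JSON (`bomFormat` is the field the spec uses to declare the format; the extraction here is a minimal illustration, not a full matching pipeline):

```python
import json

def matchable_components(cyclonedx_json: str) -> list:
    """Parse a CycloneDX JSON SBOM and return the identifiers a CVE
    matching pipeline would key on. Raises ValueError if the document
    does not declare itself as CycloneDX."""
    doc = json.loads(cyclonedx_json)
    if doc.get("bomFormat") != "CycloneDX":
        raise ValueError("not a CycloneDX document")
    return [{"purl": c.get("purl"), "cpe": c.get("cpe"),
             "version": c.get("version")}
            for c in doc.get("components", [])]
```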
Does the tool support VEX?
Ask: Can your SBOM output be linked to VEX (Vulnerability Exploitability eXchange) documents?
VEX is increasingly expected in regulated environments. It allows you to assert whether a known vulnerability is actually exploitable in your specific build, which is critical for reducing false positives and responding to customer or regulator inquiries without triaging every CVE that touches a component you use.
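The shape of such an assertion is small. A sketch using the field names from the CycloneDX vulnerabilities schema; the CVE, component ref, and analysis values here are illustrative, not a claim about any real build:

```python
def vex_statement(cve_id: str, component_ref: str,
                  state: str = "not_affected",
                  justification: str = "code_not_reachable") -> dict:
    """Build a minimal CycloneDX-style VEX entry asserting whether a
    CVE is exploitable in a specific build. Field names follow the
    CycloneDX vulnerabilities schema; the ref is whatever bom-ref
    your SBOM assigns to the component. Values are illustrative."""
    return {
        "id": cve_id,
        "affects": [{"ref": component_ref}],
        "analysis": {"state": state, "justification": justification},
    }
```

The value of asking this question is operational: a tool whose component refs are stable and unique makes VEX statements cheap to produce; a tool without them makes every exploitability assertion a manual mapping exercise.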
How does the tool handle license compliance?
Ask: Does the SBOM include license information for every component, and are licenses normalized to SPDX identifiers?
Free-text license fields—“MIT License,” “The MIT License (MIT),” “MIT”—all name the same license but won’t match in automated compliance tooling. SPDX-normalized identifiers (MIT, Apache-2.0, GPL-2.0-only) are required for reliable automated license review. Also, ask whether the tool identifies modifications to open-source components, which is a license compliance obligation under GPL and LGPL.
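Normalization itself is a lookup problem once the aliases are known. A sketch with a deliberately tiny alias table; a production mapping needs many more spellings, and SPDX’s `NOASSERTION` is the conventional placeholder for fields that could not be resolved:

```python
# Common free-text spellings mapped to SPDX identifiers.
# Illustrative: a production mapping needs many more aliases.
SPDX_ALIASES = {
    "mit": "MIT",
    "mit license": "MIT",
    "the mit license (mit)": "MIT",
    "apache 2.0": "Apache-2.0",
    "apache license, version 2.0": "Apache-2.0",
    "gpl v2": "GPL-2.0-only",
}

def normalize_license(raw: str) -> str:
    """Map a free-text license field to an SPDX identifier, or flag it
    for manual review when no alias matches."""
    key = raw.strip().lower().rstrip(".")
    return SPDX_ALIASES.get(key, f"NOASSERTION ({raw})")
```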
Operational Questions
Is SBOM generation automated and reproducible?
Ask: Is the SBOM generation process fully automated, and does it produce consistent output across repeated runs of the same build?
Manual SBOM generation, or tooling that requires significant human intervention to produce a complete result, doesn’t scale. Every manual step is an opportunity for inconsistency, and inconsistency creates compliance gaps. The SBOM should be generated automatically as part of your build or release process, and it should be deterministic: the same build inputs should produce the same SBOM.
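Determinism is directly testable: run the build twice, strip the metadata that legitimately varies per run, and compare digests. A sketch, assuming CycloneDX-style output where `serialNumber` and `metadata.timestamp` are the run-specific fields:

```python
import hashlib
import json

# CycloneDX fields that legitimately differ between runs of the same build.
VOLATILE_FIELDS = {"timestamp", "serialNumber"}

def stable_digest(sbom: dict) -> str:
    """Digest of an SBOM with run-specific metadata stripped, so two
    runs over identical build inputs can be compared byte-for-byte."""
    pruned = {k: v for k, v in sbom.items() if k not in VOLATILE_FIELDS}
    if isinstance(pruned.get("metadata"), dict):
        pruned["metadata"] = {k: v for k, v in pruned["metadata"].items()
                              if k not in VOLATILE_FIELDS}
    return hashlib.sha256(
        json.dumps(pruned, sort_keys=True).encode()).hexdigest()
```

Two runs over the same inputs should produce equal digests; if they don’t, ask the vendor which fields vary and why.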
Can the SBOM be tied to a specific build artifact?
Ask: How do you ensure the SBOM reflects a specific released artifact rather than a general snapshot of the codebase?
This is the question that separates SBOMs that are useful for incident response and customer attestation from those that are useful only for broad compliance checkbox exercises. If a vulnerability is discovered six months after a product ships, you need to know whether that component was in that specific build. That requires the SBOM to be cryptographically tied to the build artifact and not just associated with a codebase or a release branch.
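In a POC you can test the binding directly: hash the shipped artifact and look for that hash in the SBOM. A sketch assuming the hash is recorded under `metadata.component.hashes`, one place CycloneDX permits it; adjust the lookup path to your tool’s actual output:

```python
import hashlib

def verify_binding(artifact_bytes: bytes, sbom: dict) -> bool:
    """Check that the SBOM records the SHA-256 of the exact artifact it
    claims to describe. Assumes the hash is stored under
    metadata.component.hashes, as CycloneDX allows."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    recorded = sbom.get("metadata", {}).get("component", {}).get("hashes", [])
    return any(h.get("alg") == "SHA-256" and h.get("content") == digest
               for h in recorded)
```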
Running Your POC
A few practical recommendations for when you get to the hands-on evaluation phase:
- Use a representative build, not a clean one. Vendors will often suggest a simple test project. Push back. Use a build that reflects your actual environment, with multiple build systems if applicable, vendored dependencies, conditional compilation, the works. The gap between demo performance and real-world performance is where most tool evaluations go wrong.
- Test for what’s missing, not just what’s present. It’s easy to look at an SBOM output and see components. It’s harder to verify that nothing is missing. Instrument a test build with known dependencies and check that every one appears in the output with precise version information.
- Compare outputs across tools on the same build. If you’re evaluating multiple vendors—which you should be—run each tool against the same build and compare the component lists. Significant differences between outputs are a signal that at least one tool is wrong. Understanding why the outputs differ will tell you more about each tool’s architecture.
- Ask for customer references in embedded environments. General enterprise SBOM references are not relevant to your evaluation. Ask specifically for references from teams doing embedded C/C++ development at a comparable scale and complexity.
Need a C/C++ SBOM solution? RunSafe’s SBOM generator is built around build-time instrumentation for C/C++ and is specifically designed for environments where other tools fall short. We’re happy to walk through these questions against our own tool and let the answers speak for themselves.