Black Duck on AI and Application Security: What Security Teams Need to Know

Issuing Body / Source: Black Duck Software (formerly Synopsys Software Integrity Group), via a recorded interview with Dark Reading. CEO Jason Schmitt provided the technical and strategic positions outlined below.


What Black Duck Is Saying — and Why It Matters to AppSec Teams

Black Duck CEO Jason Schmitt, speaking with Dark Reading's Terry Sweeney, laid out a direct argument: AI is fundamentally changing how applications are built, and application security testing (AST) tooling must change in parallel or become operationally irrelevant.

This is not a vendor pitch in isolation. It reflects a real shift that SOC analysts and CISOs are already managing. Developers are using GitHub Copilot, Amazon CodeWhisperer, and similar large language model (LLM)-assisted tools to generate code at volume and velocity that traditional static application security testing (SAST) and software composition analysis (SCA) pipelines were not designed to absorb.


The Core Technical Problem

AI-generated code introduces several categories of risk that require updated detection logic:

1. Novel vulnerability patterns from LLM output. LLMs trained on public repositories inherit the flaws present in that training data. When a developer accepts a Copilot suggestion that mirrors a known-vulnerable code pattern, even one not directly matching a catalogued CVE, traditional signature-based SAST tools may not flag it. Research from NYU's Tandon School of Engineering ("Asleep at the Keyboard," Pearce et al.) found that GitHub Copilot produced vulnerable code in approximately 40% of the security-relevant scenarios tested.

2. Accelerated dependency ingestion. AI-assisted developers pull in open-source packages faster than manual reviewers can audit them. SCA tools tied to static dependency manifests miss transitive dependencies introduced mid-sprint. CVEs like CVE-2021-44228 (Log4Shell) and CVE-2022-22965 (Spring4Shell) demonstrated that transitive dependency exposure is not hypothetical; it is a primary attack surface in modern Java applications.
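The transitive exposure described above is easy to demonstrate. The sketch below walks a toy dependency graph (package names are illustrative, chosen to echo the Log4Shell pattern) and separates direct from transitive dependencies; a manifest-only SCA scan sees only the former.

```python
from collections import deque

def transitive_deps(graph, root):
    """Breadth-first walk of a dependency graph, returning every
    package reachable from `root` that is NOT a direct dependency."""
    direct = set(graph.get(root, []))
    seen, queue = set(), deque(direct)
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(graph.get(pkg, []))
    return seen - direct

# Hypothetical graph: the app declares only spring-web, but log4j-core
# arrives two hops away -- the Log4Shell exposure pattern.
graph = {
    "my-app":              ["spring-web"],
    "spring-web":          ["spring-core", "some-logging-facade"],
    "some-logging-facade": ["log4j-core"],
}
print(sorted(transitive_deps(graph, "my-app")))
# The manifest view shows spring-web; log4j-core surfaces only in
# the transitive closure.
```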

3. AI-generated code that passes linting but fails security review. Syntactically correct code generated by an LLM can satisfy automated build gates while containing logic flaws, insecure deserialization patterns, or hardcoded credential stubs that reach staging environments.
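As an illustration of that failure mode, the fragment below is syntactically valid, passes a typical linter, and contains both an insecure-deserialization flaw and a hardcoded credential stub. The names are hypothetical, not drawn from any specific LLM output.

```python
import base64
import pickle

# Lint-clean and dangerous: pickle.loads on attacker-controlled input
# allows arbitrary code execution (CWE-502, insecure deserialization).
def load_session(cookie_value: str):
    return pickle.loads(base64.b64decode(cookie_value))

# Equally lint-clean: a hardcoded credential stub (CWE-798) of the
# kind an LLM may emit as a "placeholder" that reaches staging.
API_KEY = "sk-live-placeholder-do-not-ship"

# The function works as intended, which is exactly the problem --
# no build gate fails, yet the pattern is exploitable.
demo_cookie = base64.b64encode(pickle.dumps({"user": "alice"})).decode()
print(load_session(demo_cookie))
```

A safer design would serialize sessions as JSON (or use signed tokens), since JSON deserialization cannot execute code.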

Black Duck's position, as Schmitt describes it, is that AST tools must incorporate AI-native detection: models trained to recognize insecure patterns generated by other models, not just patterns catalogued in the National Vulnerability Database (NVD) or MITRE's CVE list.


Regulatory and Compliance Implications

No single regulation governs AI-generated code security specifically, but several existing frameworks create compliance obligations that this shift directly affects:

NIST SP 800-218 (Secure Software Development Framework, SSDF) requires organizations supplying software to federal agencies to document and implement secure development practices throughout the software development lifecycle (SDLC). The Office of Management and Budget (OMB) memorandum M-22-18 mandates that federal software vendors self-attest or obtain third-party assessment of SSDF compliance. If AI-generated code bypasses documented secure coding checkpoints, that attestation is at risk of being materially false.

Executive Order 14028 (Improving the Nation's Cybersecurity, May 2021) established software supply chain security requirements, including the use of Software Bills of Materials (SBOMs). AI tools that silently introduce unlisted dependencies corrupt SBOM accuracy, which creates downstream risk for any vendor selling to U.S. federal agencies.

PCI DSS v4.0, effective March 2024, requires organizations to review bespoke and custom software for vulnerabilities using manual or automated techniques — explicitly including in-house developed applications. AI-generated code that developers treat as first-party code falls squarely within scope.


Who Must Comply and What the Exposure Looks Like

Any organization that:

  • Ships software to U.S. federal agencies and self-attests SSDF compliance
  • Handles cardholder data and develops custom payment applications
  • Operates under SOC 2 Type II commitments with secure SDLC controls
  • Maintains ISO/IEC 27001 certification covering software development

...must treat AI-assisted development as a material change to their risk posture, not an internal tooling choice.

Failure to update AST pipelines creates audit exposure. An SSDF self-attestation signed by a CISO that does not account for LLM-generated code in the development pipeline contains a documentation gap that federal auditors and third-party assessors will eventually find.

Non-compliance with OMB M-22-18 can cost a vendor its eligibility for federal software procurement. PCI DSS v4.0 violations carry fines from acquiring banks ranging from $5,000 to $100,000 per month, with escalation to card brand termination for sustained non-compliance.


What Security Teams Should Do Now

Map AI tool usage across the SDLC immediately. Identify which developer teams are using Copilot, CodeWhisperer, Tabnine, or other LLM-assisted coding tools. This is a prerequisite for any risk assessment.
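One low-effort way to start that inventory is scanning repositories for editor configuration that recommends AI assistants. The sketch below checks a repo's `.vscode/extensions.json` for known assistant extension IDs; it assumes plain JSON (real files are often JSONC with comments) and covers only VS Code, so treat it as a starting point rather than a complete survey.

```python
import json
from pathlib import Path

# VS Code extension IDs for common LLM coding assistants; extend
# this map for your own tooling inventory.
AI_EXTENSIONS = {
    "github.copilot": "GitHub Copilot",
    "amazonwebservices.aws-toolkit-vscode": "AWS Toolkit (CodeWhisperer)",
    "tabnine.tabnine-vscode": "Tabnine",
}

def scan_repo(repo_root: str) -> list[str]:
    """Report AI assistants recommended in a repo's .vscode/extensions.json."""
    found = []
    cfg = Path(repo_root) / ".vscode" / "extensions.json"
    if cfg.exists():
        # Assumes plain JSON; JSONC files with comments need a
        # tolerant parser before this step.
        recs = json.loads(cfg.read_text()).get("recommendations", [])
        for ext in recs:
            name = AI_EXTENSIONS.get(ext.lower())
            if name:
                found.append(name)
    return found
```

Run this across every repository in the organization and join the results with commit author data to see which teams are actually using the tools.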

Audit existing SAST and SCA coverage against AI-generated code samples. Run your current toolchain against code known to be LLM-generated and measure detection rates for OWASP Top 10 categories — specifically injection, insecure deserialization, and broken access control.
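Measuring detection rates is simpler if every tool in the chain emits SARIF, which most modern SAST tools can. The sketch below tallies findings per rule ID from a SARIF 2.1.0 document; the rule IDs in the sample are hypothetical, and in practice you would compare the tally against a labeled corpus of LLM-generated samples with known flaws.

```python
import json
from collections import Counter

def tally_sarif(sarif_text: str) -> Counter:
    """Count findings per rule ID in a SARIF 2.1.0 results document,
    the interchange format most modern SAST tools can emit."""
    doc = json.loads(sarif_text)
    counts = Counter()
    for run in doc.get("runs", []):
        for result in run.get("results", []):
            counts[result.get("ruleId", "unknown")] += 1
    return counts

# Minimal hypothetical SARIF fragment for illustration:
sarif = json.dumps({
    "runs": [{"results": [
        {"ruleId": "python.security.deserialization.pickle"},
        {"ruleId": "python.security.hardcoded-credential"},
        {"ruleId": "python.security.deserialization.pickle"},
    ]}]
})
print(tally_sarif(sarif).most_common())
```

Detection rate per OWASP category is then findings tallied here divided by the known flaws seeded in the corpus.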

Update SBOM generation to capture runtime dependency discovery, not just manifest-based analysis. Tools like Syft, Grype, and Black Duck's own SCA engine support runtime and binary scanning that catches transitive dependencies missed by manifest-only approaches.
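A quick way to quantify the manifest gap is diffing component sets from two SBOMs of the same application: one manifest-derived, one from a binary or runtime scan. The sketch below assumes minimal CycloneDX JSON documents with illustrative component data; the `syft` invocations in the comments show one way such SBOMs might be produced.

```python
import json

def components(sbom_text: str) -> set:
    """Extract (name, version) pairs from a CycloneDX JSON SBOM."""
    doc = json.loads(sbom_text)
    return {(c["name"], c.get("version", "?")) for c in doc.get("components", [])}

# Illustrative inputs -- e.g. `syft dir:. -o cyclonedx-json` for the
# manifest view versus `syft <image> -o cyclonedx-json` for a binary scan:
manifest_sbom = json.dumps({"components": [
    {"name": "spring-web", "version": "5.3.18"},
]})
binary_sbom = json.dumps({"components": [
    {"name": "spring-web", "version": "5.3.18"},
    {"name": "log4j-core", "version": "2.14.1"},
]})

missing = components(binary_sbom) - components(manifest_sbom)
print(missing)  # components the manifest-only view never saw
```

A non-empty `missing` set is exactly the SBOM accuracy gap that undermines EO 14028 attestations.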

Revise secure coding policies to explicitly address LLM-assisted development. Policies that reference only human-written code no longer describe how software is actually built. Define code review gates that apply to AI-suggested code blocks before merge.
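One way to implement such a gate is a merge check keyed on a commit trailer that developers or tooling set when code is LLM-assisted. The `AI-Assisted:` trailer below is a hypothetical in-house convention, not an industry standard.

```python
def needs_security_review(commit_message: str) -> bool:
    """CI gate sketch: flag commits that declare LLM assistance via a
    hypothetical 'AI-Assisted:' trailer, so the pipeline can require
    a security reviewer's approval before merge."""
    for line in commit_message.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "ai-assisted" and value.strip().lower() in ("yes", "true"):
            return True
    return False

print(needs_security_review("Fix auth flow\n\nAI-Assisted: yes"))
```

The trailer approach depends on honest self-declaration, so pair it with the repository-level tool inventory rather than relying on it alone.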

Brief legal and compliance on SSDF attestation risk. If your organization has signed OMB M-22-18 attestations and has not assessed AI tool usage in the SDLC, that gap needs to be documented and remediated before the next attestation cycle.

Black Duck's product portfolio — including Coverity for SAST, Black Duck SCA, and Seeker for IAST — is positioned to address parts of this problem. Security teams should evaluate vendor claims against their own pipeline telemetry rather than accepting marketing positioning at face value. Run proof-of-concept scans on representative AI-generated code before making procurement decisions.