Key Takeaway
Black Duck CEO Jason Schmitt argues that AI-assisted development tools like GitHub Copilot and Amazon CodeWhisperer are introducing vulnerability patterns and dependency risks that traditional SAST and SCA pipelines are not equipped to detect. Existing regulations including NIST SSDF, OMB M-22-18, and PCI DSS v4.0 create direct compliance exposure for organizations that have not updated their application security testing programs to account for LLM-generated code. Security teams must audit AI tool usage across the SDLC, update SBOM generation, and revise secure coding policies before their next compliance attestation cycle.
Black Duck on AI and Application Security: What Security Teams Need to Know
Issuing Body / Source: Black Duck Software (formerly Synopsys Software Integrity Group), via a recorded interview with Dark Reading. CEO Jason Schmitt provided the technical and strategic positions outlined below.
What Black Duck Is Saying — and Why It Matters to AppSec Teams
Black Duck CEO Jason Schmitt, speaking with Dark Reading's Terry Sweeney, laid out a direct argument: AI is fundamentally changing how applications are built, and application security testing (AST) tooling must change in parallel or become operationally irrelevant.
This is not a vendor pitch in isolation. It reflects a real shift that SOC analysts and CISOs are already managing. Developers are using GitHub Copilot, Amazon CodeWhisperer, and similar large language model (LLM)-assisted tools to generate code at a volume and velocity that traditional static application security testing (SAST) and software composition analysis (SCA) pipelines were not designed to absorb.
The Core Technical Problem
AI-generated code introduces several categories of risk that require updated detection logic:
1. Novel vulnerability patterns from LLM output. LLMs trained on public repositories inherit the flaws present in that training data. When a developer accepts a Copilot suggestion that mirrors a known-vulnerable code pattern, even one not directly matching a catalogued CVE, traditional signature-based SAST tools may not flag it. Academic research, notably the NYU "Asleep at the Keyboard" study (Pearce et al.), found that GitHub Copilot produced insecure code in roughly 40% of the security-sensitive programming scenarios tested.
2. Accelerated dependency ingestion. AI-assisted developers pull in open-source packages faster than manual reviewers can audit them. SCA tools tied to static dependency manifests miss transitive dependencies introduced mid-sprint. CVEs like CVE-2021-44228 (Log4Shell) and CVE-2022-22965 (Spring4Shell) demonstrated that transitive dependency exposure is not hypothetical — it is the primary attack surface in modern Java applications.
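The transitive-dependency gap can be illustrated with a minimal sketch. The package names and graph below are hypothetical; a real implementation would resolve the graph from a lockfile or package index rather than a hardcoded dict.

```python
from collections import deque

def transitive_dependencies(package, graph):
    """Walk a dependency graph breadth-first and return every
    transitive dependency of `package`, not just its direct ones."""
    seen = set()
    queue = deque(graph.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep in seen:
            continue
        seen.add(dep)
        queue.extend(graph.get(dep, []))
    return seen

# Hypothetical graph: the manifest declares only "web-framework",
# but a vulnerable logging library arrives two hops down.
graph = {
    "my-app": ["web-framework"],
    "web-framework": ["http-client", "template-engine"],
    "http-client": ["logging-core"],
    "logging-core": [],
    "template-engine": [],
}

print(transitive_dependencies("my-app", graph))
```

A manifest-only SCA scan sees "web-framework"; "logging-core" never appears in any file the scanner reads, which is exactly how Log4Shell-class exposure goes unnoticed.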
3. AI-generated code that passes linting but fails security review. Syntactically correct code generated by an LLM can satisfy automated build gates while containing logic flaws, insecure deserialization patterns, or hardcoded credential stubs that reach staging environments.
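A contrived Python sketch of the pattern: both the credential stub and the deserialization call below are syntactically valid, lint clean under default flake8/pylint settings, and would clear a lint-only build gate.

```python
import base64
import json
import pickle

# Hardcoded credential stub: exactly the kind of placeholder an LLM
# suggestion leaves behind and a lint-only gate waves through.
API_KEY = "sk-test-1234567890abcdef"

def load_session_unsafe(cookie_value: str):
    """Insecure deserialization (CWE-502): pickle.loads on
    attacker-controlled input permits arbitrary code execution,
    yet no general-purpose linter objects to this line."""
    return pickle.loads(base64.b64decode(cookie_value))

def load_session_safe(cookie_value: str):
    """The fix a security review should enforce: a data-only
    format with no code-execution path."""
    return json.loads(base64.b64decode(cookie_value))

# Round-trip through the safe variant:
token = base64.b64encode(json.dumps({"user": "alice"}).encode()).decode()
print(load_session_safe(token))  # {'user': 'alice'}
```

The gap is semantic, not syntactic: only a security-aware review gate or a SAST rule keyed to CWE-502 catches the unsafe variant.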
Black Duck's position, as Schmitt describes it, is that AST tools must incorporate AI-native detection: models trained to recognize insecure patterns generated by other models, not just patterns catalogued in the National Vulnerability Database (NVD) or MITRE's CVE list.
Regulatory and Compliance Implications
No single regulation governs AI-generated code security specifically, but several existing frameworks create compliance obligations that this shift directly affects:
NIST SP 800-218 (Secure Software Development Framework, SSDF) requires organizations supplying software to federal agencies to document and implement secure development practices throughout the software development lifecycle (SDLC). The Office of Management and Budget (OMB) memorandum M-22-18 mandates that federal software vendors self-attest or obtain third-party assessment of SSDF compliance. If AI-generated code bypasses documented secure coding checkpoints, that attestation is at risk of being materially false.
Executive Order 14028 (Improving the Nation's Cybersecurity, May 2021) established software supply chain security requirements, including the use of Software Bills of Materials (SBOMs). AI tools that silently introduce unlisted dependencies corrupt SBOM accuracy, which creates downstream risk for any vendor selling to U.S. federal agencies.
PCI DSS v4.0, effective March 2024, requires organizations to review bespoke and custom software for vulnerabilities using manual or automated techniques — explicitly including in-house developed applications. AI-generated code that developers treat as first-party code falls squarely within scope.
Who Must Comply and What the Exposure Looks Like
Any organization that:
- Ships software to U.S. federal agencies and self-attests SSDF compliance
- Handles cardholder data and develops custom payment applications
- Operates under SOC 2 Type II commitments with secure SDLC controls
- Maintains ISO/IEC 27001 certification covering software development
...must treat AI-assisted development as a material change to their risk posture, not an internal tooling choice.
Failure to update AST pipelines creates audit exposure. An SSDF self-attestation signed by a CISO that does not account for LLM-generated code in the development pipeline is a documentation gap that federal auditors and third-party assessors will eventually find.
Penalties under OMB M-22-18 non-compliance include loss of contract eligibility for federal software procurement. PCI DSS v4.0 violations carry fines from acquiring banks ranging from $5,000 to $100,000 per month, with escalation to card brand termination for sustained non-compliance.
What Security Teams Should Do Now
Map AI tool usage across the SDLC immediately. Identify which developer teams are using Copilot, CodeWhisperer, Tabnine, or other LLM-assisted coding tools. This is a prerequisite for any risk assessment.
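One starting heuristic, with the caveat that every marker below is an assumption to tune for your environment: scan commit history and checked-in editor configs for traces of AI tooling. Co-author trailers appear only in some workflows, so absence of a match is not evidence of absence.

```python
import re

# Heuristic markers (assumptions, not authoritative signatures):
# adjust patterns to the tools actually licensed in your org.
MARKERS = {
    "copilot_coauthor": re.compile(r"Co-authored-by:.*Copilot", re.I),
    "copilot_setting": re.compile(r"github\.copilot", re.I),
    "codewhisperer": re.compile(r"codewhisperer", re.I),
    "tabnine": re.compile(r"tabnine", re.I),
}

def scan_text(text: str) -> set:
    """Return the set of AI-tool markers found in a blob of text,
    e.g. `git log` output or a checked-in .vscode settings file."""
    return {name for name, pat in MARKERS.items() if pat.search(text)}

git_log = "feat: add parser\n\nCo-authored-by: GitHub Copilot <copilot@github.com>"
print(scan_text(git_log))  # {'copilot_coauthor'}
```

Pair the output with IDE license inventories and procurement records; no single signal is complete on its own.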
Audit existing SAST and SCA coverage against AI-generated code samples. Run your current toolchain against code known to be LLM-generated and measure detection rates for OWASP Top 10 categories — specifically injection, insecure deserialization, and broken access control.
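The measurement itself is simple once you have a labeled corpus. A minimal harness, sketched below, compares known flaws in LLM-generated samples against what the toolchain reported; the sample IDs, category names, and findings format are placeholders, and in practice you would parse your scanner's SARIF or JSON output into the `reported` mapping.

```python
from collections import defaultdict

def detection_rates(expected, reported):
    """Per-category detection rate for a labeled corpus.
    `expected` / `reported`: {sample_id: {owasp_category, ...}}."""
    found = defaultdict(int)
    total = defaultdict(int)
    for sample, categories in expected.items():
        for cat in categories:
            total[cat] += 1
            if cat in reported.get(sample, set()):
                found[cat] += 1
    return {cat: found[cat] / total[cat] for cat in total}

# Toy corpus: three LLM-generated samples with known flaws (expected)
# versus what a hypothetical SAST run actually flagged (reported).
expected = {
    "s1": {"injection"},
    "s2": {"insecure_deserialization"},
    "s3": {"injection", "broken_access_control"},
}
reported = {
    "s1": {"injection"},
    "s2": set(),
    "s3": {"injection"},
}
print(detection_rates(expected, reported))
```

Low rates in specific categories tell you where to add rules or compensating review gates rather than replacing the toolchain wholesale.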
Update SBOM generation to capture runtime dependency discovery, not just manifest-based analysis. Tools like Syft, Grype, and Black Duck's own SCA engine support runtime and binary scanning that catches transitive dependencies missed by manifest-only approaches.
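The manifest-versus-runtime gap reduces to a set difference. In the sketch below the package names are invented: `declared` stands in for a requirements manifest and `observed` for what a runtime or binary scanner (such as Syft run against a built image) actually finds.

```python
def sbom_gap(declared, observed):
    """Dependencies present at runtime but absent from the manifest;
    these are the entries a manifest-only SBOM silently omits."""
    return sorted(set(observed) - set(declared))

# Toy data (hypothetical package names).
declared = {"web-framework==2.1", "http-client==1.4"}
observed = {"web-framework==2.1", "http-client==1.4",
            "logging-core==0.9", "yaml-parser==5.2"}

print(sbom_gap(declared, observed))
# ['logging-core==0.9', 'yaml-parser==5.2']
```

Any non-empty gap means the SBOM you attach to an attestation understates the actual attack surface.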
Revise secure coding policies to explicitly address LLM-assisted development. Policies referencing only human-written code are out of scope for the current development environment. Define code review gates that apply to AI-suggested code blocks before merge.
Brief legal and compliance on SSDF attestation risk. If your organization has signed OMB M-22-18 attestations and has not assessed AI tool usage in the SDLC, that gap needs to be documented and remediated before the next attestation cycle.
Black Duck's product portfolio — including Coverity for SAST, Black Duck SCA, and Seeker for IAST — is positioned to address parts of this problem. Security teams should evaluate vendor claims against their own pipeline telemetry rather than accepting marketing positioning at face value. Run proof-of-concept scans on representative AI-generated code before making procurement decisions.
Original Source
Dark Reading