Vibe Security Radar

About Vibe Security Radar

Vibe Security Radar is a public tracker that monitors vulnerabilities (CVEs, GHSAs, RustSec, and other advisories) where AI coding tools introduced the vulnerable code. The goal is to bring transparency to the security implications of AI-assisted development so that developers, maintainers, and security teams can make informed decisions.

73 AI-linked vulnerabilities tracked · 11,189 advisories analyzed

Coverage: May 2025 – Mar 2026

Methodology

Every vulnerability is processed through a six-tier pipeline. Bulk local data is preferred for throughput; API calls serve as fallbacks. Fix commits are traced back to bug-introducing commits via git blame, then each commit is checked for AI tool signatures. A two-phase LLM verification process confirms causality — first understanding the vulnerability itself, then evaluating whether each AI-authored commit actually introduced it.
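The tier-fallback logic just described can be sketched in a few lines. This is an illustrative sketch only: the function names are hypothetical stand-ins for the real pipeline stages, not the tracker's actual code.

```python
# Illustrative sketch of the "bulk data preferred, API as fallback" logic.
# All tier functions below are hypothetical stand-ins; the stubs exist only
# so the sketch runs standalone.

def find_fix_commits(advisory: dict) -> list[str]:
    """Try cheaper local tiers first; fall back to API search (Tier 3)."""
    for tier in (tier1_bulk_ingest, tier2_nvd_references, tier3_github_search):
        commits = tier(advisory)
        if commits:
            return commits
    return []

def tier1_bulk_ingest(advisory: dict) -> list[str]:
    # Would read from the OSV bulk dump / local GHSA clone.
    return advisory.get("osv_fix_commits", [])

def tier2_nvd_references(advisory: dict) -> list[str]:
    # Would parse NVD reference URLs for GitHub commit/PR links.
    return advisory.get("nvd_commit_urls", [])

def tier3_github_search(advisory: dict) -> list[str]:
    # Would search GitHub commits for CVE/GHSA mentions.
    return []

print(find_fix_commits({"nvd_commit_urls": ["abc123"]}))  # -> ['abc123']
```

Later tiers only run when earlier ones come up empty, which is what keeps API usage to a minimum.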

  1. Tier 1

    Bulk advisory ingestion

    Load all advisories from the OSV bulk data dump and the local GitHub Advisory Database clone (reviewed + unreviewed). No API calls needed for this tier.

  2. Tier 2

    NVD references

    Parse NVD reference URLs to extract GitHub commit and pull-request links for each CVE.

  3. Tier 3

    GitHub search (fallback)

    Search GitHub commits for CVE/GHSA mentions when earlier tiers lack fix-commit SHAs.

  4. Tier 4

    Git blame analysis

    Clone the affected repository, diff the fix commit, and run SZZ-style git blame to trace bug-introducing commits. Only security-relevant files (identified by LLM analysis in Tier 6) are blamed, reducing noise from unrelated changes in the fix commit.

  5. Tier 5

    AI signature detection

    Check each bug-introducing commit for AI coding tool signatures: co-author trailers, bot email addresses, commit message markers, and tool-specific metadata. Known CI/CD bots (Dependabot, Renovate, GitHub Actions, etc.) are explicitly filtered out.

  6. Tier 6

    Two-phase LLM causality verification

    Phase 1 (per-CVE): An LLM analyzes the fix commit to understand the vulnerability — its type, root cause, vulnerable code pattern, and which files are security-relevant. Phase 2 (per-commit): For each AI-signaled commit, the LLM uses Phase 1 context to verify whether the commit actually introduced the vulnerability, producing a structured verdict with causal chain analysis. This two-phase approach eliminates false positives from commits that merely touch the same file as the vulnerability.
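Tier 4's SZZ-style blame can be sketched roughly as follows: take the lines the fix commit deleted (the vulnerable lines), restrict them to the security-relevant files, and blame each one against the fix commit's parent. This is a minimal sketch under those assumptions, not the tracker's implementation; the helper names are illustrative.

```python
# Minimal SZZ-style sketch: blame only the lines a fix commit deleted,
# and only in files flagged as security-relevant. Helper names are
# illustrative, not the tracker's actual code.
import re
import subprocess

def deleted_line_ranges(diff_text: str) -> dict[str, list[int]]:
    """Map each file in a unified diff to the old-file line numbers it deletes."""
    ranges: dict[str, list[int]] = {}
    current_file, old_line = None, 0
    for line in diff_text.splitlines():
        if line.startswith("--- a/"):
            current_file = line[6:]
            ranges.setdefault(current_file, [])
        elif line.startswith("@@"):
            old_line = int(re.match(r"@@ -(\d+)", line).group(1))
        elif line.startswith("-") and not line.startswith("---"):
            ranges[current_file].append(old_line)
            old_line += 1
        elif not line.startswith("+"):
            old_line += 1  # context or header line in the old file
    return ranges

def blame_introducers(repo: str, fix_sha: str, relevant_files: set[str]) -> set[str]:
    """Blame each deleted line at the fix commit's parent to collect introducing SHAs."""
    diff = subprocess.run(
        ["git", "-C", repo, "show", "--unified=0", fix_sha],
        capture_output=True, text=True, check=True,
    ).stdout
    shas: set[str] = set()
    for path, lines in deleted_line_ranges(diff).items():
        if path not in relevant_files:
            continue  # skip files the Tier 6 LLM did not flag as security-relevant
        for n in lines:
            out = subprocess.run(
                ["git", "-C", repo, "blame", "-L", f"{n},{n}", "--porcelain",
                 f"{fix_sha}^", "--", path],
                capture_output=True, text=True, check=True,
            ).stdout
            shas.add(out.split()[0])  # porcelain output starts with the SHA
    return shas
```

Restricting the blame to deleted lines in relevant files is what cuts the noise from incidental refactoring bundled into the same fix commit.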

LLM Verification

Each CVE with AI-signaled commits goes through a two-phase LLM analysis using Gemini 3.1 Flash Lite. The first phase analyzes the fix commit to understand the vulnerability: its type (e.g., command injection, XSS), root cause, and vulnerable code pattern. The second phase evaluates each blamed commit against this context to determine whether it causally introduced the vulnerability.

Each verdict includes structured data — vulnerability type, root cause description, vulnerable pattern, and a causal chain explaining how the commit led to the vulnerability. CVEs where all AI-signaled commits are judged UNRELATED or UNLIKELY are filtered out, significantly reducing false positives compared to file-level blame alone.
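The filtering rule above can be stated precisely in a few lines. The field names and the non-causal judgment labels beyond UNRELATED/UNLIKELY are assumptions for illustration, not the tracker's actual schema.

```python
# Hypothetical shape of a Phase 2 verdict and the CVE-level filtering rule.
# Field names and the "INTRODUCED" label are illustrative assumptions;
# UNRELATED and UNLIKELY are the judgments the methodology names.
from dataclasses import dataclass

@dataclass
class Verdict:
    commit_sha: str
    vulnerability_type: str  # e.g. "command injection", "XSS"
    root_cause: str
    causal_chain: str
    judgment: str            # e.g. "INTRODUCED", "UNLIKELY", "UNRELATED"

def keep_cve(verdicts: list[Verdict]) -> bool:
    """Drop a CVE only when every AI-signaled commit was judged non-causal."""
    return any(v.judgment not in {"UNRELATED", "UNLIKELY"} for v in verdicts)
```

A CVE survives as long as at least one AI-signaled commit gets a causal verdict; file-level co-occurrence alone is never enough.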

AI Tools Monitored

We detect signatures from 40 AI coding tools. Detection relies on co-author trailers, bot email addresses, commit message keywords, and other metadata that these tools embed in git commits. Known CI/CD bots (Dependabot, Renovate, GitHub Actions, etc.) are explicitly filtered out to prevent false positives.
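A minimal sketch of this matching step is below. The patterns are illustrative examples of the trailer, email, and message styles such tools emit, not the tracker's full 40-tool ruleset.

```python
# Sketch of AI-signature matching with a CI/CD bot allowlist filter.
# The patterns are illustrative examples, not the tracker's real ruleset.
import re

AI_SIGNATURES = [
    # Co-author trailers, e.g. "Co-authored-by: Claude <noreply@anthropic.com>"
    re.compile(r"^Co-authored-by:.*(copilot|claude|cursor|aider)", re.I | re.M),
    # Bot-style email addresses (hypothetical examples)
    re.compile(r"noreply@(anthropic|cursor)", re.I),
    # Commit message markers some tools append
    re.compile(r"generated with .*(copilot|claude code)", re.I),
]

# Known CI/CD bots are filtered out to prevent false positives.
CI_BOTS = re.compile(r"(dependabot|renovate|github-actions)\[bot\]", re.I)

def is_ai_authored(message: str, author_email: str) -> bool:
    """Flag a commit as AI-signed unless it comes from a known CI/CD bot."""
    if CI_BOTS.search(author_email):
        return False  # Dependabot, Renovate, GitHub Actions, etc.
    haystack = f"{message}\n{author_email}"
    return any(p.search(haystack) for p in AI_SIGNATURES)
```

Note that the bot filter runs first: a Dependabot commit is excluded even if its message happens to mention an AI tool.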

Data Sources

Limitations

Contact

Questions, suggestions, or a false positive/negative to report? Reach out at hanqing@gatech.edu.