About Vibe Security Radar
Vibe Security Radar is a public tracker that monitors vulnerabilities (CVEs, GHSAs, RustSec, and other advisories) where AI coding tools introduced the vulnerable code. The goal is to bring transparency to the security implications of AI-assisted development so that developers, maintainers, and security teams can make informed decisions.
Coverage: May 2025 – Mar 2026
Methodology
Every vulnerability is processed through a six-tier pipeline. Bulk local data is preferred for throughput; API calls serve as fallbacks. Fix commits are traced back to bug-introducing commits via git blame, then each commit is checked for AI tool signatures. A two-phase LLM verification process confirms causality — first understanding the vulnerability itself, then evaluating whether each AI-authored commit actually introduced it.
- Tier 1
Bulk advisory ingestion
Load all advisories from the OSV bulk data dump and the local GitHub Advisory Database clone (reviewed + unreviewed). No API calls needed for this tier.
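The bulk ingestion step can be sketched as a pure function over OSV-format records. Field names follow the public OSV schema (`references`, `affected`, `ranges`, `events`); the function itself is an illustrative simplification, not the tracker's actual code.

```python
# Tier 1 sketch: pull fix-commit references out of one OSV-format
# advisory record (field names per the public OSV schema).

def extract_fix_refs(advisory: dict) -> list[str]:
    """Return FIX-type reference URLs plus 'fixed' commits from GIT ranges."""
    fixes = [r["url"] for r in advisory.get("references", [])
             if r.get("type") == "FIX"]
    for affected in advisory.get("affected", []):
        for rng in affected.get("ranges", []):
            if rng.get("type") != "GIT":
                continue
            repo = rng.get("repo", "").rstrip("/")
            for event in rng.get("events", []):
                if "fixed" in event:
                    fixes.append(f"{repo}/commit/{event['fixed']}")
    return fixes
```

Because this runs over local bulk dumps, a full batch scan stays free of API rate limits.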
- Tier 2
NVD references
Parse NVD reference URLs to extract GitHub commit and pull-request links for each CVE.
- Tier 3
GitHub search (fallback)
Search GitHub commits for CVE/GHSA mentions when earlier tiers lack fix-commit SHAs.
- Tier 4
Git blame analysis
Clone the affected repository, diff the fix commit, and run SZZ-style git blame to trace bug-introducing commits. Only security-relevant files (identified by LLM analysis in Tier 6) are blamed, reducing noise from unrelated changes in the fix commit.
- Tier 5
AI signature detection
Check each bug-introducing commit for AI coding tool signatures: co-author trailers, bot email addresses, commit message markers, and tool-specific metadata. Known CI/CD bots (Dependabot, Renovate, GitHub Actions, etc.) are explicitly filtered out.
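A condensed sketch of the signature check; the signature and bot lists below are small illustrative subsets, not the tracker's full ruleset for all 40 tools.

```python
# Tier 5 sketch: flag AI-tool signatures while filtering CI/CD bots.
AI_SIGNATURES = (                         # illustrative subset
    "co-authored-by: claude",
    "co-authored-by: aider",
    "devin-ai-integration",
)
CI_BOTS = ("dependabot[bot]", "renovate[bot]", "github-actions[bot]")

def is_ai_signed(message: str, author_email: str) -> bool:
    """True if the commit carries an AI-tool signature and is not
    from a known CI/CD bot."""
    text = f"{message}\n{author_email}".lower()
    if any(bot in text for bot in CI_BOTS):
        return False                      # known automation, not an AI tool
    return any(sig in text for sig in AI_SIGNATURES)
```

Checking the bot list first matters: a CI bot's commit might otherwise match a loose keyword and surface as a false positive.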
- Tier 6
Two-phase LLM causality verification
Phase 1 (per-CVE): An LLM analyzes the fix commit to understand the vulnerability — its type, root cause, vulnerable code pattern, and which files are security-relevant. Phase 2 (per-commit): For each AI-signaled commit, the LLM uses Phase 1 context to verify whether the commit actually introduced the vulnerability, producing a structured verdict with causal chain analysis. This two-phase approach filters out false positives from commits that merely touch the same file as the vulnerability.
LLM Verification
Each CVE with AI-signaled commits goes through a two-phase LLM analysis using Gemini 3.1 Flash Lite. The first phase analyzes the fix commit to understand the vulnerability: its type (e.g., command injection, XSS), root cause, and vulnerable code pattern. The second phase evaluates each blamed commit against this context to determine whether it causally introduced the vulnerability.
Each verdict includes structured data — vulnerability type, root cause description, vulnerable pattern, and a causal chain explaining how the commit led to the vulnerability. CVEs where all AI-signaled commits are judged UNRELATED or UNLIKELY are filtered out, significantly reducing false positives compared to file-level blame alone.
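The filtering rule can be sketched as follows. Only the negative labels (UNRELATED, UNLIKELY) are named above, so treating everything else as causal is an assumption of this sketch.

```python
# Post-verification filter: a CVE survives only if at least one
# AI-signaled commit received a causal (non-negative) verdict.
NON_CAUSAL = {"UNRELATED", "UNLIKELY"}

def keep_cve(verdicts: list[str]) -> bool:
    """One verdict label per AI-signaled, blamed commit."""
    return any(v not in NON_CAUSAL for v in verdicts)
```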
AI Tools Monitored
We detect signatures from 40 AI coding tools. Detection relies on co-author trailers, bot email addresses, commit message keywords, and other metadata that these tools embed in git commits. Known CI/CD bots (Dependabot, Renovate, GitHub Actions, etc.) are explicitly filtered out to prevent false positives.
- Claude Code
- Cursor
- Aider
- GitHub Copilot
- Devin
- Windsurf
- Codeium
- Amazon Q
- Sweep
- OpenAI Codex
- Google Gemini
- Google Jules
- Tabnine
- Sourcegraph Cody
- OpenCode
- Kiro
- JetBrains Junie
- Roo Code
- Cline
- OpenHands
- Lovable
- Fine Dev
- Replit Agent
- Qodo
- Continue
- Augment Code
- Trae
- GitLab Duo
- Kimi Code
- Google Antigravity
- Kilo Code
- CodeGeeX
- Bolt.new
- Zencoder
- CodeGPT
- Amp
- v0
- Same.dev
- Leap.new
- Traycer
Data Sources
- OSV.dev (bulk + API) — Open Source Vulnerability database. Bulk data dumps are used for batch scans; the REST API fills in any gaps.
- GitHub Advisory Database (local clone) — Full git clone of GitHub-reviewed and community-unreviewed advisories, enabling offline batch analysis without API rate limits.
- NVD — National Vulnerability Database (NIST). Reference URLs are parsed to extract commit and pull-request links.
- GitHub Search API — Fallback commit and code search when advisory databases lack fix-commit SHAs.
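As an example of the API fallback, a sketch that builds (but does not send) a request against OSV's `GET /v1/vulns/{id}` endpoint; authentication is not required for OSV, and retries and error handling are omitted here.

```python
import urllib.request

OSV_VULN_ENDPOINT = "https://api.osv.dev/v1/vulns/"

def osv_fallback_request(advisory_id: str) -> urllib.request.Request:
    """Build a request for one advisory, used only when the local
    bulk dump lacks the record (see Tier 1)."""
    return urllib.request.Request(
        OSV_VULN_ENDPOINT + advisory_id,
        headers={"Accept": "application/json"},
    )
```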
Limitations
- Detection requires explicit signatures (co-author trailers, bot emails, commit message markers); AI tools that leave no trace in commits are invisible to the pipeline.
- Git blame may attribute lines to the wrong commit in some edge cases; two-phase LLM verification reduces but does not eliminate this.
- LLM verification uses a lightweight model (Gemini 3.1 Flash Lite) and may occasionally misclassify borderline cases.
- Only publicly disclosed vulnerabilities with available fix commits can be analyzed; vulnerabilities in closed-source code or without public patches are not covered.
Contact
Questions, suggestions, or a false positive/negative to report? Reach out at hanqing@gatech.edu