At itsavibe.ai, we're a community-driven initiative focused on establishing best practices, standards, and frameworks for AI-generated code. Our mission is to create, catalog, and raise awareness of best practices for AI-driven code development while identifying and mitigating the risks inherent to this practice.
We believe in the power of AI to transform software development, but we also recognize the importance of responsible adoption and transparent practices.
The ecosystem comprises four complementary standards: VIBES for audit data, VERIFY for cryptographic attestation, PRISM for risk scoring, and EVOLVE for agent learning and governance.
Creator & Lead Developer
Andre Ludwig is a cybersecurity and AI security leader with over two decades of experience. As Global Practice Leader for Cybersecurity, Privacy, and AI Security at Ankura, he leads engagements helping clients navigate AI risk governance, threat intelligence, and incident response. Previously, he built and led security teams at Capital One, InQuest, Bricata, Neustar, and QOMPLX, and consulted with DARPA on advanced security research. Andre co-founded the Conficker Working Group, led global cyber takedowns including Operation SMN and Operation Blockbuster, and launched Quad9 — a nonprofit DNS security platform protecting over 150 million users.
Security Advisor
Pedram Amini is a veteran security researcher, bug bounty pioneer, and founder with extensive experience in cybersecurity. As CTO at InQuest (acquired by OPSWAT in 2024), he developed Deep File Inspection (DFI) technology for real-time threat detection. Previously, he founded the Zero Day Initiative at TippingPoint and led Jumpshot before its acquisition by Avast. At itsavibe.ai, Pedram advises on security practices for AI-generated code and leads initiatives to identify and mitigate potential vulnerabilities in AI systems.
AI and Machine Learning Lead
Zachary Hanif is an AI and cybersecurity leader currently serving as Head of AI, ML & Data and Vice President of Traffic Intelligence at Twilio, where he leads AI capabilities at internet scale. Previously, he was Head of AI and ML at Capital One, CTO of Eastern Foundry, and an early employee at Endgame where he pioneered ML-based malware detection. Zachary is a published researcher, a founding member of The Honeynet Project, and has presented at Black Hat USA, RSA Conference, and NVIDIA GTC. He brings deep expertise in operationalizing machine learning at scale to the itsavibe.ai initiative.
Security Advisor
Mike Hom is the Product Architect and Head of AI Strategy for Google Cloud Security, where he focuses on security operations and applied threat intelligence and is responsible for strategic initiatives and outcomes. Over his career he has served as an offensive and defensive security engineer, software engineer, incident commander, intelligence lead, and vulnerability researcher, holding security roles at In-Q-Tel, Amazon Web Services, and the US government.
Help establish AI code transparency as the norm. Whether you're presenting to leadership, evaluating vendor practices, or shaping policy — these resources will help you make the case.
AI coding assistants are already writing production code across your engineering teams. Without structured audit data, you have no way to track what was AI-generated, verify its provenance, or assess its risk. VIBES provides the missing accountability layer.
Use these when presenting VIBES to management, security teams, or compliance officers.
There is no widely adopted standard for AI code audit data. Here's how existing approaches fall short.
| Approach | Scope | Notes |
|---|---|---|
| Git commit messages | Ad hoc labeling | Unstructured, no model/prompt data, easily omitted |
| Vendor telemetry | Usage analytics | Proprietary, not portable, no code-level mapping |
| SBOM / SLSA | Supply chain provenance | Designed for dependency tracking, not AI generation context |
| Internal policies | Process controls | Manual enforcement, no machine-readable format, org-specific |
| VIBES Ecosystem | Full AI code lifecycle | Open standard, structured data, attestation, risk scoring, agent governance |
Industry research and regulatory trends that support the case for AI code transparency.
Print or share this summary with stakeholders.
Base data standard for AI code audit metadata. Records model, prompt, session, and human review data in structured JSON.
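The normative VIBES schema is defined by the standard itself; the sketch below is only illustrative, showing the kind of structured JSON record a tool might emit. All field names here are assumptions, not the official schema.

```python
import json
from datetime import datetime, timezone

# Illustrative audit record; field names are assumptions, not the
# normative VIBES schema.
record = {
    "model": "example-model-v1",       # which model generated the code
    "session_id": "a1b2c3d4",          # ties related generations together
    "prompt_summary": "Add input validation to the signup handler",
    "files": ["handlers/signup.py"],   # code-level mapping
    "human_review": {
        "reviewed": True,
        "reviewer": "jdoe",
        "verdict": "approved",
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(record, indent=2))
```

Because the record is plain JSON, it can travel with a commit, a pull request, or a build artifact without any vendor-specific tooling.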
Cryptographic attestation layer. Uses DSSE envelopes and Ed25519 signatures to create tamper-proof audit trails.
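DSSE and Ed25519 are published building blocks. The sketch below implements DSSE's pre-authentication encoding (PAE) exactly as the DSSE specification defines it and assembles an envelope; the Ed25519 signature itself is passed in as a parameter, since producing one requires a key library (e.g. the `cryptography` package), and how VERIFY manages keys is not shown here.

```python
import base64
import json

def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE Pre-Authentication Encoding: the exact byte string that gets signed."""
    t = payload_type.encode("utf-8")
    return b"DSSEv1 %d %s %d %s" % (len(t), t, len(payload), payload)

def make_envelope(payload_type: str, payload: bytes, sig: bytes) -> str:
    """Assemble a DSSE envelope; `sig` is an Ed25519 signature over pae(...)."""
    return json.dumps({
        "payloadType": payload_type,
        "payload": base64.b64encode(payload).decode(),
        "signatures": [{"sig": base64.b64encode(sig).decode()}],
    })

# Worked example taken from the DSSE specification:
encoded = pae("http://example.com/HelloWorld", b"hello world")
assert encoded == b"DSSEv1 29 http://example.com/HelloWorld 11 hello world"
```

Signing the PAE output rather than the raw payload is what makes the audit trail tamper-evident: changing either the payload or its declared type invalidates the signature.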
Risk scoring extension. Automated severity bands and policy-driven CI/CD gating for AI-generated code changes.
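PRISM's actual bands and thresholds are defined by the standard; the gate below is a minimal sketch with assumed band names and cutoffs, showing how a CI step could turn a numeric risk score into a pass/fail decision under a policy.

```python
# Illustrative severity bands; the real PRISM cutoffs and names may differ.
BANDS = [(0.25, "low"), (0.5, "medium"), (0.75, "high"), (1.0, "critical")]

def band(score: float) -> str:
    """Map a 0-1 risk score to a severity band."""
    for cutoff, name in BANDS:
        if score <= cutoff:
            return name
    raise ValueError("score must be in [0, 1]")

def ci_gate(score: float, max_band: str = "medium") -> bool:
    """Policy-driven gate: fail the pipeline when the band exceeds the policy."""
    order = [name for _, name in BANDS]
    return order.index(band(score)) <= order.index(max_band)

print(band(0.6), ci_gate(0.6))  # a "high" change fails a "medium" policy
```

Keeping the thresholds in data rather than code means the policy can be tightened per-repository without changing the gate itself.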
Agent governance framework. Decision records, feedback loops, and guardrails for autonomous AI agent operations.
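EVOLVE's actual record format is likewise defined by the standard; the sketch below is a hypothetical decision record with a simple path-based guardrail, and every field name and the guardrail rule are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical guardrail: paths an autonomous agent may not modify.
PROTECTED_PATHS = ("deploy/", "secrets/")

@dataclass
class DecisionRecord:
    """One logged agent decision; field names are illustrative, not normative."""
    agent: str
    action: str
    target: str
    rationale: str
    allowed: bool = True
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_decision(agent: str, action: str, target: str, rationale: str) -> DecisionRecord:
    """Apply the guardrail, then log the decision for later review and feedback."""
    allowed = not target.startswith(PROTECTED_PATHS)
    return DecisionRecord(agent, action, target, rationale, allowed)

rec = record_decision("refactor-bot", "edit", "secrets/api_keys.env", "cleanup")
print(rec.allowed)  # the guardrail blocks edits under protected paths
```

The point of recording blocked decisions as well as allowed ones is that the log itself becomes the feedback loop: reviewers can see what an agent attempted, not just what it did.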
Open & Vendor-Neutral — No lock-in. Adopt at your pace. Start with VIBES Low assurance and grow as your needs evolve.
Learn more at itsavibe.ai • Join the community on SimpleX Chat • Contribute on GitHub
Start by introducing the standards to your team. Present the talking points above, share the one-pager, and begin with a pilot project using VIBES Low assurance.
We're building a diverse community of developers, researchers, and industry professionals passionate about the future of AI-generated code. Join the conversation, share ideas, ask questions, and help shape the future of the VIBES ecosystem.
We use SimpleX Chat for our community — a private, secure messenger with no user IDs or tracking. Discuss the standards, get implementation help, share what you're building, and connect with the team directly.
New to SimpleX? Download the app first (iOS, Android, desktop), then tap the group link to join.