One foundation standard for AI assurance data. Four extensions that build on it — cryptographic attestation, risk scoring, agent governance, and incident response & forensics — addressing fundamentally different problem spaces from a single, shared data substrate.
Looking for a simpler overview? Switch to User view →
AI is generating production code, executing workflows, and making architectural decisions at scale. Developers, agents, and autonomous systems produce artifacts daily with no standard way to record what was generated, by whom, or why.
This creates four compounding blind spots:
This is not theoretical. CrowdStrike researchers found that LLM vulnerability rates jump nearly 50% under certain prompt conditions. Without audit data, you cannot identify which code was affected. See why attestation matters →
The VIBES ecosystem addresses these gaps with one foundation standard and four extensions: VIBES captures the data, VERIFY proves it's authentic, PRISM scores the risk, EVOLVE turns it into actionable intelligence, and TRACE handles incident response and forensics when something goes wrong.
Everything starts with data. VIBES (Verifiable Inventory of Bot-Engineered Signals) is the base standard — a structured, tool-agnostic format for recording AI involvement. It defines three assurance levels that progressively capture more about the context and execution of AI tools and agents. Start simple and increase detail as your needs grow.
"Which AI model and tool generated this function?"
Records the tool name and version, and the model name and version, for every AI-generated line or function. The minimum viable audit trail.
~200 bytes per annotation
Learn more →
"What prompt produced this code?"
Adds the full prompt text, prompt type, and context files to every annotation. Enables reproducibility and audit trails for regulated industries.
~2–10 KB per annotation
Learn more →
"What was the model thinking when it wrote this?"
Captures the full chain-of-thought and reasoning traces. For safety-critical systems, security forensics, and AI research.
~10–500 KB per annotation
Learn more →
VIBES also supports context graphs for tracking causal relationships between code changes and multi-agent delegation hierarchies for orchestrated workflows.
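To make the Low level concrete, here is a sketch of what a single annotation might look like as a JSON record. The field names, layout, and hashing scheme shown are illustrative assumptions, not the normative format defined by the spec:

```python
import hashlib
import json

# The AI-generated span this annotation describes (hypothetical example).
code_span = "def parse_config(path):\n    return json.load(open(path))\n"

# Hypothetical Low-level annotation: tool + model identity plus a
# content hash tying the record to the generated code.
annotation = {
    "level": "low",
    "tool": {"name": "example-ai-tool", "version": "1.4.2"},    # hypothetical
    "model": {"name": "example-model", "version": "2025-01"},   # hypothetical
    "content_hash": "sha256:" + hashlib.sha256(code_span.encode()).hexdigest(),
}

encoded = json.dumps(annotation, separators=(",", ":")).encode()
print(len(encoded))  # roughly 200 bytes, in line with the Low-level estimate
```

Medium and High levels would extend the same record with prompt text, context files, and reasoning traces, which is where the per-annotation size grows into kilobytes.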
Beyond Source Code: While the examples and tooling above focus on software development, VIBES is a general assurance data format. Its annotation model, context graphs, and attestation pipeline apply to any domain where AI decisions need to be recorded, verified, and audited.
VIBES is tool-agnostic — it works with Claude Code, Cursor, Windsurf, Copilot, CLINE, or any AI tool. All data lives in a .ai-audit/ directory alongside your code, tracked in git.
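As a sketch, the `.ai-audit/` directory might be laid out like this (file and folder names are hypothetical, not prescribed by the spec):

```
.ai-audit/
├── annotations/           # one JSON record per AI-generated span
│   └── 2025-06-01T12-00-00Z-a1b2c3.json
├── context-graph.json     # causal links between code changes (optional)
└── manifest.json          # index and schema version
```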
VIBES produces the base data. Each extension both consumes that data and defines additional fields of its own — VERIFY adds cryptographic signatures, PRISM adds risk scores, EVOLVE adds governance and decision records, TRACE adds IoCs, sealed evidence bundles, and incident reports. They are true extensions from both a data-generation and use-case perspective. Adopt just VIBES, or layer on any combination as your needs evolve.
The base data standard. Defines what to record, how to hash it, and where to store it.
Read the VIBES spec →
The security attestation extension. Cryptographic proof that your audit data is authentic and untampered.
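The shape of that guarantee can be sketched as wrapping an audit record in a signed envelope. The envelope format and algorithms here are illustrative assumptions, and an HMAC stands in for whatever signature scheme the VERIFY spec actually defines, so the example stays dependency-free:

```python
import hashlib
import hmac
import json

SECRET = b"demo-key-not-for-production"  # stand-in for a real signing key

def seal(record: dict) -> dict:
    """Serialize a record canonically and attach a signature over it."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return {
        "payload": payload.decode(),
        "signature": hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
    }

def verify(envelope: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(SECRET, envelope["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

env = seal({"tool": "example-ai-tool", "model": "example-model"})
assert verify(env)                      # untampered data verifies
env["payload"] = env["payload"].replace("example-model", "other-model")
assert not verify(env)                  # any edit breaks verification
```

The point of the sketch is the property, not the primitive: once audit data is sealed, any after-the-fact edit is detectable.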
Read the VERIFY spec →
The risk scoring extension. Computes severity bands from audit data for CI/CD gating and security triage.
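As a sketch of how severity bands can drive a CI/CD gate, the snippet below maps a numeric risk score to a band and fails the pipeline above a threshold. The band names and cutoffs are made up for illustration; PRISM's actual scoring model is defined by its spec:

```python
# Hypothetical thresholds, highest first.
BANDS = [(0.9, "critical"), (0.7, "high"), (0.4, "medium"), (0.0, "low")]

def severity_band(score: float) -> str:
    """Return the first band whose threshold the score meets."""
    for threshold, band in BANDS:
        if score >= threshold:
            return band
    return "low"

def ci_gate(score: float, fail_at: str = "high") -> bool:
    """True if the pipeline should fail for this score."""
    order = ["low", "medium", "high", "critical"]
    return order.index(severity_band(score)) >= order.index(fail_at)

print(severity_band(0.85))  # -> high
print(ci_gate(0.85))        # -> True
print(ci_gate(0.3))         # -> False
```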
Read the PRISM spec →
The agent learning extension. Governance frameworks, decision records, and feedback loops for self-improving agents.
Read the EVOLVE spec →
The IR & forensics extension. IoC vocabulary, sealed evidence bundles, and STIX-compatible incident reports for agentic systems. Standalone & provider-agnostic.
Read the TRACE spec →
This site has two tracks: a User track for developers who use AI tools, and a Technical track (you're on it now) for implementors and contributors. Use the persona switcher in the navigation bar to switch between them.
Add transparency to your projects in minutes. Drop a badge in your README, run vibecheck to validate, and show the world how your code was built.
Get started →
Track AI provenance across your codebase for compliance and security. Know which models generated which code — and reassess risk retroactively when threats emerge.
Explore the standard →
Integrate VIBES into your AI coding tool. A basic Low implementation takes about 200 lines of code. Add VERIFY for attestation, PRISM for risk scoring, EVOLVE for agent learning, or TRACE for IR & forensics. Be the tool that proves its work.
Implementation guide →
The VIBES ecosystem is built on a simple principle: one shared data substrate, extended for each problem space. Every extension reads from the base VIBES data and contributes new data of its own — so adopting one extension doesn't require the others, but combining them creates compounding value.
VERIFY adds cryptographic envelopes and signatures that prove VIBES data hasn't been tampered with. PRISM computes and stores risk scores derived from VIBES signals. EVOLVE introduces delegation records, decision graphs, and governance metadata that power agent learning. TRACE defines IoCs, sealed evidence bundles, and STIX-compatible incident reports for the moment something goes wrong. Each extension enriches the audit trail with its own data while building on the same foundation — and the ecosystem is open for future extensions we haven't imagined yet.
VIBES is open, free, and community-driven. Contribute to the specification, build integrations, submit implementations to the registry, or help shape the ecosystem. Every contribution moves the standard forward.
Whether you want to join the community, build with the standard, or champion adoption — there's a path for you.
Contribute to specs, tooling, and the ecosystem. Discuss ideas, review proposals, and shape the standards.
Get involved →
Integrate VIBES into your tool or platform. A basic Low implementation takes about 200 lines of code.
Implementation guide →
Advocate for VIBES adoption in your organization. Help establish transparency as the norm for AI-generated code.
Resources →