VERA: Verifiable Enforcement for Runtime Agents
A Zero Trust Reference Architecture for Autonomous AI Agents
Berlin AI Labs — February 2026
Abstract
AI agents take real actions with real data at machine speed. A compromised agent can exfiltrate data, execute unauthorized financial transactions, and trigger cascading failures across downstream systems, often faster than a human can intervene.
The security community has responded with governance frameworks that specify what to document, what to log, and what to monitor. These frameworks provide valuable guidance but leave a critical gap: none define the runtime enforcement layer that makes governance verifiable.
This paper introduces VERA (Verifiable Enforcement for Runtime Agents), a zero trust reference architecture for AI agents that prioritizes enforcement over documentation, cryptographic proof over policy assertions, and reference implementation over specification prose.
The Enforcement Gap
Architecture Overview
VERA places a hardened enforcement plane between the agent runtime and the untrusted world. Trust is never assumed; it is enforced by Policy Enforcement Points (PEPs) and verified by a cryptographic Proof Engine.
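To make the enforcement plane concrete, the sketch below shows a deny-by-default Policy Enforcement Point that checks an agent's requested action against a tool allowlist and has a Proof Engine record every decision in a hash-chained, HMAC-signed log. This is a minimal illustration under assumed interfaces, not the VERA implementation: the class names, the allowlist policy, and the HMAC-based chaining are simplifications chosen for brevity; the actual PEP and Proof Engine interfaces are defined in the full specification.

# Minimal sketch: a deny-by-default PEP plus a Proof Engine that emits
# hash-chained, HMAC-signed decision records. All names are illustrative.
import hashlib
import hmac
import json
import time
from dataclasses import dataclass


@dataclass
class ActionRequest:
    agent_id: str
    tool: str          # e.g. "payments.transfer"
    arguments: dict


class ProofEngine:
    """Appends signed, hash-chained decision records (illustrative)."""

    def __init__(self, signing_key: bytes):
        self._key = signing_key
        self._prev_hash = b"\x00" * 32
        self.records = []

    def record(self, decision: dict) -> dict:
        payload = json.dumps(decision, sort_keys=True).encode()
        # Chain each entry to the previous one so tampering is detectable.
        entry_hash = hashlib.sha256(self._prev_hash + payload).digest()
        signature = hmac.new(self._key, entry_hash, hashlib.sha256).hexdigest()
        entry = {
            "decision": decision,
            "prev_hash": self._prev_hash.hex(),
            "entry_hash": entry_hash.hex(),
            "signature": signature,
        }
        self._prev_hash = entry_hash
        self.records.append(entry)
        return entry


class PolicyEnforcementPoint:
    """Denies by default; every allow or deny decision is recorded as a proof."""

    def __init__(self, allowed_tools: set, proofs: ProofEngine):
        self._allowed = allowed_tools
        self._proofs = proofs

    def authorize(self, request: ActionRequest) -> bool:
        allowed = request.tool in self._allowed
        self._proofs.record({
            "ts": time.time(),
            "agent_id": request.agent_id,
            "tool": request.tool,
            "allowed": allowed,
        })
        return allowed


if __name__ == "__main__":
    proofs = ProofEngine(signing_key=b"demo-key-not-for-production")
    pep = PolicyEnforcementPoint({"search.query"}, proofs)

    print(pep.authorize(ActionRequest("agent-7", "search.query", {"q": "pricing"})))   # True
    print(pep.authorize(ActionRequest("agent-7", "payments.transfer", {"amt": 500})))  # False
    print(json.dumps(proofs.records[-1], indent=2))

In this sketch an auditor can verify the chain by recomputing each entry hash from its predecessor and checking the HMAC, which is the spirit of "cryptographic proof over policy assertions"; the specification describes the production-grade mechanism.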
Read the Full Specification
The complete VERA specification, including formal security properties, threat models, and implementation details for all 12 services, is available in our open source repository.