The SOC 2 Audit is Catching Up to Agentic AI
For the first three years of the enterprise generative AI boom, SOC 2 auditors largely treated AI systems as black boxes — interesting from a risk perspective, but not yet formally in scope for the Trust Services Criteria. That era is ending.
In 2025, the AICPA issued updated guidance clarifying that AI systems acting on behalf of an organization — particularly those with access to customer data, financial systems, or regulated infrastructure — are in scope for the Security, Availability, and Confidentiality Trust Services Criteria. Audit firms have updated their questionnaires accordingly. If your company deploys AI agents and maintains a SOC 2 Type II report (SOC 2 is an attestation, not a certification), the next audit cycle will almost certainly include AI-specific inquiries.
Most engineering and compliance teams are not ready. This post breaks down exactly what the Trust Services Criteria require, translated into concrete infrastructure decisions for teams running agentic AI systems.
The Relevant Trust Services Criteria
SOC 2 is organized around the Trust Services Criteria (TSC), which cover five categories: Security (the Common Criteria, CC), Availability (A), Processing Integrity (PI), Confidentiality (C), and Privacy (P). For AI agents, the most directly relevant criteria fall under the Security category.
CC6: Logical and Physical Access Controls
CC6.1 requires that access to systems is restricted to authorized users, components, and programs. The "programs" qualifier is directly applicable to AI agents. Your controls must demonstrate that only authorized agents can access protected resources, and that this authorization is enforced and logged.
CC6.2 requires that, prior to issuing credentials to access systems, the completeness, accuracy, existence, and rights of the requesting entity are evaluated. For AI agents, this translates to a requirement that agent credentials are issued through a formal provisioning process — not ad-hoc, not via shared accounts — and that the agent's declared scope is evaluated before access is granted.
CC6.3 requires that access is removed when no longer needed. For agents, this means session-scoped credentials that expire at task completion, plus a process for deprovisioning agents that are retired or modified.
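The CC6 requirements above can be sketched as a credential lifecycle: issue a token tied to one agent and one declared scope, enforce the scope on every access, and expire the token automatically. This is a minimal illustrative sketch, not a real token service; the function names and the in-memory store are assumptions, and a production system would back this with a secrets manager or an OAuth token issuer.

```python
import secrets
import time

# Hypothetical in-memory credential store, for illustration only.
_issued = {}

def issue_agent_credential(agent_id, declared_scope, ttl_seconds=900):
    """Issue a session-scoped credential bound to one agent and one scope (CC6.2)."""
    token = secrets.token_urlsafe(32)
    _issued[token] = {
        "agent_id": agent_id,
        "scope": frozenset(declared_scope),
        "expires_at": time.monotonic() + ttl_seconds,
    }
    return token

def check_access(token, resource):
    """Enforce CC6.1: only a valid, unexpired token whose scope covers the resource."""
    cred = _issued.get(token)
    if cred is None or time.monotonic() >= cred["expires_at"]:
        _issued.pop(token, None)  # expired credentials are removed (CC6.3)
        return False
    return resource in cred["scope"]
```

The key design choice is that expiry is enforced at check time rather than by a cleanup job, so a stale credential can never authorize an action even if deprovisioning lags.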
CC7: System Operations
CC7.2 requires monitoring system components for anomalies that might indicate malicious acts, natural disasters, or errors. Applied to AI agents, this means your monitoring must be capable of detecting anomalous agent behavior — actions outside the agent's declared scope, unusual access patterns, unexpected resource consumption — and alerting on deviations.
CC7.3 requires evaluating security events to determine whether they could or have resulted in a failure of the entity to meet its objectives. This requires a post-incident analysis capability. If an agent behaves unexpectedly, you need to be able to reconstruct exactly what it did, in what order, and with what authorization — which requires a complete, timestamped, attributed audit trail.
CC8: Change Management
CC8.1 requires that infrastructure and software changes are authorized, tested, and approved. For AI agents, model updates, prompt changes, and tool permission expansions are all changes that must go through this process. The audit evidence required is a record showing that each change was reviewed and approved before deployment.
What a Compliant AI Audit Trail Looks Like
The criteria above translate into a specific set of audit trail properties. An AI audit trail that satisfies SOC 2 requirements must be:
Attributed
Every action recorded in the audit log must be attributed to a specific, non-shared agent identity. Log entries like "service-account-12 called the payments API" do not satisfy CC6 if service-account-12 is shared among multiple agents or human processes. Each entry must be attributable to a specific agent instance executing a specific task.
Complete
The log must capture every action the agent takes that touches a protected resource. Sampling, aggregation, or selective logging is insufficient. SOC 2 auditors will ask: "If an agent exfiltrated customer PII, would you have a record of every read and every transmission?" The answer must be yes.
Immutable
Audit logs are only meaningful if they cannot be tampered with. This means logs must be written to a destination that the agent itself — and ideally the operator — cannot modify or delete. Cryptographic chaining (hash of each entry includes the hash of the prior entry) provides tamper-evidence. Write-once storage (object storage with object lock enabled, or an append-only database) provides tamper-resistance.
Timestamped with Authoritative Time
Log entries must carry timestamps from an authoritative time source that cannot be manipulated by the agent or its runtime environment. This is relevant for incident reconstruction: if timestamps are drawn from the agent's local clock, a compromised agent can falsify the temporal record.
Queryable and Reportable
An audit trail that exists but cannot be queried efficiently does not satisfy the audit evidence requirement. Auditors will ask for evidence demonstrating specific controls — for example, "show me all instances in the last 12 months where an agent accessed customer data outside business hours." If that query takes four days to run, the control is effectively non-functional. Your audit infrastructure must support time-bounded, identity-scoped, action-type-filtered queries with sub-minute response times.
The Common Gaps
Based on the audit inquiries that surfaced in 2025, the most common gaps in enterprise AI audit infrastructure are:
Shared service accounts. As discussed above, this fails CC6.1 and CC6.2 directly. Every agent must have its own identity.
Log forwarding without attribution. Many teams forward agent logs to a SIEM, but the logs themselves do not carry agent identity — they carry process IDs or container names that are not stable across invocations. Correlation is impossible at scale.
No coverage of tool calls. Application-level logs capture what the agent said, but not what it did. If the agent called an external API, wrote to a database, or invoked a code execution tool, those actions must be independently logged at the infrastructure level — not just derived from the agent's self-reported output.
Missing change management records for model updates. Model version updates are changes to a critical system component. They require a paper trail showing who authorized the update, what testing was performed, and what the rollback plan was. Most teams have no formal process for this.
No anomaly detection. Audit trails are retrospective by default. CC7.2 requires prospective monitoring. Real-time anomaly detection on agent behavior — flagging actions outside declared scope or access patterns that deviate from baseline — is a distinct capability from logging, and one that most teams have not built.
Preparing for Your Next Audit
The practical readiness checklist for SOC 2 AI agent compliance:
- Agent inventory. Maintain a current registry of every AI agent running in production, its declared scope, its identity, and its data access permissions. Auditors will ask for this.
- Per-agent credentials. Every agent has a unique, non-shared identity with credentials scoped to its declared function and lifetime.
- Infrastructure-level logging. All agent-to-resource interactions are logged at the infrastructure layer — not just the application layer — with agent identity, resource identifier, action type, timestamp, and outcome.
- Immutable log storage. Logs are written to append-only or write-once storage that the agent runtime cannot modify.
- Queryable audit interface. Your team can produce a filtered audit report for any time window, agent, or resource within minutes.
- Change management for models and prompts. Model updates and significant prompt changes go through a documented review and approval process with evidence retained.
- Anomaly alerting. Real-time or near-real-time detection for agent actions outside declared scope, with escalation paths.
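The agent inventory at the top of this checklist does not need to be elaborate; a structured record per agent that can answer auditor questions is enough to start. A sketch with illustrative field values (none of these names are prescribed by SOC 2):

```python
# Minimal illustrative inventory; field names and values are assumptions.
AGENT_INVENTORY = [
    {
        "agent_id": "billing-agent-7f3a",
        "owner_team": "payments-platform",
        "declared_scope": ["read:invoices", "write:invoices"],
        "data_access": ["billing-db, no PII columns"],
        "credential_type": "per-task token, 15-minute TTL",
        "last_reviewed": "2025-10-01",
    },
]

def find_agents_with_access(inventory, resource):
    """Answer a typical auditor question: which agents can touch this resource?"""
    return [
        a["agent_id"]
        for a in inventory
        if any(resource in scope for scope in a["declared_scope"])
    ]
```

Keeping the inventory queryable, rather than a static spreadsheet, is what lets it double as the source of truth for the per-agent credential scoping and anomaly-detection items above.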
None of these items require months of engineering work in isolation. But they do require deliberate infrastructure investment — infrastructure that most teams have not yet built, and that auditors are increasingly expecting to see.
The organizations that will sail through their 2026 SOC 2 audits are the ones building this foundation now, not the week before the audit window opens.