airunidentity.com

AI Run Identity Must Be Independently Verifiable

If verification requires the original system's cooperation, it is not verification. It is trust.

What “Independent” Means

Independent means the verifier has no relationship with the system being verified. No shared infrastructure. No API access. No trust relationship. The verifier receives a claim — "this run had this identity" — and can confirm or deny that claim using only the information provided.

This is not a high bar for most systems. A bank statement can be verified against the bank's records. A software build can be verified against its source code. A signed document can be verified against the signer's public key. In each case, verification does not require the original system to participate in the act of verification.
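The build example can be made concrete. A hedged sketch, assuming the published claim is simply a SHA-256 digest over the source files: any verifier holding the sources can recompute the digest locally, without contacting the publisher.

```python
import hashlib
from pathlib import Path

def source_digest(paths: list[Path]) -> str:
    """Hash source files in a fixed order so every verifier
    computes the same digest from the same inputs."""
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(p.name.encode())
        h.update(p.read_bytes())
    return h.hexdigest()

# A publisher claims: "this build came from these sources, digest D."
# Any party with the sources recomputes D and compares. The publisher
# does not participate in the act of verification.
```

The sort is what makes the check deterministic: the digest depends only on the file contents, not on the order a particular verifier happened to list them in.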

For AI runs, this property does not exist. When a system claims a run executed under certain conditions, the only way to check that claim is to ask the system. Query its logs. Examine its traces. Inspect its dashboards. All of these require the system's infrastructure. All of them require trusting that the system recorded accurately. None of them constitute independent verification.

The distinction is between verification and corroboration. Corroboration asks: does the system's own evidence support its claim? Verification asks: can the claim be confirmed without the system's evidence? Current AI systems support the first. No system supports the second.

Why Current Approaches Require System Cooperation

Every current approach to confirming what an AI run did requires access to the system that ran it. This is not a limitation of implementation. It is a consequence of architecture.

Logging stores events in the system's own storage. To verify a log, you must access that storage. To trust a log, you must trust that storage was not modified. The log is not evidence of what happened. It is the system's account of what happened. These are different things.

Tracing propagates context through the system's own execution path. To verify a trace, you must access the tracing infrastructure. The trace exists within the system's boundary. An external party cannot access it without the system's cooperation. And even with access, the trace records the flow of execution, not the composition of the run.

Observability platforms aggregate telemetry from the system's own instrumentation. They present the system's view of itself. An external party examining this telemetry is not performing independent verification. They are reading the system's self-report through a different interface.

In every case, the evidence lives inside the system. Verification cannot be independent when the evidence is not independent. The system is both the subject of verification and the sole source of evidence. This is a structural conflict, not a tooling gap.

What Independent Verifiability Would Structurally Require

For verification to be independent, the evidence must exist outside the system. Not a copy of the system's logs stored elsewhere. Not a mirror of the system's telemetry. Evidence that is structurally independent — generated in a way that the system cannot unilaterally alter after the fact.

The run's identity would need to be established at the moment of execution. Not recorded afterward. Not reconstructed from artifacts. Established — meaning the identity is fixed before the run produces its output, and that fixation is visible to parties outside the system.

The verification process would need to be deterministic. Given the same identity claim and the same verification inputs, any party must reach the same conclusion. If verification produces different results for different verifiers, it is not verification. It is interpretation.

And verification must not require privileged access. No API keys. No service accounts. No network connectivity to the original system. The verifier needs only the claim and the rules for checking it. Everything else must be contained in the identity itself.
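As a minimal sketch of what satisfying these conditions could look like (not an existing protocol; every name here is illustrative), suppose the identity is a hash over a canonical encoding of the run's composition, fixed before any output exists. Verification is then a deterministic, offline recomputation:

```python
import hashlib
import json

def run_identity(composition: dict) -> str:
    """Fix the run's identity from its composition alone.
    Canonical JSON (sorted keys, fixed separators) makes the
    digest deterministic for every verifier."""
    canonical = json.dumps(composition, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(claimed_identity: str, composition: dict) -> bool:
    """Deterministic, offline check: recompute and compare.
    No API keys, no service accounts, no network access."""
    return run_identity(composition) == claimed_identity

# The system commits to the identity before producing output
# (fields are hypothetical examples of a run's composition):
composition = {"model": "example-model-v1", "prompt": "What is 2+2?", "temperature": 0}
claim = run_identity(composition)

# Later, any party holding the claim and the composition reaches
# the same verdict by the same rules:
assert verify(claim, composition)
assert not verify(claim, {**composition, "temperature": 1})
```

Note what the system cannot do here: once the claim has been seen by an outside party, the system cannot alter the composition and still match it. The evidence is structurally independent of the system's storage.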

Why This Is the Hardest Condition to Satisfy

Of the four conditions, independent verifiability is the most difficult because it requires the system to produce something it cannot later control. Every other condition can, in principle, be satisfied by a well-designed internal system. Consistency is a property of the identity-generation process. Source derivation is a property of the input selection. Interpretation-freedom is a property of the representation format. All three are internal design choices.

Independent verifiability is external. It requires the system to create an artifact that has meaning outside its own boundaries. An artifact that can be examined, tested, and confirmed by a party who has never interacted with the system. This is a fundamentally different kind of requirement.

Most identity systems in computing avoid this problem. Database IDs are verified by querying the database. Session tokens are verified by checking the session store. OAuth tokens are verified by calling the issuer. Each of these is a closed system of verification — the verifier must trust the issuer.
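The difference between the closed model and independent checking can be shown side by side. In this hypothetical sketch, the token check works only with access to the issuer's store, while the digest check needs nothing but the claim and the public rule:

```python
import hashlib

# Closed verification: the issuer's store IS the authority.
# Without access to it, the token cannot be checked at all.
session_store = {"tok_123": {"user": "alice"}}  # lives inside the issuer

def verify_token_closed(token: str) -> bool:
    # Requires the issuer's cooperation and trusting its records.
    return token in session_store

# Independent verification: the rule is public, the check is local.
def verify_digest_independent(data: bytes, claimed: str) -> bool:
    return hashlib.sha256(data).hexdigest() == claimed
```

In the closed case the verifier must trust the store's contents; in the independent case the verifier trusts only the hash function, which no single party controls.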

For AI run identity, that closed model is insufficient. The question being asked — "did this run execute under these conditions?" — cannot be answered by asking the system that ran it. The system is the subject of the question. It cannot also be the authority on the answer.

There is no system today that resolves this for AI runs. The condition is clear. The path to satisfying it is not.