Technical Alignment Map · March 2026
Regulatory Mapping
DORA × EU AI Act
Overview
This page maps AmplefAI’s enforcement properties to regulatory obligations commonly associated with DORA (Digital Operational Resilience Act) and the EU AI Act.
The mapping is intentionally honest. It shows what is directly supported by the runtime, what is only partially addressed, and where meaningful gaps remain.
AmplefAI is an enforcement runtime, not a compliance framework. It provides cryptographically verifiable control primitives that can support specific regulatory obligations. It does not claim to be a complete compliance solution, a conformity assessment, or legal advice.
How to Read This Mapping
The tables below rate each runtime property against the relevant regulation as Directly supported or Partially supported. Obligations the runtime does not satisfy on its own are listed separately under Known Gaps.
GEI Properties × Regulation
Traceability, Logging, and Reconstruction
Supports obligations associated with DORA’s operational resilience and incident reconstruction requirements, and the AI Act’s logging requirements for relevant systems.
| GEI Property | DORA | AI Act |
|---|---|---|
| Every action requires signed authorization | Directly supported | Directly supported |
| Evidence record exists before action runs | Directly supported | Directly supported |
| Tamper detection across the decision ledger | Directly supported | Directly supported |
| Governed decisions can be reconstructed after the fact | Directly supported | Directly supported |
| Replay of previously used authorization is prevented | Directly supported | Partially supported |
AmplefAI’s append-only, hash-linked evidence chain and replay model support strong traceability and post-incident reconstruction. Every governed action produces a verifiable chain showing what was authorized, what was bound, and what executed.
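To make the mechanism concrete, the following is an illustrative sketch of an append-only, hash-linked evidence chain — not AmplefAI's actual implementation; record fields and class names are hypothetical:

```python
import hashlib
import json

def _digest(payload: dict, prev_hash: str) -> str:
    # Hash the record together with the previous link so entries chain.
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()

class EvidenceChain:
    """Append-only, hash-linked ledger. Altering any earlier record
    breaks every subsequent link, which makes tampering detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self._records = []  # list of (payload, hash) tuples

    def append(self, payload: dict) -> str:
        prev = self._records[-1][1] if self._records else self.GENESIS
        h = _digest(payload, prev)
        self._records.append((payload, h))
        return h

    def verify(self) -> bool:
        prev = self.GENESIS
        for payload, h in self._records:
            if _digest(payload, prev) != h:
                return False  # chain broken: a record or its order changed
            prev = h
        return True
```

Rewriting any stored payload, or reordering records, causes `verify()` to return false for every link from that point on — which is what gives post-incident reconstruction its integrity guarantee.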
Access Control, Integrity, and Cybersecurity
Supports obligations associated with DORA’s ICT risk controls and the AI Act’s robustness / cybersecurity requirements.
| GEI Property | DORA | AI Act |
|---|---|---|
| Signing key isolated from execution authority | Directly supported | Directly supported |
| Authorization scoped to specific tool and tenant | Directly supported | Directly supported |
| Authorization bound to exact parameters via cryptographic hash | Directly supported | Directly supported |
| Authorization expires on strict validity window | Directly supported | Partially supported |
| Any verification failure results in deny, no fallback | Directly supported | Directly supported |
| Authorizing policy version carried with the decision | Directly supported | Partially supported |
The important point here is architectural: access control is not just configuration. It becomes a runtime invariant that can be verified after the fact.
Incident Reconstruction and Evidence Quality
Supports DORA’s incident handling and reconstruction expectations and helps raise the evidentiary quality of post-incident analysis.
| GEI Property | DORA |
|---|---|
| Reconstruct a governed action from the evidence chain | Directly supported |
| Detect tampering in the decision record | Directly supported |
| Prove the timeline: authorization preceded action | Directly supported |
This is stronger than ordinary logging. Given a trace identifier, the system can reconstruct the governed action boundary with verifiable integrity.
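As an illustration of the timeline property (record shapes are hypothetical), proving that authorization preceded execution reduces to an ordering check over an already-verified evidence chain:

```python
def authorization_preceded_action(records: list[dict], trace_id: str) -> bool:
    """Given an ordered, integrity-verified evidence chain, confirm the
    authorization record for a trace appears before its execution record."""
    auth_index = exec_index = None
    for i, rec in enumerate(records):
        if rec.get("trace_id") != trace_id:
            continue
        if rec["kind"] == "authorization" and auth_index is None:
            auth_index = i
        elif rec["kind"] == "execution" and exec_index is None:
            exec_index = i
    # Fail closed: both records must exist, in the right order.
    return auth_index is not None and exec_index is not None and auth_index < exec_index
```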
PCK Properties × Regulation
Logging, Record-Keeping, and Decision-State Traceability
Supports the AI Act’s logging and record-keeping expectations by preserving decision context in a replayable form.
| PCK Property | AI Act |
|---|---|
| Context history is append-only and immutable | Directly supported |
| Decision context is forensically reconstructable | Directly supported |
| Exact policy and context state at decision time is recoverable | Directly supported |
| Context entries are attributable to a specific agent | Directly supported |
| Context assembly is deterministic and auditable | Partially supported |
The PCK provides the missing “what did the agent know?” layer. That is often implied by governance requirements, but rarely implemented in a form that can be replayed and verified.
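Deterministic context assembly — the property that makes that layer replayable — can be sketched as follows; the entry fields and sort keys are assumptions for illustration, not the PCK's actual schema:

```python
import hashlib
import json

def assemble_context(entries: list[dict]) -> tuple[str, str]:
    """Deterministic assembly: order entries canonically so the same
    inputs always yield a byte-identical context and fingerprint."""
    ordered = sorted(entries, key=lambda e: (e["priority"], e["agent"], e["id"]))
    blob = json.dumps(ordered, sort_keys=True)
    fingerprint = hashlib.sha256(blob.encode()).hexdigest()
    return blob, fingerprint
```

Replaying the same entries later reproduces the same fingerprint, which is what lets an auditor verify after the fact exactly what context a decision saw.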
Transparency and Input-State Explainability
Supports part of the AI Act’s transparency expectations by making input state and policy state reconstructable, while not claiming model explainability.
| PCK Property | AI Act |
|---|---|
| Same inputs produce identical context assembly | Partially supported |
| Decision inputs are forensically reconstructable | Partially supported |
| Policy and context state are recoverable at decision time | Partially supported |
Honest assessment: AmplefAI provides input-state transparency, not model-reasoning transparency.
It can prove what context and policy were present at decision time. It does not prove why a model produced a specific output.
Data Integrity and Isolation
Supports obligations associated with DORA’s ICT control expectations and the AI Act’s integrity / robustness expectations.
| PCK Property | DORA | AI Act |
|---|---|---|
| Cross-tenant leakage is structurally blocked | Directly supported | Directly supported |
| Out-of-scope data is invisible, not merely access-denied | Directly supported | Directly supported |
| Cross-agent dependencies are recorded and immutable | Directly supported | Partially supported |
| Tenant identity cannot change after creation | Directly supported | Directly supported |
| Isolation is enforced structurally across scope levels | Directly supported | Directly supported |
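The distinction between structural isolation and access-checked isolation can be illustrated with a toy store where every read goes through a tenant-bound handle — class names and methods here are hypothetical:

```python
class ScopedStore:
    """Toy illustration of structural isolation: reads happen only
    through a tenant-bound handle, so out-of-scope rows are never
    visible. There is no access check that could be misconfigured."""

    def __init__(self):
        self._rows = []  # (tenant, key, value)

    def put(self, tenant: str, key: str, value: str) -> None:
        self._rows.append((tenant, key, value))

    def handle(self, tenant: str) -> "TenantHandle":
        return TenantHandle(self, tenant)

class TenantHandle:
    def __init__(self, store: ScopedStore, tenant: str):
        self._store = store
        self._tenant = tenant  # fixed at creation; cannot change afterwards

    def get(self, key: str):
        # Rows belonging to other tenants are invisible, not "access denied".
        for t, k, v in self._store._rows:
            if t == self._tenant and k == key:
                return v
        return None
```

A handle for tenant A cannot distinguish "key exists for another tenant" from "key does not exist" — which is the sense in which out-of-scope data is invisible rather than merely denied.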
Operational Resilience Properties
Supports parts of DORA’s resilience expectations by making critical context inclusion, bounded execution, and offline replay enforceable.
| PCK Property | DORA |
|---|---|
| Critical context survives budget pressure | Directly supported |
| Context operations have bounded resource usage | Directly supported |
| No silent truncation; hard failure on overflow | Directly supported |
| Replay works without external dependencies | Directly supported |
| Stale context is detected and handled | Directly supported |
| Policy reads are consistent and non-stale | Directly supported |
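Two of these properties — critical context surviving budget pressure and hard failure on overflow — can be sketched together; the entry shape and exception name are illustrative assumptions:

```python
class ContextOverflow(Exception):
    """Raised instead of silently truncating."""

def fit_to_budget(entries: list[dict], budget: int) -> list[dict]:
    """Keep critical entries unconditionally; shed optional entries under
    pressure; fail hard if critical content alone exceeds the budget."""
    critical = [e for e in entries if e["critical"]]
    optional = [e for e in entries if not e["critical"]]
    used = sum(e["size"] for e in critical)
    if used > budget:
        # Hard failure: never drop critical context silently.
        raise ContextOverflow(f"critical context ({used}) exceeds budget ({budget})")
    kept = list(critical)
    for e in sorted(optional, key=lambda e: e["priority"]):
        if used + e["size"] <= budget:
            kept.append(e)
            used += e["size"]
    return kept
```

The design choice the sketch captures: degradation under pressure is explicit and ordered, and the one case that cannot be degraded safely becomes a loud failure rather than a truncated context.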
Combined Spine Properties
The governed execution spine creates properties that no single component achieves alone.
| Combined Property | What It Proves | Regulatory Relevance |
|---|---|---|
| Full governed decision chain from context through execution | Decision can be reconstructed with cryptographic integrity | Strong support for logging, traceability, reconstruction, and control evidence |
| Fail-closed at every step | No silent fallback into ungoverned execution | Strong support for integrity and control reliability |
| Knowledge-state binding in signed authorization | Authorization is bound to what the agent knew | Strong support for logging, traceability, and partial transparency |
| Governed execution in confidential compute | Enforcement can run in a protected execution boundary | Strong support for integrity, isolation, and runtime assurance |
Known Gaps — Honest Assessment
These are obligations that AmplefAI does not fully satisfy on its own.
Gap 1: Enterprise Risk Management Framework
DORA Art.5–6
DORA requires a broad ICT risk management framework covering the identify-protect-detect-respond-recover lifecycle. AmplefAI is a runtime enforcement layer inside that broader system; it is not the full framework.
Status: Out of scope for V1. AmplefAI provides enforcement primitives within a risk management framework. It does not replace the framework itself.
Gap 2: Third-Party Provider Oversight
DORA Art.15, Art.28–31
DORA includes requirements around third-party ICT risk and oversight, including contractual audits and mandatory reporting. Architectural independence helps here, but contractual oversight, audit rights, and supplier governance are not runtime properties.
Status: Partially supported at the architecture level. The independence invariant ensures governance is not embedded in the third-party tool being governed. Contractual and audit process requirements are organizational, not runtime.
Gap 3: Human Oversight Surface
EU AI Act Art.14
The AI Act includes human oversight expectations for relevant systems. AmplefAI supports human-authored governance policy and deterministic deny behavior, but it does not yet provide a full human oversight interface or intervention workflow.
Status: Partially supported. Policy authorship provides human control over governance rules. Real-time human intervention during agent execution is not in scope for V1.
Gap 4: Full Lifecycle Governance
EU AI Act Art.9, DORA Art.6
Neither DORA nor the AI Act is satisfied by runtime controls alone. Design, deployment, monitoring, change management, and decommissioning remain organizational and operational responsibilities.
Status: Out of scope. AmplefAI is a runtime enforcement layer, not a lifecycle management tool.
Gap 5: Model Explainability
EU AI Act Art.13
AmplefAI can prove what went in. It cannot prove why a model produced a specific output. That is a broader explainability problem, not something a governance runtime can solve on its own.
Status: Structural limitation. AmplefAI provides input-state transparency — the strongest form of transparency a governance runtime can offer on its own.
Gap 6: Technical Documentation
EU AI Act Art.11
Art.11 requires technical documentation for high-risk AI systems. AmplefAI’s documentation exists but is not structured as a formal conformity assessment artifact.
Status: Partially supported. Documentation exists but is not in formal conformity format. This is a packaging gap, not a capability gap.
Detailed Invariant-Level Mapping
A detailed version of this mapping — with specific numbered invariants, cross-reference tables, and implementation notes — is available to qualified partners and investors.
Request Full Regulatory Mapping →
What This Mapping Does Not Claim
This page is a technical alignment map, not a legal determination of compliance.
It does not constitute legal advice, a conformity assessment, certification, or a claim that deploying AmplefAI alone makes an organization compliant with DORA, the EU AI Act, or any other regulation.
Actual compliance depends on organizational context, deployment scope, governance process, legal interpretation, and the surrounding control environment.
This mapping is meant to support that conversation, not replace it.
Last updated: March 2026.
AmplefAI · Because the system being governed cannot govern itself.