March 12, 2026 · 5 min read
When Everyone Can Build, Who Governs What Ships?
Harrison Chase is right about what's changing. The question he doesn't answer is the one regulated industries can't ignore.
By AmplefAI
LangChain CEO Harrison Chase published a piece this week arguing that coding agents are fundamentally reshaping how Engineering, Product, and Design organisations work. His thesis is sharp: when implementation becomes cheap, the old sequential process — idea, PRD, mock, code — collapses. Anyone with product sense and a coding agent can go from idea to working prototype in an afternoon.
The bottleneck shifts from building to reviewing.
He's right. And the implications for regulated industries are far more consequential than his analysis suggests.
The Review Bottleneck Is a Governance Problem
Chase frames the new world as a split between builders and reviewers. Builders are generalists who wield coding agents to move fast. Reviewers evaluate whether what got built is architecturally sound, solves the right problem, and meets quality standards.
This is an accurate picture of what's happening at product-stage startups. But in financial services, insurance, and any sector operating under frameworks like DORA or the EU AI Act, the review problem is categorically different. It's not just "is this code well-architected?" It's "can we prove this system operated within policy?" and "can we reconstruct exactly what happened when a regulator asks?"
Human reviewers don't scale to answer those questions. They never did. And they certainly won't scale in a world where the volume of agent-generated code, decisions, and deployments is growing exponentially.
[Figure: Observability vs. Governed Execution]
The Generalist Explosion Meets Regulatory Reality
Chase celebrates the rise of the generalist — people who blend product thinking, design intuition, and coding agent fluency. He's right that these people are enormously productive.
But productivity without governance is a liability in regulated environments. When your PM can spin up a working prototype before lunch, the first question a compliance officer asks isn't "is the UX good?" — it's "what policy governed this agent's execution?" and "where's the audit trail?"
The faster generalists build, the wider the governance gap becomes.
[Figure: From Capability to Compliance Surface]
PRDs Become Prompts. Prompts Need Policy Binding.
One of Chase's most interesting observations is that the PRD of the future might just be a structured, versioned prompt. The document that describes what to build becomes the instruction that tells an agent how to build it.
Follow that thread to its logical end: if the prompt is the product specification, then the governance layer that binds prompt to execution to evidence is the entire compliance surface. You need to know that the agent executed the intent described in the prompt, that it didn't deviate, and that you can replay exactly what happened after the fact.
This is what epistemic binding means in practice. Not a dashboard of metrics. Not a log file you hope someone reads. A cryptographically verifiable chain from intent to execution to outcome, with policy enforcement at every step.
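What might such a chain look like concretely? Here is a minimal sketch in Python, assuming SHA-256 hash linking and illustrative record fields; this is a toy model of the idea, not a description of any particular product:

```python
import hashlib
import json

def bind_record(prev_hash, record):
    """Append-only binding: each record commits to the one before it."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical run: intent (the prompt), an execution step, the outcome.
chain = []
prev = "0" * 64  # genesis value
for record in [
    {"type": "intent", "prompt": "Add KYC check to onboarding flow"},
    {"type": "execution", "action": "edit", "file": "onboarding.py"},
    {"type": "outcome", "status": "deployed", "checks_passed": True},
]:
    prev = bind_record(prev, record)
    chain.append({"record": record, "hash": prev})
```

Because every record's hash commits to its predecessor, altering any step after the fact invalidates all subsequent hashes; the final hash is a single commitment to the whole run that an auditor can re-verify independently.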
[Figure: The Governed Execution Spine]
Observability Is Not Governance
The agent infrastructure community has made enormous progress on observability. Traces, evals, metrics — the tooling for seeing what agents do has matured significantly. Chase himself has been a leader in this space.
But observability tells you what happened. Governance ensures what happens is what should happen. Recording 100,000 traces a day is visibility. Enforcing 29 regulatory invariants across every execution is governance. Replaying a specific agent decision chain for a regulator with cryptographic proof of integrity is forensic compliance.
The gap between these two — between seeing and governing — is where regulated industries are stuck.
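The distinction can be made concrete. A toy sketch, with hypothetical invariant names, of checking policy before an agent action runs rather than logging it afterwards:

```python
# Hypothetical policy invariants, evaluated before an action executes.
# Each is a (name, predicate) pair; the predicate must hold for the
# action to proceed. Names and fields here are illustrative only.
POLICY_INVARIANTS = [
    ("no_prod_writes_without_approval",
     lambda a: not (a.get("target") == "prod" and not a.get("approved"))),
    ("pii_fields_stay_masked",
     lambda a: not a.get("exposes_pii", False)),
]

def enforce(action):
    """Return (allowed, violations): block unless every invariant holds."""
    violations = [name for name, check in POLICY_INVARIANTS
                  if not check(action)]
    return (len(violations) == 0, violations)

ok, why = enforce({"target": "prod", "approved": False})
# ok is False; why is ["no_prod_writes_without_approval"]
```

An observability stack would record this action in a trace either way; the governance difference is that `enforce` runs in the execution path and can refuse, with the refusal itself becoming part of the evidentiary record.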
[Figure: Four Questions a Regulator Will Ask]
The Infrastructure Layer That's Missing
Chase is building the execution harness — the runtime environment that makes agents capable of long-horizon, multi-step work. That layer is necessary and valuable. But it sits below the governance layer that regulated industries require.
What's missing is the governed execution spine: infrastructure that enforces policy invariants at runtime, maintains epistemically bound records of every agent action, and provides forensic replay capability that satisfies regulatory scrutiny.
This isn't about slowing agents down. It's about making their speed trustworthy. A Nordic bank doesn't need its agents to do less — it needs to prove that what they did was within policy, and that the proof is tamper-evident.
When everyone can build, the competitive advantage is governed execution.
We build governed execution infrastructure for AI agents in regulated industries. Our enforcement invariants map to DORA and EU AI Act requirements. If you're navigating the gap between agent capability and regulatory compliance, we should talk.
AmplefAI builds the independent governance layer that ensures AI capability remains accountable to your institution — not your provider.
Learn more at amplefai.com