The Mechanism Analysis

Every substantive law is a causal machine. Modern governance lacks the document that tests it.

Elias Kunnas

Modern states have documents for legality, fiscal cost, and policy intent. They do not have a standard document that tests whether the mechanism of a proposed law can actually produce the outcome it promises. The mechanism analysis is that missing artifact: a pre-enactment structural test of legislative causal machinery, containing a steelmanned reconstruction, a response map, a capital-stock ledger, a typed failure readout, and a repair specification. It plugs into a repair-or-override loop that preserves legislative sovereignty while making structural costs legible before the law is installed.


I. The Empty Slot

When a modern state produces a major legislative proposal, it generates hundreds of pages of supporting documentation: legal drafting, constitutional review, fiscal scoring, regulatory impact assessment, committee hearings, expert testimony, ministerial briefings.

Yet this documentary apparatus routinely omits the most consequential engineering question about the new rule:

What does this mechanism make rational for the next actor to do?

Every substantive law is a causal machine. By altering payoffs, shifting boundary conditions, and opening new response channels, regulatory and fiscal legislation creates complex, autonomous feedback loops. (Acts that are primarily declarative — naming a post office, designating a holiday — carry too little mechanism load to warrant this form; the residual symbolic causality they do carry is real but not what the artifact is built to test.) The existing infrastructure treats legislation as text, intent, and a static reallocation of funds — not as a causal machine to be tested before installation.

The missing artifact is the mechanism analysis: a pre-enactment structural test of a legislative proposal's causal machinery. It tests whether the proposed mechanism can do the work the proposal claims, and where it is likely to fail.

The artifact rests on a simple premise: a law's mechanism — its incentive gradients, feedback loops, and selection pressures — determines what the law produces. The stated intent is a representation; the mechanism is what runs.

II. Adjacent Documents

The mechanism analysis sits between existing institutional artifacts. None occupies its slot.

Constitutional review tests whether a proposed law violates higher legal norms — fundamental rights, jurisdictional limits, due process. It does not test whether the law's causal mechanism produces its stated outputs.

Fiscal scoring and official fiscal forecasting — CBO-style legislative cost estimates, OBR-style official forecasts, and ministry budget calculations — incorporate dynamic behavioral modeling: labor-supply responses, capital-shifting, tax-base gaming. The primary ledger is fiscal. It does not measure consumption of administrative capacity, institutional trust, service quality, or other non-financial stocks.

Regulatory impact assessments describe expected effects. Where they are reviewed by oversight bodies (OIRA in the United States, the Regulatory Scrutiny Board in the EU, the Council on Regulatory Impact Analysis in Finland), the center of gravity is expected costs, benefits, administrative burden, and alternatives — not adversarial testing of whether the mechanism will route behavior through the intended channel.

Audit offices — the National Audit Office of Finland, the Government Accountability Office in the United States, the European Court of Auditors — examine execution retrospectively. By the time an audit can see a mechanism failure, the mechanism has run for years.

Think tanks, academic networks, and consultancies can produce rigorous structural critiques. They lack procedural attachment. Their output sits outside the legislative pipeline. They cannot force a body politic into a binding repair-or-override loop, which leaves their analyses available to be ignored or selectively weaponized.

Two existing regulatory bodies approximate the form partially. The United Kingdom's Regulatory Policy Committee demands that proposing departments articulate the causal link between the regulatory tool and the stated outcome, and has issued documented repair specifications — notably in the Gender Pay Gap reporting initiative, where the department was forced to reconstruct the missing causal mechanism after initially failing to explain why mandatory data publication would actually narrow the gap. New Zealand's Regulatory Impact Assessment Team explicitly names moral hazard and crowding-out as systemic failure modes its framework must test. What both bodies show is that the form is operationally executable inside an existing oversight institution. What neither provides is full-anatomy mechanism analysis attached procedurally to the legislative pipeline.

The slot stays empty because each existing artifact is optimized for a different legitimacy problem — legal coherence, budgetary balance, procedural justification, retrospective accountability, academic truth. None is designed to adversarially test the causal machine before political capital has committed to installing it. By the time the mechanism failure is visible, the law has acquired owners, beneficiaries, sunk costs, and institutional defenders.

The structural problem is not that existing institutions fail their mandates. It is that pre-enactment mechanism testing has no owner.

III. The Anatomy of a Mechanism Analysis

A mechanism analysis contains five components. The components are causally sequenced: each component's output is the next component's input.

Input requirement: evidence bundle. A mechanism analysis begins with a bounded evidence set — bill text, explanatory memorandum, fiscal tables, impact assessment, committee material, existing evaluation evidence, and relevant institutional rules. Claims outside the evidence bundle may be flagged as hypotheses, but they cannot close a failure readout. This prevents the analysis from becoming an essayistic counter-narrative.
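
To make the constraint concrete, a minimal sketch in illustrative Python (hypothetical names, not part of any existing tooling): claims are checked against the bundle, and anything resting outside it can enter only as a flagged hypothesis, never as the basis for closing a failure readout.

```python
from dataclasses import dataclass

@dataclass
class EvidenceBundle:
    """Bounded evidence set for one mechanism analysis (illustrative schema)."""
    documents: set[str]   # e.g. {"bill text", "fiscal tables", "impact assessment"}

@dataclass
class Claim:
    text: str
    sources: set[str]     # documents the claim rests on
    is_hypothesis: bool = False

def admit(claim: Claim, bundle: EvidenceBundle) -> Claim:
    """A claim resting outside the bundle is admitted only as a flagged hypothesis."""
    if not claim.sources or not claim.sources <= bundle.documents:
        claim.is_hypothesis = True
    return claim

def may_close_failure_readout(claim: Claim) -> bool:
    """Hypotheses can inform the analysis but cannot close a failure readout."""
    return not claim.is_hypothesis
```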

1. Mechanism Claim

What the law says will cause what.

The first component reconstructs the proposal's causal claim. A bill states an objective and a set of rules; the causal model is left implicit. The analysis surfaces it: the stated goal, the asserted causal path from rule to outcome, the actor behavior assumed, the constraints assumed to hold, the time horizon implicitly presumed.

The question this component answers: what would have to be true for this law to produce its stated outcome?

The reconstruction is causally steelmanned: the analysis works from the proposal's strongest available mechanical reading. Steelmanning at the causal level is distinct from steelmanning at the moral level — the analysis tests the causal model the proposal embeds, including, when warranted, the causal model implicit in the stated objective itself. Sometimes a mechanism executes the rule faithfully but the objective rests on an incoherent causal claim (treating maintained infrastructure as extractable revenue, treating a private good as a public one). The artifact surfaces this when it occurs.
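
A minimal sketch, again in illustrative Python with hypothetical field names, of the fields the reconstruction surfaces; the schema illustrates the component, it does not specify it.

```python
from dataclasses import dataclass

@dataclass
class MechanismClaim:
    """The proposal's implicit causal model, made explicit (illustrative fields)."""
    stated_goal: str                  # what the bill says it will achieve
    causal_path: list[str]            # rule -> intermediate effects -> outcome
    assumed_actor_behavior: str       # how actors are assumed to respond
    assumed_constraints: list[str]    # conditions assumed to keep holding
    implicit_time_horizon_years: float

    def what_would_have_to_be_true(self) -> list[str]:
        """The component's output: the assumptions the rest of the analysis will test."""
        return [self.assumed_actor_behavior, *self.assumed_constraints, *self.causal_path]
```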

2. Actor and Response Map

Who reacts to the new rule, and how.

The second component maps the population of actors affected and projects their rational response. For each affected actor: what payoffs change, what new opportunities open, which response channels become cheapest, what strategic substitutes exist, what gaming becomes possible.

The question this component answers: what does the law make rational for each actor to do?

The proposal's mechanism claim assumes a behavior; the response map projects what actors actually optimize for. The gap between the two is where the mechanism holds or breaks.

The response set is not bounded by the law's domain. Whenever an exit to a neighboring institutional system (medical, tax, judicial, political-coalitional) is cheaper than within-domain compliance, actors take the exit and the cost lands wherever the exit deposits it. (Pressure routes through the cheapest available channel; see Response Vector.)
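
A minimal sketch of the routing logic, with hypothetical channel names and costs: the actor takes whichever channel is cheapest, including exits into neighboring systems, and the cost lands wherever that channel deposits it.

```python
from dataclasses import dataclass

@dataclass
class ResponseChannel:
    name: str
    domain: str            # "within-domain" or a neighboring system: "medical", "tax", ...
    cost_to_actor: float   # hypothetical relative cost

def rational_response(channels: list[ResponseChannel]) -> ResponseChannel:
    """Pressure routes through the cheapest available channel, not the intended one."""
    return min(channels, key=lambda c: c.cost_to_actor)

channels = [
    ResponseChannel("comply as the drafters intended", "within-domain", cost_to_actor=5.0),
    ResponseChannel("reclassify cases out of the rule's scope", "medical", cost_to_actor=2.0),
    ResponseChannel("litigate the boundary", "judicial", cost_to_actor=8.0),
]
# The chosen exit deposits its cost outside the statute's own ledger.
print(rational_response(channels).name)   # -> "reclassify cases out of the rule's scope"
```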

3. Capital-Stock and Absorption Ledger

What hidden variables carry the cost.

The third component identifies which stocks of value bear the pressure when the formal target is protected. The ledger is multidimensional: fiscal cost, administrative capacity, institutional trust, human capital, service quality, future optionality, and other stocks specific to the policy domain.

The question this component answers: what variable absorbs the pressure if the formal target is met?

Mechanism failures absorbed by hidden stocks do not show up at the formal target. The target is what the mechanism is wired to protect; while absorption channels remain available, the formal indicator looks fine and the failure lands wherever the absorption channel runs — staff burnout, deferred maintenance, quality erosion, trust depletion, future flexibility. The absorption ledger names the stocks consumed to keep the indicator clean. The failures that do show up at the formal target (sequencing violations, missing capacity, exhausted absorption) are typically easier for existing review to catch; the silent ones accumulate until something downstream breaks — at which point the cost of repair has already escaped the rule's domain.

Stocks differ in their recovery horizon. Some are restored by reversing the rule (incentive parameters, headcount). Some require structural rebuilding over years (institutional capacity, accumulated coordination). Some are functionally irreversible at policy timescales: severed training pipelines, lost tacit knowledge, demographic collapse, accumulated distrust. The asymmetry between destruction speed and rebuild speed — hysteresis — is the ledger's severity tiebreaker. A fast-to-destroy and slow-to-rebuild stock depletion is categorically different from a reversible parameter change, even when the same euro figure attaches to both.
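
A minimal sketch of the hysteresis tiebreaker, with illustrative stock names and timescales: the ledger ranks depletions by how much slower the stock is to rebuild than to destroy, not by the euro figure attached.

```python
from dataclasses import dataclass

@dataclass
class CapitalStock:
    name: str
    depletion_years: float   # how fast the rule can consume the stock
    rebuild_years: float     # how long restoration takes once the rule is reversed

    @property
    def hysteresis(self) -> float:
        """Rebuild time relative to destruction time: the severity tiebreaker."""
        return self.rebuild_years / self.depletion_years

ledger = [
    CapitalStock("incentive parameter", depletion_years=1, rebuild_years=1),
    CapitalStock("administrative capacity", depletion_years=2, rebuild_years=6),
    CapitalStock("training pipeline / tacit knowledge", depletion_years=3, rebuild_years=25),
]
for stock in sorted(ledger, key=lambda s: s.hysteresis, reverse=True):
    print(f"{stock.name}: {stock.hysteresis:.1f}x slower to rebuild than to destroy")
```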

The selection of capital stocks is empirical, not normative. The framework has a single non-arbitrary objective function — long-term civilizational adaptive capacity, the substrate of survival and flourishing (see Flourishing Is Maximum Safety Margin). Inclusion on the ledger is then an evidence question: does this stock measurably affect adaptive capacity, on what horizon, by how much? Disagreement about inclusion is empirical and answerable, not a clash of values. The reclassification of inconvenient empirical questions as value disputes is the move Calculemus diagnoses; the ledger refuses it.

The artifact also carries a structural commitment that cannot be deferred to legislative discretion: when a bill's implicit objective function diverges from long-term civilizational adaptive capacity — when its target's maximization predictably depletes load-bearing stocks — the divergence is itself a finding. The artifact has no authority to override the legislature's choice of objective. It has the duty to flag the conflict on the record. Like a compiler warning: the code compiles, but the warning is part of the build output.

The ledger is the artifact-level instantiation of Full Accounting: a stock booked at zero can be consumed without appearing as a cost, and "hard to measure" is not the same as zero. The mechanism analysis's job is to refuse the zero booking — approximately right beats precisely zero.

4. Failure Readout

Where the mechanism breaks.

The fourth component produces a typed diagnosis: how and where the mechanism is likely to fail. Common types include the soft budget constraint, proxy-target divergence, rational gaming of the allocation rule, and cost absorption into an unowned downstream stock.

Each diagnosed failure is paired with the steelmanned counterargument and an assessment of why the steelman does not close the failure — so the reader can see exactly where the proposal's strongest defense stops working.

Failures also stratify by severity into three types. Type 1: the mechanism fails to deliver the proposal's own stated outcome — the politically safest finding, because the bill does not do what its drafters said it would. Type 2: the mechanism delivers the stated outcome but burns capital stocks the proposal does not track. Type 3: the mechanism executes faithfully on an objective that is itself incoherent — the rule's causal model of the world is wrong before the rule is even written. Type 3 is the deepest and the hardest to land, but when it is the operative finding, no parametric repair touches it. The production discipline behind classifying findings under this typology is documented in How Mechanism Analyses Are Made.

The failure types above are not exhaustive. They specialize, for legislation, the broader taxonomy of stuck-equilibrium primitives developed in Bad Equilibria Are Not One Thing — missing representable object, missing destination, misaligned payoff surface, missing credible joint move, missing execution capacity. Each is itself a candidate failure type when it shows up in a specific bill.

Predictions are confidence-typed. The framework distinguishes between mechanism logic (high confidence: the direction follows from the incentive structure), structural prediction (medium confidence: combined effects across multiple mechanisms), and quantitative magnitude (low confidence without separate empirical modeling). Conflating confidence levels destroys the analysis's credibility.
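
A minimal sketch of how a finding could carry both tags, severity type and confidence level, so the two are never conflated; the field names are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    TYPE_1 = "fails to deliver the proposal's own stated outcome"
    TYPE_2 = "delivers the outcome but burns untracked capital stocks"
    TYPE_3 = "executes faithfully on an incoherent objective"

class Confidence(Enum):
    MECHANISM_LOGIC = "high: the direction follows from the incentive structure"
    STRUCTURAL_PREDICTION = "medium: combined effects across multiple mechanisms"
    QUANTITATIVE_MAGNITUDE = "low without separate empirical modeling"

@dataclass
class Finding:
    description: str
    severity: Severity
    confidence: Confidence
    steelman: str              # the proposal's strongest counterargument
    steelman_assessment: str   # why the steelman does not close the failure
```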

The question this component answers: what is the specific structural failure mode?

5. Repair Specification and Movement Test

What would fix the mechanism, and what would prove it had failed.

Component 4 above supplied the first field of the six-field repair specification standard from Constructive Diagnosis (the failure mechanism itself). The fifth component supplies the remaining five, among them the repair path, explicit ownership of the repair, the movement test, and the wrong-repair warning, plus one field added for legislative-architecture work (constraint analysis).

The component concludes with an explicit repair-or-override offer to the legislature: repair the mechanism along one of the specified paths, or override the analysis and pass the law unchanged. Override is always available.

The question this component answers: what would fix this mechanism, what would prove it failed, and what would the institution default to doing wrong?

IV. A Worked Example

A compressed illustration of the form. In 2023, Finland reorganized health and social services into 21 welfare regions plus Helsinki, which retained the organizing function as a municipality. The 21 welfare regions have no independent taxing authority; Helsinki has municipal taxation but operates under the same central-formula funding for these services. All face statutory service obligations grounded in constitutional social-rights duties, with the central state as funder and payer of last resort.

In 2025, the government proposed a funding-law amendment that, among other changes, temporarily cut transition adjustments for Helsinki. The Ministry of Finance framed the cut as a temporary savings measure that would not endanger service provision, noting Helsinki's surplus years and the broader 2026 funding-growth context. The Government Institute for Economic Research (VATT) flagged the incentive problem directly: if better fiscal balance can lead to a funding cut, welfare regions' incentives to pursue stronger balance weaken.

This was not the first warning. During the pre-enactment review of the original 2021 reform, VATT had already explicitly identified the soft-budget-constraint risk created by full state financing without regional taxing authority; Parliament's Finance Committee (Valtiovarainvaliokunta) flagged the same incentive architecture in its opinion (VaVL 1/2021); later National Audit Office (VTV) material has likewise treated the welfare-region funding model as raising soft-budget-constraint concerns. The warnings were noted but not embedded as binding countermeasures. The structural deficit dynamic the warnings predicted materialized in the regions' first operational years (2023–2024); the 2025 amendment now operates inside the architecture whose failure mode was already named by the existing review apparatus.

A mechanism analysis of the proposal:

Mechanism claim: the formula update will improve allocation efficiency and tighten cost control.

Actor and response map: each organizer faces a constitutional service obligation under central-formula funding it cannot independently supplement for these services, and a state that ultimately covers deficits because no organizer can be allowed to fail in delivering statutory services. An organizer observes the rule: surplus produced → targeted cut; deficit produced → state coverage. The rational response is to avoid producing surpluses.

Capital-stock and absorption ledger: the formal target — fiscal correction — is met. The pressure is absorbed into off-balance-sheet stocks: depletion of preventive-care capacity, administrative burnout, erosion of institutional trust in funding predictability, and accumulating downstream bailout pressure on the central state.

Failure readout: a soft budget constraint (Kornai 1986) — the architectural condition under which organizations that cannot be allowed to fail lose all efficiency incentives. Here it is instantiated in formula-based regional funding: the proposed cut teaches all regions that producing surplus exposes them to discretionary penalty. Even framed as a temporary transition cut for one outlier, the structural signal to the system is universal: the central state is willing and able to use discretionary ex-post formula adjustments to claw back surplus whenever it appears. A rational organizer observes this and updates its risk model accordingly: visible efficiency invites confiscation. Steelmanned counterargument: the cut is technically a correction of historical over-funding, not a punishment. Counterargument assessment: at the mechanism level, "technical correction" and "penalty" are two labels for the same rule (surplus produced → cut applied). The label does not change the rule. The rule shapes future behavior; the label does not.

Repair specification and movement test: bind the adjustment to a pre-committed, rule-based formula so that an observed surplus cannot trigger a discretionary ex-post cut, or narrow the claim to a one-off correction with an explicit commitment that future surpluses will not be clawed back. The movement test: at the specified review window, compare organizers' surplus behavior before and after the amendment; a systematic shift away from producing surpluses confirms the failure readout.

The reader recognizes the universal pattern: a subordinate organizer with service mandates and no service-side revenue levers, backed by a sovereign that cannot let it fail. A textbook soft budget constraint. The mechanism is portable; the national context is incidental.
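
A compressed numerical sketch of the incentive the failure readout identifies, with illustrative payoffs rather than the ministry's figures; the point is the sign of the incentive, not its magnitude.

```python
def retained_payoff(fiscal_balance_meur: float) -> float:
    """What a welfare region keeps under the rule the amendment teaches (illustrative)."""
    if fiscal_balance_meur > 0:
        return 0.0   # surplus observed -> discretionary ex-post cut claws it back
    return 0.0       # deficit observed -> the state, as payer of last resort, covers it

for balance in (50.0, 0.0, -50.0):
    print(f"balance {balance:+.0f} -> retained {retained_payoff(balance):.0f}")
# Every visible position nets to the same retained payoff, so the effort behind a
# surplus buys nothing: the rational organizer stops producing surpluses.
```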

V. The Repair-or-Override Loop

A mechanism analysis is not a veto. It is the input to a structured procedural loop.

The loop has four steps:

  1. The law proposes a mechanism.
  2. The mechanism analysis tests it.
  3. If the analysis identifies a failure, the legislature has four moves available:
    • repair the mechanism along the repair specification;
    • narrow the legal claim to what the mechanism can actually deliver;
    • change the target to what the mechanism can produce;
    • override the analysis: publicly explain, on the record, why the analysis's failure readout is being accepted and the law passed unchanged.
  4. The movement test runs at the specified review window: predictions are compared to outcomes.
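
A minimal sketch of the loop's decision structure, with hypothetical names; it encodes the point the next paragraph makes in prose: override is always available, but it is invalid without the on-the-record acceptance.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Move(Enum):
    REPAIR = auto()          # amend the mechanism along the repair specification
    NARROW_CLAIM = auto()    # shrink the legal claim to what the mechanism delivers
    CHANGE_TARGET = auto()   # retarget to what the mechanism can produce
    OVERRIDE = auto()        # pass the law unchanged

@dataclass
class OverrideRecord:
    accepted_broken_assumptions: list[str]
    legislators_on_record: list[str]
    public_justification: str

def resolve(move: Move, override: Optional[OverrideRecord] = None) -> str:
    """Every path ends at the movement test; override is valid only with the record."""
    if move is Move.OVERRIDE:
        if override is None or not override.accepted_broken_assumptions:
            raise ValueError("override requires on-the-record acceptance of the flagged assumptions")
        return "law passes unchanged; movement test runs at the review window"
    return "mechanism amended; movement test runs at the review window"
```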

The override is the load-bearing political move: the legislature remains sovereign and can pass a law the mechanism analysis flagged as broken. But the override is not the absence of engagement — the override is public engagement, on the record, with the analysis's specific causal claims. To override is to accept the assumptions the analysis identified as broken, in writing, attributed to the legislators who accepted them. There is no override without that acceptance. The override is always available, but it is never silent.

This resolves the tension between structural expertise and democratic sovereignty. The mechanism analysis does not prevent Parliament from choosing a broken mechanism. It prevents Parliament from later pretending the break was unforeseen.

The artifact's weight is procedural, not substantive: it claims no authority over the decision, only the right to make the decision legible. Legislatures often install broken mechanisms intentionally — sometimes because the political cost of repair exceeds the cost of override, sometimes because the flaw was deliberately designed to distribute quiet rents. Just as fiscal scoring forces budgetary costs into the open, the mechanism analysis forces structural costs — and the political choices behind them — into the open.

A note on scope: the artifact's procedural force depends on which laws it attaches to. Routine statutory maintenance does not warrant a full mechanism analysis. The form is appropriate for substantive, structurally consequential legislation — fiscal architecture changes, regulatory reforms, social-service reorganizations, market design. The triggering mechanism (impact threshold, parliamentary minority petition, scope criteria) is a question for the institutional layer hosting the artifact, not for the artifact specification itself.

A second scope boundary: per-bill analysis does not capture interaction effects between separately designed laws. Sanctions in one statute feeding population pressure into the relief mechanism of another; soft budget constraints reinforced across overlapping bailout regimes; market mechanisms colliding with thin-market geographies elsewhere. These compound patterns require coordinated analysis across multiple proposals. They are a documented extension of the form, not a substitute for per-law analysis.

VI. The Admissibility Regime

The artifact's authority comes from its constraints, not its scope. To be admissible, the analysis must be severable from the analyst's moral, partisan, or policy preferences. It evaluates a specific proposal rather than producing generalizable theory or comparative survey. Its repair specifications remain independent of advocacy for any particular institutional reform.

A mechanism analysis is admissible only if:

  1. Its evidence base is bounded.
  2. Its causal reconstruction is steelmanned.
  3. Its predictions are confidence-typed.
  4. Its repair specification is separable from the analyst's political preference.

A mechanism analysis is structurally inadmissible if it relies on narrative substitution — "the spirit of the law clearly requires X" — to close a causal gap. Narrative may be present, but it must remain tethered to evidence, structure, and readout.
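
A minimal sketch of the admissibility checklist, with hypothetical field names: the four conditions must all hold, and narrative substitution is disqualifying on its own.

```python
from dataclasses import dataclass

@dataclass
class AdmissibilityCheck:
    evidence_base_is_bounded: bool
    causal_reconstruction_is_steelmanned: bool
    predictions_are_confidence_typed: bool
    repair_spec_separable_from_politics: bool
    relies_on_narrative_substitution: bool   # "the spirit of the law clearly requires X"

def is_admissible(check: AdmissibilityCheck) -> bool:
    """All four positive conditions must hold; narrative substitution disqualifies outright."""
    return (
        check.evidence_base_is_bounded
        and check.causal_reconstruction_is_steelmanned
        and check.predictions_are_confidence_typed
        and check.repair_spec_separable_from_politics
        and not check.relies_on_narrative_substitution
    )
```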

The admissibility regime keeps a mechanism analysis on the computable side of the line developed in Calculemus. Most legislative disagreement is presented as value disagreement when a large share of it is uncomputed causal disagreement. The mechanism analysis does the computation the proposal did not; the admissibility constraints exist so the artifact stays computation rather than drifting into advocacy.

The discipline does not enforce itself. The form's force depends on an adversarial review architecture: opposing actors with standing and incentive to challenge a mechanism analysis on its own discipline conditions — a different party in committee, an opposing expert witness, a journalist with structural literacy. Where such actors exist, the discipline is enforceable. In a captured field where they do not (see Trapped Equilibria), the artifact still produces an on-the-record finding, but its immediate force is reduced; the record becomes a durable focal point for boundary actors and future challengers rather than a procedural lever in the current loop.

VII. Closing

A failed law is not mysterious. It fails along the path its mechanism made rational. If the law rewards gaming, actors game. If it protects a proxy, the underlying target erodes. If it creates a soft budget constraint, deficits return. If nobody owns the downstream variable, the cost is absorbed where the statute does not look.

Modern states already know how to write laws, score budgets, and audit failures. The missing step is to test the causal machine before installing it.

Every structurally consequential law should leave behind a mechanism analysis: what the law claimed would happen, what actors were expected to do, where the pressure would route, what would count as failure, and what the legislature chose to repair or override.

A law is a machine. The mechanism analysis is the test bench.


Sources and Notes

The failure modes:

  • Kornai, J. (1986). "The Soft Budget Constraint." Kyklos 39(1), 3–30. — The original analytical move: organizations that cannot be allowed to fail lose efficiency incentives at the architectural level, not the parametric one. Source of the failure type instantiated in the worked example.
  • Goodhart, C. A. E. (1975). "Problems of Monetary Management." Reserve Bank of Australia. — Original formulation: any observed statistical regularity tends to collapse once pressure is placed on it for control purposes. The now-standard "when a measure becomes a target, it ceases to be a good measure" phrasing is Marilyn Strathern's later generalization (1997, "Improving Ratings: Audit in the British University System"). The proxy-target divergence failure type generalizes the insight to any rule that allocates against a metric.
  • Campbell, D. T. (1979). "Assessing the Impact of Planned Social Change." Evaluation and Program Planning 2(1), 67–90. — Independent of Goodhart, same insight from a different domain: corrupted indicators corrupt the social processes they were meant to measure. Two paths to the same diagnosis is how you know it is structural.

The repair-or-override loop:

  • Klein, G. (2007). "Performing a Project Premortem." Harvard Business Review. — The closest existing parallel in private-sector decision-making: imagine the project has failed, then work backwards to causes. Mechanism analysis is the institutional version with admissibility constraints attached.

Worked example (HE 38/2025, Finland):

  • Hallituksen esitys 38/2025 vp — the Finnish government's proposal to amend the Welfare Region Funding Act. Cuts transition adjustments to Helsinki, frames the cut as efficiency correction. The proposal’s own materials are the bounded evidence base for the analysis.
  • VATT (2025). Expert statement to the Administration Committee on HE 38/2025, 2 June 2025. — The Government Institute for Economic Research flagged the incentive problem directly: cuts triggered by surplus weaken the incentive to produce surplus. A clean mechanism-logic claim from inside the existing institutional apparatus.
  • For the full mechanism analysis the worked example compresses, see mekanismirealismi.fi/mev/he-38-2025-hva-funding (Finnish).

Comparative pre-enactment regulatory review:

  • United Kingdom Regulatory Policy Committee (RPC). The clearest existing approximation of mechanism analysis embedded in an oversight body. The RPC scrutinizes whether departments have articulated the causal link between regulatory tool and stated outcome, using the Treasury Green Book strategic-options framework. Documented case histories of forced causal reconstruction include the Gender Pay Gap reporting initiative (RPC15-GEO-2384) and HFSS food-promotion restrictions (DHSC-4332/4333). See RPC Case Histories — Options (September 2025) for primary cases. The RPC's effectiveness is limited by broad de minimis exemptions that remove much of the regulatory pipeline from scrutiny.
  • New Zealand Regulatory Impact Assessment Team (RIAT). Explicitly requires agencies to test for moral hazard and crowding-out as named failure modes. See the Regulatory Impact Analysis Guide and Treasury New Zealand's Best Practice Impact Analysis Guidance Note (2017).
  • Finnish Council of Regulatory Impact Analysis (Lainsäädännön arviointineuvosto). Operates within a strict administrative-burden and cost-benefit scope; academic review notes that "the Council's evaluations almost never include a description of the criteria used to reconstruct the intervention logic of the evaluated policy" — a direct empirical statement of what this artifact is built to add. See the Council's Annual Review 2024.
  • US Office of Information and Regulatory Affairs (OIRA). The OMB's own retrospective analysis of 47 regulatory case studies found that post-enactment estimates of costs and benefits missed pre-enactment forecasts by more than 25 percent — the empirical signature of static cost-benefit modeling against a dynamic actor environment.
  • EU Regulatory Scrutiny Board (RSB). Wields a quasi-binding negative veto on impact assessments but operates at cost-benefit-plus-alternatives scope. Notable documented misses include the Digital Operational Resilience Act (DORA) — infringement procedures opened against 13 Member States for transposition failures — and early iterations of the Rule of Law Conditionality Regulation.
  • German Nationaler Normenkontrollrat (NKR). The quintessential Standard Cost Model body. Highly effective at quantifying compliance friction; total annual German bureaucracy cost remains stagnant at ~€64bn despite vigorous NKR interventions — the macro-signature of catching costs without altering mechanisms.

Engineering precedent:

  • Historical engineering analogue: FAA Order 8300.14 — Repair Specification Approval Process (now cancelled, 2015; the live successor is FAA Order 8300.16A, which covers approval of technical data associated with major repairs and alterations). Cited here not as current FAA doctrine but as the engineering institutional template the artifact-form draws on. When a critical aviation component fails, engineers do not conduct a cost-benefit analysis of replacement; they issue a repair specification that defines the baseline of the failed part, conducts root-cause analysis of the failure mechanism, and provides a substantiated repair process. The convergence between the artifact-form here and the FAA repair-specification paradigm is not coincidence — both are structural responses to the same engineering principle: a failure mode must be diagnosed and a specific repair specified before the failing component is reinstalled. The reason mechanism analysis's repair specification has more fields than the FAA's — explicit ownership, wrong-repair warning, movement-test — is that the institutional domain doesn't supply for free what physics and an existing maintenance organization supply in the engineering case. Owner, falsification test, and known-bad-repair are exactly the things FAA repair-specification doctrine gets to assume; mechanism analysis has to specify them as structural fields, because in the legislative domain none of them is institutionally given.

Cross-jurisdictional cases of predictable pre-enactment mechanism failure: The diagnostic this essay frames is not theoretical. The following major statutory failures were structurally predictable from each statute's incentive architecture at enactment; in each case the existing review apparatus optimized for a different question and missed the mechanism.

  • UK Post Office Horizon scandal & the Postal Services Act 2000. Mechanism failure: a highly autonomous corporate body (POL) was granted both unchecked private prosecutorial power (Prosecution of Offences Act 1985 s.6(1)) and operational responsibility for a centralized IT system whose accounting errors generated phantom shortfalls. Parliamentary review focused on procurement delays; no actor-response map identified the principal-agent distortion that would predictably produce wrongful prosecutions of franchisees to defend the IT investment. Outcome: 900+ wrongful convictions, multiple suicides, billions in compensation. See the Post Office Horizon IT Inquiry.
  • Australian Robodebt (Online Compliance Intervention, 2015–2019). Mechanism failure: applying annual ATO income data averaged across 26 fortnightly welfare-reporting periods is mathematically and legally invalid for an episodic statutory entitlement. A steelmanned causal reconstruction of the data-matching algorithm would have surfaced the incompatibility pre-enactment. Cabinet New Policy Proposal review did not require attached legal advice verifying statutory authority for the averaging method. Outcome: $1.2B+ in class-action settlement and refunds, net fiscal loss of ~$565M, multiple suicides. See the Royal Commission into the Robodebt Scheme (2023).
  • US Affordable Care Act (2010) — state-exchange architecture and federal fallback. CBO scoring assumed cooperative federalism; an actor-response map would have predicted partisan opt-out by >30 states, overwhelming the federal fallback. A typed failure readout of the drafting would have flagged the Section 1311 "established by the State" scrivener's error that triggered King v. Burwell. The advance-tax-credit disbursement architecture, paying insurers based on unverified self-reported income, had no upfront friction against fraud — later GAO investigation: 23 of 24 fictitious applications approved; $21B+ in un-reconciled tax credits.
  • US PRWORA welfare reform (1996) — TANF block grant. A capital-stock ledger combined with an actor-response map would have shown that decoupling federal block grants from poverty rates and tying them to caseload reduction creates a state-level revenue-extraction mechanism: rational state actors reduce caseloads by administrative attrition (easy) rather than by genuine pathways out of poverty (expensive). HHS Assistant Secretaries Bane, Edelman, and Primus resigned in protest specifically flagging this mechanism failure; CBO scoring nonetheless approved on fiscal-savings grounds. Outcome: rise of "disconnected" extreme poverty, state-level diversion of TANF funds.
  • EU Common Agricultural Policy 2003 reforms (Single Payment Scheme). A capital-stock ledger would have shown that decoupled, area-based payments mechanically capitalize into land rents (basic economic rent theory). Vague statutory definitions of "agricultural activity" guaranteed regulatory arbitrage. Outcome: payments flowing to golf courses, railway companies, and real estate firms; land prices artificially inflated; young farmers priced out. Documented in European Court of Auditors, Special Report 5/2011.
  • Finnish hyvinvointialue funding reform (2021–2023). The funding architecture within which the worked example above operates. VATT and Parliament's Finance Committee identified the soft budget constraint at pre-enactment review; the warnings were not embedded as binding countermeasures. The mechanism failure (regions cannot fail in delivering statutory services; the rational political response under no taxing authority is to maximize service provision and pass deficits upward) materialized in 2023–2024, materially contributing to Finland's general government deficit breaching the EU 3% threshold and the debt-to-GDP ratio passing 80%. See VTV's Fiscal Policy Monitoring Report 2024.
  • Northern Ireland Protocol of the Brexit Withdrawal Agreement. The Good Friday Agreement rests on three interlocking strands; the Protocol's mechanism secured Strand Two (North-South) by economically and constitutionally severing Strand Three (East-West), which logically requires violating Strand One's cross-community consent principle. A steelmanned causal reconstruction would have shown this trilemma was unresolvable by the chosen mechanism. Parliamentary scrutiny was fast-tracked under "Get Brexit Done" political pressure. Outcome: multi-year collapse of the Stormont executive, eventual forced renegotiation via the Windsor Framework.

The pattern across these cases is the same: the failure mode was diagnosable from the statute's incentive architecture pre-enactment; the existing review apparatus (CBO, parliamentary committees, RIA bodies, audit offices) optimized for fiscal scoring or procedural compliance and did not perform actor-response mapping, capital-flow accounting, or typed-failure analysis. The artifact-form this essay specifies would have surfaced each of these failures while the mechanism was still amendable.

