Mechanism Space
Why systems do not move in the space our words describe.
People reason in semantic space, where adjacent words make destinations look close. Systems move in mechanism space, where distance is set by incentives, feedback architecture, authority, and what the structure rewards. The targets that look one step away in language are often unreachable along any path the current system can actually take.
I. The Wrong Metric
Some advice points at a destination without naming the path that gets there. Communicate openly. Practice evidence-based policy. Fix capture by deregulating. Make AI safe. Increase transparency. Each phrase looks like a short step from where the system is now to where it should be. None of them reliably gets the system to move.
The standard explanations describe symptoms. People are weak. Institutions are slow. Implementation is hard. It’s complicated. These accounts are true and they explain nothing structural. The pattern recurs across domains where the actors are competent, the resources are available, and the implementation is attempted in good faith. Something else is wrong.
The structural reason has a clean statement in optimization theory: you measured distance with the wrong metric.
Coordinate distance and metric distance can disagree. In ordinary parameter space, two states can look adjacent. Under the geometry that actually governs the system’s motion, the same two states can be very far apart, or unreachable from each other along any path the system can take. Slogans like the ones above are correct as descriptions of where the system should end up. They are silent about whether the geometry the system runs on can get there.
This essay is about that silence: the geometry the words don’t carry, and the diagnostic discipline of seeing it.
II. Two Spaces
Semantic space is where words live. Distance there is metaphorical adjacency: synonymy, association, conceptual neighborhood. “Open communication” is one word from “honesty,” two from “intimacy.” In this geometry, a target sounds reachable if the words for it are familiar.
Mechanism space is where systems move. Distance there is the amount of incentive change, feedback architecture change, skill change, or authority change required to actually instantiate the endpoint. “Mechanism” in this essay is deliberately broad. It includes culture, meaning, identity, ritual, and somatic constraint, insofar as those determine what the next agent in the chain will actually do. The engineering connotation is convenient shorthand. The category is wider than engineering.
The same target sits at different distances in the two spaces. Public discourse runs almost entirely on semantic distance and assumes mechanism distance follows. The assumption is wrong often enough that recognizing the gap is most of the diagnostic work.
I use “space” colloquially. The argument depends on the asymmetry between the two distances being stable enough to predict failure, not on metric-space axioms. (“Selection space” appears in regional innovation policy and evolutionary computation with the same meaning; “mechanism space” is the more general handle.)
III. Why Coordinate Distance Misleads
Mathematics has a clean version of the distinction. In optimization, the path that looks steepest in ordinary parameter coordinates can fail to be the true steepest path once the geometry of the underlying system is taken into account. Amari’s natural gradient (1998) exists because the coordinate gradient is, in general, the wrong metric: it ignores the curvature of the underlying distribution. Optimization along the wrong gradient can waste steps, stall, or move in directions that look locally sensible only because the coordinate system is hiding the relevant geometry. Control theory has a parallel notion of reachability. A target state is structurally unreachable if no causal path connects the available actuators to it, regardless of how clearly the target can be named.
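A toy quadratic makes the wrong-gradient point concrete. This is a sketch with invented numbers, not Amari's construction: plain descent on an ill-conditioned loss is speed-limited by the steepest direction of curvature, while dividing out the curvature (the simplest metric correction) frees the step size.

```python
# Illustrative only: descent on f(x, y) = x^2 + 100*y^2. In raw
# coordinates the stable step size is capped by the steepest
# curvature (200), so the flat x-direction crawls.
def loss(x, y):
    return x * x + 100 * y * y

def descend(precondition, lr, steps=50):
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx, gy = 2 * x, 200 * y
        if precondition:                # rescale by the curvature H = diag(2, 200)
            gx, gy = gx / 2, gy / 200
        x, y = x - lr * gx, y - lr * gy
    return loss(x, y)

plain = descend(precondition=False, lr=0.009)   # lr above ~0.01 diverges in y
metric = descend(precondition=True, lr=0.9)     # the metric permits large steps
print(plain, metric)   # the coordinate gradient is left far from the optimum
```

The coordinate gradient is not wrong about direction at any single point; it is wrong about how far each direction actually is.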
Social systems behave similarly without being literal Fisher manifolds or linear control systems. Language gives us coordinates. Incentives, affordances, feedback architecture, identity, and authority define the geometry. When the two diverge, gradient descent on the wrong axes runs forever without converging. Exhortation runs along the existing geodesic and the target stays where it was.
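The reachability half of the claim fits in a few lines. The linear system below is invented for illustration: a single actuator that pushes on one state, with no coupling from that state into the other two.

```python
# Made-up 3-state linear system dx/dt = A x + B u: the actuator (B)
# pushes only on state 0, and nothing couples state 0 into states 1 and 2.
A = [[0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.0, -1.0, 0.0]]
B = [[1.0], [0.0], [0.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Columns of the Kalman controllability matrix [B, AB, A^2 B]: full
# column rank would mean every state is reachable from the actuator.
AB = matmul(A, B)
AAB = matmul(A, AB)
print(AB, AAB)   # both zero columns: only one of three directions is
                 # reachable, however clearly a target in the others is named
```

Naming a target in states 1 or 2 changes nothing; no input sequence through this actuator ever touches them.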
IV. Specimens
Five cases of semantic adjacency masking mechanism distance. Each follows the same form: a label that sounds close to the desired outcome, a system that moves somewhere else, and the structural reason the gap was always going to open.
1. Wells Fargo’s “eight is great”
Wells Fargo wanted deeper customer relationships. The chosen proxy was products per customer, embedded in the bank’s “eight is great” cross-selling culture. Under sales targets and compensation incentives, the system selected for accounts, not relationships. The CFPB’s 2016 consent order cited Wells Fargo’s own analysis finding about 1.53 million deposit accounts and about 565,000 credit-card accounts that may not have been authorized. Goodhart’s Law states the result formally: when a measure becomes a target, it ceases to be a good measure. Semantic adjacency between “more products per customer” and “deeper relationships” looked obvious. Mechanism distance was vast and ran the opposite direction.
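The selection dynamic can be sketched in a few lines. The payoff numbers are invented; the structure is Goodhart's: once gaming the proxy yields more measurable output per unit effort than real work, quota pressure selects the strategy that zeroes out the true objective.

```python
# Toy Goodhart dynamic: each branch has one unit of effort and chooses
# what fraction g to spend gaming the proxy. Assume (for illustration)
# gaming yields twice the countable products per unit effort.
def proxy(g):   # products per customer -- what the quota measures
    return (1 - g) + 2 * g

def depth(g):   # relationship depth -- what the bank said it wanted
    return 1 - g

strategies = [i / 10 for i in range(11)]     # g = 0.0, 0.1, ..., 1.0
survivor = max(strategies, key=proxy)        # quota pressure selects...
print(survivor, proxy(survivor), depth(survivor))
# 1.0 2.0 0.0: the proxy maximizer games fully; true depth collapses
```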
2. The WHO Surgical Safety Checklist
Hospitals globally adopted the WHO Surgical Safety Checklist on the strength of dramatic aviation analogies. Early results made the checklist look like the mechanism. Later implementation problems showed the mistake. The mechanism of safety was never the list itself; it was team coordination, shared situational awareness, and the flattening of operating-room hierarchy that lets a junior nurse challenge a senior surgeon when something is wrong. Where the checklist was deployed top-down as a punitive administrative rule, it could become tick-box compliance. The semantic label was “checklist.” The selection pressure had to target hierarchy and coordination. Catchpole and Russ document the gap in the BMJ Quality & Safety literature: the artifact is not the mechanism, and treating the artifact as the mechanism makes implementations fail in predictable ways.
3. “Communicate openly”
This is treated at length elsewhere in this corpus (The Implicit Treaty); the short version belongs here. “Communicate openly” names a reachable state only when the body and the relationship already afford it. Under attachment threat or conflict flooding, the very capacities required by the script—timing, reflection, self-report, curiosity, and repair—can become unavailable. Standard advice such as “use I-statements” or “practice active listening” asks for a runtime the system may not currently have. Modern protocols stabilize the physiological baseline first and then attempt the script. The endpoint label assumes a state that the conflict itself has disabled.
4. Bernstein’s Transparency Paradox
Management increases observability of workers, expecting accountability gains. Field experiments at a Chinese mobile phone factory show the opposite. Workers under sustained observation abandon the localized experimentation that drives improvement. Productive deviance becomes legible to management and gets punished, so it stops happening, and the metric of organizational learning collapses. Bernstein’s finding: zones of privacy raise performance compared to continuous surveillance. The semantic claim is that transparency increases accountability. The mechanism produces hiding behavior and performative compliance. Constant observation selects for camouflage.
5. Environmental Impact Assessments
The semantic intent is ecological protection. The actual selection pressure can drift toward litigation defense. Agencies are rewarded for producing legally defensible documents that survive judicial review, not necessarily for creating adaptive ecological management. “Stronger impact assessments” can then run along the existing geodesic: more pages, more legal armor, no change in operational ecology. The metric shifts only when assessment is connected to ongoing operational responsibility, not merely upfront paperwork. (See also Libertarianism Is an Incomplete Solution: capture is an equilibrium of self-interested agents under information asymmetry; deregulating to fix it is gradient descent on a truncated landscape.)
The same endpoint label is mechanism-near for someone whose system already runs the required operational stack and mechanism-far for someone whose system doesn't. The disagreement that follows is not really about the label. It is about which geometry the speakers are operating in.
V. Cheap Talk and Costly Actuators
Game theory has a precise version of the distinction. Cheap talk is costless, non-binding communication: it does not alter any payoff in the underlying game. Costly signaling is communication that requires real expenditure (caloric, financial, institutional) of a kind a deceptive sender could not profitably mimic. Cheap talk lives in semantic space. Costly signaling alters mechanism space.
Most reforms that fail are cheap-talk-like: they add words without altering the game. A report without correction authority is cheap-talk-like. An impact assessment with no override is cheap-talk-like. A radical-transparency mandate with no zone-of-privacy carve-out is cheap-talk-like. A debt brake that triggers automatically when revenue fails to cover spending is a costly actuator. A correct-or-explain duty that forces a public override decision is a costly actuator. The diagnostic question becomes: what does this make locally rational for the next agent in the chain?
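A minimal payoff sketch of the distinction, with invented numbers: a report adds no term to the next agent's payoff, so the best response is unchanged; a costly actuator adds one, and the best response flips.

```python
# Toy comparison: a watchdog issues a report; the agency chooses whether
# to correct course. Cheap talk leaves the payoff untouched; a costly
# actuator (an enforceable penalty or automatic override) does not.
def agency_payoff(correct: bool, penalty_if_ignored: float) -> float:
    return 5.0 if correct else 8.0 - penalty_if_ignored   # ignoring saves effort

def best_response(penalty: float) -> bool:
    return max((True, False), key=lambda c: agency_payoff(c, penalty))

print(best_response(0.0))    # False: a report with no teeth is ignored
print(best_response(10.0))   # True: the actuator makes correction rational
```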
One refinement on “rational.” It does not mean utility-function-optimizing. It includes whatever the next agent’s body, identity, and tribal allegiance will actually permit. The Wells Fargo employee opening fake accounts is not strategically computing payoffs. The surgical nurse not contradicting a senior surgeon is not running a cost-benefit calculation. Both are responding to affect, identity-pressure, and somatic threat-detection that operate well below strategic computation. Mechanism space includes incentive geometry, affect/identity geometry, and somatic-feasibility geometry. The cheap-talk frame is the cleanest sub-case, not the whole story.
Diagnostic checklist
Run any reform proposal through these four questions:
- What behavior does this select? Iterate forward five steps; what gets rewarded and amplified?
- What changes if the report is ignored? If nothing, the report is cheap talk.
- Is this a semantic label or a control input? Names without actuators are coordinates.
- Is the target reachable under current constraints? If no causal path connects available actuators to the target, the proposal is structurally inert regardless of intent.
These questions are themselves operations the framework names but does not teach. Demonstration on Wells Fargo. (1) The “eight is great” quota selects branch managers who hit the number by any means available, since missing it could end a career. (2) If individual employees ignored the quota, they were terminated; the quota was load-bearing. (3) The quota was a control input; bonuses, terminations, and branch closures all keyed off it. (4) The target, deeper customer relationships, was unreachable under the constraint that employees be measured on product count; the chosen control input had no causal path to relationship depth and a short one to fraud. The diagnosis tells you the gap exists; it does not tell you how to close it. Different gap-types (misaligned proxy, stripped social context, unavailable runtime, observation effect, institutional capture) require different fixes.
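The four questions travel well as a structured record. The sketch below encodes the Wells Fargo demonstration above; the field names and the cheap-talk test are my own shorthand, not the essay's terminology.

```python
from dataclasses import dataclass

@dataclass
class ReformDiagnosis:
    selects_for: str        # Q1: behavior rewarded after iterating forward
    cost_of_ignoring: str   # Q2: what changes if the report is ignored
    control_input: bool     # Q3: wired to an actuator, or a label only?
    reachable: bool         # Q4: causal path from actuators to target?

    def is_cheap_talk(self) -> bool:
        # A reform is cheap-talk-like if nothing is wired to it
        # or ignoring it costs nothing.
        return not self.control_input or self.cost_of_ignoring == "nothing"

wells_fargo = ReformDiagnosis(
    selects_for="hitting product counts by any available means",
    cost_of_ignoring="termination",   # the quota was load-bearing
    control_input=True,               # bonuses and firings keyed off it
    reachable=False,                  # no path from product count to depth
)
print(wells_fargo.is_cheap_talk())    # False: a real actuator, aimed wrong
```

The Wells Fargo case fails on question four, not question three: the actuator was real, which is exactly why the damage was.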
Where this framework fails
Not every reform requires changing the metric. Some succeed by simply executing what the existing metric already rewarded but no one had attempted. Some endpoint labels work fine when both speaker and listener already have the operational stack: the shorthand compresses well, no harm done. Some semantic-space talk is doing irreducibly social work, signaling membership, performing care, maintaining the productive ambiguity that lets coalitions hold; reducing it to a mechanism would destroy what it’s actually doing.
The framework matters when (a) repeated attempts at the visible target keep missing, (b) the gap between stated rule and selected behavior is large and persistent, and (c) intervention costs are high enough to need diagnostic discipline before commitment. Outside those conditions, the framework is doing no work and you should drop it. A frame that explains everything explains nothing.
VI. Changing the Geometry
Three examples of interventions that worked by altering the metric, not by pushing harder toward the visible target.
Switzerland’s debt brake. The European Stability Pact tried to constrain fiscal behavior through rules whose enforcement depended on political actors willing to sanction violators; France and Germany’s 2003 episode exposed the weakness of that geometry. The Swiss debt brake changes the default by tying federal expenditure to cyclically adjusted receipts. Parliament still decides spending priorities, but the expenditure ceiling changes what inaction produces. Same goal in semantic space. Different mechanism geometry. Different trajectory.
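The brake's core rule is compact enough to state as a formula. This is a stripped-down sketch; the statutory version adds a compensation account and adjustment details. The ceiling is expected receipts scaled by a cyclical factor k = trend GDP / expected GDP.

```python
# Simplified Swiss debt-brake rule: spending ceiling = expected
# receipts * (trend GDP / expected GDP). The illustrative figures
# below are invented.
def expenditure_ceiling(expected_revenue, trend_gdp, expected_gdp):
    return expected_revenue * (trend_gdp / expected_gdp)

boom = expenditure_ceiling(70.0, 100.0, 104.0)    # economy above trend
slump = expenditure_ceiling(70.0, 100.0, 96.0)    # economy below trend
print(round(boom, 1), round(slump, 1))
# ceiling below revenue in a boom, above it in a slump: the default
# outcome is now counter-cyclical balance, with no vote required
```

The point of the formula is what it removes: no actor has to choose austerity in a boom for the surplus to happen.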
Bernstein’s zones of privacy. The transparency-paradox solution: shield team-level operational experimentation from continuous executive observation. The metric of organizational learning shifts from “who looks compliant under the gaze” to “who improves the process when no one is watching.” The intervention is structural, not motivational. No one has to want to learn more; the geometry now permits learning where it previously punished it.
Toyota’s andon cord. A worker who sees an abnormality can trigger an andon signal; if the problem is not resolved, the line stops. The cost is real and visible: stopped minutes are expensive and obvious. The metric shifts from “individual workers absorb defects to keep the line moving” to “the defect becomes the line’s problem.” A junior worker is given procedural authority that flattens hierarchy without requiring a speech about quality culture. The semantic label “quality culture” becomes operationally instantiable because the actuator exists. Without the actuator, the same label is cheap talk.
In every case the move is the same: install a costly actuator that bends the trajectory. Without the actuator, exhortation runs forever along the existing geodesic.
Author’s related work on legislative mechanism review is at mekanismirealismi.fi; deliberately omitted from the examples above to keep them external.
VII. Two Languages
Most everyday coordination uses semantic-space shorthand legitimately. Both speakers have the stack, the label compresses well, and no harm is done. The discipline begins where the shorthand stops working, when a target keeps being missed by people doing exactly what the label says. That is the signal that the metric is not what the words describe.
A non-technical version of the diagnostic question, for skeptics of the whole frame: what will the people in this system do six months after the announcement, when the announcement is no longer news? Anyone who has worked inside an organization knows the answer is usually “go back to what they were doing before.” That is mechanism space asserting itself against semantic space without needing the vocabulary.
People reason in semantic space. Systems move in mechanism space. The gap closes one of two ways: the speaker learns to read the second geometry, or, when the target matters and exhortation has failed, someone installs a costly actuator that bends the trajectory. Mechanism realism is the discipline of knowing which of those moves the situation requires.
The real question is not “what does this say?” but “what does this make locally rational?”
Sources & Notes
The pattern is not new. Several traditions describe local versions of it.
In optimization, Shun-ichi Amari’s Natural Gradient Works Efficiently in Learning (1998) formalizes why optimization along the wrong gradient stalls. Charles Goodhart’s 1975 lecture, later popularized by Marilyn Strathern (1997), and Donald Campbell’s (1979) parallel formulation give the canonical statements of measure-vs-target degradation. Friedrich Hayek’s The Use of Knowledge in Society (1945) is the same insight in market form: stated rules cannot encode the information that prices (a metric) carry. Mechanism design (Hurwicz, Maskin, Myerson) is the constructive version: design institutions whose equilibrium produces the desired outcome.
In sociology and organizational theory, Argyris and Schön’s espoused theory vs theory-in-use (1974), Bourdieu’s misrecognition (1977), and Meyer and Rowan’s institutional decoupling (1977) describe the same pattern in organizational learning, social-reproduction, and legitimacy registers. Donald MacKenzie’s An Engine, Not a Camera (2006) extends it to economic theory itself: markets are constituted by their models, not described by them. Polanyi’s The Tacit Dimension (1966) names the operational stack the framework requires but cannot articulate. James Scott’s Seeing Like a State (1998) describes the destructive form when imposed top-down. Implementation science is the entire academic field whose object of study is precisely this gap.
The alignment literature on mesa-optimization (Hubinger et al., 2019) describes the same gap in optimization processes: the trained system can optimize for a different objective than the one it was trained on, and the divergence is exactly the semantic-vs-mechanism distinction inside an artificial system.
What this essay tries to add is a single portable handle — semantic space vs mechanism space — that lets the diagnostic move travel from technical contexts into ordinary speech, where the same gap silently kills most reform proposals long before they encounter formal analysis.
The Wells Fargo specimen draws on the CFPB consent order of 8 September 2016. The Surgical Safety Checklist discussion follows Catchpole and Russ in BMJ Quality & Safety (2015). The Transparency Paradox is Ethan Bernstein’s 2012 ASQ paper. The Switzerland debt brake / EU Stability Pact contrast is treated more fully in Libertarianism Is an Incomplete Solution. The relationship specimen is the short form of The Implicit Treaty.
See also: The Implicit Treaty (mechanism space, dyadic) · Libertarianism Is an Incomplete Solution (mechanism space, governance) · Ethics Is an Engineering Problem · Telocracy