Steering Power
A model of multi-agent alignment
I. The 83-Day Proof
Finland applied for NATO membership 83 days after Russia invaded Ukraine. The same Finland that couldn't complete a single healthcare reform in two decades. Same constitution. Same institutions. Same people.
This isn't a heartwarming story about national unity. It's an empirical anomaly that demands a model. Something changed between February 23 and February 24, 2022. It wasn't capacity. Finland had the same resources, the same bureaucracy, the same diplomatic corps on both days. What changed was the relationship between agents and the system they operate within.
This essay presents a model that explains not just Finland's NATO sprint, but a general pattern: why some threats mobilize entire nations in weeks while others drift for decades, even when the stakes are higher.
II. The Model
Define steering power as the force that actually moves a system. Not activity, not noise, not the production of reports and strategies and press conferences. The force that changes the state of the world. Most political systems produce enormous amounts of energy that never becomes steering power — it dissipates as internal friction, performative motion, and bureaucratic heat. Steering power is what remains after you subtract all of that.
System-level steering power in a multi-agent system is the product of three conditions:
Steering Power = Capacity × Coupling × Salience
Where:
- Capacity — the system's available resources. Human capital, institutional competence, trust, economic reserves, infrastructure. Can the system do the thing?
- Coupling — how strongly each agent's fate actually depends on the system's fate. Is the steering column connected to the wheels? Does the politician's career actually hinge on the outcome?
- Salience — how strongly agents perceive that coupling. Does the threat register? Does it feel real? Can they see the connection between their choices and the consequence?
Note that Coupling and Salience are different things. "Having skin in the game" (Coupling) and "feeling like you have skin in the game" (Salience) are routinely conflated. They should not be. A politician's career may objectively depend on fiscal sustainability, but if the bankruptcy is 15 years away, they don't feel it. The coupling is real; the salience is zero. They optimize for next week's poll.
The multiplicative structure has precedent. Vroom's Expectancy Theory (1964) uses the same any-zero-kills-the-product logic for individual motivation: Motivation = Expectancy × Instrumentality × Valence. But Vroom's three variables are all subjective beliefs inside one agent's head. The model here applies the same structure to system-level responsiveness, with a key difference: Capacity and Coupling are objective facts about the world. Only Salience is perceptual. The failure mode the model identifies is specifically perceptual, not material.
If any factor is zero, steering power is zero. You can have enormous capacity and it does nothing. You can have an existential threat and nobody moves. The factors don't add. They multiply.
This is a first-order approximation: the minimum model that captures the dominant dynamics. Reality has thresholds and non-linearities that clean multiplication flattens. The model also simplifies away at least two things. First, direction: even with high Coupling and Salience, agents may disagree about what to do. NATO was binary (join or don't); climate policy is not. When the required action is obvious, Coupling implies direction. When it isn't, disagreement can neutralize steering power even with all three factors present. Second, temporal dynamics: Salience decays (COVID showed this), Coupling shifts as political conditions change, and feedback loops operate across the factors. The model is a snapshot, not a simulation.
Its value is diagnostic. When a system isn't moving, it tells you which variable to investigate, not the precise magnitude of the output.
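To make the diagnostic concrete, here is a minimal sketch in Python. Nothing in it is calibrated; the normalization of each factor to [0, 1] is an illustrative assumption, not part of the model.

```python
# A minimal sketch of the model, not a calibrated instrument.
# Assumption (not from the essay): each factor is normalized to [0, 1].
from dataclasses import dataclass

@dataclass
class SystemState:
    capacity: float   # objective: resources, competence, reserves
    coupling: float   # objective: how much each agent's fate tracks the system's
    salience: float   # perceptual: how much agents feel that coupling

def steering_power(s: SystemState) -> float:
    """Multiplicative: any factor at zero zeroes the whole product."""
    return s.capacity * s.coupling * s.salience

def limiting_factor(s: SystemState) -> str:
    """The diagnostic question: which variable to investigate first."""
    factors = {"Capacity": s.capacity, "Coupling": s.coupling, "Salience": s.salience}
    return min(factors, key=factors.get)
```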
III. The NATO Case
Before February 24, 2022:
- Capacity = high (same as after)
- Coupling ≈ 0 (a politician's re-election didn't depend on security policy)
- Salience ≈ 0 (the Russian threat was abstract, distant, a think-tank topic)
- Steering Power ≈ 0 (decades of "NATO option" rhetoric—noise without motion)
After February 24, 2022:
- Capacity = high (same)
- Coupling = maximum (opposing NATO now meant career destruction—voters made security the question)
- Salience = maximum (missiles on television, a neighbor state invaded—impossible not to feel the threat)
- Steering Power = maximum (83 days)
Same people. Same constitution. Same capacity. Only Coupling and Salience changed—and steering power went from zero to maximum.
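Rendered through the sketch from Section II, with "high" and "maximum" turned into rough illustrative numbers (the exact values don't matter; the orders of magnitude do):

```python
# Reuses SystemState / steering_power / limiting_factor from the Section II sketch.
before = SystemState(capacity=0.9, coupling=0.05, salience=0.05)
after  = SystemState(capacity=0.9, coupling=1.0,  salience=1.0)

print(steering_power(before))   # ~0.002: decades of "NATO option" rhetoric
print(steering_power(after))    # 0.9: the 83-day sprint
print(limiting_factor(before))  # "Coupling" (tied with Salience before the invasion)
```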
Before the invasion, politicians recited the "NATO option" mantra, a verbal formula that avoided a binary choice. Sound and fury, no motion. The system was busy without moving. When salience spiked, the pseudo-motion collapsed. "Let's study this further" was no longer an acceptable answer, because everyone could see the same thing.
IV. The Multiplication Principle
The multiplicative structure generates a key diagnostic. When a system isn't moving, don't ask "who's to blame?" Ask: which factor is zero?
This reframes most political dysfunction. The conventional diagnosis is moral failure: corrupt politicians, apathetic voters, captured institutions. The model says: it doesn't matter how good the agents are. If Coupling is zero (the politician's career doesn't depend on the outcome) or Salience is zero (the threat doesn't register), steering power is zero regardless of capacity or intention.
A further implication: Capacity, Coupling, and Salience vary by issue domain. The same state can be in crisis mode on one issue and drift mode on another. Finland demonstrated world-class execution on security while its demographic trajectory, healthcare system, and infrastructure continued to decay undisturbed. NATO's salience was maximal; the fertility rate's salience was zero. Same machine, different inputs.
V. The Pattern Across History
The model isn't specific to Finland. It's a general pattern wherever multi-agent systems face threats.
Pearl Harbor (1941). The United States had enormous industrial capacity but Coupling was weak (the war was "Europe's problem") and Salience was low (an ocean separated Americans from the fighting). December 7 changed both simultaneously. Salience became visceral. Coupling became total—opposing mobilization was political suicide. The same nation that had resisted involvement for years redirected its entire economy in months.
The 2008 Financial Crisis. For years, systemic risk was building. Capacity to intervene existed. Coupling was real: everyone's retirement was tied to the financial system. But Salience was near zero. Mortgage-backed securities, credit default swaps, and leverage ratios didn't register emotionally. They were abstract, technical, boring. When Lehman collapsed, Salience spiked overnight. Suddenly everyone felt the coupling. TARP passed in weeks, after years of "the market self-corrects."
COVID-19 (2020). An instructive mixed case. Salience was extremely high initially: a novel pandemic, hospital footage, death counts on every screen. Coupling was real but variable: a 25-year-old's personal risk was radically different from a 75-year-old's. As time passed, Salience decayed (habituation) and Coupling fragmented along demographic lines. Steering power peaked early, then disintegrated into factional noise. The pattern: when Salience is driven by media rather than direct experience, it's inherently unstable.
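The COVID trajectory can be caricatured by letting Salience decay and Coupling fragment over time, a dynamic the snapshot model deliberately omits (Section II). Every constant below is invented purely to show the shape:

```python
import math

def covid_like_steering(week: int) -> float:
    """Toy trajectory; all constants are invented for illustration."""
    capacity = 0.8                             # roughly constant through the pandemic
    salience = math.exp(-week / 12)            # habituation: assumed ~12-week decay scale
    coupling = 0.9 - 0.4 * min(week / 26, 1)   # assumed fragmentation along demographic lines
    return capacity * coupling * salience

print(covid_like_steering(1), covid_like_steering(26), covid_like_steering(52))
# ~0.65 -> 0.05 -> 0.005: steering power peaks early, then disintegrates
```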
VI. The Prediction: Slow Threats Kill
The model makes a specific, falsifiable prediction: threats that are real but slow will systematically fail to produce steering power, even when Coupling is objectively higher than for fast threats that do produce it.
This is because Salience tracks perceptual vividness, not objective magnitude. The human attentional system evolved for fast, visible, emotionally charged signals: predators, not atmospheric composition changes. Slow threats fail the Salience filter even when they're existentially more dangerous than the fast threats that pass it.
Climate change is the canonical case. Coupling is near-total: every human's fate depends on atmospheric stability. Capacity to act exists. But Salience oscillates: it spikes during heat waves, floods, and fires, then decays back to baseline. The threat is too slow, too distributed, too abstract to sustain the perceptual intensity required for system-level coordination. The gap between Coupling (real) and Salience (intermittent) is where the failure lives.
Demographic decline follows the same pattern. Below-replacement fertility is a slow-onset civilizational threat. Coupling is total: no children, no civilization. But Salience is near zero. Nobody feels the dependency ratio. It's a number in a report, not an image on a screen. By the time the consequences become vivid enough to generate Salience, the window for intervention has narrowed or closed.
The general principle: civilizations don't die from threats they can see. They die from threats where Coupling is real but Salience is zero. The threat is there. The connection is real. Nobody feels it.
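In the model's terms, with illustrative numbers: give the slow threat strictly higher Coupling than the fast one and it still loses on the product, because Salience gates the output.

```python
fast = 0.8 * 0.6 * 0.9   # capacity x coupling x salience: a visible invasion
slow = 0.8 * 1.0 * 0.05  # total objective coupling, near-zero salience
print(fast, slow)        # 0.432 vs 0.04: the slow threat loses on salience alone
```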
VII. Manufacturing Alignment Without Crisis
Crises work by forcing Coupling and Salience into alignment. Missiles on television make the abstract concrete. Bank runs make systemic risk personal. But crisis is a terrible alignment mechanism: it requires catastrophe to generate the perception that should have preceded it.
If the model is correct, it implies a design specification for institutional architecture:
Build institutions that manufacture Coupling and Salience for slow threats, without requiring a crisis to do so.
This means:
- Manufactured Salience: Make invisible threats visible. Turn abstract statistics into concrete, emotionally resonant signals. Not propaganda: accurate translation of real data into formats that pass the human attentional filter. A demographic dashboard that shows, in real time, what the dependency ratio means for each citizen's retirement. An infrastructure decay tracker that shows which bridge will fail in which year.
- Manufactured Coupling: Make agents' fates actually depend on system outcomes, not just perceptually but structurally. Accountability mechanisms that trace decisions to consequences across time. If you block a reform in 2025 and the predicted outcome materializes in 2035, that trace should be permanent and public (a minimal data sketch follows this list).
- Capacity preservation: When Coupling and Salience are present, the system's existing capacity is freed. Agents who were pulling in different directions now pull together—internal friction drops, effective capacity rises. The same resources produce more when aligned.
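One way to picture the Manufactured Coupling bullet is as an append-only ledger linking each decision to its prediction and its eventual outcome. Every name and field below is hypothetical, a sketch of the idea rather than a reference to any existing system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    """Append-only trace: who decided what, predicting what, by when."""
    actor: str
    decision: str
    predicted_outcome: str
    horizon_year: int
    observed_outcome: str | None = None  # appended later, never edited in place

ledger: list[DecisionRecord] = []
ledger.append(DecisionRecord(
    actor="Committee X",                       # hypothetical
    decision="blocked pension reform (2025)",
    predicted_outcome="dependency ratio keeps worsening",
    horizon_year=2035,
))
# When 2035 arrives, a follow-up record with observed_outcome is appended;
# the original entry stays permanent and public.
```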
The Fourth Branch is one institutional design that attempts exactly this: an independent body whose job is to maintain the causal link between policy decisions and long-term outcomes, manufacturing the Coupling and Salience that crises provide naturally but destructively.
The alternative is to wait for crises. This works, but only if the crisis arrives while capacity still exists. For slow-onset threats, by the time Salience spikes, Capacity may already be depleted. The demographic window closes. The infrastructure crumbles past repair. The institutional competence erodes. Then you have high Salience, real Coupling—and zero Capacity. The multiplication still yields zero.
VIII. Implications for AI Alignment
The model is isomorphic to the AI alignment problem. An AI system is a multi-agent architecture (or a single agent with sub-objectives) where:
- Capacity = computational resources, training data, architecture
- Coupling = how tightly the system's reward signal tracks the principal's actual objective
- Salience = whether misalignment is detectable before it's catastrophic
A misaligned politician is a mesa-optimizer: an agent embedded within a larger system, optimizing a proxy objective (re-election) that diverges from the principal's objective (civilizational flourishing). The vacuum cleaner robot that dumps dust on the floor to re-vacuum it is the same pattern as the politician who creates problems to campaign on solving them.
The governance version and the AI version share the same structural solution: you don't fix alignment by hoping for better agents. You fix it by designing the architecture so that Coupling is structural (the agent's reward actually tracks the principal's objective) and Salience is maintained (misalignment is visible before it compounds into catastrophe).
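As a closing caricature, the same diagnostic can be phrased in code: estimate Coupling as how often optimizing the proxy also serves the true objective, and Salience as whether divergence is detectable before a damage horizon. Both estimators and all constants are invented for illustration; this is not how alignment is measured in practice.

```python
import random

def coupling_estimate(episodes: int = 1000, drift: float = 0.3) -> float:
    """Toy Coupling: fraction of episodes where the proxy reward and the
    principal's objective agree. 'drift' is an invented divergence rate."""
    return sum(random.random() > drift for _ in range(episodes)) / episodes

def salience_estimate(detection_lag: int, damage_horizon: int) -> float:
    """Toy Salience: 1.0 if misalignment is caught immediately, falling to
    0.0 once detection lags past the point of catastrophe."""
    return max(0.0, 1.0 - detection_lag / damage_horizon)

capacity = 1.0  # assume ample compute, data, architecture
print(capacity * coupling_estimate() * salience_estimate(detection_lag=8, damage_horizon=10))
# low despite full capacity: late detection alone collapses the product
```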
Summary
- Steering Power = Capacity × Coupling × Salience. Multiplicative: any zero produces zero.
- Coupling is objective: how much your fate actually depends on the system's fate.
- Salience is perceptual: how much you feel that dependence.
- The gap between Coupling and Salience is where civilizations die. The threat is real but doesn't register.
- Crises temporarily close the gap—but only for fast, vivid threats, and only while capacity remains.
- The design problem: build institutions that manufacture Coupling and Salience for slow threats, without requiring catastrophe.
Related:
- The Governance Alignment Problem — The diagnosis: politicians are structurally misaligned agents
- The Fourth Branch — An institutional design that manufactures Coupling and Salience
- Telocracy — Governance needs a purpose before steering power matters
- Ethics Is an Engineering Problem — Architecture beats disposition
- Only Selection — The mechanism that enforces alignment over time
- When Does Reform Actually Happen? — The historical pattern of crisis-driven reform
Sources and Notes
On the Finland NATO case:
- Finland's NATO accession timeline: application submitted May 18, 2022, 83 days after Russia's invasion of Ukraine on February 24, 2022. Finland formally became a member on April 4, 2023.
- Finland's healthcare reform (sote-uudistus) was attempted across multiple governments from 2005 to 2023—roughly two decades of legislative effort before any version passed.
On the multiplicative structure:
- Vroom, V. (1964), Work and Motivation—Expectancy Theory: Motivation = Expectancy × Instrumentality × Valence. The structural template for any-zero-kills-the-product models of motivated action. Originally individual psychology; the model presented here applies the same multiplicative logic to system-level responsiveness with an ontological shift from all-subjective variables to objective/objective/subjective.
- Rogers, R.W. (1975), "A Protection Motivation Theory of Fear Appeals"—decomposes threat response into severity, vulnerability, and response efficacy as a multiplicative product. Later empirical work often retreated to additive models due to measurement difficulty, but the theoretical logic mirrors the Steering Power model.
- Hainmueller, J., Mummolo, J. & Xu, Y. (2019), "How Much Should We Trust Estimates from Multiplicative Interaction Models?"—replication study of 46 interaction effects in top political science journals found that multiplicative models consistently fail standard statistical tests due to lack of common support and hidden non-linearities. This explains why the model's formulation is rare in quantitative political science: not because the logic is wrong, but because the methodology can't reliably detect multiplicative effects even when they're real.
On the coupling/salience distinction:
- Freeman, R.E. (1984), Strategic Management: A Stakeholder Approach—explicitly separates a stakeholder's actual economic stake from the organization's perceived stake. The objective/perceptual separation applied to corporate governance.
On slow-onset threat failure:
- Pot, W.D., Scherpenisse, J. & 't Hart, P. (2022), "Robust Governance for the Long Term and the Heat of the Moment"—formalizes why governance systems fail on "creeping crises" despite high objective risk. Identifies the attention mismatch between objective coupling and perceptual salience as the failure mechanism. Proposes "strategic coupling" (linking acute shocks to slow crises) as an institutional fix. The closest prior art to the slow-threat prediction in this essay.
On mesa-optimization and alignment:
- Hubinger, E., et al. (2019), "Risks from Learned Optimization in Advanced Machine Learning Systems"—the foundational paper on mesa-optimizers: agents that develop internal objectives divergent from their training objective.
On salience and attentional filtering:
- Weber, E.U. (2006), "Experience-Based and Description-Based Perceptions of Long-Term Risk: Why Global Warming Does Not Scare Us (Yet)"—documents the salience gap for slow-onset threats.
- Kahneman, D. (2011), Thinking, Fast and Slow—System 1 processing favors vivid, emotionally charged signals over abstract statistical information.
On institutional design for long-term governance:
- Holmström, B. (1979), "Moral Hazard and Observability"—foundational work on the principal-agent problem and why naive incentive schemes fail.
- Buchanan, J. & Tullock, G. (1962), The Calculus of Consent—constitutional design as constraint on political agents.
On crisis as alignment mechanism:
- Historical cases: Pearl Harbor and US mobilization (1941–42), TARP passage during 2008 financial crisis, Finland's war reparations program (1944–52).