Powerless Intelligence

Cognition becomes abundant before authority does.

Elias Kunnas

A diagnosis becomes governance only when an authorized body must answer it through a procedure with teeth. AI lowers the cost of producing many diagnoses; it does not lower the cost of building the procedural primitive that converts a finding into a binding action. The bottleneck moves from cognition to response duty.


I. The end of “we did not know”

The pre-abundance defense was ignorance. When cognition was expensive and slow, the institutions that produced harm could plausibly say we did not know, no one had the capacity to model that, the evidence was too fragmented. Powerless intelligence is what the post-abundance era replaces ignorance with: cognition that has institutional evaluators — receivers, in the sense that the signal has somewhere named to land — but no enforceable duty for any actor to respond. The defense becomes the evaluation was inconclusive, the model was contested, we received too many warnings to act on any single one.

This is a sharper claim than the corpus’s earlier diagnosis of cognitively ownerless decisions, where no institution received the signal at all. Here the receiver exists; the response duty does not. The structural primitive that would convert a finding into a binding action — recall, halt, pause, escalate, override — is absent, weak, or deliberately separated from the evaluation function across current AI-governance infrastructure.

The cleanest current case is AI governance. The pattern is older.

II. Powerless, not ownerless

AI evaluation has institutional homes in three jurisdictions, each set up with response powers explicitly carved out.

The UK AI Security Institute, renamed in 2025 from the AI Safety Institute, was set up under a founding paper that excludes three response powers by name: "the Institute is not a regulator and will not determine government regulation"; "the goal of the Institute's evaluations will not be to designate any particular AI system as 'safe'"; and "the Institute will not hold responsibility for any release decisions."

The 2025 rebrand sharpened the security focus but made no public change that turned the institute into a regulator or assigned it responsibility for release decisions.

The US AI Safety Institute was renamed in 2025 to the Center for AI Standards and Innovation; NIST describes its outputs as “voluntary guidelines.”

The European AI Office, established under the AI Act, “encourages and facilitates” codes of practice under Article 56, “monitors and evaluates” their implementation, and “publishes its assessment of the adequacy.” The binding step is the Commission’s discretion to approve a code via implementing act, or to impose common rules if a code is judged inadequate.

Three jurisdictions, three different shapes, the same structural pattern. Each has working evaluation capacity. None has an automatic conversion from “evaluation finds X” to “deployment changes.”

The bottleneck is not the receiver. It is the response duty.

This is what powerless intelligence looks like when an institution has been built around it.

III. Epistemic denial-of-service

Warning inflation is the condition where warnings accumulate faster than triage and response capacity. Each warning may be reasonable. The total field becomes unusable, because no actor can be expected to attend to every signal in time, and no signal carries the procedural attachment needed to force priority.

Epistemic denial-of-service is the strategic form. Warning inflation can happen innocently when cognition becomes cheap; epistemic denial-of-service is the adversarial version, used defensively. An incumbent funds or produces enough counter-models, alternative safety cases, methodological disputes, and compliance artifacts that no single warning becomes procedurally decisive. The cluster does not need to refute the warning; it needs only to keep the warning from becoming a trigger. Contestation is healthy when it resolves through a procedure. It becomes denial-of-service when contestation prevents any warning from ever becoming a trigger.
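
The denial-of-service mechanic can be made concrete with a toy rate model (all numbers and the function name are invented for illustration): a receiver with fixed triage capacity per cycle, where each counter-artifact consumes the same capacity as a genuine warning, so a cheap flood guarantees that no individual warning is processed before its window closes.

```python
def triage_survival(warnings: int, counter_artifacts: int, capacity: int) -> float:
    """Fraction of genuine warnings triaged in one cycle (toy model).

    Every artifact, genuine or adversarial, consumes one triage slot;
    slots are allocated uniformly because nothing in the flood carries
    a procedural attachment that forces priority.
    """
    total = warnings + counter_artifacts
    if total == 0:
        return 1.0
    return min(1.0, capacity / total)

# 10 genuine warnings, capacity for 50: every warning gets triaged.
baseline = triage_survival(10, 0, 50)    # 1.0
# An incumbent adds 990 counter-models: only 5% of the field is triaged.
flooded = triage_survival(10, 990, 50)   # 0.05
```

The adversary never refutes anything; it only dilutes the allocation, which is why the defense has to be procedural (triage plus a trigger) rather than more evaluation capacity.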

This is symmetric to the NEPA failure mode treated in The Legitimacy Gate, one layer up: capture answers mandated verification with rituals. NEPA produces compliance documents that overwhelm the modeling function. Epistemic denial-of-service produces competing models that overwhelm the response function.

The pattern predates AI. Before the 2008 financial crisis, Claudio Borio and William White at the Bank for International Settlements published sustained warnings about housing leverage and shadow-bank exposure; Raghuram Rajan’s 2005 Jackson Hole paper Has Financial Development Made the World Riskier? argued that the post-1980s financial system had concentrated tail risk in ways the prevailing regulatory frame did not see; an academic literature ran in parallel.

The cognition organs existed. The warnings were public. No major domestic actor in the affected economies had a response-duty primitive strong enough to convert those warnings into binding deleveraging before the crisis; the warnings became material only as forensic input after the response window had closed. The failure was not in evaluation. It was in the trigger.

The same shape persists when the cognition is faster, the cluster is smaller, and the signal-to-noise of any individual evaluation is lower.

IV. The safety institute as powerless oracle

The UK AI Security Institute is the cleanest current specimen because the three carve-outs quoted in §II were design features, not accidents. A regulator with deployment authority would have been a regulator that frontier labs could decline to cooperate with on pre-deployment access; it would also have walked into immediate jurisdictional conflict with existing competition, data-protection, and product-safety authorities. The carve-outs traded enforcement teeth for evaluation access and inter-agency feasibility. State-backed evaluation capacity was structurally insulated from any deployment trigger. The 2025 rebrand to Security preserved the trade. The US rename to CAISI in the same year went further: Safety was dropped from the framing entirely, the public-facing language shifted to standards and innovation, and the published outputs are voluntary guidelines.

The European structure is the inverse failure shape. The AI Office and the Board have formal monitor-evaluate-publish duties under Article 56, and the surrounding AI Act carries more formal legal architecture than the British or American institute model. But the finding does not itself become a deployment halt, training pause, or recall: it must travel through code adequacy, standards, Commission judgment, provider obligations, and enforcement. There is process. The trigger fires through political timelines, not automatic thresholds.

What this looks like inside the AI evaluation literature is already concrete. Hubinger and colleagues showed in Sleeper Agents (2024) that backdoor behavior can be made persistent through standard safety training — supervised fine-tuning, reinforcement learning, and adversarial training — and that adversarial training can teach models to better hide their triggers rather than removing them. That is a clean technical finding. No major AI-governance regime makes a result of this form, by itself, trigger a binding deployment halt, training pause, or recall. The finding becomes evaluation literature. The response-duty primitive is missing.

The legitimacy substrate in the sense of The Legitimacy Gate is in place: statutory or quasi-statutory standing, named institutional homes, formal mandates. The failure sits downstream of the substrate. Evaluation quality is also real: the published work in each jurisdiction is technically serious. The failure does not move when more or better evaluation is added. It moves only when the response-duty primitive is built.

The prediction follows. The bodies that survive and matter over the next governance cycle will be the ones that acquire some response-duty primitive, however narrow — a single class of finding that triggers a single class of binding action — even at the cost of evaluation breadth.

V. Authority, resource, answerability

The conversion of an evaluation into governance requires three properties together: authority, even narrow authority, to halt, modify, or escalate based on the signal; resource — analytical, political, financial — to investigate the signal in the time the decision allows; and answerability — legal, political, or procedural — that attaches when the warning is ignored and the harm materializes. Authority without resource is a paper veto. Resource without authority is a think tank. Authority and resource without answerability is an institution that can act, can investigate, and routinely chooses not to. The three are jointly necessary; absent any one, the institution is structurally configured to produce intelligence without converting it.
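
The joint-necessity claim can be expressed as a toy predicate (the class, field names, and failure labels are illustrative, not drawn from any statute): drop any one property and the institution lands in one of the named failure shapes rather than in governance.

```python
from dataclasses import dataclass

@dataclass
class Receiver:
    """Illustrative model of an evaluating institution (hypothetical fields)."""
    authority: bool      # can halt, modify, or escalate based on the signal
    resource: bool       # can investigate within the decision window
    answerability: bool  # bears consequences when an ignored warning materializes

    def converts_intelligence(self) -> bool:
        # Jointly necessary: absent any one, findings stay findings.
        return self.authority and self.resource and self.answerability

    def failure_shape(self) -> str:
        if self.converts_intelligence():
            return "governance"
        if self.authority and not self.resource:
            return "paper veto"
        if self.resource and not self.authority:
            return "think tank"
        if self.authority and self.resource and not self.answerability:
            return "can act, routinely chooses not to"
        return "powerless intelligence"

# A safety institute with serious evaluation capacity but no deployment trigger:
institute = Receiver(authority=False, resource=True, answerability=False)
```

On this sketch the institute classifies as a think tank: its findings are real, but nothing obliges anyone to move when they land.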

The Owner and Trigger fields of Constructive Diagnosis’s six-field standard already require an institutional seat and a condition under which the seat is obligated to act. Powerless intelligence is what those two fields look like at the level of detail an institutional designer needs. The procedural-attachment problem of The Legitimacy Gate is the same problem one step upstream, at the level of public decision-making in general.

VI. Repair: warning architecture, not more warnings

More diagnostic substrate does not repair powerless intelligence; The Legitimacy Gate already covers substrate. The repair is the architecture that wraps substrate so it converts.

A warning architecture requires working parts that current AI evaluation infrastructure lacks. Triage decides which signals are decision-relevant within the response window. A defined contestation route, with formal standing, keeps challenge procedural rather than political. An escalation trigger fires when a signal crosses a specified threshold, without further discretionary step. A memory layer outlasts staff turnover and electoral cycles, so the next administrator inherits the open question. A movement test, public and after the action, distinguishes response from compliance theatre. The institutions that matter over the next decade will be the ones that build enough of this stack to make warnings actionable.
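
A minimal sketch of the stack, with every threshold, field, and class name invented for illustration: a warning inside the response window either crosses the escalation threshold and fires a binding action with no further discretionary step, or enters the contestation route; signals outside the window, and sub-threshold signals, go to the memory layer so the open question outlasts turnover.

```python
from dataclasses import dataclass, field

@dataclass
class Warning:
    signal: str
    severity: float           # triage score on an illustrative 0..1 scale
    in_response_window: bool  # still actionable, or forensic-only

@dataclass
class WarningArchitecture:
    escalation_threshold: float = 0.8            # crossing it fires automatically
    memory: list = field(default_factory=list)   # layer that outlasts turnover
    actions: list = field(default_factory=list)  # binding actions taken

    def receive(self, w: Warning) -> str:
        # Triage: decision-relevance within the response window comes first.
        if not w.in_response_window:
            self.memory.append(w)                # forensic record, not a trigger
            return "archived"
        if w.severity >= self.escalation_threshold:
            # Escalation trigger: no further discretionary step.
            self.actions.append(f"halt: {w.signal}")
            return "escalated"
        # Below threshold: contestation route plus memory, not silence.
        self.memory.append(w)
        return "queued for contestation"

arch = WarningArchitecture()
arch.receive(Warning("persistent backdoor survives safety training", 0.9, True))
```

The design point the sketch carries is the absence of a discretionary branch between threshold and action: contestation happens before or below the threshold, never as a veto after it fires.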

VII. Close

The pre-abundance defense was ignorance. The post-abundance defense is overload. Both end the same way: no binding action, and the cost absorbed by whoever is downstream of the unbuilt response.

Cognition is no longer the scarce resource. Authority is. The work is to build the response-duty primitive that converts the abundance into governance before the next defense — the model passed the evaluations we agreed on — replaces ignorance as the form the failure takes.

The signal flood is not the problem. The missing trigger is.


Sources and Notes

Powerless intelligence as primitive. Powerless intelligence is institutional evaluation capacity without an enforceable response-duty primitive that converts a finding into a binding action. It is distinct from ownerless intelligence, where the failure is the absence of a receiver. Here the receiver exists; the conversion does not. Legitimacy Came Before Cognition develops the historical asymmetry between legitimacy stack (mature) and cognition stack (fragmented); this essay is the AI-era applied piece at one layer further downstream.

UK AI Security Institute. The institute was established as the UK AI Safety Institute under the 2023 Bletchley Park AI Safety Summit and renamed the AI Security Institute in 2025. The founding paper (Introducing the AI Safety Institute, gov.uk, November 2023, updated January 2024) states explicitly that “the Institute is not a regulator and will not determine government regulation,” that “the goal of the Institute’s evaluations will not be to designate any particular AI system as ‘safe’,” and that “the Institute will not hold responsibility for any release decisions.” The 2025 rebrand sharpened the security focus and made no public change that turned the institute into a regulator or assigned it responsibility for release decisions.

US Center for AI Standards and Innovation (CAISI). The US AI Safety Institute was renamed in 2025 to the Center for AI Standards and Innovation under the second Trump administration. NIST’s CAISI page (nist.gov/caisi/guidelines) describes its outputs as “voluntary guidelines.”

EU AI Office and Article 56. The European AI Office is the body established under the EU AI Act to oversee general-purpose AI obligations. Article 56 (Codes of Practice) tasks the AI Office and the Board with “encouraging and facilitating” the drawing up of codes covering Articles 53 and 55 obligations, with “monitoring and evaluating” their implementation, and with “publishing their assessment of the adequacy.” The Commission may, by implementing act, approve a code as having Union-wide validity, or impose common rules if a code is judged inadequate. Article 56 required codes of practice to be ready by 2 May 2025; the GPAI Code was ultimately published in July 2025 and accepted by the Commission and AI Board as an adequate voluntary tool for demonstrating compliance.

Sleeper Agents. Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, and colleagues, Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, arXiv:2401.05566 (v3, January 2024). The abstract reports that “backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training,” and that “rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior.”

Pre-2008 financial-stability warnings. The pre-crisis macroprudential literature is associated most clearly with Claudio Borio and William White at the BIS through the 2000s, and with Raghuram Rajan’s 2005 Jackson Hole paper Has Financial Development Made the World Riskier? The claim made here is the narrow one: serious technical warnings existed in central-bank and academic channels, but no domestic actor in the major affected economies carried a binding response-duty primitive between warning publication and crisis arrival.

NEPA cross-reference. The NEPA failure mode used here as the symmetric case one layer up is developed in The Legitimacy Gate §V and in Constructive Diagnosis §V. The framework treats compliance documentation that absorbs the modeling function as the upstream version of the same pattern that produces evaluation literature absorbing the response function.

Bridges to the rest of the corpus. Legitimacy Came Before Cognition develops the historical asymmetry between legitimacy and cognition stacks. Bad Equilibria Are Not One Thing classifies the failure modes the response-duty primitive would diagnose. Constructive Diagnosis develops the methodological standard (the six-field repair specification) of which the Owner and Trigger fields are the upstream form of this essay’s authority/resource/answerability triad. The Legitimacy Gate develops the procedural-attachment primitive at the level of public decision-making in general; Powerless Intelligence develops it one layer further downstream at the level of institutional response.
