The Framing Machine
How institutions learn to prefer legible problems to working artifacts
Academic institutions economize attention by building containers — journals, affiliations, departments, citation formats — that reduce the cost of deciding what deserves inspection. This works inside established disciplines. It fails for cross-domain mechanism work, which arrives without a container and demands inspection from scratch. The institution differentially rewards framing problems over building solutions, because framing creates citable surface area while building creates verification burden. Over generations, this selection pressure becomes taste: framing feels like research; building feels like implementation. The engineering meta level remains empty because noticing the vacancy is citable while occupying it is institutionally homeless.
I. The Container
Every knowledge institution has two jobs that are easy to confuse. The first is to discover or produce truth. The second is to reduce the cost of deciding what deserves attention.
The second job is unavoidable. Nobody can inspect every claim from first principles. So institutions build containers. A journal article means that a certain kind of review has occurred. A university affiliation means that a department once made a hiring decision. A conference acceptance means that a program committee recognized the work as belonging to a field. These containers do not make work true. They make work cheaper to begin trusting.
The arrangement works when the work fits the container. A database paper goes to database reviewers. A theorem goes to mathematicians. The verification burden is distributed across a community already trained to recognize the artifact.
The failure appears when the artifact crosses domains. A working system that compiles legislation is partly law, partly programming languages, partly civic infrastructure, partly archival science. A mechanism audit of legislation is partly economics, partly public administration, partly legal theory, partly systems engineering. No one discipline owns the verification burden. Each can see a fragment. None can certify the whole.
The artifact therefore arrives as a demand: inspect me. Most readers decline — not because they are irrational, but because the cost is real. A position paper can be skimmed, cited, or placed in a literature review in minutes. A working artifact must be run, checked, reproduced, interpreted, and situated. The paper asks for attention. The artifact asks for labor.
This is where legitimacy and truth diverge. The container does not prove the content. It determines whether anyone will pay the cost to look.
II. Citation Surface
In a citation economy, a well-framed problem has surface area.
A frame can be cited by people who agree, disagree, extend, refine, or use it as background motivation. It is a reusable object. Its value compounds through continuation.
A solution produces a different kind of value. It reduces uncertainty. It converts a question into infrastructure. If it works, people use it; if it works very well, they may stop talking about the question at all. The value moves from discourse to operation.
That is not a defect. It is what solving means. But citation systems are better at measuring continuation than closure. They notice when a concept becomes a node in a literature. They are weaker at noticing when an artifact quietly removes a class of failure from the world. The better the artifact works as infrastructure, the less visible it becomes as scholarship.
This creates a predictable distortion. A researcher who names ten missing systems creates ten citation surfaces. A researcher who builds one missing system creates one verification burden. The institution differentially rewards work that remains available for academic reuse — solving often destroys the discursive option value that made the problem academically fertile.
The selection pressure does not need to be conscious. It operates the way all selection operates: differentially reproducing the phenotype that fits the environment. Over enough generations, the population converges.
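The convergence claim can be made concrete with a toy replicator model, in the spirit of the Smaldino–McElreath simulation cited in the notes. This is an illustrative sketch only: the population size, citation payoffs, and mutation rate are invented parameters, not estimates. Each lab has a "framing fraction" (share of effort spent on citable framing rather than verification-heavy building); each generation, new labs copy citation-successful labs with small noise, the way a trainee inherits an advisor's taste rather than a calculation.

```python
import random

random.seed(0)

POP = 100              # labs per generation (invented parameter)
GENERATIONS = 40
CITE_PER_FRAME = 1.0   # framing yields citation surface immediately
CITE_PER_BUILD = 0.2   # building yields little citation credit;
                       # it disappears into infrastructure
MUTATION = 0.05        # noise in how faithfully taste is inherited

def fitness(f):
    # Reproductive success tracks citations, not problems solved.
    return f * CITE_PER_FRAME + (1 - f) * CITE_PER_BUILD

# Initial population: framing fractions spread uniformly over [0, 1].
population = [random.random() for _ in range(POP)]

for _ in range(GENERATIONS):
    # Citation-weighted copying: successful labs are imitated more often.
    weights = [fitness(f) for f in population]
    parents = random.choices(population, weights=weights, k=POP)
    # Inheritance with small noise, clamped to [0, 1].
    population = [min(1.0, max(0.0, f + random.gauss(0, MUTATION)))
                  for f in parents]

mean_f = sum(population) / POP
print(f"mean framing fraction after {GENERATIONS} generations: {mean_f:.2f}")
```

No agent in the model calculates anything; each merely copies what succeeds. The population still converges toward framing, which is the essay's point: the outcome is structural, not a sum of individual cynicisms.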
III. Selection Becomes Taste
The mature form of an incentive is not calculation. It is taste.
A first-generation academic might consciously choose to write a framing paper instead of building because the career incentives are clear. A third-generation academic does not choose. They taste the difference. Implementation feels crude. Framing feels sophisticated. “That’s engineering, not research” is not a strategic calculation — it is a genuine aesthetic judgment, arrived at through years of immersion in an environment that rewarded the judgment’s ancestors.
This is Values Are Replicators applied to academic culture. The value “theory is the real contribution” inhabits the academic the way a successful meme inhabits a mind. The host does not experience it as a parasite. They experience it as discernment. Nobody needs to say “I avoid implementation because it is poorly rewarded.” After enough training, implementation simply feels like a lower-order activity. The judgment is sincere. That is why it is powerful.
The mechanism is invisible to participants because the output looks identical to genuine insight. An academic who says “implementation is not my job” may be correct — perhaps their comparative advantage really is in framing. The problem is that the entire population says it simultaneously, because the selection pressure operated on all of them. The engineering meta level is empty not because each individual made a wrong choice, but because the environment that shaped all of them has no niche for cross-domain building.
IV. The Boundary Game
Every adjacent field has a principled reason why the work belongs elsewhere. Academia studies mechanisms. Government administers mandates. Lawyers interpret texts. Engineers build systems. AI safety researchers analyze model behavior. Each boundary is locally defensible. The vacancy appears only at the system level.
The pattern is stable because no group bears the cost. Laws that fail to produce their stated effects, institutions that drift from their purpose, feedback loops that never close — these costs are spread across the entire system and attributed to politics, complexity, or human nature. Nobody’s performance review suffers because the engineering meta level is empty. The parties communicate fine. They communicate specifically to redirect the work elsewhere. The boundary is not a wall — it is a routing rule that sends the work into a loop.
V. The Counterexamples
The claim is not that academia never rewards artifacts.
Systems papers, database papers, programming-language papers, ML benchmark papers, scientific instruments, software papers — all exist and are cited. Artifact-evaluation tracks at conferences evaluate working code. The Journal of Open Source Software exists explicitly to give research software a citable container. Software Heritage added citation support because software needs recognition as a research output. DORA and CoARA push institutions to evaluate contributions beyond journal impact factors.
These counterexamples do not refute the argument. They prove it. Each exists because someone recognized that a specific class of artifact lacked a container and built one. Where the container exists, building becomes legible. Where it does not, the artifact remains extra-literary — visible as tooling, invisible as contribution.
The question is what happens to cross-domain civic mechanism infrastructure: a legal compiler, an institutional audit specification, a legislative feedback loop. These fit no systems conference (too much law), no law journal (too much code), no public administration journal (too much mechanism design). The containers that exist were built for disciplinary artifacts. The artifacts that fall between all disciplines have no container and no realistic path to getting one.
This is not the familiar theory-practice gap. That gap assumes both sides exist and need connecting — theorists produce frameworks, practitioners apply them, and the problem is handoff friction. Here, the practice side for cross-domain mechanism work is not an engineering department waiting for instructions. It is a vacancy. The framing side produces papers about missing infrastructure. Nobody is assigned to build the infrastructure. The division of labor assumes both roles are staffed.
VI. A Specimen
A recent position paper argued that AI makes competent-looking judgment abundant and shifts scarcity toward verification, legitimacy, and provenance. It proposed treating AI policy as institutional redesign. The paper is useful — it names a real transition. It is also a recognizable academic artifact: short, framed, citable, institutionally affiliated.
The kind of artifact the problem demands is different: not another description of missing verification infrastructure, but a working verification substrate — a compiler that reconstructs statute text and exposes consolidation errors, or mechanism audits that test whether a bill’s chosen instrument can produce its own stated goal. The asymmetry is that the paper belongs to a container and the artifact does not. The paper is legible as research before its claims are evaluated. The artifact becomes legible only after its operation is understood.
Content matters eventually. Containers decide whether “eventually” ever arrives.
VII. Why It Persists
The system is self-reinforcing at every level. The people who decide what counts as research are the people the system selected. They sit on tenure committees, grant panels, and editorial boards. They evaluate new work by the standards that produced them. PhD students learn what a contribution looks like by observing their advisors — they learn that building a tool is a means to a publication, not the publication itself. Grant agencies evaluate proposals using review panels drawn from the existing population, and a proposal to “build a legal compiler and use it to audit legislative quality” maps to no grant category: too applied for basic research, too theoretical for applied research, too cross-domain for any disciplinary panel.
The vocabulary reinforces the boundary. “Research” means what happens inside the container. “Development” means what happens outside. “Impact” is measured by citations within the container, not by errors found in published law.
The system is not broken. It is working perfectly — for the objective function it is actually optimizing.
VIII. What Breaks It
Historically, two things have broken selection traps of this kind.
The first is an external crisis severe enough that the old phenotype fails visibly. Evidence-based medicine became institutionally powerful when medicine’s existing authority structures could no longer absorb questions about effectiveness, variation, and cost as matters of professional judgment alone. Deming’s quality methods became harder to ignore in the United States after Japanese manufacturing success made quality engineering competitively visible. The crisis creates demand for the engineering meta level that the normal incentive landscape does not.
The second is a new medium that routes around the existing containers. The printing press routed around the Church’s monopoly on legitimate knowledge. Open-source software routed around the proprietary development model. If AI research tools make cross-domain mechanism work cheap enough that a single person can produce it, and if the output is inspectable enough that readers can verify it without the container, the legitimacy bottleneck loosens. The container does not disappear — it becomes optional for those willing to pay the inspection cost.
Neither path requires academia to reform itself. Both require the engineering meta level to become visible enough that ignoring it is more expensive than populating it. That has not happened yet for governance. It happened for medicine, for manufacturing, for software. The governance equivalent is waiting for its forcing function — or building without one.
The irony is that the illegibility of mechanism-building inside academia is itself a mechanism failure. The institution that studies verification has no native process for verifying cross-domain verification work. The framing machine frames the problem of framing machines. Occupying the vacancy would require exactly the kind of work the vacancy prevents from being recognized.
Sources and Notes
Bias blind spot and cognitive sophistication: West, R. F., Meserve, R. J., & Stanovich, K. E. (2012). “Cognitive sophistication does not attenuate the bias blind spot.” Journal of Personality and Social Psychology, 103(3), 506–519.
Burden of Knowledge and specialization: Jones, B. F. (2009). “The Burden of Knowledge and the ‘Death of the Renaissance Man’: Is Innovation Getting Harder?” Review of Economic Studies, 76(1), 283–317.
Evidence-based medicine: Cochrane, A. (1972). Effectiveness and Efficiency: Random Reflections on Health Services. Nuffield Provincial Hospitals Trust. The Cochrane Collaboration was founded in 1993.
Quality engineering: Deming, W. E. (1986). Out of the Crisis. MIT Press. Deming taught Japanese manufacturers quality methods beginning in 1950; US industry adopted his methods widely in the 1980s after Japanese competition made quality engineering competitively visible.
Natural selection of bad science: Smaldino, P. E. & McElreath, R. (2016). “The natural selection of bad science.” Royal Society Open Science, 3(9), 160384. Agent-based model demonstrating that selection for publication volume degrades methodology without individual malice. Labs using low-power methods out-reproduce rigorous labs. The model assumes all agents have integrity — the degradation is purely structural. Describes the external selection mechanism but does not model the internalization step.
Competition for priority: Tiokhin, L., Yan, M. & Morgan, T. J. H. (2021). “Competition for priority harms the reliability of science, but reforms can help.” Nature Human Behaviour, 5, 857–867. The “priority rule” (disproportionately rewarding the first to publish) creates a structural penalty for thoroughness. Fast framers systematically outcompete slow builders.
Goodhart’s Law applied to academic metrics: Fire, M. & Guestrin, C. (2019). “Over-optimization of academic publishing metrics: observing Goodhart’s Law in action.” GigaScience, 8(6). Also: Manheim, D. & Garrabrant, S. (2018). “Categorizing Variants of Goodhart’s Law.” arXiv:1803.04585. Multiple mechanisms through which the citation target decouples from the original measure of scientific progress.
Value capture and the seduction of clarity: Nguyen, C. T. (2020). Games: Agency as Art. Oxford University Press. Also: Nguyen, C. T. (2022). “Transparency is Surveillance.” Philosophy and Phenomenological Research. Professionals enter with rich, holistic values; institutional metrics simplify those values; over time the simplified version replaces the original. The “seduction of clarity” — metrics offer cognitive relief that mimics understanding — is the psychological mechanism by which external incentives become internalized aesthetic preferences.
Scientific taste as socialized judgment: Lamont, M. (2009). How Professors Think: Inside the Curious World of Academic Judgment. Harvard University Press. Peer reviewers develop shared “scientific taste” through socialization, mentorship, and institutional mechanisms — not through rational deduction from first principles. When the citation economy rewards framing for several generations, the population’s aesthetic baseline recalibrates. Lamont uniquely bridges the gap to the internalization step.
Prestige-biased transmission: Henrich, J. & Gil-White, F. J. (2001). “The evolution of prestige: freely conferred deference as a mechanism for enhancing the benefits of cultural transmission.” Evolution and Human Behavior, 22(3), 165–196. Also: Boyd, R. & Richerson, P. (1985), Culture and the Evolutionary Process. Novices copy prestigious models holistically, including arbitrary aesthetic traits that merely co-vary with success. The institution’s external selection pressures are laundered through the prestigious individual and internalized by the novice as objective standards.
Trained incapacity: Veblen, T. (1914). The Instinct of Workmanship. Also: Merton, R. K. (1940). “Bureaucratic Structure and Personality.” Social Forces, 18(4), 560–568. Professional expertise creates blind spots — abilities function as inadequacies under changed conditions. The concept was expanded by Kenneth Burke (1935): adaptation to a specific environment produces incapacity outside it.
Obliteration by incorporation: Merton, R. K. (1988). “The Matthew Effect in Science, II.” Isis, 79(4), 606–623. The scientometrics term for when a solved problem becomes so foundational that it ceases to be cited — the original author’s citation stream terminates precisely because the solution succeeded.
Structural illegibility of unaffiliated researchers: Williams, A. (2025). “Epistemic Closure and the Irreversibility of Misalignment.” arXiv:2504.02058. When evaluative architectures rely on compounding filters (cognitive, institutional, social, recursive), work from outside the system is rejected unread — not for lack of truth but for lack of institutional packaging. The “Misalignment Attractor” is a terminal state where the system cannot detect novel failure modes or evaluate recursive repair mechanisms.
Citation economy and research evaluation reform: DORA (San Francisco Declaration on Research Assessment, 2012) and CoARA (Coalition for Advancing Research Assessment, EU, 2022) both advocate evaluating research contributions beyond journal impact factors. The Journal of Open Source Software (JOSS, est. 2016) provides peer-reviewed, citable publication for research software. Software Heritage added citation support to give software stable, referenceable identifiers as research outputs.
Selection pressure and cultural evolution: The broader argument that institutional incentives shape internalized taste draws on Dawkins, R. (1976), The Selfish Gene; and Henrich, J. (2015), The Secret of Our Success. The application to academic culture specifically follows the framework in Values Are Replicators.
The fragmented map that enables the vacancy: The Severed Map. The vacancy itself: The Unpopulated Meta. The institutional proposal: The Fourth Branch. The replicator dynamics underneath: Values Are Replicators. What the engineering meta level looks like in practice: Holistic System Rotation. How the output routes: Optionality Has No Router.