Who Codes Reality?
Digital Systems, Decision Architecture and Institutional Accountability in the Built Environment
I. Starting Point
In large development projects in the built environment — new urban districts, infrastructure programmes, long-term urban development strategies — digital simulations are used to make the possible consequences of decisions visible: traffic flows, energy consumption, water demand, urban heat, ecological effects.
The decisive question is not what the model calculates. The decisive question is who decides in advance what can be calculated at all.
People decide first what will be considered: what belongs in the model and what does not, which data count as relevant, which objective functions the system is meant to optimise. These decisions appear technical. They are political.
When a model optimises traffic, what “optimise” means must be defined beforehand. Shorter journey times for cars? Lower emissions? Better accessibility for pedestrians? Each of these objectives produces a different city. The categories of a model do not emerge in the computer. They emerge in planning teams, standardisation processes, procurement specifications and software architectures — in the institutional spaces where it is determined which concepts exist and what they mean.
A model without a category for biodiversity cannot take biodiversity into account. A model that evaluates only monetary costs structurally excludes ecological and social values. The machine calculates only within the world we have defined in advance.
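A few lines of code make this mechanism concrete. The sketch below is purely illustrative (the candidate designs, variable names and numbers are invented), but it shows both points at once: the same data under two readings of "optimise" selects two different cities, and a value that has no field in the data schema is invisible to every objective that could be written over it.

```python
# A minimal sketch, assuming two hypothetical street designs and two
# hypothetical objective functions. All names and numbers are invented.

candidates = {
    "four_lane_road": {"car_travel_min": 12, "emissions_t": 9.0,
                       "walk_access": 0.3, "monetary_cost_m": 40},
    "tram_and_greenway": {"car_travel_min": 19, "emissions_t": 4.0,
                          "walk_access": 0.8, "monetary_cost_m": 55},
}

def optimise_car_travel(d):
    # "optimise traffic" read as: shorter journey times for cars
    return d["car_travel_min"]

def optimise_emissions_access(d):
    # the same phrase read as: lower emissions, better pedestrian access
    return d["emissions_t"] - 5 * d["walk_access"]

best_by_speed = min(candidates, key=lambda k: optimise_car_travel(candidates[k]))
best_by_climate = min(candidates, key=lambda k: optimise_emissions_access(candidates[k]))

print(best_by_speed)    # -> four_lane_road
print(best_by_climate)  # -> tram_and_greenway

# Note what neither objective can do: no candidate carries a "biodiversity"
# field, so no function defined over this schema can weigh biodiversity.
# The exclusion happens in the data structure, before any optimisation runs.
```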
This is why a large part of the power of modern systems lies not in their outputs but in their architecture: in data structures, standards and objective functions. Whoever designs these structures determines how decisions are subsequently made.
II. The Double Codification
Alongside the legal codification of claims, a second codification layer is emerging: the technical codification of reality, on the basis of which institutions make decisions. While the first codification is enforced by courts and regulation, the second is produced by models, data standards and software architectures. This parallel is the core of the present argument.
Katharina Pistor shows in The Code of Capital that capital is not a natural condition. It comes into being when an asset is brought into a particular form through law. This codification confers four properties: priority, durability over time, universality and convertibility. The institutional architecture behind it is three-tiered: law firms construct claims; courts, regulatory authorities and central banks recognise them; state power enforces them. Without the third stage, capital does not exist — only a claim. The political consequence: inequality is not a by-product of markets. It is a property of the codification architecture. Priority rules determine who gets paid first; insolvency rules determine who bears losses — before any market transaction.
The second codification layer operates on the same principle but on different foundations. Institutions make reality legible by simplifying it. This simplification is never neutral: it privileges the perspective of the institution that undertakes it, and excludes what resists its category structure. James C. Scott demonstrated this for state administration — cadastres, forestry management, city plans as instruments that translate complexity into administrable formats while systematically eliminating what cannot be formalised. What does not appear in the simplification does not exist for the institution. Under certain conditions, this form of legibility destroys precisely the complexity on which the system’s stability depends.
What Scott describes for the administrative state is happening today through digital systems — at greater speed, greater reach and lower visibility. The objective function of a model is the structural equivalent of a priority rule: it determines whose interests the system serves first. A traffic model that optimises only journey times is structurally related to Scott’s spruce monoculture: short-term efficiency through elimination of the variables that secure long-term stability.
Between the two codification layers, however, there is an institutional asymmetry that constitutes the core of the problem. The first layer has an enforcement regime: courts, regulatory authorities, central banks. The second has none. Its enforcement runs through market power, technical dependency and procurement structures — private, and not recognisable as an enforcement regime.
The actors who design model architectures, data standards and objective functions exercise codification-equivalent power without being subject to the institutional constraints that apply to the first layer. No contestability. No systematic accountability. No public control over which variables exist, which targets apply, whose losses count as costs.
This is not a caveat to the analogy. It is the problem this paper addresses.
III. AI as Institutional Infrastructure
For a long time, digital systems in urban planning were based on deterministic models. Their assumptions were explicitly formulated and in principle locatable — rarely questioned, but findable. The analytically relevant difference between classical models and machine learning systems lies not in the method but in reconstructability: in classical models, assumptions are explicitly documented. In machine learning systems, they are statistically sedimented. This shifts the locus of decision from traceable rules to training processes that are difficult to reconstruct — regardless of whether the system is deterministic or probabilistic. What cannot be reconstructed from the system’s documentation does not exist for institutional purposes.
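The difference can be illustrated schematically. In the sketch below (all values invented), the classical model carries its central assumption as a named, documented constant, while a fitted model carries the same assumption only as learned coefficients whose justification lies in the training data rather than in any citable document.

```python
# A schematic contrast, not a real planning model. The assumption, the data
# and the numbers are invented for illustration.
import numpy as np

# Classical model: the assumption is explicit, named and citable.
PEAK_HOUR_SHARE = 0.10  # documented planning assumption: 10% of daily trips

def peak_demand_classical(daily_trips: float) -> float:
    return daily_trips * PEAK_HOUR_SHARE

# Learned model: the "same" assumption is statistically sedimented.
rng = np.random.default_rng(0)
daily = rng.uniform(1_000, 50_000, size=200)
observed_peak = 0.10 * daily + rng.normal(0, 300, size=200)  # hidden ground truth

slope, intercept = np.polyfit(daily, observed_peak, deg=1)

def peak_demand_learned(daily_trips: float) -> float:
    return slope * daily_trips + intercept
```

Here `slope` will approximate 0.10, but nothing in the fitted artefact states why, on which data, or under which conditions that value holds. In a deep learning system the effect is stronger by orders of magnitude: the assumption is distributed across millions of weights and cannot be pointed to at all.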
Model architecture is a political decision. Training data is a political decision. Evaluation criteria are political decisions. None of these decisions is currently institutionally constituted, publicly negotiated or systematically contestable.
Scott identifies four conditions under which legibility projects fail catastrophically: a powerful actor with enforcement capacity, a high-modernist ideology, a defenceless population and absent feedback mechanisms. All four map onto the deployment of AI in the built environment. The algorithmic system acquires de facto binding force as soon as it is embedded in a procurement chain: system recommendations become the basis of public procurement decisions without the system’s architecture being legally qualified or contestable as such. This is the mechanism through which a model recommendation becomes functionally a standard — without ever having been legitimised as one. The belief that data-driven systems are more objective than human judgement is the contemporary form of Scott’s high modernism. Those affected, who do not know the system’s architecture, cannot contest it. And where assumptions are not reconstructable, there is no mechanism by which errors can be corrected.
In Pistor’s terms: when AI recommendations feed into planning decisions, insurance assessments or capital allocation, these outputs are effectively codified. They acquire priority, durability and universality. Who codifies these properties — and under what law? If the answer is: the model operator, through terms of service and licence agreements, then the codification is governed by private law and escapes public control. Exactly the pattern Pistor describes for financial derivatives.
IV. Why the Pressure Will Come from Capital Markets
Institutions can manage risks only if decisions are reconstructable. Systems whose decision logic is not reconstructable destroy the foundations of modern liability and risk regimes. Reconstructability is not a transparency demand. It is a functional precondition.
The institutional asymmetry of the second codification layer — de facto binding force without a public enforcement regime — will not primarily be closed by regulation. The strongest real enforcement levers lie in the risk and liability logics of capital markets.
Sovereign wealth funds, pension funds, large investment funds and the institutions that insure these investments finance and insure projects at the scale of cities, regions and entire countries. To manage risks, they require the ability to establish who made which decision at which point — on the basis of which data, with which assumptions, and why. Without reconstruction, no attribution of responsibility. Without attribution of responsibility, no functioning liability regime.
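What such reconstruction minimally requires can be stated concretely. The record sketched below is hypothetical (no standard prescribes these fields), but any documentation that supports attribution of responsibility will have to carry roughly this information: who decided, at what time, on which system version, from which data, under which assumptions, and why.

```python
# A minimal sketch of a decision record sufficient for after-the-fact
# reconstruction. The field names are hypothetical, not an existing standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str
    decided_by: str              # accountable institution or officer, not "the model"
    decided_at: str              # ISO 8601 timestamp
    system_version: str          # exact model/software version consulted
    input_datasets: list[str]    # identifiers or hashes of the data actually used
    assumptions: dict[str, str]  # each assumption, with its stated source
    objective_function: str      # what the system was asked to optimise
    recommendation: str          # what the system proposed
    decision: str                # what the institution actually decided
    rationale: str               # why, including any deviation from the system

record = DecisionRecord(
    decision_id="plan-2031-074",
    decided_by="City Planning Authority, Unit B",
    decided_at="2031-03-14T10:22:00Z",
    system_version="traffic-sim 4.2.1",
    input_datasets=["census-2030#sha256:...", "sensor-feed-q4#sha256:..."],
    assumptions={"peak_hour_share": "0.10, per 2029 mobility survey"},
    objective_function="minimise emissions subject to walk_access >= 0.7",
    recommendation="tram_and_greenway",
    decision="tram_and_greenway",
    rationale="Recommendation accepted; emissions target binding under climate ordinance.",
)
```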
With the emergence of AI systems, a new category of risk arises. Not necessarily because the systems calculate incorrectly — but because it is no longer possible to explain after the fact how a result came about, which assumptions it rested on and who is responsible for those assumptions. For institutions investing or insuring billions, this is a structural exclusion criterion. Ignorance does not protect against liability. When a decision causes serious harm and nobody can explain how it came about, a legal and financial grey zone emerges: losses are real, responsibility is not attributable.
Systems that cannot answer this question are not decision tools. They are liability-displacement machines.
This finding can be made precise using Pistor’s framework. The codification of capital functions only if the third stage — state enforcement — remains operational. Enforcement requires that claims are identifiable, responsibilities attributable and decision chains traceable. When AI-supported decision bases undermine these conditions, they threaten not merely individual transactions but the functioning of the codification regime itself.
Mariana Mazzucato has shown — in The Entrepreneurial State and again in The Value of Everything — that public institutions regularly absorb the risky, long-term investments in innovation while the private sector extracts the returns. This pattern reproduces itself at the level of decision infrastructure: socialised infrastructure costs, privatised codification power. Mazzucato’s demand for conditionality translates directly: the public sector provides infrastructure, data and legitimacy, and therefore has the right to impose reconstructability conditions on the systems built upon it.
Mazzucato’s second diagnosis is equally operative here: when a digital planning system reports efficiency gains produced only through externalisation — ecological damage, social displacement, intergenerational losses — this is value extraction booked as value creation. The model cannot make this distinction visible if its categories do not contain it. Categories are therefore not merely conditions of visibility. They are conditions for whether a system can distinguish value creation from value extraction at all.
Capital markets possess both the incentive and the levers to enforce reconstructability — through financing conditions, insurance terms, reporting requirements and regulatory standards. The institutional asymmetry of the second codification layer will not be resolved through ethical demands. It will be addressed through withdrawal of financing and refusal of insurance.
V. The Requirement
The institutional gap is clear. The first codification layer has an enforcement regime. The second has none. Closing this gap requires no new technical system. It requires conditions of institutional admissibility, applied at the level of recognition: courts, regulatory authorities and insurers must accept that algorithmic decision bases hold the same codification status as legal claims — and are subject to the same requirements of transparency, contestability and accountability. Without this recognition, the second codification layer remains in a legal void: formally private, functionally public, institutionally uncontrolled.
From this follows a concrete requirement: decisions must be documented so that they are reconstructable after the fact — regardless of which system prepared or made them. This is not a technical specification. It is a requirement for institutional governance. It must be imposed before procurement, not after the damage occurs.
Objective functions must be explicitly negotiated and publicly anchored before procurement. What is not in the objective function will not be optimised; what is not measured will not be protected. Ecological systems, social cohesion, intergenerational resilience do not appear in most models because they are difficult to quantify — not because they are irrelevant. Their absence is a codification deficiency with the same structural effect as a priority rule. And without reconstructability, no institutional memory; without institutional memory, no improvement — only repetition.
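What "explicitly negotiated and publicly anchored" could mean operationally: a declarative, versioned objective specification, adopted before procurement and auditable afterwards. The sketch below is hypothetical (the categories, weights and validation rule are invented), but it shows the structural move: protected categories and their weights become explicit, machine-checkable conditions of admissibility rather than vendor defaults.

```python
# A sketch of a publicly anchored objective specification. Categories,
# weights and the validation rule are invented for illustration.

OBJECTIVE_SPEC = {
    "version": "2031-01",
    "adopted_by": "council resolution",  # public anchoring, not a vendor default
    "terms": {
        "journey_time":       {"weight": 0.25, "direction": "minimise"},
        "emissions":          {"weight": 0.30, "direction": "minimise"},
        "walk_access":        {"weight": 0.20, "direction": "maximise"},
        "biodiversity_index": {"weight": 0.15, "direction": "maximise"},
        "displacement_risk":  {"weight": 0.10, "direction": "minimise"},
    },
}

REQUIRED_CATEGORIES = {"emissions", "biodiversity_index", "displacement_risk"}

def validate(spec: dict) -> None:
    """Reject any objective that omits a protected category or hides a weight."""
    terms = spec["terms"]
    missing = REQUIRED_CATEGORIES - terms.keys()
    if missing:
        raise ValueError(f"objective omits protected categories: {missing}")
    total = sum(t["weight"] for t in terms.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"weights must be explicit and sum to 1, got {total}")

validate(OBJECTIVE_SPEC)  # a system whose objective fails this check is
                          # inadmissible before procurement, not after the damage
```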
Pistor explains how law privileges certain claims. Scott explains what happens when codification architectures operate without feedback. Mazzucato explains how the boundary between value creation and value extraction is blurred when the categories that make the distinction visible are absent. All three describe the same mechanism at different levels: a formal structure defines what counts — and excludes what does not.
The problem is not only absent transparency. The problem is unmandated reality definition.
Whoever designs the objective functions, data structures and categories of digital systems codes the reality on the basis of which institutions act. This codification exercises real governance power without being subject to an equivalent regime of public control, contestability and accountability. The answer to the questions — under what institutional control, with what possibility of contestation, with what feedback mechanism — is currently identical: none, none, none.
As long as these conditions are absent, the deployment of AI in public responsibility is unmandated reality definition.


