A computational paradox of the postmodern condition.
If we want to make decisions in a complex society, we need a shared language. Experts on the ground must summarize complex situations for decision makers decoupled from the field. They need to make their experiences legible to those they report to.
The easiest way to make situations legible is to quantify them: to count things, record figures in tables, compute statistics, and make charts. Quantification sorts complexity into simple bins, streamlining communication both up and down the chain.
When we speak in such quantified numerical summaries, our statements feel objective. We believe that appropriate quantification isn’t subject to the whims and opinions of an individual field worker. Once we agree upon standards, quantified measurements become scientific facts.
Once we have objectivity, we have authority. Making decisions based on objective facts is obviously in everyone’s best interest, and we impose threats of chastisement, ostracism, or violence upon those irrational individuals who disagree.
And once these numerical summaries that we made out of whole cloth to simplify communication become authoritative, they become real. They become things we should strive to maximize.
This is the quantification trap.
The quantification trap is social-scientific canon. You could build this story entirely out of texts written before the year 2000. The role of quantification, measurement, and legibility in statecraft is laid out in James C. Scott’s Seeing Like a State (1998) and Alain Desrosières’s The Politics of Large Numbers (1993). Theodore Porter’s Trust in Numbers (1995) highlights the turn to quantification in pursuit of standardization and objectivity. The blind optimization of decontextualized metrics is core to Jean-François Lyotard’s characterization of The Postmodern Condition (1979).
Twenty-five years into the twenty-first century, I don’t think you should have to run a Science and Technology Studies sidequest to recognize the quantification trap. It’s obvious and almost trite when we say it out loud. It’s trendy to talk about how metrics and benchmarks are bad and to prattle on endlessly about Goodhart’s, Campbell’s, or Murphy’s Laws. And yet, we continue to organize ourselves around statistical summaries. Is the quantification trap an inevitable part of scale? Is it an inevitable part of efficiency? Is it an inevitable part of the dismal hierarchy of bureaucratic power? The great puzzle of our contemporary condition is why it’s so hard to escape.
Part of the puzzle is that making society computable pairs dramatic benefits with every cost. The constant tension in mathematical rationality lies in the interplay between its sweet spots and its limitations. The quantification trap creates an intersubjectivity for collective action. Mathematically rational governance lets systems and hierarchies see, but also makes it easy for them to maintain control. It facilitates posing clear questions and objectives, though it crowds out nuance and multiplicity. It creates shared understanding through standardization but removes the discretion of experts. It lets us speak about maximizing the average welfare of populations, but it erases individuals.
If there are such clear trade-offs with quantification, why do we always tend to side with “the data”? The acceleration of computation has made the quantification trap exponentially more contagious. As computers became ubiquitous, the quantification process became inevitable and invisible. We don’t think about how we are tethered to unfathomable computing machines. They’re just part of who we are now. Our devices measure us all the time, recording time-on-site and click-throughs. Everything has a like button. All of these measurements are churned through by data scientists hoping to hit their personal promotion metrics, regardless of whether the instrumentation means anything. The quantification trap is built out of an invisible fabric of computation.
I feel like I imply this throughout the book but never say it outright in The Irrational Decision. The book articulates the role of mathematical computation, optimization, and statistics as scaffolding in the elaborate quantification trap. To understand why we optimize what we optimize, it helps to look at the history of how computational methods and language box us in. The path from legibility to authority goes straight through computation and computerization. Quantification transforms experience into machine-readable data and a small number of interventions and outcomes. Decisions can only be automated once we throw away the messy, uncomputable parts. We maximize averages because averaging is a convenient way to model uncertainty.
Now, I am far from the only person to talk about the quantification trap. I wrote about it today because I felt I needed this placeholder after the last few weeks of talking about my book. If you want a reading list from the past 25 years, I could write an impossibly long bibliography. Even in the past year, crossover books like Fourcade and Healy’s The Ordinal Society and Nguyen’s The Score have articulated the same conundrum.
It’s good that more people are talking about this. What we count, compute, and optimize is a political decision. Counting flattens complexity, and the choice of what is left is a question of power. The virality of the quantification trap forecloses better futures. We can’t strive for them if we can’t see the gilded cage we’re in.