The Y2K Bug, the Preparedness Paradox, and the Existence of Multiple Truths
Back in the nineties, I was told computers would end the world. Not because they would become superintelligent like in the movies, but because programmers had been frugal (or lazy) and had written years with only two digits. You know how we write dates with only the last two digits of the year (01/14/25)? Well, computers were programmed this way too. That meant that when we moved from year 99 to year 00, havoc would ensue because differences between years would come out as invalid negative numbers. Newborns would be classified as centenarians, banks would lose all records of their holdings, planes would fall from the sky, nuclear missiles would launch themselves…
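To see the failure in miniature, here's a minimal Python sketch of the broken arithmetic. The real systems were written in COBOL, and the function names and the 19xx assumption below are my own hypothetical illustration, not code from any actual system.

```python
# A hypothetical sketch of the two-digit-year failure, not code from any
# real system. Legacy programs stored only the last two digits of the year.

def years_elapsed(birth_yy: int, current_yy: int) -> int:
    # Subtract the two-digit fields directly, as many old systems did.
    return current_yy - birth_yy

print(years_elapsed(98, 99))  # 1: in 1999, someone born in '98 is one year old
print(years_elapsed(98, 0))   # -98: after the rollover, an invalid negative age

def expand_year(yy: int) -> int:
    # Other systems assumed every two-digit year meant 19xx.
    return 1900 + yy

# A newborn from 2000 (stored as "00") gets birth year 1900, so in the
# year 2000 the system classifies the baby as a centenarian.
print(2000 - expand_year(0))  # 100
```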
To prevent this impending disaster, IT departments around the world united to patch their code bases. People received hefty bonuses if they could read and write the COBOL that ran on our ancient mainframes. Hundreds of billions of dollars were spent to fix the annoying problem that years should have been represented by four digits, not two.
When the clock struck midnight on December 31, 1999, the Judgment Day predicted by Prince did not arrive. Some things definitely broke, and Wikipedia has a fun list of some of the systems that crashed. For instance, some ticket dispensers on public transit stopped working. But it was not the end of the world as we knew it.
Why did the doomsday predictions not pan out? This is a fascinating question of causal inference. It could be that the concerted efforts of IT workers programmed our way out of the apocalypse. It could be that the predicted consequences of data overflows were overblown, and most of the Y2K bugs were relatively minor threats that could be fixed upon failure. How could we know the difference? Historians spend their time assembling strong evidentiary bases to infer why the things that happened happened. But how do we assemble evidence to argue about the cause of something that didn’t happen?
One technique might be to look to “synthetic controls,” places that faced similar circumstances but took different actions. As I read retrospectives on Y2K, I was surprised to learn that some countries decided to do nothing! Not only did Italy and Russia do nothing, but the US State Department issued an advisory not to visit these countries because of potential Y2K catastrophes. Of course, nothing catastrophic happened in either country.
Now, this isn’t evidence that the prevention efforts in the US weren’t important! These countries are not perfect comparisons. However, it does suggest that those who raised maximal Y2K panic were way off. And the reporting at the time tended to amplify the doomsayers and downplay any reasonable prognoses. I mean, this book is real:
Yet plenty of people at the time tried to quell the Y2K panic.
Computer science professor Ross Anderson was initially so afraid that in 1996 he bought a cabin in the woods to ride out the millennium. He had calmed down considerably by 1999. After assessing and fixing the bugs at Cambridge University, he came away with a sanguine prediction in December 1999:
“The surprising discovery is that although some systems have been fixed or replaced, none of the bugs found so far would have had a serious impact on our business operations even if they had been ignored until January 2000.
“So the UK government anxiety about the millennium bug compliance of small-to-medium sized enterprises appears misplaced. The average small business owner, who is doing absolutely nothing about the bug, appears to be acting quite rationally. This does not mean that the millennium bug is harmless.
“There are still risks to the economy and to public safety, but in the UK at least they appear to be minor.”
Friend-of-the-blog John Quiggin, who has been blogging since before blogs were a thing, also bet that the Y2K bug was overhyped. You can read his prescient predictions from September 1999 here. Notably, John pointed out that, by 1999, about half of the catastrophic failures should have already happened! This observation guided his prediction:
“We can use the Gartner Group's analysis to conclude that date-related problems in the year 2000 will be about twice as severe as those in 1999. Since twice zero is still zero, it looks as if it is time to start running down those emergency stocks of baked beans.”
Finally, as a last piece of Y2K-minimizing evidence, let’s be honest with ourselves about programming computers. We all know that for every bug we patch, we create two more. The idea that our global effort fixed all the bugs without introducing new glitches is unreasonable. In fact, it wasn’t the case. Buggy Y2K patches took out several US spy satellites for several days: “It was the only serious military consequence of the year 2000 rollover, for which the Pentagon prepared at a cost of $3.6 billion.” That the massive patching effort didn’t cause more failures than it did lends credence to the position that the Y2K bug was not world-ending.
I’ve said it a couple of times already, and I’ll say it one more time to fend off grumpy commenters: Don’t take this presentation of evidence as an argument that Y2K wasn’t a real issue. I’m not taking sides here. I’m instead interested in how to best assemble historical evidence to assess why catastrophe didn’t occur in the year 2000. How can we think about retrospective evidence of non-catastrophes? On the 25th anniversary of the end of the world as we know it, John Quiggin wrote a retrospective arguing that we’ll never know. Or, as John puts it, the Y2K experience was a “Rashomon phenomenon.” Many realities are simultaneously true:
“For programmers in large organisations, the experience of Y2k remediation was that of a huge effort that fixed plenty of bugs and was 100% effective. For small businesses and schools, another thing to worry about, but too hard to solve, leading to “fix on failure” when problems emerged. For Pollyannas like me who correctly predicted few critical problems anywhere, vindication, but no credit. For most people, another panic and false alarm.”
All of these are simultaneously true. This is the crux of “the preparedness paradox.” In prevention, whether of natural disasters or of individual illness, your hard work can simultaneously pay off and not have been worth it. It can be simultaneously true that you overreacted and that a reaction was essential. And it can be true that not all prevention for the sake of prevention is good. When regulation impacts people’s lives, “preparedness” can be a huge ask. We can’t govern by moral panic, yet we need guardrails against potential disasters. How we balance these demands makes governance daunting, especially because citizens can believe completely contradictory things about the state of affairs and still all be correct.
By Ben Recht