Building a theory of the architecture of organizing machines and people.
This is a live blog of Lecture 10 of my graduate seminar “Feedback, Learning, and Adaptation.” A table of contents is here.
Though the “theory” of computer science is most associated with algorithms and complexity, by far the most impactful theories all stem from architecture. Computer architecture, software architecture, network architecture. Architectural theory in computer science is seldom packaged in clean theorems, but there are implicit and explicit design principles that recur across dozens of abstraction layers.
Computing hardware, software, and network design all share key architectural concepts, but our courses don’t often cleanly connect the architectural dots across the application domains. In computer science, all of these different theories of architecture focus on designing hierarchical systems to support diversity and robustness. They all use similar building blocks, namely abstraction boundaries, layered hierarchies, and protocols for cross-layer communication. These protocols are all constraints that deconstrain.
In class, I walked through a few examples, though I had to gloss over nearly all of the details. The result was my most Santa Fe Institute slide deck ever, an endless scroll of ugly graphs of networks. I started with the internet, which has the clearest declarative design of all the architectures. Here’s a glimpse from Berkeley’s CS 168:
The internet enables diverse applications to run on diverse networks. It does so by enforcing a layered stack of protocols. All of these protocols flow through the “narrow waist” of the Internet Protocol (IP), the jewel among “constraints that deconstrain.” Since every packet has to flow through a single protocol, you can have incredibly diverse physical networking below and incredibly diverse applications on top. The protocols fan out above and below IP to support these diverse goals. Notably, the transport layer supports TCP, which lets applications know whether their packets arrived, and UDP, which doesn’t. The internet achieves robustness by fixing a strict set of protocols while pushing all of the processing and thinking about those protocols to the edge of the network.
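To make the “narrow waist” concrete, here is a toy sketch of encapsulation: two different transports, a TCP-like one and a UDP-like one, both get wrapped inside the same IP-style header. The header layouts are drastically simplified inventions for illustration, not the real RFC formats; only the protocol numbers (6 for TCP, 17 for UDP) are the actual IP values.

```python
import struct

def udp_segment(src_port: int, dst_port: int, payload: bytes) -> bytes:
    # Simplified UDP-style header: source port, destination port, length.
    # No acknowledgments, so the receiver can't tell if packets arrived.
    return struct.pack("!HHH", src_port, dst_port, len(payload)) + payload

def tcp_segment(src_port: int, dst_port: int, seq: int, payload: bytes) -> bytes:
    # Simplified TCP-style header: ports plus a sequence number,
    # the hook that lets applications learn whether packets arrived.
    return struct.pack("!HHI", src_port, dst_port, seq) + payload

PROTO = {"tcp": 6, "udp": 17}  # real IP protocol numbers

def ip_packet(proto: str, segment: bytes) -> bytes:
    # The narrow waist: every transport funnels through one IP-style
    # header (protocol number, length), so the layers below only ever
    # need to understand this single format.
    return struct.pack("!BH", PROTO[proto], len(segment)) + segment

msg = b"hello"
pkt_udp = ip_packet("udp", udp_segment(5000, 53, msg))
pkt_tcp = ip_packet("tcp", tcp_segment(5000, 80, 1, msg))

# A link layer can route either packet by reading only the IP header,
# with no knowledge of the transport or application above it.
proto_num, length = struct.unpack("!BH", pkt_udp[:3])
assert proto_num == 17 and length == len(msg) + 6
```

The point of the sketch is that diversity lives above and below the waist: you can swap transports or physical networks freely, as long as everything crosses the one shared interface.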
I also briefly discussed software, operating systems, and hardware architectures in computer science. These systems are physically more localized and have different design constraints. Their main goal is to enable local physical scale so that computers can support fast, general-purpose software. As computers became faster, more complex, and more reliable, their design became more layered and hierarchical. Here’s an image of the timeline Alberto Sangiovanni-Vincentelli shared with me:
Rather than designing a computer chip from individual transistors, design cycles accelerate by letting engineers work at higher and higher levels of abstraction. Layered design simplifies these choices. Alberto and Edward Lee like to echo DEVO: “Freedom from choice is what you want!” By establishing clean abstraction layers, engineers can innovate at each layer without worrying about what happens above and below.
Now, this is the point in the lecture where I split with John Doyle. John likes to use layered architectures to understand biology. Yes, you can look at biology and see architecture. Indeed, the “constraints that deconstrain” terminology was coined by systems biologists. Marc Kirschner and John Gerhart introduced the notion to describe how conserved common platforms in biology facilitate agile evolution into diverse phenotypes and species. Because the platform is conserved, rapid evolutionary changes become possible that wouldn’t be predicted by simple, uniformly random variation.
However, I always find that people project technology onto biology to organize and understand biological function. In the 1600s, the body was a bunch of clocks. In the 1800s, it was an engine. Now we think of it as a computer. I’m not saying these projections of technology onto biology aren’t useful, but I don’t think that we necessarily learn more about technology from seeking common patterns in biology. Indeed, I’d rather look at recurring patterns in artificial structures to identify commonalities and general principles.
So instead of looking to biology, let’s look to management. Because, man, every computer architecture diagram looks like an industrial org chart. This is not accidental. They serve similar functions. Computing and the mega-organization grew symbiotically in the post-war period, and building complex computing infrastructure required complex organizations of people. Some individuals certainly made brilliant, important advances at isolated nodes of these networks. However, the genius of layered architectures is that they admit a diversity of narrow innovations at every layer that locally grow the architectural ruleset without disrupting what everyone else is doing. In organizations, we have specific reporting and evaluation protocols, rules for bonuses and promotions, and schemes for supporting diverse business goals. The organizational architecture serves functions similar to those of computer architecture.
In “Toward a Theory of Control Architecture,” which I’ll discuss more in the next post, Nik Matni, Aaron Ames, and John Doyle set the stage with the Apollo project’s architecture, which bears a striking resemblance to today’s standard robotic architectures.
You have low-level controllers at the bottom layer, a synthesis of sensors and trajectory optimization in the middle, and a high-level planner at the top. Part of this is because the abstraction makes it easier to locally reason about and mitigate the complexity of launching people to a cold, barren, airless moon. However, such complexity also required massive teams of people. Here’s a small part of the organization of the Apollo Spacecraft Project Office.
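The three-layer stack can be caricatured in a few lines of code. This is a minimal sketch for a 1-D point mass, with made-up function names and gains; it is not the Apollo architecture or anything from the Matni, Ames, and Doyle paper, just an illustration of how narrow the interfaces between layers can be.

```python
def planner(goal: float) -> float:
    # Top layer: decide where to go. Here it simply passes the goal through.
    return goal

def trajectory(start: float, goal: float, steps: int) -> list:
    # Middle layer: turn the plan into a reference trajectory (a linear ramp).
    return [start + (goal - start) * t / steps for t in range(1, steps + 1)]

def low_level_step(x: float, ref: float, gain: float = 0.5) -> float:
    # Bottom layer: proportional feedback toward the current reference point.
    return x + gain * (ref - x)

x = 0.0
goal = planner(10.0)
for ref in trajectory(x, goal, steps=50):
    x = low_level_step(x, ref)

# With a gain of 0.5 and a ramp of 0.2 per step, the tracking error
# settles to about 0.2, so the mass ends up close to the goal.
assert abs(x - goal) < 0.5
```

Each layer talks to its neighbors through a narrow interface, a goal, a reference trajectory, a feedback correction, so you can swap out the planner or retune the low-level controller without touching the other layers.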
A theory of architecture can’t neglect a theory of human organization. Both artificial structures work together to create the complex infrastructure underneath our contemporary condition.
By Ben Recht