One of the most versatile problem-solving tools in physics is breaking down systems into simple, isolated parts. Physicists formulate our understanding of natural laws in terms of the simplest possible parts – a single electron, a single wave, a single reaction – because simple systems are easy and elegant to describe. When we need to describe more complex systems, like a solar system, a hydrogen atom, or a flurry of interfering waves, we find ways to combine these simple laws and apply them to those more complex situations.
But there are limits to how far we can take this approach of combining simple laws to understand complex systems. We can’t, for example, understand the process of water boiling by carefully examining the interactions between the electrons and photons involved – there are too many free variables and too many states to keep track of. Instead, physicists apply a different set of laws better suited to systems with massive numbers of individual pieces behaving similarly. Thermodynamics is the canonical example of this kind of physics, called statistical mechanics. In principle, we could understand the phenomenon of temperature using our laws about simple independent parts (how photons, electrons, and atoms interact with each other), but in practice we use a different set of laws, the laws of statistical thermodynamics, that tell us how these much larger systems, or ensembles, behave as a whole. When we use these statistical laws, we don’t need to keep track of the behavior of every single component, because laws about ensembles work at a different scale – they deal with “temperature” and “volume”, not “electron energy levels” and “photon quanta”.
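To make the ensemble idea concrete, here’s a toy sketch – not real physics, just an illustration. Assume each “particle” gets a random kinetic energy drawn from an exponential distribution (a rough stand-in for a thermal distribution), and treat “temperature” as simply proportional to the mean energy. The point is that the ensemble quantity is a statistic over all the particles; no individual particle’s state matters.

```python
import random

# Toy model: N "particles" with random kinetic energies.
# Tracking each particle individually is hopeless at real scales
# (~10^23 particles), but the ensemble quantity -- "temperature" --
# is just a simple statistic computed over all of them.

N = 100_000

# Each particle's energy is drawn from an exponential distribution
# with mean 1.0 (an assumed stand-in for a thermal distribution).
energies = [random.expovariate(1.0) for _ in range(N)]

# In this toy model, "temperature" is proportional to the mean energy.
temperature = sum(energies) / N

print(f"mean energy (proxy for temperature): {temperature:.3f}")
```

However the individual energies jump around, the mean lands very close to 1.0 every run – which is exactly why the ensemble-level description is stable enough to have its own laws.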
These two ways of making sense of nature – classical mechanics and statistical mechanics – are related. They don’t contradict each other; in fact, one is just a large-number approximation of the other. But when we study simple systems, like a ball dropping to the ground, statistical mechanics isn’t useful, and when we study complex systems, like a pile of sand, laws about how individual grains of sand behave aren’t useful. Although they describe the same thing deep down, we have a pair of laws that we use to understand systems at either end of the complexity spectrum.
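The “large-number approximation” claim can be sketched with coin flips. A single flip (the law of the individual) is completely unpredictable, yet the fraction of heads across many flips (the law of the masses) is extremely predictable – the statistical law emerges from the individual one as the numbers grow.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def flip() -> bool:
    # Law of the individual: one fair coin flip, totally unpredictable.
    return random.random() < 0.5

# Law of the masses: as n grows, the fraction of heads hugs 0.5
# ever more tightly -- a large-number approximation of the coin.
for n in (10, 1_000, 100_000):
    heads = sum(flip() for _ in range(n))
    print(f"{n:>7} flips -> fraction of heads = {heads / n:.3f}")
```

At n = 10 the fraction can land almost anywhere; at n = 100,000 it barely moves off 0.5. Neither description is wrong – they just operate at different scales.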
When we look around, we can find this two-sided pair of laws everywhere we study complex systems. In computing, we have ways of understanding how programs behave in isolation, like type theory and operating systems research, and a separate discipline studying how large masses of computers behave at scale, like distributed systems. There are branches of psychology and economics that study how individual people or agents behave in isolated situations, and different branches that study how groups of people make decisions together, or how a massive economy of billions of such agents responds en masse to changes. In almost every discipline, there are the laws of individuals, describing the simplest atom of something, and the laws of masses, describing how to make sense of the complexity that emerges when millions and billions of these atoms combine beyond individual understanding.
Some people call these two levels of description levels of “abstraction”, and the ability to deeply understand something both atomically and at scale is frequently cited as a trait of good startup founders. They understand the experience each individual customer has with their product, but they also understand how their business, which serves thousands or millions of such customers, needs to improve.
It’s worth noticing the reason that we humans resort to this two-pronged approach in the first place. If we had infinite intelligence and computing capability, we wouldn’t need to resort to statistical, approximate laws about masses of things because we could infer everything by studying each individual component in a massive system. But given the limitations of our feeble meat-based computers in our heads and the relatively small scale of their digital counterparts, we need these large-number approximations, too.
When we try to understand how anything works, I think it’s useful to recognize this two-pronged nature of understanding, and ask ourselves whether we are studying the laws of individuals of some phenomenon, or the laws of masses. You can’t build a Google just by understanding how a single computer works, but it’s just as impossible when you only understand large distributed systems. We need a balance of both.