Blog Feed

Why does time move forward?

Last week, we talked about how humans measure and experience time. We determined that, although time is ultimately defined by the cyclical events we use to mark its passage, it has an undeniable forward quality as we move through our lives. We called this quality the “arrow of time,” and rooted it in a simple observation: of the large set of events that are technically allowed under the laws of physics, only certain events ever seem to occur.

Today, I’d like to discuss why time’s arrow only moves forward – why some things happen and others don’t. Why, for example, does a drop of food coloring always spread out to mix with a glass of clear water? Why doesn’t it ever condense back into a single drop? The answer, as I alluded to last week, boils down to a word you’ve probably come across before: entropy.

To understand entropy, we need to leave the human realm and take a trip to the quantum realm. Recall that what distinguishes this realm is its small size scale, on the order of individual atoms and molecules. Ultimately, the keys to understanding entropy, and indeed time’s arrow itself, lie here at the bottom. Before we talk about entropy, though, let’s define two useful terms that come into play in this realm: microstates and macrostates.

What are microstates and macrostates?

Imagine you and your friend are playing a game* that she invented. “It’s called Penny Encoder,” she tells you, “and it goes like this:”

I’m going to leave the room. Take these pennies and place them heads up or tails up in a row on the table. Take a picture of your arrangement and then put the pennies away. Then, using as few characters as possible, write down a description of your arrangement, so that when I come back in, I can reconstruct the row of pennies just as you had it. We both win if my arrangement is equivalent to your picture. 

Your friend leaves, and you lay out a row of five pennies in the following arrangement:

You take the above picture, put the pennies back in their special box, and think about how to encode your configuration. After a minute, and thinking yourself to be quite clever, you write “HTTHH” on a piece of paper. Of course, when your friend comes back, she is easily able to replicate the pattern of pennies, and you both win the game. 

Now it’s your turn to leave the room. When you return, your friend hands you a piece of paper that just reads “H1.” Somewhat perplexed, you make the following guess about how the pennies should be ordered:

Your friend says, “correct!” and shows you the picture of her original configuration:

“Hang on,” you say, “That’s not the same as what I guessed.”

“Sure it is,” she says. “My arrangement has one heads and four tails, and so does yours. They’re equivalent. And I was able to describe it in two characters, where you needed five.”

You retort, “Sure, but I thought the point of the game was to correctly reconstruct the position and orientation of each individual penny!”

Who’s correct here? Well, the real problem is that the rules of Penny Encoder do not specify whether the microstate or the macrostate needs to be reconstructed to constitute a correct answer. Okay, let’s deal with those terms in detail.

A microstate is what you described with “HTTHH.” It completely encodes the state (heads or tails) of every penny in the group. A macrostate, on the other hand, is what your friend described with “H1.” It describes the pennies as a single system (“one is heads-up; the rest are not”) without regard for the state of any individual coin. Every possible configuration of the pennies is associated with both a microstate and a macrostate, but you’ll notice right away that the relationship between them isn’t one to one. What do I mean by that?

Well, clearly every microstate has exactly one associated macrostate. For example, “HTTHH” could be rewritten as “H3,” but it could not be rewritten as “H2,” because the number of heads-up coins in “HTTHH” is 3, not 2. However, as we’ve seen, a macrostate like “H1” maps to multiple possible microstates – five, in fact: “HTTTT,” “THTTT,” “TTHTT,” “TTTHT,” and “TTTTH.” 
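This mapping is easy to verify by brute force. Here’s a minimal Python sketch (the names are mine, not from the post) that enumerates every five-coin microstate and groups them by macrostate:

```python
from itertools import product

# A microstate is a string like "HTTHH" recording every coin.
# Its macrostate is "H<n>", where n is the number of heads-up coins.
def macrostate(micro):
    return f"H{micro.count('H')}"

# Enumerate all 2**5 = 32 microstates of five coins and group by macrostate.
microstates_by_macro = {}
for coins in product("HT", repeat=5):
    micro = "".join(coins)
    microstates_by_macro.setdefault(macrostate(micro), []).append(micro)

print(macrostate("HTTHH"))                 # H3
print(sorted(microstates_by_macro["H1"]))  # ['HTTTT', 'THTTT', 'TTHTT', 'TTTHT', 'TTTTH']
```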

Some macrostates have more associated microstates than others. For example, “H5” has only one:

(If you’re snarky, you might argue that we could swap around the order of these pennies, and that would constitute a new microstate. My answer to that is that we’re assuming that each penny is indistinguishable from any other and so it is completely described by its heads/tails orientation and its relative position in the line. So swapping any two heads-up pennies does not change the microstate, but swapping a heads-up penny with a tails-up one does. This argument will be more convincing later when we talk about molecules.)

What is entropy?

We’ve established that every microstate of a system corresponds to exactly one macrostate, and every macrostate corresponds to at least one microstate. Well, entropy is a characteristic of a macrostate. Essentially, entropy is the number of microstates that a given macrostate could map to**. Let’s return to Penny Encoder and consider all of the game’s macrostates. Since each coin has only two possible states, this is easy; the macrostates are simply H0 through H5. What is the entropy of each macrostate?
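Counting the microstates behind each macrostate is pure combinatorics: “Hn” has “5 choose n” of them. A quick Python sketch generates the whole table:

```python
from math import comb

# Macrostate "Hn" of five coins has C(5, n) microstates: the number of
# ways to choose which n of the 5 positions are heads-up.
for n in range(6):
    print(f"H{n}: {comb(5, n)} microstate(s)")
# H0: 1
# H1: 5
# H2: 10
# H3: 10
# H4: 5
# H5: 1
```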

That’s it! That’s really all entropy means. You’ll no doubt notice the pattern in the above table: at least in this example, macrostates “in the middle” –that is, those in which the difference between the number of heads-up and tails-up coins is small– have higher entropy. For larger systems, these counts trace out an (extremely pointy) bell curve peaking sharply at those middle states.

In fact, this peak is important to our intuitive understanding. Entropy is maximized when a system is more “mixed.” Put another way, in a macrostate with high entropy, it is difficult to predict the status of any constituent part of the system. With “H4,” for example, we can guess with 80% accuracy that a given coin is heads up, because ⅘ of them are. With “H3,” that accuracy drops to 60%. Clearly, the hardest predictions would be in a larger system evenly split between heads-up and tails-up coins, where we could not predict the state of individual coins any better than by random guessing.
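Those accuracy figures come straight from the coin counts: the best strategy is always to guess whichever orientation is in the majority. A small sketch (the helper name is my own):

```python
# For a macrostate with h heads-up coins out of n, guessing the majority
# orientation is correct with probability max(h, n - h) / n.
def guess_accuracy(h, n=5):
    return max(h, n - h) / n

print(guess_accuracy(4))           # 0.8  (the "H4" case)
print(guess_accuracy(3))           # 0.6  (the "H3" case)
print(guess_accuracy(50, n=100))   # 0.5: an even split is pure chance
```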

Why does entropy matter? Let’s think back to the game we played last week with the projectionist, where we tried to guess whether we were seeing films playing normally or in reverse. Suppose we watch a video of a single coin being flipped, except we don’t see the mechanism that actually flips the coin. We just see the string of outcomes: some pattern of tails, tails, heads, tails, heads, tails… We would have no way of determining the time-direction of this movie! The random sequence of a single coin’s orientations makes no irreversible progress.

Suppose, now, that we watch a second movie, just like the previous one, except there are 100 coins in a 10 by 10 grid. They have no discernible pattern of orientation; heads- and tails-up coins are scattered throughout the grid. Several times a second, a random coin in the grid is flipped. After a few minutes, we see that all of the coins are tails up and then the movie stops. Of course, we cannot know with absolute certainty whether this movie is playing forward or backward. But common sense tells us that we’re far more likely to see the coins start in a state of 100T and move to a more mixed macrostate than the other way around.

Why is that? Essentially, since the coins are flipped at random, over a sufficiently long time period every microstate is equally likely. But the macrostate 100T only has one microstate associated with it, whereas a macrostate like 50T has roughly 10^29 (a hundred billion billion billion)! In all likelihood, enough random changes in the system will yield a many-microstate (or high-entropy) macrostate.
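The lopsidedness of those two microstate counts is easy to quantify with Python’s math.comb:

```python
from math import comb

# Microstate counts for two macrostates of 100 coins:
print(comb(100, 0))    # 1: the all-tails macrostate (100T) has one microstate
print(comb(100, 50))   # ~1.01e29 microstates for a 50/50 split
```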

Put another way, the entropy of an isolated system over some significant time period will never decrease. Physicists call this the second law of thermodynamics, and I will call it 2LTD for short. Notice that this does not preclude random fluctuations in a system’s microstate that happen to decrease its entropy temporarily. Such events are certainly allowed, but they will only be noticeable in extremely small systems. 

This is why we had to visit the quantum realm to have this discussion: time essentially has no direction at this size scale. From the perspective of a single atom or molecule, every event is a random fluctuation with no detectable direction. Zoom out any further, though, and the universe can be described as a single system with untold microstates slowly converging toward the most probable macrostate. Even in the human realm, we do not observe such small changes; all we see is the steady increase of entropy.

How is entropy actually related to the arrow of time?

Hang on, you might be thinking. We just made a pretty huge leap from a statement about a weird coin movie to one about the entire universe. Surely the configuration of, say, a group of molecules is more complicated than the binary heads-or-tails states of pennies. That’s true, so at this point, it’s worth stepping back into the human realm and asking what entropy has to do with the conclusion we drew last week, that there are certain events we know to be physically possible but never observe.

Think about some of the clearest examples of time’s arrow in the human realm. Many of them involve some kind of diffusion, or spreading out, of either energy (in the form of heat) or matter. Butter melts in a pan, dye spreads out in water, smoke from a bonfire mixes with the surrounding air. We can think of all of these diffusions as a transition to a macrostate with greater entropy. 

Take the butter in the pan, for example***. Initially, the butter molecules are colder (bluer) than the pan’s (hotter, redder) molecules, rendered here in beautiful 2-D Art™:

The molecules exchange heat via random collisions with one another – let’s say that each molecule only collides with the one directly across from it. Because the total amount of energy must be conserved, every collision between two molecules makes one warmer and the other colder. Thus, it is technically possible for the butter to give heat to the pan and become colder…

…but that would be a more “polarized” macrostate, with fewer possible configurations (because we are approaching the macrostate where all of the butter is “maximally” cold, and the pan “maximally” hot, which only has one microstate). It is much more likely, then, that the pan’s hot molecules will grant heat to the butter, cooling the pan slightly and warming the butter, causing it to melt. 

When the two objects are the same temperature (equilibrium), they have reached a system-wide state of maximum entropy. Put another way, as we said above: given the temperature of a random molecule from the whole system, it gets more and more difficult to guess whether it is a pan molecule or a butter molecule.
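This slide toward equilibrium is easy to watch in a toy Monte Carlo simulation (entirely my own construction, not from the post): fifty hot “pan” molecules and fifty cold “butter” molecules exchange a fraction of their energy difference in random collisions.

```python
import random

random.seed(0)

# Toy model: each list holds the energies of 50 molecules. The pan starts
# hot, the butter cold. Each step, a random pan molecule and a random
# butter molecule collide and exchange 10% of their energy difference.
pan = [10.0] * 50
butter = [1.0] * 50

def step():
    i, j = random.randrange(50), random.randrange(50)
    transfer = 0.1 * (pan[i] - butter[j])  # flows hot-to-cold on average
    pan[i] -= transfer
    butter[j] += transfer

gap_before = sum(pan) / 50 - sum(butter) / 50
for _ in range(20000):
    step()
gap_after = sum(pan) / 50 - sum(butter) / 50

print(round(gap_before, 2))  # 9.0: the initial temperature gap
print(round(gap_after, 2))   # close to 0: the two objects equilibrate
print(round(sum(pan) + sum(butter), 2))  # 550.0: total energy is conserved
```

Note that every collision conserves energy exactly, as in the post’s model; the one-way drift toward equal temperatures emerges purely from the randomness.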

The diffusions of matter in space work the same way. The starting configuration of a drop of dye in a glass of water is one of minimum entropy. Given the position of a random molecule, we can know right away whether it is a dye molecule or a water molecule. But after enough random collisions, the system will approach the maximally entropic macrostate –that of the dye molecules being evenly spread throughout the water– because there are so many corresponding microstates. Not coincidentally, this kind of diffusion would be easy to spot as going forward in time, just as the melting butter would.

So we now see that the processes we described last week (that is, the ones we only see happen in one direction) are inextricably linked to an increase in entropy. Can we say that changes in entropy allow us to observe events moving forward in time? In fact, the truth is even more stark: the increase of entropy is the only physical reason that time moves forward.  

Note the use of “only” here. Why are we so sure that some other phenomenon isn’t contributing to time’s directionality? Because entropy is the only quantity in nature bound by a one-way law like 2LTD. Other behavioral laws of nature, such as the conservation of certain quantities, are symmetric. Every fluctuation has some opposite consequence, and fundamental constants do not change. But 2LTD uniquely mandates a steady, asymmetric increase in the universe’s entropy. Because the universe is an isolated system (and the only one we’re aware of, which is why we call it a “universe”), we should think of the statistical laws that force overall entropy to increase**** as the sole cause, not merely an observable effect, of time’s arrow.

Phew. Once again, thanks for sticking with me through what turned into a really long post. As before, I’ll acknowledge that you may have heard some of this before, but my goal is not to highlight what you didn’t already know. In fact, my goal is the opposite. It’s to highlight the importance of what sits intuitively in your mind.

I spoke in my first post about the heavy use of analogy in physics and my view that, at the end of the day, we must accept that the analogy is not merely describing the reality of an external world; in fact the analogy is the reality of a world we can only interact with by sensing and reasoning with our metaphor-hungry human minds. So when we say that entropy’s increase causes time to move forward, I really do mean it. I’m sure there are physicists out there who would take issue with my use of the word “cause” (and I’m eager to hear from them!), but the way I see it, if increasing entropy is the only way for us to conceptualize time’s arrow, it truly is the only cause.

Next week, we’ll take our most exotic journey yet, to the cosmic realm. Up there, everything we thought we knew about time will be called into question.

*I have been –and will continue to be– referring to thought experiments in the form of “games.” Generally speaking, these games are no fun and I do not endorse playing them. That said, I have often proven to be a poor judge of what other people will enjoy, so please let me know if you ever spend your Saturday night playing Penny Encoder.

**As with many things I’ll informally define on this blog, the scientific definition of entropy is slightly more complicated, but only slightly. Essentially, entropy is formally defined as the natural log of the number of microstates multiplied by a number called the Boltzmann Constant. This fact doesn’t change any of the subsequent analysis we will do.
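For reference, that formal definition is Boltzmann’s entropy formula, where Ω is the number of microstates and k_B is the Boltzmann constant:

```latex
S = k_B \ln \Omega
% e.g. for the five-coin macrostate "H2": \Omega = \binom{5}{2} = 10,
% so S = k_B \ln 10 \approx 2.3\,k_B
```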

***One caveat here is that in an actual cooking situation, the pan would be receiving heat from the stove, which means the pan-butter system is no longer “isolated,” as 2LTD requires. However, the analogy works just as well if the stove is turned off after the pan has been heated but before the butter is added.

****You may notice that 2LTD says that entropy cannot decrease, but when I describe time’s arrow, I talk about entropy increasing. Can entropy stay the same over time? The answer is yes, but only for a completely isolated system, which doesn’t really exist except at the scale of the entire universe. Eventually, the universe may reach total thermodynamic equilibrium, in which case time would stop.

What is time and what does it mean for it to “move forward”?

In my first post, I alluded to concepts I call the three realms and the four ideas. Last week, we talked at an abstract level about life in each of the three realms, a framework for segmenting reality according to relative size scale. For the next several weeks, I’ll be publishing a series of posts that get more specific about how different realms experience the four ideas. These are the fundamental physics concepts of time, space, matter, and energy.

If the prospect of multiple blog posts on “time” doesn’t raise your heart rate, you probably took a high school physics class that dealt little with time beyond including the letter t in a bunch of equations. The same can probably be said of matter, represented proudly by the letter m. Energy? Just convert potential to kinetic (and sometimes rotational) and call it a day. And space, well… what is there to say about space, anyway? Space is nothing! Wait, distance maybe? Did you mean to say distance? Would you like to borrow my x?

Suffice it to say, I think we can do better. For each of the four ideas, I hope to begin by reflecting on its meaning “at our level,” i.e. in the household realm. Then, we’ll explore the phenomena at the bottom (i.e. the quantum realm) out of which the idea manifests. Finally, we’ll head to the top, or the cosmic realm, where the idea reaches its limits and starts to break down. I hope it will be a conversation-provoking journey.

How do we measure time in the household realm?

I want to talk about time first; of the four ideas, it is probably the most familiar to our conscious experience of the world while being the most difficult to define from first principles. By “define from first principles,” I really mean, “define in a non-circular way.” Consider the prompt “Time is a measure of ____,” and you may (quite reasonably) end up saying something useless, like “Time is a measure of how long it takes between two events.” Of course, all you’ve really said is, “Time is a measure of the time between two events.”

Well, who could blame you? Time is what it is! Even physicists appear to have punted on this question – they say that time is a measure of what a clock reads! This definition sounds trivial and nonscientific, but the more I think about it, the more I like it. Through the “clock” lens, time starts to resemble money.

Money has value because everyone is using the same currency (or at least, there are agreed-upon rules for converting between the various currencies people use). You accept a $20 bill from me because you know that somebody else will accept it from you later. Similarly, we can talk about time in a standardized way because we have all agreed on a system for recording it. When you ask me to meet you at noon, we are making the implicit assumption that our clocks will both read the same value – 12:00 – at more or less the same instant.

Of course, the metaphorical structures we use to think about time (or money, for that matter) are based on the tools we have for measuring it. In antiquity, there was no way to mark the passage of time with precise instruments, so repeating natural phenomena like sunrises and the movement of constellations were used. The ubiquity of ancient solar and lunar calendars spanning tens of thousands of years speaks to this expansive but imprecise view of time. More reliable mechanical clocks in the Middle Ages allowed the solar day (one rotation of the Earth) to be divided up into conceptual –if not always measurable– pieces. Today, the international standard unit of time, the second, is defined as the duration of 9,192,631,770 vibrations of a cesium-133 atom*, yet another repeating event from the natural world!

In this way, time is really less akin to “money,” and more analogous to a word like “value” or “worth.” The value of someone’s possessions (or their time!) is measured by counting what they will accept in exchange. Likewise, time is measured by counting the events that occur within that time. Even the atomic clocks that synchronize events across the entire planet work this way – they count to 9 billion and say, “A second has passed!” and the GPS in your phone, or the stock market, or a globally distributed data center run by Google says, “Roger that!”

I get that this might not be the most mind-boggling stuff; everyone knows time is sort of a made-up construct. My point really is just to underline the notion that time is entirely a made-up construct, and even the most advanced means of defining and measuring it rely on a metaphorical understanding of repeating events that we as one species agreed to!

How do we experience time in the household realm?

Of course, the measurement of time isn’t the thing that makes it so immediate and intuitive. What really matters to us humans is the sense of the passage of time, the “forwardness” of our existence. We live moment to moment, and there is always a sense that the present moment is taking place after some previously experienced moment. Although the perceived rate of this passage can certainly vary –probably because of how our brain encodes memories– its direction is resolutely onward.

But what does that actually mean? The metaphor I like to use is of a game played between you and a film projectionist. Imagine you’re sitting in a theater, and a series of short films are projected on the screen in front of you. Behind you is the projection booth, in which the projectionist sits with two piles of film reels. The reels in the left pile were filmed normally, but those on the right were filmed with the film moving in the opposite direction inside the camera, and so will show reversed motion when projected. The game begins when the projectionist selects a random film to play. If you can correctly guess which pile it came from, you win.

Now, most of the time, the movie won’t have to play very long before you’re able to identify with certainty whether or not it was filmed in reverse. You might see something spill or break, say (or implausibly reconstruct itself). Smoke may drift up from a fire (or back down into it). Bullets could fly into or out of a gun. These would be instant red flags.

Notice that I didn’t say “you’d notice all of the actors walking and talking backwards.” That can be (and has been) faked. In fact, there would be an awful lot of footage that you couldn’t say with certainty came from one pile or the other. Trees swaying in the wind, cars rolling along a road, practically any movement from a human or animal; you may have a clear intuition about the direction in which these actions were filmed, but not the absolute certainty you would have about, for example, a block of butter melting in a frying pan.

You might reasonably conclude that the laws of physics preclude such “backwards” behaviors, but that’s not the case! There is no physical rule preventing the requisite forces and heat from conspiring to push a puddle of melted butter back up into a solid cube. To take it a step further, consider something as open-and-shut as an egg falling off a countertop and breaking on the floor: we know from conservation of energy that the potential energy from the egg’s initial height converts into kinetic energy as it falls, which, when it lands, is transferred to the floor and to the flying shards of egg shell. But conservation of energy also requires that, in principle, the floor could push back up on the egg, and the molecular bonds of its shell could be restored, and this would provide the egg exactly enough kinetic energy to lift off from the floor and land back onto the countertop. Such a sequence of events would be totally legal, so to speak. So why do we never see it happen?
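The egg’s energy budget can be written out explicitly (using the standard symbols, which are mine, not the post’s). Falling from height h under gravity g, an egg of mass m lands with speed v given by conservation of energy:

```latex
m g h = \tfrac{1}{2} m v^2
\quad\Longrightarrow\quad
v = \sqrt{2 g h}
% The reversed movie obeys the same bookkeeping: the floor and the
% re-forming shell return \tfrac{1}{2} m v^2 = m g h of kinetic energy,
% exactly enough to lift the egg back onto the countertop.
```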

This is the point at which your brain might remind you of something called an “irreversible process,” or something else called “entropy” that always increases. In fact, understanding entropy is exactly what we need to do in order to unlock the fundamental question of time’s arrow. But to understand entropy, we need to leave the household realm, and I want to linger here for just a moment longer.

To return to the prior paragraph, and indeed to the theme of this entire post, I want to underline that last question: “Why do we never see it happen?” As I said at the end of the time-as-value discussion, I know that this in itself is not the most profound thing you’ll read all week. I’m arguing that this question is hugely significant simply because we have to ask it at all. Much as measurements of time are equivalent to the regular processes we use as timepieces, so our ability to spot a seemingly irreversible process is not just a symptom of our experience of time; it is the experience! The arrow of time is the fact that there are things we know to be possible but never witness. And that’s it! That’s what it means for time to move forward. There’s no hidden complexity. 

To put a “human” touch on this discussion (and to go out on a bit of a limb) my theory of why we experience the arrow so intuitively is that the materials that make up our bodies and brains constantly undergo irreversible processes. As a result, we are especially attuned to such events when we observe them in nature. On a deep level, we recognize when objects transition permanently from one state to another. And the vehicle for that transition? We call it “time.”

Next week, we’ll take a deep dive into the quantum realm, wherein lies the root cause of time’s steady march from the past toward the future.

* It’s not exactly as simple as vibrations, turns out. I’m making a mental note to devote a short post to exactly how atomic clocks work (I guess I’m actually making a written note!)

How can we talk about phenomena we don’t physically experience?

Imagine that you and I are powerful beings on a world far from Earth.


Now let’s say we’re scientists on our world, and we’ve just discovered Earth and want to conduct some research on its dominant life form, Homo sapiens. The only problem is that even in our highly advanced world, funding for basic research is being slashed, and so we can only conduct one of three experiments:

  1. We can shrink down to microscopic size and take a tour of a human brain. We’d learn about what causes neurons to fire, how they communicate with each other in a network, and how the brain as a whole reacts to stimuli and develops over time.
  2. We can set up cameras and microphones in the home of a family of four humans, and observe how they act in isolation and as a group.
  3. We can take out a subscription to the New York Times’ International Edition.

Which option should we choose, if our goal is to understand human behavior? (I’m now realizing that only option 1 requires us to be “powerful beings.” Also pretend that all three options cost the same. Those papers don’t deliver themselves!) 

The problem, as any human will tell you, is that these three realms of human life are only tangentially related to one another, and to focus on just one of them out of context would be extremely confusing. 

Let’s consider the top-down experiment 3 first. We’d get a lot of interesting numbers and histories at this level, and sure, international affairs touch many people’s lives, but we’d have no idea why anybody was behaving the way they were. Big human systems – law, economics, public health – are of little interest until they affect the experience of individuals – in the courtroom, the workplace, or the clinic. (And yes, I know the Times incorporates those things into their reporting, but not to the degree that a dedicated personal narrative would.)

We face a similar problem with experiment 1; learning about the brain in isolation, however beautiful and complex an object it may be, would tell you next to nothing about what a human life really feels like. Maybe, just maybe, with deep study and insight, you could start to map out the actions of neurons to high precision, and you could theorize about the experiences of the consciousness those neurons gave rise to…

…but wouldn’t it be easier to just ask somebody what they were thinking? So now you’ll say we should pick experiment 2 and observe the one family in their home to get a proper introduction to the human species. And I’d agree with you that this is the best place to start. We’d get to observe a lot of the key components of a human life. Still though, there’s an awful lot missing from that picture: What goes on inside these people’s heads? What happens when they leave the house? How many other people are there? Are their households similar to this one?


When human beings study physics, when we try to describe the world around us, we run into this exact problem: each way of looking at things gives some correct results, but they don’t paint an informative overall picture. Worse, we spend our entire existence inside experiment 2, in the realm of objects whose sizes, speeds, and lifespans are on the same scale as the sizes, speeds, and lifespans of our bodies. But we know that’s not the whole story! It’s just that there are so many things to observe, and it’s very difficult to gather evidence from the realms of experiments 1 and 3, much less synthesize it with what we know about our own realm.

I’m getting a little ahead of myself. The point of this post is to introduce a concept I refer to* as the three realms. This will guide our conversations going forward, since a lot of the important ideas we need to discuss fall outside the “household” experiences our brains are used to. The three realms are a way of dividing the universe according to relative size scale, and they’re useful because the physics in each of the realms is as different as the three portraits of humanity you’d get in our example above. You can think of them as “versions” of the universe coexisting at different levels of zoom within the same physical space. So there are no distances or hard borders separating these worlds; you can’t leave one and enter another. But you can only have experiences in and make statements about one realm at a time. Let’s get acquainted with them!

We’ll start with the version of the universe we inhabit, which I’m going to call the human or household realm. You can also think of it as the middle. Here, gravitational and electromagnetic forces are the order of the day; by and large, physics answers questions about things we can easily observe and conceptualize. How does smoke spread out and fill a room? What makes certain bodies buoyant? How much weight can a shelf hold? However familiar that sounds, be warned that the household realm is actually pretty inclusive and contains plenty of its own strangeness. I would define it as generally covering the following domain:

  1. Objects in the size range between a single biological cell and the moon
  2. Speeds between zero and a million miles per hour (roughly 40 times faster than a rocket needs to go to escape Earth’s gravity)
  3. Time scales between one second and a billion years

You may complain that some of the above hardly seems like household phenomena. True, microscopes and rocket ships are ways of getting “out of this world,” as it were. So there is a key point to underline here:

We should remember just how narrow the range of our experience is. For example, most of us spend our entire lives within the same roughly 100º Fahrenheit band of temperatures. Let’s say on all of Earth that range is more like 250º from extreme to extreme. But the range of temperatures in the entire universe, from the void of space to the bellies of the hottest stars? Tens of millions of degrees. The same can be said of our lifespans, falling in the range of 0 to 100 years. Compare that to many subatomic particles that pop into being for less than a billionth of a second, or to small, efficient stars that can toil away for over 10 billion years. Now, I’m not trying to make you feel bad for not going out and experiencing all the universe has to offer (quite the opposite, in fact). All I mean to say is that in the grand scheme of things, you have more in common with bacteria or blue whales than you do with atomic nuclei and galactic clusters. Therefore, we’ll lump the former into our domain of reality and sum up by saying the household realm comprises things that fit neatly into the perspective of everyday human thought and measurement.

(One related note that I’m sure will come up later, but that I’d like to get on the record now: Physicists are notoriously flippant about factors of 2 or 10 or 100 winding up in their estimates, and that is a good thing. Bear in mind that the three realms under consideration are many orders of magnitude apart – a human cell might contain trillions of atoms – and that truly fundamental questions in science are usually resolved by getting the order of magnitude right, as opposed to achieving high decimal precision.)

Now, what happens when we study objects so huge and/or fast that they challenge the scope of our minds? We’ve entered the cosmic or relativistic realm, which I will also refer to as the top. This is the “international section” from our analogy above. Essentially, we approach the cosmic realm whenever we’re dealing with a body massive enough to undergo weird gravitational effects, or with anything moving fast enough to be even comparable to the speed of light. (If that phrase sounds familiar, the relativistic realm is where Albert Einstein made his most famous contributions: the special and general theories of relativity.) The convenient thing about this world is that it is very much in the popular consciousness: we’re talking about outer space! But remember, it’s not the location itself that alters cosmic physics, it’s the scale. It just so happens that such sizes and energy levels are rarely (but not never) found within the surly bonds of Earth – which makes sense, since otherwise stuff would be blowing up all the time. Let’s assign the cosmic realm the following domain:

  1. Objects larger than the moon
  2. Speeds greater than a million miles per hour
  3. A variety of time scales, but often those in the billions of years

I’m excited to talk more about cosmic physics in future posts, but as a teaser, I’ll just say that this is the realm in which space and time expand, contract, bend, and break. It is the realm in which stars live and die, in which matter and energy trade places with colossal brilliance; this is the big leagues, and our brains are going to need some pretty potent metaphors to keep up with the data.

Finally, we arrive at the least intuitive part of any physics conversation, a world so small and strange that its title has gone from rigorous definition to sci-fi buzzword. I’m speaking, of course, about the quantum realm, which I will also call the bottom. The quantum realm is an ecosystem whose population is so bizarre that the atoms that make up your body, odd as they may be, are among the most conservative residents. Gravitation holds little sway down here, supplanted by electromagnetism and slippery nuclear forces. Let’s define the quantum realm’s domain:

  1. Objects smaller than a biological cell
  2. A variety of speeds – sometimes near light-speed and other times merely speeds that feel fast to a subatomic particle but that we mammals would sneer at
  3. A variety of time scales, but often those under a second
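With all three domains on the table, we can caricature them in a few lines of code. This sketch uses only the size criterion, with the rough cutoffs already given above (a biological cell and the moon); the specific numeric values are my own illustrative choices, and speed and time scale are ignored entirely.

```python
def realm(size_in_meters: float) -> str:
    """Sort an object into a realm by size alone.

    A deliberate caricature: speed and time scale matter too, as the
    lists above make clear. Cutoffs are rough, illustrative values.
    """
    CELL = 1e-5   # meters, rough size of a biological cell
    MOON = 3.5e6  # meters, rough diameter of the moon
    if size_in_meters < CELL:
        return "quantum"
    if size_in_meters > MOON:
        return "cosmic"
    return "household"

print(realm(1e-10))  # an atom: quantum
print(realm(1.7))    # a person: household
print(realm(1.4e9))  # the sun: cosmic
```

Notice how coarse the sorting is: a bacterium, a blue whale, and a mountain all land in the same bucket, which is exactly the point.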

Why is it called “quantum”? Well, at the bottom, the physics of the everyday begins to brush up against the absolute limits of various quantities. What do I mean by that?

Imagine you and your friend are throwing a ball back and forth. The ball leaves your hand with some kinetic energy, flies through the air, and finally collides with your friend (who absorbs the energy and ultimately transmits it to the ground). Assuming you had extremely fine control over the muscles in your arms, you could in principle give the ball any amount of energy you wanted, and you would see how its velocity through the air changed as a result.

But what if you and your friend were passing a single electron back and forth? It turns out that there is a smallest unit of energy that the electron can have. Let’s call it “h”**, and say you can only increase or decrease the electron’s energy by increments of h. So you throw it to your friend with energy 3h. She throws it back to you with energy 4h, you throw it back with energy 2h, and so on. There is no way to throw it with an energy of 3.5h. Here, h is the quantum of energy – the smallest discrete unit into which it can be divided. You don’t notice the multiples of h when you throw the ball because of the unbelievable number of particles that comprise it (after you reach trillions of trillions, what’s one or two more?). How does this quantization affect the behavior of the electron? Quantum physics holds the answer. It’s the physical equivalent of watching neurons fire inside a brain, and it is truly remarkable stuff.
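The thrown-electron game boils down to one tiny rule, sketched below. The value of H and the “allowed energy” check are purely illustrative choices of mine: a cartoon of quantization, not real quantum mechanics.

```python
H = 1.0  # one quantum of energy, in arbitrary illustrative units

def is_allowed(energy: float) -> bool:
    """An energy is allowed only if it is a whole-number multiple of H."""
    multiples = energy / H
    return abs(multiples - round(multiples)) < 1e-9

print(is_allowed(3 * H))    # True: you can throw the electron with energy 3h
print(is_allowed(3.5 * H))  # False: no state exists between 3h and 4h
```

For the ball, the same rule is in force in principle, but the steps are swamped by the sheer number of particles involved, so the allowed energies look like a perfectly smooth continuum.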


Phew. We covered a lot of ground here, but I think this is a core concept, and since this blog is going to touch on such broad topics, we need to start from a common understanding. The takeaway is that the laws of physics we commonly observe lose influence as we approach extremes in size, energy, and time. Furthermore, we need to train ourselves to look for scientific explanations at all three levels when necessary. Effects at the bottom propagate up to the top, and often, our daily experiences in the household realm are the manifestations of behavior from above and below.

Thanks for sticking with me through the introductory material. Next week, we’ll dive into the first of the four ideas: time. The usefulness of thinking in terms of three realms should become clearer as we begin that discussion, since we’ll start with how time manifests in the human realm and then examine its causes and effects in the quantum and cosmic realms.

*As far as I know, this isn’t standard terminology in physics, so when I say “I refer to,” that’s all I mean.

**Strictly speaking, h is Planck’s constant, and the smallest energy increment belongs to a photon, or packet of radiation, whose energy comes in multiples of h times its frequency. But electrons also have quantized energy, and they’re much more familiar objects to most readers. Don’t worry; we’ll split the hairs on this one in due course.

Why am I starting a physics blog?

The human brain is a wonderful instrument, capable of astounding feats of ingenuity and empathy. It struggles, however, with issues of scale: people in other countries, numbers larger than 10, really anything outside its immediate vicinity. To get around such issues, it makes extensive use of metaphor. Charts, tables, maps, poems, even meticulous “fact-based” narratives: these are all metaphors that allow the brain to absorb a small amount of information and provide much of its own context.

I don’t put quotes around “fact-based” to be pejorative. My only point is that the written or spoken word, however truthful, exists in the space between approximation and analogy. “President James Garfield was assassinated in 1881” is a fact, as we understand the word “fact” to mean. But what is a president? How was Mr. Garfield assassinated? 1881 whats? We could make this sentence more precise: A man named James Garfield, who had just won 214 electoral votes in a United States presidential election and therefore led the executive branch of the United States government, was fatally shot approximately 1,881 years after Jesus of Nazareth was born.

But we knew most of that. To add the additional “facts” costs our brains a lot of effort without appreciably improving our understanding. It suffices to say “President James Garfield was assassinated in 1881,” so our brains can fill in the rest. And that’s great! It’s a real time-saver with the additional luxury of mostly representing the truth. The same can be said for a map, or a chart. We know that the coastline of Iberia doesn’t look exactly like your right fist, but that doesn’t make a zoomed-out map of Europe any less useful. On the contrary, the metaphor of the fist-shaped peninsula is hugely useful (because now we can conceptualize Spain and Portugal in relation to other places) and it comes with only the tiny cost of a few degrees of precision.

Physics in particular is a discipline rife with metaphors, and for good reason. The universe is a blisteringly complicated place, and the role of physics is to find metaphors that translate such complexity into ideas digestible by a human brain.

For example, Aristotle wrote that a stone falls to the ground because objects with a heavy “nature” simply belong at the center of the universe (which he believed to be below his feet). Isaac Newton, armed with better data, wrote that no, in fact, all objects are attracted to one another, and so the stone and the Earth both move. Albert Einstein, possessing still better data, wrote that actually, the stone and the Earth create curves in something called “spacetime” and cannot help but move along these curves. All three theories describe patterns of motion familiar to any human, but Einstein’s does the best job of predicting new observations, and so –for now– it is taken to be correct, despite being quite complex. What “really happens” when you drop a stone to the ground matters less than having an agreed-upon vocabulary for discussing it. In my view, it is unwise to think of Aristotle’s idea as simply “wrong.” Rather, his metaphor was superseded by more useful ones. Even Newton’s imperfect “classical” laws of motion are still widely taught and used for many engineering applications, because they are simpler than Einstein’s “relativistic” ones.

The problem is that physics isn’t taught that way. Rarely is it explained as the long, winding story of imperfect analogies that it is. Too often, formal physics education begins with minutiae, prodding students to get the “facts” right without reflecting on the beauty of the metaphors, the marvelous breadth of both our knowledge and our uncertainty as a species. As an aspiring lifelong learner of physics, I am a sucker for these metaphors.

I need to pause here and clarify that my point is not to denigrate the importance of equations, or physics educators, or doing difficult problems in the weeds of physical systems. In fact, doing the math is often the best way to really grasp why things are the way that they are. And the people who devote their lives to helping students do that grasping deserve real appreciation. Furthermore, it would be a mistake to associate me with the worrying trend in public discourse towards mistrusting the scientific method and relying on unsubstantiated new-age theories. The metaphors of physics are powerful tools that have served us in demonstrable ways. To help the reader avoid making any mistake about where I stand, let me say firmly: science is real. It’s really just that I think physics education begins in the wrong place. 

All of physics is hard to understand at first. So it seems strange to me to start students off learning about relatively mundane phenomena: blocks sliding down ramps, balls flying off cliffs, things that we as humans can already experience. Because those things are so immediate to us, our difficulty in solving them is all the more disheartening. You struggle to understand why a car slides such and such a distance, and then someone mentions quantum mechanics and you think, “Well shoot! I barely understood the car thing! How the hell am I supposed to know what holds an atom together?” Again, it is valuable to solve the (tricky!) problem of when your car skids vs. rolls vs. flips over. But in my opinion, solving that problem first reinforces the false impression that it takes a special kind of intelligence to learn about phenomena we don’t directly experience.

But anyway, all of that brings me to this blog. I love thinking, talking, and drawing pictures about physics. It’s the most intellectually joyous thing my brain gets to do, precisely because it is difficult, time-consuming, and approximate. And one of my least favorite things to hear when I mention a physics concept is the phrase, “I could never understand that.” I hear that as, “I’m embarrassed by the thought of all the questions I would have to ask before I understood that, so I’m going to pretend there’s this special innate thing about me that prevents me from understanding that.” That is a justifiable emotion, but what it misses is that the whole point of physics is not the understanding per se, but the conversation that leads to it: the messy exchange of experiments, pictures, metaphors –and even the odd equation– that allows a human brain to comprehend something it has no business comprehending. Particles? Stars? The completely counterintuitive mechanisms that power sailboats? These things do not come pre-installed in the brain. But no postgraduate mathematical training is required to install them; only some dedicated thought.

Well, for reasons selfish and unselfish, I want to help guide some of that thought. The selfish reason is that setting aside time to explain physics concepts will make me happy in and of itself. Plus it will clarify my own thinking on those concepts, which will also make me happy. The unselfish reason is that even though writing about physics isn’t particularly easy for me, I hope reading about it will be fun and rewarding for others. My focus, at least at first, will be on very fundamental things: namely, the four ideas (time, space, matter, and energy) and the three realms (quantum, human, and cosmic) in which the four ideas manifest.

My goal regarding the reader is not to make them “smarter,” or to provide them with a repository of “facts.” My goal is to dust off a few of the mind’s rickety tracks, preparing them for the chugging and clatter brought by fresh trains of thought. To discover and share gorgeous examples of the human brain bending over backwards to make sense of its surroundings. Above all, I hope to start some conversations about the metaphors that describe our lives.