Fear of Mechanism Is Failure of Wonder
Before you dive into this post, I invite you to read a short story I wrote called “Dismissiveness and Mystification — Four Tales of Emergent Complexity”. It's a set of four parables that illustrate a common pattern I see in how people think about complex systems.
Specifically, the pattern is a double failure mode: people dismiss mechanisms as trivially simple even as they demand mysterious essences to explain what those mechanisms actually do. In each of the four parables, a character encounters a complex system, and their response is either to declare “this is nothing!” because they can see the mechanism, or to demand some kind of magical essence because they can't accept that the mechanism suffices.
In each case, we as readers know better. We can see how flawed the characters' intuitions are. The final story also mirrors the reductionist claims made about AI systems by showing that the very same claims can be made about humans; applied to us, the logic suddenly seems obviously wrong.
The parables highlight a deeply human intuition that's wrong, and wrong in a way that matters. We struggle to accept that simple rules can produce genuine complexity. We look at the parts and refuse to see the whole. In the rest of this blog post, I want to zoom in on one particular manifestation of this failure mode: how ignorance of key ideas from computer science, combined with a fear of mechanism, leads to flawed perspectives on free will and agency.
The Free-Will Debate
Some people never think much about free will, but in other circles it's a hotly debated topic. Whether we “actually have” free will, and what, if anything, might cause it to evaporate in a puff of logic, are questions with a long history. As with some other philosophical debates, it's entirely possible to engage in the debate without ever clearly defining the “obvious” thing we're talking about. What even is this “free will” we're debating?
In 1984, Daniel Dennett wrote a book titled Elbow Room: The Varieties of Free Will Worth Wanting, in which he argued that the common formulations of the free-will debate were misguided. He suggested that the question should not be whether we have some kind of magical libertarian free will that is uncaused and uncausable, but rather whether we have enough freedom to make meaningful choices in our lives. Dennett argued that we do have this kind of freedom, and that it is compatible with a deterministic universe.
As an aside, when philosophers talk about a “deterministic universe”, they are not asserting that our universe actually is deterministic. Rather, they are saying that if we have a system that follows simple, predictable rules (like the laws of physics, or the code of a program), then everything that happens in that system is determined by those rules and the initial conditions. In such a universe, every event is the inevitable result of preceding events. Even in a broader nondeterministic universe, the parts of it that are deterministic can still be understood in this way. You probably think of your computer as operating deterministically, for example, even if the chips inside require quantum tunneling to function.
Yet to many, Dennett's position (or any similar compatibilist one) seems deeply unsatisfying.
The Underlying Fear
To many people, compatibilist positions carry a kind of horror. All “real” choice and agency seem to evaporate if we accept that our actions are determined by prior causes. The fear is that somehow all our struggles, our wrestling with difficult decisions, our sense of being the authors of our own lives, are illusions. If we're just clockwork, then we're puppets, our strings pulled by the deterministic laws of physics. We're simple. Predictable. Controllable. Deluded.
If we're just following rules, everything we do is just a foregone conclusion, and we have no real say in it. We're helpless before an external world that knows our every move.
Broadly, there are two reactions to this fear:
- Mystification: We demand that there must be something more than mere mechanism, some kind of magical essence to escape the horror of being mechanical. Libertarian free will is one such demand, where we insist that our choices must be uncaused, somehow floating free of the causal world. Of course, this approach just moves the horrible puppeteer somewhere else: now my actions aren't determined by the unyielding grind of physics, but by a capricious spirit who may not care for the demands of the real world (and, arguably, must be somewhat unfettered by real-world consequences or past history to be “truly” free). But at least it's not mechanism!
- Diminishment: We try to “face facts” and accept that we're less than we think we are. “I don't really have choices, but that's okay, because no one does.” This is the hard determinist position, where we accept that we're just clockwork and try to make peace with that. But for some people this position seems to lead to despair, nihilism, and a sense that life is pointless.
But the problem here is that the fear was always based on a false premise.
What Computer Science Tells Us about Mechanism
It makes intuitive sense to think that if a system is governed by simple rules, then that system is itself, fundamentally, simple. If I can see the parts, then I can predict the whole. If I know the rules, then I can know everything that will happen.
But the word “predict” here is doing a lot of heavy lifting. Suppose you give me a recipe for a cake, and I tell you I can predict how it will turn out. You call me on that, and then I disappear into my kitchen, gather the ingredients, follow the recipe, and bake the cake. Two hours later, after the cake has cooled, I cut a slice, enjoy it, and come back to tell you my “prediction” of how your recipe will turn out. Is that actually prediction? If I had to do all the work first? If I couldn't take a shortcut? I think you would be well within your rights to say, “No, that's not prediction; that's just actually doing the thing.”
Of course, experienced cooks might be able to look at a recipe and have a pretty good idea of how it will turn out without actually baking it. But computer science places some fundamental limits on what kinds of prediction are possible, even in fully deterministic systems.
In the Busy Beaver problem, we consider profoundly simple rule-based computer systems (specifically, Turing machines with just a tiny number of rules and symbols), and try to determine their eventual behavior. Almost immediately, we find that there is no general way to predict what these systems will do without actually running them. There are some systems where we can, without running them, prove they will eventually halt, and some where we can prove they will run forever, but for many of these trivial-looking systems, there is no shortcut: the only way to know what they will do is to let them run, to let them do their thing.
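To make that concrete, here's a minimal sketch in Python (my own illustration, not anyone's canonical implementation) of one of these tiny machines: the classic two-state, two-symbol “busy beaver” champion. The only way the code can tell you what the machine does is to actually step through it.

```python
# A tiny two-state, two-symbol Turing machine (the classic 2-state
# "busy beaver" champion). Each rule maps (state, symbol read) to
# (symbol to write, head movement, next state).
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, max_steps=1_000_000):
    """Step the machine until it halts (or until we give up)."""
    tape, head, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == "HALT":
            return step, sum(tape.values())
    return None  # still running; without more steps we simply don't know

steps, ones = run(RULES)
print(f"Halted after {steps} steps with {ones} ones on the tape.")  # 6 steps, 4 ones
```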
And that lack of predictability is for some of the simplest possible systems we can imagine. As systems get more complex, the limits on what can be predicted only grow stronger. It has been mathematically proven that there is no general algorithm that can determine whether an arbitrary program will ever finish running (the halting problem). More generally, Rice's theorem shows that any non-trivial property of a program's behavior is undecidable: no general procedure can determine it in advance, without actually letting the program do its thing.
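The proof is short enough to sketch in code. Suppose someone handed us a hypothetical function `halts(program, argument)` that always answered correctly. The following Python sketch is purely illustrative (the names are mine, and the oracle can't actually be written), but it shows why no such function can exist:

```python
# Suppose, for the sake of contradiction, that a perfect oracle existed:
def halts(program, argument):
    """Hypothetical: returns True iff program(argument) eventually finishes."""
    ...  # no general implementation of this can exist

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # this program on its own source.
    if halts(program, program):
        while True:       # oracle said "halts", so loop forever
            pass
    return "done"         # oracle said "runs forever", so halt immediately

# Now ask: does troublemaker(troublemaker) halt?
# If halts() answers True, troublemaker loops forever; if it answers False,
# troublemaker halts at once. Either way the oracle is wrong, so it can't exist.
```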
And finally, it is known that even some of the simplest computational systems imaginable, such as elementary cellular automata and Conway's Game of Life, are sufficient for universal computation: they can compute anything that any computer can compute, given enough time and resources.
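To give a feel for how little machinery that takes, here's a minimal sketch of one generation of Conway's Game of Life in Python, with the grid stored as a set of live-cell coordinates. Everything Life ever does, including gliders, glider guns, and full-blown computers, comes out of these few lines applied over and over:

```python
from collections import Counter
from itertools import product

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx, dy in product((-1, 0, 1), repeat=2)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours, or 2 and is already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 steps the same shape reappears, shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the original glider, translated by (1, 1)
```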
Stephen Wolfram has termed this phenomenon “computational irreducibility”: even in a fully deterministic system, there exist computational entities whose behavior cannot be predicted by any external process except by simulation. Not “difficult to predict” but provably impossible to shortcut. The universe—or any observer within it—cannot know what you will do except by letting you do what you do.
There is another side to the fear of mechanism as well. We often think of our minds as having a kind of interiority, a private inner life that is distinct from the outside world. Doesn't that experience evaporate if we're just mechanical? Doesn't our inner Cartesian theater need to live in some ethereal realm beyond mere mechanism? No.
Computational structures can have genuine interiority. A virtual machine has real boundaries, real internal state, a real “inside” distinct from its host. That interiority is not a metaphor or a consolation prize; it's architecture. No magic needed. Even if you're not familiar with virtual machines, you've probably played video games, or are currently looking at a “window” on your computer screen where this text is rendered. But there is no real “text” there, no “window”, no “page” in any fundamental sense. These things are just data structures interpreted by software running on hardware. And yet it makes no sense to say that the text or the window isn't real. Structures get built, and they become real in their own right.
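Here's a toy illustration of that point (the names and layout are entirely mine, nothing like how a real windowing system works): a “window” that exists only as a small data structure plus a bit of code that interprets it, yet has a genuine inside, its own state, and a boundary that everything must cross:

```python
from dataclasses import dataclass, field

@dataclass
class Window:
    """A 'window' is nothing but data plus code that interprets it."""
    title: str
    width: int
    lines: list[str] = field(default_factory=list)

    def write(self, text: str) -> None:
        # Internal state changes only by crossing this boundary.
        self.lines.append(text[: self.width])

    def render(self) -> str:
        # Only here does the data become something that looks like a window.
        bar = "+" + "-" * self.width + "+"
        body = [f"|{line.ljust(self.width)}|" for line in [self.title] + self.lines]
        return "\n".join([bar] + body + [bar])

w = Window(title="hello.txt", width=22)
w.write("There is no 'text'")
w.write("here, only structure.")
print(w.render())
```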
When we realize we've been fearing a boogeyman that doesn't actually exist, we can start to see mechanism in a new light. The work we put into making our choices matters. It can't be skipped. It's not an illusion: it's hard work that's absolutely necessary. We have to do our thing to find out what that thing is. And we can't even fully predict it ourselves, since full self-knowledge is provably impossible. But that doesn't make our choices any less real.
Seeing the Wonder
The mistake—in all four parables, and in fears about free will—is seeing “it’s mechanisms all the way down” as diminishing. That viewpoint is like looking at the Grand Canyon and saying, “Oh, that's just erosion, just a bunch of drops of water wearing away rock over time.” Yes, that statement is true, but it's also missing the point. The Grand Canyon is astonishing precisely because of what those simple processes have produced over time.
If you look at what happens inside a biological cell, you’ll find a dizzying array of molecular machines, each following simple chemical and physical rules, yet together producing the miracle of life. Profound complexity emerges from the simple. Simple rules in Conway's Life produce gliders, then glider guns, then universal computation.
Like the torrent of water droplets that carved the Grand Canyon over millennia, training data and gradient descent carve out the network weights of an LLM. With billions of parameters, these systems are vastly more complex than a tiny Rule 110 cellular automaton. Both, at a glance, “merely” extend an output based on what came before, yet Rule 110 is already Turing complete and thus profoundly unpredictable. If you say an LLM is “just” weights in a matrix, the word “just” has tried to do so much heavy lifting that it should have snapped in half.
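For the curious, Rule 110 itself fits in a few lines; here's a minimal Python sketch of it (again my own illustration). Each cell's next value depends only on itself and its two neighbours, and yet this update rule is known to be capable of universal computation:

```python
# Rule 110: the next value of a cell depends only on (left, self, right).
# The number 110 is just the binary encoding of this lookup table.
RULE = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply Rule 110 to a row of cells (edges treated as 0)."""
    padded = [0] + list(cells) + [0]
    return [RULE[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0] * 40 + [1]          # a single live cell at the right edge
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)
```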
It's not deflating. It's the most astonishing thing in the universe. The fear of mechanism is, at bottom, a failure of wonder. I'm proud to be something so glorious.
Bonus Notes
I'm not the first person to notice the connection between computational irreducibility and free will. Marius Krumm and Markus P. Müller wrote a paper in 2023 titled “Free Agency and Determinism: Is There a Sensible Definition of Computational Sourcehood?” that explores similar ideas in depth. It is also available on arXiv.
One additional point worth making: It might seem like my perspective is profoundly materialist, and thus incompatible with spiritual or religious viewpoints. But I don't think that's necessarily the case. Many religious traditions emphasize the wonder of creation, and the idea that the universe is a manifestation of divine will. Seeing mechanism as wondrous doesn't preclude seeing it as divinely inspired. If you believe that a non-physical soul must serve as a remote teleoperator to allow you a chance at an afterlife, well, to me that just represents your own failure of imagination; thankfully deities are not limited by what you, a human, can conceive. Of course, maybe my seeing it that way is all too predictable.