
What Machines Can’t Stake

May 2026

A language model, asked to predict the next token, computes a probability for every token in its vocabulary, shaped by everything in its training distribution, and samples one.

It never decides. It calculates.
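If "calculates" sounds abstract, the whole operation fits in a dozen lines. A toy sketch in Python, with hypothetical logits and plain temperature sampling, nothing taken from any real model:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax over raw scores, then one random draw.
    No deliberation anywhere in here: arithmetic from end to end."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, probs
    return len(probs) - 1, probs             # guard against float rounding

# Hypothetical logits for a four-token vocabulary.
token, probs = sample_next_token([2.0, 1.0, 0.5, -1.0])
```

Lower the temperature and the distribution sharpens; raise it and the distribution flattens. At no point does anything resembling a decision enter.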

That distinction sounds small. It is not. It is the line between everything machines can do and everything humans are for.


The pattern keeps repeating

Deep Blue beat Kasparov. AlphaGo beat Lee Sedol. GPT-4 passed the bar exam. Each milestone follows the same script: another domain that “required intuition” turns out to require only enough data and enough compute.

So the temptation is to draw a shrinking circle around what’s still ours. Consciousness. Creativity. Empathy. Pick your favorite. Then watch a model do a passable version of it next year, and shrink the circle again.

I think we are drawing the wrong shape.

The thing machines cannot do is not a capability. It is a stake — the act of binding yourself to an outcome you cannot verify. Marrying someone. Starting a company. Believing a paper is worth writing before anyone agrees. Voting. Praying. Choosing.

Pattern completion does not bind. It outputs and moves on.


What it means to stake

When you commit to something real, you cannot possibly have the relevant information. You do not know who your partner will be in twenty years. You do not know if the company will work. You have not yet met the obstacles that will test whether you keep going.

You commit anyway. You choose to treat something as if it were certain, knowing it is not.

William James called this “the will to believe.” Not self-deception — practical necessity. When evidence is genuinely ambiguous and the choice itself helps determine the outcome, waiting for certainty is choosing never to act. The skeptic who refuses to believe “until all the facts are in” never participates in creating the facts.

A model can output a recommendation with 73% confidence. It cannot answer “Should I marry this person?” Not because it lacks data — because the question presupposes a kind of stake the system does not have. You cannot wager what you do not own.


The generative side

Here is the part that surprises people: human creativity thrives exactly where certainty ends.

Einstein published special relativity while physicists were still debating what the Michelson–Morley null result meant. Barbara McClintock’s jumping genes were dismissed for thirty years. Every paradigm shift in science is somebody believing the answer before the evidence justifies the belief.

Art works the same way. The novelist does not know how the book ends. The painter does not know if the next stroke ruins it. They continue, guided by something that is neither logic nor randomness — a cultivated trust in judgment that no amount of training data can substitute for, because the training data does not exist yet.

A diffusion model produces an image by denoising a random seed. It does not struggle. It does not wake at 3 AM convinced the project is worthless and return to it anyway. The machine outputs. The human endures.
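“Denoising a random seed” is not a metaphor. A one-dimensional toy version makes the point, with every constant invented for illustration:

```python
import random

def toy_denoise(width=8, steps=10, seed=0):
    """Miniature 'diffusion': start from pure noise and repeatedly
    nudge it toward a target. All numbers here are hypothetical."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(width)]   # the random seed
    target = [0.5] * width                            # stand-in for the clean signal
    for _ in range(steps):
        # Each pass moves a fixed fraction of the remaining distance.
        x = [xi + 0.3 * (ti - xi) for xi, ti in zip(x, target)]
    return x

image = toy_denoise()
```

Each pass shrinks the distance to the target by a constant factor. Nothing inside the loop can doubt the loop.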


The agent question

I build agents for a living. So this is not an abstract argument for me — it is the question I sit with every time I wire one up.

When you give an LLM “agency,” what you have actually given it is a longer chain of pattern completions. The model picks a next action the way it picks a next token: by sampling from a distribution. Increase the horizon, add tools, give it memory, let it reflect — it gets more capable, sometimes dramatically. But at each step, it is still computing what the data would predict. It is not staking anything on the answer.
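The loop I am describing is almost embarrassingly small. A toy sketch, in which `sample_action`, the tool names, and the weights are all hypothetical stand-ins rather than any real framework:

```python
import random

def sample_action(history, actions):
    """Stand-in for the model: draw the next action from a distribution
    conditioned on the history so far. Pure pattern completion."""
    # Hypothetical weights; in a real agent these come from the model's logits.
    weights = [1.0] * len(actions)
    weights[-1] = 0.5 + 0.5 * len(history)   # "finish" grows likelier over time
    return random.choices(actions, weights=weights, k=1)[0]

def agent_loop(goal, tools, max_steps=5):
    """Tools, memory, reflection: a longer chain of the same sampling step."""
    history = [("goal", goal)]
    for _ in range(max_steps):
        action = sample_action(history, tools + ["finish"])
        history.append(("action", action))
        if action == "finish":
            break
    return history

trace = agent_loop("summarize the report", ["search", "read", "write"])
```

Add a longer horizon, more tools, a scratchpad: the trace gets richer, but every entry is produced the same way, and none of them is wagered on.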

This is fine for most of what agents are useful for: scheduling, retrieval, code generation, research. Domains where “the right answer” is recoverable from prior cases.

It breaks at the edges where humans actually live. The agent will not start a company. It will not decide, at the cost of its own continuation, that a project is worth doing anyway. It will not refuse a profitable instruction because it violates a principle it cannot prove. It can imitate all of these. It cannot wager.

That gap is not going to close by scaling. Scaling makes the imitation better. The wager is a different category.


The unclosed loop

I should end here with a tidy conclusion. That would be dishonest to the thesis.

I do not know if this argument is correct. Maybe future systems will surprise us — some mechanism we have not imagined, indistinguishable from commitment, arising not by simulation but by a substrate change we cannot currently see. Maybe the line I am drawing is a temporary artifact of 2026’s architectures rather than a fundamental boundary.

Or maybe the line is sharper than I have suggested. Maybe there are souls, or sparks, or something else that guarantees the gap stays open regardless of what we build.

I have some of the answers. Not all of them. And yet here I am — staking my credibility on a view that may be wrong, sending it into a world that may not agree.

That is not a failure of rigor. It is the move the essay is about. Building cathedrals we may not see completed. Planting trees we will not sit under. Believing before the evidence is enough.

The machine outputs 2,400 words and stops. The human wonders if they were the right ones, and begins again.


If you’re working on agents or care about this question: zguo0525@berkeley.edu · @Zhen4good