The Shoulders · Reveal 04

How an LLM completes your sentence.

Not retrieval. Energy descent into a learned attractor.

Act 1 · The Moment

In Zone 02, Q8, you compared a log to a receipt. The receipt rejected the tampered version. That was pattern completion. Here is the math underneath.

Act 2 · The Reveal
Three patterns are stored in the network. Pick one. Corrupt it. Then complete.

Hopfield showed in 1982 that a network of neurons can store memories as energy minima: stable configurations the system falls into when given a partial or noisy input.

The network does not retrieve. It descends. Each step lowers the energy by aligning each neuron with the others it is connected to. The pattern emerges because it is the nearest valley.
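The store-corrupt-complete loop above can be sketched in a few lines of NumPy. Everything here is illustrative — the network size, the three random patterns, and the corruption level are choices made for this sketch, not taken from the exhibit:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of neurons (illustrative size)

# Store three random ±1 patterns via the Hebbian rule:
# each pattern carves an energy minimum into the weight matrix.
patterns = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def energy(s):
    """Hopfield's 1982 energy: E(s) = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

def complete(s, sweeps=5):
    """Asynchronous descent: align each neuron with its weighted
    neighbors. Every flip lowers (or preserves) the energy, so the
    state falls into the nearest valley."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Pick one pattern, corrupt 10% of its bits, then complete.
original = patterns[0]
noisy = original.copy()
noisy[rng.choice(N, size=10, replace=False)] *= -1

recovered = complete(noisy)
print("energy: noisy", energy(noisy), "-> recovered", energy(recovered))
print("bits matching original:", (recovered == original).mean())
```

With three patterns in a hundred-neuron network, the load is well below the classical ~0.14·N capacity limit, so the corrupted input almost always settles back onto the stored pattern rather than a spurious minimum.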

This is what happens inside an LLM when it completes your sentence. Not a database lookup. A multi-billion-dimensional energy descent into the attractor closest to your prompt.

Act 3 · The Human
John Hopfield · born 1933

Hopfield is a physicist at Princeton. The network described above comes from his 1982 paper, and the model now carries his name: the Hopfield network.

For decades the paper was treated as a theoretical curiosity. In 2024, at age 91, Hopfield was awarded the Nobel Prize in Physics.

The Nobel Committee described his work as foundational to modern AI. He shares the prize with Geoffrey Hinton, who built on this framework to develop the learning methods behind today's neural networks.

When an LLM completes your sentence, it is doing something closely akin to Hopfield energy minimization, carried out across billions of dimensions. The math is from 1982. The scale is new.

← Boltzmann 04 / 05 Kolmogorov →