Origin

How all this was built

Neurons. LLMs. Humans. Together.

Everything Inquiro built came from one pattern: a human with a question, working with AI systems as thinking partners rather than answer machines.

This page is seven steps. Each builds on the last. Work through them in order. The last step is the reason the rest exist.

What is a neuron?

A neuron is a cell in your brain that does one simple thing.

It receives signals from other neurons. If those signals are strong enough, it fires a signal forward. If they are not, it stays quiet.

That is the entire operation: signal in, threshold check, signal out.

You have 86 billion of them. What you think, feel, and know emerges from 86 billion of these yes-or-no decisions, happening in parallel, continuously, for your entire life.

[Interactive demo: three signal sliders feeding one neuron. At rest, each signal is 0, the threshold is 15, the total signal is 0, and the status is QUIET. Try to make it fire using only one signal. Try to keep it quiet even with two strong signals.]
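The same threshold check fits in a few lines of Python. This is a sketch mirroring the demo: three signals, a threshold of 15, and the assumption that each signal ranges from 0 to 10.

```python
def fires(signals, threshold=15):
    """A neuron fires only if its combined input crosses the threshold."""
    return sum(signals) >= threshold

print(fires([0, 0, 0]))    # False: total 0, the neuron stays QUIET
print(fires([10, 0, 0]))   # False: one signal alone cannot reach 15
print(fires([8, 8, 0]))    # True: two strong signals cross it together
```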

What is an artificial neuron?

An artificial neuron does the same thing with math instead of chemistry.

It takes numbers in. It multiplies each by a weight. It adds them up. If the total crosses a threshold, it outputs a signal.

That's it?!?

The difference: biology runs at maybe 100 signals per second. Code runs at billions. Biology is unique. Code is copyable. Put a billion of these together and things start to happen.

[Interactive demo: three inputs with adjustable weights. At rest, each input is 0 and each weight is 1.0, so (0 × 1.0) + (0 × 1.0) + (0 × 1.0) = 0. Threshold: 15. Status: QUIET.]
The weights are what gets learned during training. The entire intelligence of a neural network is encoded in the pattern of its weights.
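In code, the weighted version is only one line longer. A minimal sketch using the demo's resting state (inputs 0, weights 1.0, threshold 15):

```python
def neuron(inputs, weights, threshold=15):
    """Weighted sum in, threshold check, signal out."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([0, 0, 0], [1.0, 1.0, 1.0]))  # 0: (0*1.0)+(0*1.0)+(0*1.0) = 0, QUIET
print(neuron([5, 5, 0], [2.0, 2.0, 1.0]))  # 1: (5*2.0)+(5*2.0)+(0*1.0) = 20, fires
```

Change the weights and the same inputs produce a different decision. That is why training adjusts weights, not inputs.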

What happens when you connect millions of them?

Connect a few hundred neurons in layers and you can recognize handwritten digits. Connect a few million and you can recognize faces. Connect a few billion and you can understand language.

The magic is not in any single neuron. It is in the pattern of connections.

The weights between neurons adjust during training. The adjustment happens through a process called backpropagation: make a mistake, measure the error, trace the error backward, adjust the weights that contributed to it. Repeat millions of times.

After training, the entire network encodes what it has learned in its weight pattern. There is no rule file. There is no knowledge base. There is only the geometry of weights.

Running a trained network on a new input is called inference. The network already knew the pattern before you started. Its knowledge lives in the connections.
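That mistake-measure-adjust loop can be shown at its smallest scale with a single neuron. This sketch uses the perceptron update rule, a one-neuron stand-in for backpropagation, to learn logical AND:

```python
def step(z):
    """Threshold check: fire (1) if the signal crosses zero, else stay quiet (0)."""
    return 1 if z >= 0 else 0

def train(samples, lr=0.1, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out          # measure the error
            w[0] += lr * err * x1       # adjust the weights that contributed to it
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND: fire only when both inputs are on.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

The learned behavior is stored entirely in `w` and `b`. There is no rule file here either.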

What makes an LLM different?

An LLM is a specific kind of deep neural network trained on one task: predict the next token.

That is the entire training objective. Given a sequence of words, what word is most likely next?

To do this well across all of human writing, the model had to learn grammar, logic, facts, coding conventions, argument structure, scientific notation, legal language, emotional tone, and narrative arc.

In learning to predict text, the model accidentally learned to reason. Not because anyone planned it. Because predicting text well enough requires reasoning.

This is the entire foundation of the industry.

Notice: the model is not deciding what to say. It is estimating what comes next. The sentence is a byproduct.
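The objective can be demonstrated at toy scale. This sketch predicts the next word from bigram counts over a made-up corpus; a real LLM learns far richer statistics, stored in its weights rather than a lookup table:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

The sentence that falls out of repeated prediction is, exactly as above, a byproduct of the statistics.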

What can an LLM do alone?

An LLM alone can do these things reliably: write, summarize, translate, explain, answer questions whose answers are in its training data, edit, reformat, classify text, and generate code for patterns it has seen.

An LLM alone cannot do these things: know what happened after its training cutoff, verify that its own outputs are accurate, access the internet or your systems, execute any action in the world, and know what it does not know.

The limitations are not bugs. They are architectural boundaries. Understanding the boundary is the whole game.


What happens when a human joins?

The LLM provides speed, breadth, and fluency. The human provides judgment, verification, and intent.

A human can check LLM output against truth. A human can ask follow-up questions that expose contradictions. A human can notice when the reasoning is circular. A human can connect the LLM to tools and data it cannot reach alone. A human can decide when to stop.

Neither alone is sufficient. Together they are something new. This is not replacement. It is amplification. The output of the combination exceeds what either produces solo.

This is the pattern that built Inquiro.

Path A: LLM alone

Answer (unverified)
~40% chance of error.

Path B: LLM + human

1. LLM generates initial answer
2. Human asks a follow-up question
3. LLM reveals a contradiction
4. Human consults a source
5. Answer (verified)

The extra steps in Path B are not inefficiency. They are the difference between speed and accuracy. Use Path A for drafts. Use Path B for anything that matters.
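Path B can be sketched as a loop. Every name here is a hypothetical placeholder (`ask_llm`, `human_review`, `consult_source` are illustrative, not a real API):

```python
def path_b(question, ask_llm, human_review, consult_source, max_rounds=3):
    answer = ask_llm(question)              # 1. LLM generates initial answer
    for _ in range(max_rounds):
        follow_up = human_review(answer)    # 2. human asks a follow-up question
        if follow_up is None:               # no contradiction left to expose
            break
        answer = ask_llm(follow_up)         # 3. LLM answers; gaps surface here
    return consult_source(answer)           # 4-5. human checks a source, then uses it

# Toy stand-ins, to show the shape of the loop rather than a real pipeline:
result = path_b(
    "capital of Australia?",
    ask_llm=lambda q: "Canberra",
    human_review=lambda a: None,            # reviewer found no contradiction
    consult_source=lambda a: {"answer": a, "verified": True},
)
print(result)  # {'answer': 'Canberra', 'verified': True}
```

The human sits inside the loop, not at the end of it: the follow-up questions are what force contradictions into the open before verification.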

What have we built doing exactly that?

One person. Multiple LLMs. All the questions.

This required a new kind of team. And a new kind of thinking. It required a question that would not let go and a methodology for working with AI that treats it as a thinking partner and adversary, not an answer machine.

The pattern is repeatable. Use it or do not.

If this clicked for you, see it in action.

Generate a receipt →