Why a confidence score is not a yes.
Most people guess 90%. The math says 8%.
In Zone 03, Q10, the AI gave you a confidence score, not a verdict. That number was not a grade. It was a probability that updates when evidence arrives. Here is the rule it was obeying.
The condition affects 1 in 100 people.
The test is 90% accurate.
You just tested positive.
What is the probability you actually have the condition?
Picture 100 people taking the test. The 1 person with the condition tests positive: 1 true positive. Of the 99 healthy people, the 90%-accurate test still misfires on about 10: 10 false positives. That is roughly 11 positives, and only 1 of them is real.
So your true probability of having the condition is 8.3%, not the 90% most people guess. That gap is why an AI confidence score requires context, not just a number.
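The arithmetic above can be sketched directly with Bayes' rule. The numbers here assume the setup implied by the counts in the text: a 1-in-100 base rate and a test that is 90% accurate in both directions.

```python
# Bayes' rule applied to the 90%-accurate test.
# Assumed setup: 1% base rate, 90% sensitivity, 90% specificity.
prior = 0.01           # P(condition)
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.10  # P(positive | no condition)

# P(positive) = true positives + false positives, over everyone tested
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: P(condition | positive test)
posterior = sensitivity * prior / p_positive
print(f"{posterior:.1%}")  # → 8.3%
```

Note that the posterior is dragged down almost entirely by the tiny prior: a 90%-accurate test cannot overcome a 1% base rate on its own.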
Bayes's theorem, published in 1763, showed that rational belief is not binary. It is a prior that revises when evidence arrives.
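In symbols, the rule reads:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where $P(H)$ is the prior belief in a hypothesis, $P(E \mid H)$ is how likely the evidence is if the hypothesis is true, and $P(H \mid E)$ is the revised belief, the posterior.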
Every modern AI model behaves like a Bayesian machine. It carries a prior over what it has seen. Each new token of context is evidence. The model updates.
Hallucinations happen when the prior is strong and the evidence is weak. The math does not care. It updates anyway.
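A minimal sketch of that failure mode, using the same update rule with illustrative numbers (the 0.99 prior and the likelihoods are hypothetical, chosen to show the effect):

```python
# Illustrative only: a strong prior swamps weak contrary evidence.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update: returns P(hypothesis | evidence)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

strong_prior = 0.99  # the model has seen this pattern overwhelmingly often

# Weak contrary evidence: only slightly more likely if the claim is false
posterior = update(strong_prior, p_evidence_if_true=0.4, p_evidence_if_false=0.6)
print(f"{posterior:.1%}")  # → 98.5%: the prior barely moves
```

The update is mathematically correct, and that is the point: when the prior is near-certain and the evidence is weak, the posterior stays near-certain, which reads to a user as a confident wrong answer.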
Bayes was a Presbyterian minister in Tunbridge Wells, England. He never published his theorem in his lifetime.
Richard Price found it in his papers after his death and published it in 1763. It described how rational belief should update when new evidence arrives.
More than two and a half centuries later, it is the foundation of every probabilistic AI system on the planet.
The confidence score in The Inquiry's dual-lens experience is a Bayesian posterior. Bayes wrote the rule. The AI learned the parameters.