315 - Outcome Bias

Outcome Bias, defined as "the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made," is at the heart of understanding how people misperceive LLMs, RL, and similar models and algorithms today. Anthropomorphism (as a bias) is a close and ubiquitous second influence guiding that misperception.

The wildly Anthropomorphic term "Hallucination" is an example of both: it classifies instances of failure as "Hallucinations", yet the model itself is completely blind to that "failure". What the model most accurately and technically produces is neither "Hallucination" nor "Confabulation", but rather "Bullshit", as exhaustively covered in the paper: ChatGPT is bullshit.

A "bullshit machine", as they put it, is built explicitly as an autocomplete function, with or without further bells, whistles, and glorified loops, meaning that it will blindly apply that method to anything, and in the real world that method will be wrong the vast majority of the time, even when the outcome happens to be right, or "close enough". Put another way, "even a broken clock is right twice per day", and the right outcome is entirely different from the right method.

Methods can be quite robust, even antifragile if designed for that capacity, but applying the wrong method and happening to get the right outcome only gives you that broken clock. The broken clock is inherently fragile, a shadow puppet that momentarily imitates the desired outcome while an entirely different structure casts the shadow.

The problem then largely rests on the shoulders of the perceiver, or misperceiver, in whom roughly 200 distinct cognitive biases may give rise to any number of "Mirages" and easily debunked beliefs, such as the inexcusable delusion of "emergence" in trivial AI systems: Are Emergent Abilities of Large Language Models a Mirage?
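A minimal sketch of the core argument in that paper: smooth, gradual improvement in per-token accuracy can look like a sudden "emergent" jump once performance is scored with an all-or-nothing metric such as exact match. The numbers below are illustrative assumptions, not figures from the paper.

```python
# Illustrative only: smooth capability + discontinuous metric = apparent "emergence".
per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]  # improves gradually with scale
answer_length = 10  # every token must be correct for an exact match

for p in per_token_accuracy:
    exact_match = p ** answer_length  # the all-or-nothing score the observer sees
    print(f"per-token accuracy {p:.2f} -> exact-match rate {exact_match:.3f}")

# First and last rows: 0.50 -> 0.001, 0.95 -> 0.599.
# The underlying capability changes smoothly; only the chosen metric "emerges".
```

The mirage, in other words, is manufactured by the measurement, and then completed by the perceiver.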

Humans are amazing creatures when it comes to the products of imagination, capable of believing that the rough equivalent of a toaster with a magic 8-ball strapped to it is some "alien form of intelligence". The human brain does the heavy lifting to turn that shadow puppet into "alien intelligence", much like a child playing with dolls.

The next time you see anyone in the AI domain engaging in Anthropomorphism, just picture them playing with dolls or sucking on a pacifier. The underlying structure of why they do it is much the same; only the multi-billion-dollar price tag is notably higher.
