249 - Infinitely More Ways

To breathe new life into an old quote: “There are infinitely more ways to (poorly address a problem) than there are ways to (address a problem well).” Dawkins’s original quote refers to being dead versus being alive, but the same asymmetry holds for problem-solving.

This principle is also subject to a “combinatorial explosion”: as more factors combine, such as the layers of complexity found in any real-world system, the number of wrong ways to address a given problem increases explosively. The viable portion of the solution space inherently shrinks as complexity grows.
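To make that asymmetry concrete, here is a minimal sketch in Python of a toy model (my own illustration, not the author’s): assume a solution must fix one option for each of k independent factors, and only one option per factor actually works. The viable share of the combined solution space then shrinks geometrically with k.

```python
from fractions import Fraction

def viable_fraction(factors: int, options_per_factor: int,
                    viable_per_factor: int = 1) -> Fraction:
    """Toy model: a solution fixes one option for each independent factor.
    A combined solution is viable only if every factor's choice is viable."""
    total = options_per_factor ** factors
    viable = viable_per_factor ** factors
    return Fraction(viable, total)

# Even with generous odds (1 workable option in 4 per factor),
# the viable share of the whole solution space collapses geometrically:
for k in (1, 5, 10, 20):
    f = viable_fraction(k, 4)
    print(f"{k:>2} factors: {float(f):.2e} of all combinations are viable")
```

Under these toy assumptions, at 20 factors with 4 options each, fewer than one combination in a trillion is viable: that collapse is the “explosion” in the number of wrong ways.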

Human cognitive bias supplies another critical mechanism in this process: as complexity increases, so does our reliance on cognitive bias to compensate for the increased cognitive load placed on finite cognitive resources. Your brain can’t scale outside of your skull, so we lean on cognitive biases to cope with arbitrary levels of complexity in daily life, constantly and unconsciously.

Cognitive biases exist because they are “useful, but wrong” tools that let humans cope with uncertainty and spikes in complexity, but when they fail, they fail systematically. Humans relying on those biases under circumstances where they fail become less accurate than a literal random number generator, or as Prof. Tetlock famously put it, a “dart-throwing chimpanzee”.
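To illustrate the “worse than random” point, here is a toy simulation (an assumption-laden sketch of my own, not a model of any specific documented bias): a heuristic that always follows a single cue beats a coin flip while the cue is valid, matches it when the cue is pure noise, and falls below it when the regime shifts and the cue becomes systematically misleading.

```python
import random

random.seed(0)

def simulate(trials: int, cue_validity: float) -> float:
    """Toy model of a heuristic: always follow a single cue.
    cue_validity = probability the cue points at the right answer.
    Below 0.5 the cue is systematically misleading, and the
    heuristic scores *worse* than a coin flip."""
    correct = 0
    for _ in range(trials):
        truth = random.randint(0, 1)
        cue = truth if random.random() < cue_validity else 1 - truth
        correct += (cue == truth)  # the heuristic always trusts the cue
    return correct / trials

print("familiar regime (cue usually right):", simulate(100_000, 0.8))  # ~0.80
print("coin-flip baseline:                 ", simulate(100_000, 0.5))  # ~0.50
print("regime where the cue breaks:        ", simulate(100_000, 0.2))  # ~0.20
```

The point of the sketch is the last line: a systematically failing heuristic doesn’t degrade to chance, it degrades below it.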

Remember, the number of wrong (or poor) answers explodes at the very moment a human’s ability to choose wisely between answers is unconsciously degraded, so these losses in decision-making quality compound upon one another, rapidly and invisibly.

No, “Generative AI” has precisely nothing viable to offer for solving this problem, though it does have a nearly infinite number of spectacularly terrible ways to address it. Many of the trends people have grown familiar with in AI today are nothing more or less than the best performers in the category of “objectively terrible attempts to solve real-world problems”.

As a general rule, any time someone talks about LLMs, RAG (retrieval-augmented generation), CoT (chain-of-thought), MoE (mixture-of-experts), or “guardrails” (in AI), they are referring to attempts to duct-tape incompatible technologies and/or capacities together. These patches may make an even worse technology marginally better at addressing a problem, but a pile of $hit that smells slightly less awful is still a pile of $hit.

The AI people are familiar with today isn’t built to handle complexity, and even with the advantages humans hold over such systems, humanity isn’t built to handle complexity above a certain threshold either. However, technology built for that purpose does exist, has been demonstrated for the past 5 years, and could be properly funded.

Complexity never stops increasing in the real world, so the clock is ticking. Humanity handles complexity terribly today, and with the addition of comically awful technology like LLMs, it may soon be terrible at far greater speed and scale.
