264 - Biggest Problems
One of the biggest problems holding back progress in the domain of AI is the phenomenon of “LLM-brained” practitioners, “researchers”, and “experts”. This phenomenon is a specific case of cognitive bias favoring familiar prior knowledge and/or beliefs, anchoring strongly to them rather than engaging in any actual exploration.
For example, on the ARC-AGI Challenge, Chollet pointed to LLMs as a dead end, and yet LLMs were still what the majority of participants threw at the challenge, despite the proclaimed point of the challenge being to move past the fantasy of LLMs as an omni-tool. Ironically, even Chollet’s own team governing the challenge was LLM-brained, refusing to verify non-LLM-based solutions.
Another example I’ve encountered routinely: after I describe systems and dynamics that LLMs are fundamentally incapable of handling, the very first thought that many people voice is “…I wonder how I could emulate that with (some derivative of the same fundamentally incompatible technology)”.
Unfortunately, it takes a significant investment of time and cognitive energy to get most people to the point where they stop running in circles, which is probably why all of the major tech giants have been doing a fair impression of the Large Hadron Collider for the past half-decade: running in circles at large scale and ever-increasing speed.
The pull of cognitive biases is strong, and algorithmically reinforced and curated familiarity locks most people into this downward spiral from an increasing number of angles over time. The chains that mentally bind most people are growing stronger, controlled by a handful of large-scale (but also largely incompetent) bad actors. However, competence is relative, so degrading the competence of most people is enough to lock them under increasing control over time: the spirit of 1984, but with the addition of Brave New World’s drugs, the worst of both worlds.
My colleagues and I have limited time to dedicate to educating people (my posts here are a donation to that effect), and the battles against cognitive bias that we fight concern far more critical processes than anything to be found on platforms like LinkedIn. But people can escape the cognitive trap of being “LLM-brained”. LLMs are one tiny tool in the technological toolbox, incapable of understanding, reasoning, or human-like & human-level thought, but like an adult sitting in the sandbox at a public playground, you can step out of it and into the larger world whenever you choose to.