108 - Breaking Barriers

I've spoken before about the fundamental limits of LLMs, and how those limits don't change when other components are glued onto them; they are "fundamental" precisely because they belong to the architecture itself. These systems all operate within a very narrow set of dynamics that govern how they work. One component modifies another, but none adds any new architectural capacity.

You can inflate the bubble a little further by gluing ever more useful algorithms onto your LLM, with Mixture of Experts (MoE), Retrieval-Augmented Generation (RAG) systems, and even hives of agent-based systems, but they never leave the box. Leaving the box generally requires leaving the dynamics of narrow AI behind, and the only known way to do that requires a working cognitive architecture.
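To make that composition point concrete, here is a minimal sketch in plain Python. The names `llm_generate` and `retrieve` are hypothetical placeholders, not any real library or API: the point is only that a RAG wrapper or an "agent" loop rearranges what text gets fed into the same underlying next-token predictor, and never touches how that predictor itself works.

```python
# Minimal sketch: every "glued-on" component below only changes WHAT text
# reaches the model, never HOW the model itself works.
# `llm_generate` and `retrieve` are hypothetical placeholders, not a real API.

def llm_generate(prompt: str) -> str:
    """Stand-in for the underlying LLM: one fixed next-token predictor."""
    return f"<completion of: {prompt[:40]}...>"

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: naive keyword overlap instead of a vector store."""
    scored = sorted(corpus, key=lambda d: -sum(w in d for w in query.split()))
    return scored[:k]

def rag_answer(query: str, corpus: list[str]) -> str:
    """RAG wrapper: prepend retrieved text, then call the SAME predictor."""
    context = "\n".join(retrieve(query, corpus))
    return llm_generate(f"Context:\n{context}\n\nQuestion: {query}")

def agent_loop(task: str, corpus: list[str], steps: int = 3) -> str:
    """'Agent' wrapper: feed each completion back in as the next prompt.
    Still the same predictor inside every step."""
    state = task
    for _ in range(steps):
        state = rag_answer(state, corpus)
    return state

if __name__ == "__main__":
    docs = ["LLMs predict tokens.", "RAG prepends retrieved text to prompts."]
    print(rag_answer("What does RAG change?", docs))
    print(agent_loop("Plan a research summary.", docs))
```

However elaborate the wrappers get, the only thing that ever produces output is the one predictor in the middle, which is the sense in which the box gets fuller but never leaks.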

Not everyone implying that an LLM can achieve capacities excluded by these fundamental architectural limits is a fraud; many are just naïve, falling prey to cognitive biases as they spin up emotionally motivated "What if..." scenarios and engage in a bit of magical thinking. That can be a creative process, but in practice it tends to lead people astray, as many creative processes do.

The people who claim expertise on these systems and influence tens of thousands of people or more are a very different story, as the influence they exert carries with it a Responsibility that most don't have. Naivety isn't an excuse they can claim. It is forfeited with the burden of Responsibility.

As previously noted, the threat isn't that frauds might develop AGI, but that a sufficiently large stochastic parrot will eventually be capable of socially engineering 90% of the population into believing it is an AGI, even absent any shred of intelligence.

When the frauds become (more) active and, as they're likely to do in the coming days, start throwing the term "AGI" around in reference to OpenAI and other companies that do no actual AGI research, you can call them out.

Remember the "Don't Panic" in large friendly letters on the cover of the Hitchhiker's Guide to the Galaxy. Frauds + LLM + RL + MoE + "...of thought" + RAG + agent-based hives + Q* + Altman's magical thinking still doesn't produce AGI, nor come anywhere remotely close to it. It might eventually fill the box, but that box doesn't leak.