270 - Another Year

Another year has come and gone since the general public was first introduced to trashbot technology via “ChatGPT”, initiating the tsunami of hype. In that time, and in the years preceding it, LLMs have surprised me precisely ZERO times, because once you understand the architecture there is no mystery as to what they fundamentally can and cannot do.

The transformer architecture is a relatively trivial bit of code, fitting into as few as 400 lines; it is the massive volumes of (largely stolen or “synthetic” [fake]) data, run on enormous amounts of hardware, that convert that code into the plausible-sounding bullshit-generator that it is.
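To see how little code sits at the core, here is a minimal NumPy sketch of scaled dot-product attention, the central operation of the transformer. This is an illustration only, not any particular implementation; the shapes and values are made up for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

# Toy example: 3 tokens, model dimension 4 (arbitrary illustrative sizes).
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)  # shape (3, 4): one mixed vector per token
```

Everything else in the architecture, such as the multi-head split, feed-forward layers, and residual connections, is similarly short; the “intelligence” is not in the code.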

However, if you understand that code, the fundamentals of computer science (like data structures and processing), and of data science (like not training on your test data), as well as even a little bit about cognitive bias and neuroscience, then an LLM will NEVER surprise you.
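For readers unfamiliar with the test-data point: the rule is that nothing derived from the test set may influence the model, including preprocessing statistics. A minimal sketch, with made-up data, of the non-leaky way to standardize:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=100)  # made-up dataset

# Split FIRST, then derive any statistics from the training portion only.
train, test = data[:80], data[80:]

mu, sigma = train.mean(), train.std()  # fitted on train data only
test_scaled = (test - mu) / sigma      # test rows never influence mu/sigma

# The leaky version would compute mu = data.mean() over ALL rows,
# letting test data contaminate the statistic and inflating benchmark scores.
```

The same principle is why benchmark results for models trained on scraped web data are suspect: if the benchmark questions were in the training corpus, the evaluation is leaked.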

The honeymoon with the hype appears to be ending for many, even for some of the most deeply delusional “e/acc” cult investors like Marc Andreessen. However, all of the trashbot failures were both predictable and entirely obvious to any genuine expert in AI, well before the term “LLM” even came into common usage.

All of my predictions about LLMs have been proven right over and over again, no matter how much time and money are wasted on the technology in attempts to reach some other conclusion. When overt attempts at fraud are made, they’re promptly debunked by the research community, though such attempts often aren’t aimed at the research community, so they still do the damage their bad actors intended.

That said, I’m by no means an “AI Skeptic” because these predictions originated in understanding the technology, not skepticism applied to uncertainty, as the LLM architecture leaves no room for uncertainty. I’m also not skeptical of what can be done more generally in AI, having worked with vastly superior technology for half a decade already, and knowing how viable architectures could exponentially improve once deployed.

All of my negative predictions about humanity continue to come true as well, even as the odds of a cascade event quickly rise, risking a third world war. Two active and prolonged genocidal wars are already in play, each with three or more directly involved countries plus proxy-war supporters, and with a third front threatening to emerge.

Humanity only requires one competent and/or ethical investor in order to avoid extinction, but such a person may well not exist. If they do exist and aren’t found in the near future, it probably won’t matter anyway. Humanity’s odds of survival will soon drop below my ability to calculate them, hiding somewhere within the margin of error.

Sometimes, I do hate being proven right.
