208 - Rethinking Assumptions
While there are viable use cases for things like LLMs, most of the "AI Consultants", "Influencers", and "Startups" in the market are trying to sell a fork to Neanderthals and instructing them to wipe their ass with it. While a fork may be a useful tool, that isn't how you use it.
A handful of demonstrably factual statements can serve as a litmus test for whether you're making horrible mistakes in AI technology development, investments, and adoption. Many people are already familiar with most or all of these facts individually, but rarely or never consider them jointly. This joint consideration is critical for reducing cognitive biases in decision-making:
- LLMs, with or without any number of extensions, have precisely zero understanding and reasoning.
- LLMs are fundamentally "hallucination machines", by design. This is a feature that can't be disabled or removed.
- LLMs are fundamentally impossible to align.
- LLMs are fundamentally impossible to secure.
- LLMs don't gain any fundamentally new capacities with scale.
- LLMs built on "internet-scale data" are maximally contaminated.
- LLM benchmarks are generally only credible precisely once: at the moment of release, before they become targets.
- LLMs trained on all of the internet's copyrighted content are a huge legal liability.
- Tech companies led by individuals guilty of obvious fraud don't behave ethically.
- Governments who hand those frauds the reins are a huge geopolitical liability.
- Companies will steal more data than they're legally allowed to so long as the legal expense remains less than the gains achieved through that stolen data.
- PR departments are not a substitute for ethical behavior, but they're often used as one.
These factors carry different types and levels of significance across domains and use cases, but most major decisions regarding AI today make grievous mistakes by ignoring one or more of them.
Cognitive bias strongly encourages these mistakes, as considering so many significant factors at one time is cognitively intensive work, epitomized by how the 12 factors listed above exceed the "Magic Number 7±2" limit on short-term working memory (Miller). One thing that LLMs, frauds, and marketers have in common is the explicit intention of exploiting cognitive biases in their targets, making these problems that much more acute, fine-grained, and intentional in the AI domain today.
However, if you do manage to consider these factors jointly, it can immediately become obvious just how horrible an idea is. For example, consider the use case of household, factory, and delivery robots running on LLMs in light of factors 1-5.
Of course, said use case is a current trend; all of the most incompetent people are jumping off of that bridge today. Most such trends are the same kind of bridge-jumping behavior, bringing to mind a quote from Warren Buffett:
"The most important quality for an investor is temperament, not intellect. You need a temperament that neither derives great pleasure from being with the crowd or against the crowd."