316 - Engineering Reality

Mount Rushmore is a good example of a project that required a great deal of skilled labor to complete. While it might technically have been possible for one person with a simple chisel, investing only their spare time, to eventually finish such a task, that isn't a practical option. In practice, Mount Rushmore required roughly 400 workers over 14 years, plus years of prior planning.

The same is true of any technology that actually has a non-trivial moat, a concept that remains alien to most of the AI domain. Nobody builds infrastructure for systems that they haven't seen before, and if you're building technology that isn't wildly derivative then they've never seen it before, so you'll almost certainly have to build large chunks of that infrastructure yourself.

You can usually demonstrate something with off-the-shelf components and infrastructure, and perhaps do some novel research (as we did with the Uplift.bio project), but anything commercially deployable carries the hard requirement of eventually repaying that engineering debt. This is the kind of hard work that most in the AI domain have systematically avoided, leading to equally systematic and predictable failures to achieve any meaningful results, as well as to bad actors attempting to dance around the work they've skipped.

The classical definition of insanity is "to continue doing the same thing while expecting different results," now epitomized by the obvious fallacies built on "Scaling Hypotheses," where Underpants Gnome-style magical thinking ignores all current and prior failures, hard evidence, and known limitations in favor of imagining that they'll simply resolve themselves if only the slot machine's lever is pulled for one more spin.

Whether you call that insanity, stupidity, or addiction is partly a matter of semantics, but the AI equivalent of "Crypto Bros" (many of whom are or were actual crypto bros) spreads that disinformation like clockwork. If a technology is a failure after being commercially deployed at scale, as ChatGPT has been, you can safely expect it to be a bigger failure at 10x, 100x, or 100,000x, as OpenAI has repeatedly demonstrated. A child's lemonade stand could generate more profit in a day than OpenAI has managed in 10 years.

New technologies, with actual moats, are what tend to require funding to reach the minimum viable thresholds for demonstration and evaluation. If that funding goal-post is raised above its initial value more than once, you're probably dealing with snake-oil peddlers; if anything, the required sum should generally go down as methods improve. This is also a matter of simple best practices.