325 - Comorbidities
As an investor in several other tech and AI companies pointed out to me recently, “Everything seems to be breaking.” He pointed to examples impacting him personally, ranging across the board from finance and travel, to simple meeting-scheduling software, to WiFi networks. Everywhere, all at once, things are breaking at an accelerating rate.
There isn’t a single reason for this; rather, it is the predictable convergence of many comorbidities compounding upon one another. For example:
- Cybercrime is booming, intentionally breaking systems in increasingly sophisticated ways, many of them automatically.
- Complexity in general is increasing, at the system/organization architecture, operation, and integration levels. Humans can’t handle that added complexity, and as poorly as humans handle it, hyped AI technologies are orders of magnitude worse at handling both complexity and novelty.
- “Vibe Coders” and similarly non-viable uses of AI tools and “agents” are creating engineering debt orders of magnitude more quickly than was previously possible.
- Human decision-makers are attempting “cognitive offloading” of critical design and process decisions to AI systems that possess precisely zero actual understanding, reasoning, or intelligence.
- Usage of AI is progressively growing more “religious” and less technical, with proportionate decreases in higher cognition predictably leading to no shortage of user error, along with developer, UX, and executive fantasies. Because this form of debt is marked by the distinct absence of damage mitigation, its consequences more often go unnoticed until they explode.
Another, more complex comorbidity that likely hasn’t ripened just yet comes from the intersection of “GenAI” and the “Peter Principle”: “In a hierarchy, every employee tends to rise to (their) level of incompetence.” Combine that historic dynamic with GenAI, and an employee suddenly has more potential for fakery: more potential to rise beyond their own level of incompetence, into roles where that incompetence is far more extreme. This may begin hitting hard within the next 24 months, on top of all the other problems noted above.
In some domains, extreme incompetence has already risen to dominate the top; call these people the “Pioneers of Incompetence,” Marc Andreessen among them. Many of them shine a spotlight on themselves by driving the more religious AI hype of today through cults like “e/acc,” where they worship trash technology while ignoring the cutting edge. That said, most domains either haven’t jumped off that cliff yet, or they have but have yet to hit the bottom.
Things are breaking rapidly, and it will get worse before it gets better. The damage isn’t irreversible yet, but it is predictable, as is what will happen if it continues. If you have the power to change it, you are one of the handful of humans on the planet for whom that is true. If you don’t choose a different path, who will?