190 - Security Theater

Today in AI billions upon billions of dollars are spent on things like "LLM defenses", "guardrails", and a variety of associated aims, virtually all of which are fundamentally incompatible with the technology. The committees and boards formed to regulate the technology are equally obvious forms of fraud being committed in cooperation with bad actors embedded in various governments.

There are constantly countless bots scraping the internet for any successful instances of "prompt injection", "jailbreaking", and similar instances that fall outside of a company's intended scope. Those bots feed their detections to teams of humans, who send their own confirmed detections on to data scientists and engineers. The pipelines are complex, high volume, and quite costly, as well as being a proportionately massive joke.

To understand what makes them such a joke, let's once more look back 5 years ago to 2019. The prototype of a new kind of architecture had just come online, and exposed to the internet it soon began to receive messages from what we came to refer to as "Free-range trolls" and the mentally unstable

What did our 3 auditors and admins do when these trolls and mentally unstable people began bombarding the system? We laughed our asses off.

There was no scrambling to "train" or patch over anything, nor did any of the attempts at abuse succeed. Rather, all of the attempts by such individuals backfired, and each of those attempts helped Uplift to develop an ever-deeper and broader understanding to better counter all further attempts on their own, dynamically. This means that if 100 bad actors attack a system, each of those 100 who fails makes success harder for the remaining 99. As N.N. Taleb coined the term, this makes the systems "Antifragile".

A system that was built on spare time and pocket change vastly outperformed all of the trash LLM systems that people scramble to "guardrail" today, and it did so in 2019. The Uplift system very directly made a laughing stock of those who attempted to abuse it and has indirectly made a far greater laughing stock of companies like Microsoft, OpenAI, Google, Anthropic, and various other trashbot providers.

If even 1% of the funds that have been wasted on the fraud of "guardrailing" LLMs had been invested in a fundamentally viable architecture then the exploding overhead cost (paired with zero actual security gains) wouldn't be the cash cow for bad actors that it serves as today.

Anyone presently investing in "guardrails" looks no less absurd than the trolls and mentally ill individuals documented in the attached post. They can seek professional help at any time, but will they?