098 - What We Do In The Shadows

Behind the events most people see, cybersecurity researchers are quietly defending the world against the massive exploits that companies like Google, Microsoft, OpenAI, and Anthropic are introducing into our digital ecosystems. These individuals are often either unpaid for these efforts or poorly paid relative to their corporate counterparts, which is ironic given that those counterparts are creating the problems and little else.

Integrating an LLM into your systems is like adding a screen door to your submarine at every single point the LLM touches. "Guardrails" are trivial to bypass, and will continue to be, because there isn't even a theoretical basis upon which they could be expected to solve these problems. The only known way around this is to place every LLM screen door inside the airlock of a working cognitive architecture that tightly bounds it from both sides. Exactly one company has demonstrated this capacity.
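The post doesn't prescribe an implementation, but the shape of the "bounded from both sides" argument is concrete enough to sketch. One minimal reading: the LLM call sits between a deterministic inbound check and a deterministic outbound check, and nothing the model emits is trusted until it passes the second door. Everything in this sketch is a hypothetical placeholder (`call_llm`, the allowed actions, the JSON output schema), not anything from the original post.

```python
import json
import re

# Hypothetical stand-in for whatever model client your system uses.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")

# Inbound door: only a narrow, pre-validated request shape may reach the model.
ALLOWED_ACTIONS = {"summarize", "classify"}  # assumed example actions

def inbound_check(action: str, text: str) -> str:
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} not permitted")
    # Reject oversized input and raw control characters before they reach the model.
    if len(text) > 4000 or re.search(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", text):
        raise ValueError("input rejected")
    return f"Perform action '{action}' on the following text:\n{text}"

# Outbound door: model output is untrusted data, parsed against a strict
# schema; anything that doesn't fit is dropped, never executed or forwarded.
def outbound_check(raw: str) -> dict:
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("output rejected: not valid JSON")
    if set(data) != {"result"} or not isinstance(data["result"], str):
        raise ValueError("output rejected: unexpected shape")
    return data

def bounded_llm(action: str, text: str) -> dict:
    prompt = inbound_check(action, text)   # door 1: constrain what goes in
    raw = call_llm(prompt)                 # the screen door itself
    return outbound_check(raw)             # door 2: constrain what comes out
```

The point of the pattern is that the model never decides what gets executed or forwarded; the deterministic validators on either side do, and a failed check fails closed.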

If people want security, it is an option on the table, but for the moment VCs, investors, and tech companies seem quite content to pump Ponzi schemes, inflate the problem, and hope it all blows up in someone else's face. Governments seem to be doing no better, as The Economist memorably put it in its assessment of the AI Summit.

To put it another way, all of those submarines eagerly installing screen doors are traveling up #$%^ creek.