072 - Burning Strong

The dumpster fire of Generative AI is still burning strong. With multi-modal LLMs come multi-modal "prompt injections", both direct and indirect. It is already trivially easy to do this with images, and indirect prompt injection via watermarks offer immense potential. Given the differences in data structure relative to text, as well as the greatly increased attack surface, this problem will only get worse.

Both OpenAI and Google are currently busy deploying integrated LLMs that break under the lightest touch and can be used for easy data exfiltration by third parties. Both Google and Microsoft have been dumb enough to integrate these systems such that extensions may be called upon by compromised models, and data may be easily exfiltrated. In Microsoft's case, they're basically handing all of the US government's data over to China on a silver platter.

Botnets from several state-sponsored bad actors are already actively "mining" the larger LLMs for "intelligence", and combining the results to reverse-engineer their own distilled systems. They've also proven sufficiently adept at obtaining personal information the LLMs were trained on, as well as other supposedly "secure" information such as software license keys.

The particular brand of snake oil many of these companies are pushing is a slow-acting and entirely lethal poison. Compromise all of your systems, expose your data, and you'll find yourself under the wave, not riding it. Inertia may still push you along, as you drown.

Listen to cybersecurity researchers, or deploy your own dumpster fire, the choice is yours.

P.S.: Don't stand in the smoke that Disinformation Brokers are blowing. Those are distilled dumpster fire fumes, likely to cause you to hallucinate even more than the models.