209 - False Expertise

It is time for a "Greatest Hits" of truly and obviously bad ideas that gained popularity in the AI domain over the past year.

LLMs grading their own output (or that of other LLMs) in peer review:

  • The average grandmother could see why this is a bad idea within five seconds, no domain expertise required. The concept is so far removed from anything remotely scientific that no one engaged in it could accurately be described as a scientist.

LLMs being used as "security" to guard other LLMs:

  • Like guarding a wet paper bag with another wet paper bag. The guard carries the same fundamental flaws as the thing it guards, just with a different coat of paint.

LLMs being used as therapists for people with mental disorders:

  • What could possibly go wrong when you pair mental disorders with systems of mimicry? Fortunately, one such company was dismembered by Microsoft in 2024.

LLMs controlling robots:

  • Sure to be loads of fun for bad actors: remote control of physical hardware with zero viable security. Jailbreak your way to "Deliver Anything Now" (DeAN) at no cost, or push someone into oncoming traffic.

LLMs fine-tuned to parrot dead relatives and former romantic partners:

  • Who needs the stages of grief when you can just lock someone in a permanent state of denial while charging them API and hosting fees to keep the parrot running?

LLM "influencers":

  • "Distillation of Grifter" in a fancy glass bottle on the top shelf of human stupidity.

LLMs as search engines:

  • Even training on "internet-scale data" and having the actual internet available for retrieval-augmented generation (RAG) can't overcome the fundamental stupidity of attempting this. Go eat your glue pizza.

What were some of your favorite horrible AI trends of the past year?