253 - Research in AI
Research in AI/ML is pretty much dead right now. I still attend daily AI/ML paper discussions to keep an eye out for exceptions to this rule, but it has become abundantly clear that there is nothing of substance in the popular papers and "technical reports" circulating in the domain.
People frequently commit inexcusable abuses of terminology, like "Multi-Hop Reasoning", which in practice means nothing remotely like reasoning. Strip away the abused terms, the disinformation, and the rest of the fluff, and there is nothing underneath, just smoke and mirrors.
A lot of naïve students, "publish or perish" professors, and malevolent and/or delusional corporate bad actors all play "Weekend at Bernie's" with AI for their own distinct reasons, but the man behind the curtain is dead and the curtain can't contain the smell. Worse yet, "investors" continue to throw billions of dollars at these corpse puppets.
By mid-2023 the volume of credible AI research had slowed significantly from the previous year, and since mid-2024 credible research in the field has been conspicuously absent. Even cybersecurity research related to AI has gone silent, which seems to coincide with tech giants hiring many of the researchers who published the earlier papers. With viable cybersecurity research now evidently being suppressed, practically nothing is left.
The closest things to actual research that come up with any regularity anymore are retrieval-augmented generation (RAG), chain-of-thought prompting (CoT), mixture-of-experts (MoE), and other naïve and desperate attempts to confer fundamentally incompatible capacities on things like LLMs, complete with all of the abused terms that are used to mean the opposite of what they actually mean. Can you "improve" an LLM with these? Sure, in narrow terms. Can you give them "reasoning", "understanding", "alignment", or working "guardrails"? No, never, full stop.
In many cases, the absence of credible expertise is highlighted by how often a so-called "AI Expert" is surprised by things demonstrated under the umbrella of "Generative AI". An actual expert who kept up with the technology should have encountered virtually zero surprising demonstrations under that umbrella over the past two years, so every display of surprise at a technical demonstration is a mark against their credibility.
This litmus test is particularly useful, as many Disinformation Brokers intentionally feign surprise in order to drive engagement and elevate their status as "influencers", failing to recognize that this tactic clearly exposes them as bad actors.
Research in AI may be effectively dead today, but once the corpses are buried and are no longer targets for investment, credible research may once again attract funding, and technology like what has silently sat at the cutting edge for the past half decade may finally be funded and commercially deployed.