322 - Link Vulnerability
Summer is in the air, as smoking cybersecurity wreckage heats things up across the internet. Cybercrime, the world's third-largest market, is whipping LLMs and their equally weak derivatives ("agents") into Phishing Scam extensions, defrauding anyone who clicks the links they provide.
What predictably happens when you combine a $10.29 trillion annual criminal industry with vulnerable-by-design technology that cultivates a pseudo-religious mystique? The exponential proliferation and iterative optimization of scams, at scale and speed.
Consistent with recent research, 34% of the links LLMs suggested for brands went to unaffiliated pages, and ~30% were bullshit links that don't exist at all, pointing to domains that are unregistered, parked, or otherwise inactive. All of these make for potential targets, but more than that, they point to the opportunity to steer LLMs toward already malicious assets.
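The bare-minimum sanity check anyone can run on such a link is whether the domain even exists. Here's a minimal sketch, assuming Python and made-up example URLs; note that a domain resolving in DNS says nothing about who controls it, so this only screens out the "unregistered or parked" category that scammers later snap up.

```python
# Minimal sketch: does an LLM-suggested URL point at a domain that exists at all?
# A resolving domain is NOT proof of legitimacy -- it only filters the
# "unregistered, parked, or otherwise inactive" bucket described above.
import socket
from urllib.parse import urlparse

def domain_resolves(url: str) -> bool:
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        socket.getaddrinfo(host, 443)  # DNS lookup; raises gaierror if nothing answers
        return True
    except socket.gaierror:
        return False

# Hypothetical LLM output -- neither URL should be trusted just because it resolves.
for suggested in ("https://login.example-bank.com/", "https://example-bank-login.net/"):
    verdict = "resolves (still not proof of ownership)" if domain_resolves(suggested) \
        else "does not resolve -- exactly the kind of domain a scammer can register"
    print(suggested, "->", verdict)
```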
This particular tactic also isn't new: we saw the same exploitation of AI "Bullshit" (commonly anthropomorphized as "Hallucinations") when bad actors began creating malware repositories under the most commonly hallucinated package dependency names. Those would install malware on any system whose owner was foolish enough to run poorly proofed AI-generated code.
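The defensive floor there is equally unglamorous: check that an assistant-suggested dependency exists at all before installing anything. Below is a minimal sketch, assuming a Python/PyPI workflow and made-up package names; an existing name can still be a malicious squat, so this filters out pure hallucinations, nothing more.

```python
# Minimal sketch: vet AI-suggested Python dependencies against PyPI before `pip install`.
# A 404 from PyPI's JSON API means the package doesn't exist (a classic hallucination);
# a 200 means only that the name is registered -- possibly by an attacker.
import urllib.request
import urllib.error

def package_exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # rate limits, outages, etc. deserve a human look, not a guess

# Hypothetical requirements list produced by a coding assistant.
suggested = ["requests", "numpy", "some-hallucinated-package"]
for pkg in suggested:
    status = "exists on PyPI (verify it's the real project)" if package_exists_on_pypi(pkg) \
        else "NOT ON PYPI -- do not install, do not 'fix' by guessing a similar name"
    print(f"{pkg}: {status}")
```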
Unimaginative as it may be, this new application, turning LLMs into extensions of Phishing Scams via SEO+LLM optimization, is sure to prove effective on such scammers' bread and butter: the gullible masses who've been suckered into using LLMs as though they were search engines are also much easier marks for scams.
As cybersecurity experts have loudly and consistently warned since the start of 2023, if not earlier, LLMs and their derivatives are very literally impossible to secure in any automated system short of the actual "AGI" that they fundamentally can't deliver, and never had any chance of reaching.
TLDR: Don't click on any links a chatbot gives you, embedded in a search engine or otherwise. See the linked article's example of the fake Wells Fargo credential-stealing page that Perplexity recommended.
Also, if you're asking a chatbot for links, maybe you should rethink your life choices. Something obviously went terribly wrong.
The article also touches on several related cybercrimes well worth reading about, like one threat actor poisoning AI coding assistants so that the code they generate routes transactions to wallets the attacker controls.
As they note:
"One response to these hallucinated domains might be to register them all in advance. But that's not practical. The variations are infinite—and LLMs will always invent new ones. Worse, AI-based interactions mean users are less likely to scrutinize URLs, making even wildly off-brand domains plausible."
What is required is to "deploy technology that doesn't hallucinate in the first place." (Netcraft doesn't satisfy this, despite their claims, as no LLM-based approach can.)