028 - Accountability Issues

Another new and revealing research paper recently came out of Google Deepmind, where GPT-4 was used to adversarially break other AI cybersecurity systems that were published at a recent top conference, IEEE S&P 2023.

"We run our attack on the official models released by the authors of AI-Guardian. Our attack achieves 92% targeted attack success rate with an ℓ∞ distortion of epsilon of 0.25, and successfully reduces the accuracy of the model to 0%."

So, for a quick recap, not only can you reliably "jailbreak" any LLM, including GPT-4, to do whatever you want, but if you want to use them to break cybersecurity systems you can do that too.

In related cybersecurity news, the CEO of Inflection, Mustafa Suleyman, now an AI Startup with $1.5bn USD raised, successfully shoved his foot in his mouth. After posting a spectacularly obvious lie on X/Twitter, he was promptly drowned in responses from people breaking their cookie-cutter chatbot named Pi on the first attempt, with far greater ease than models like GPT-4 where the automated adversarial attacks were achieving 50% or greater attack success rates.

What is it about some well-funded Tech Industry C-suite people that drives them to make obviously and demonstrably false claims at every opportunity?

Of course, Mustafa is far from the worst offender in this. Others have eagerly committed the crime of perjury, that is giving false testimony while under oath, to the US, and likely to other governments.

Perhaps a more useful approach than a "Pause on AI" would be sending the people committing perjury to prison. The simple act of enforcing US law could dramatically reshape the AI Industry. Until this occurs, there is no real "Accountability" in AI, as those who fail to enforce existing laws cannot effectively write new laws either.

Is your country enforcing its own laws on the tech industry, or are they running a "Kangaroo Court"?