054 - True Potential
The most remarkable thing about "Generative AI" is that it is neither remarkable nor does it offer anything new.
Back in 2021 when OpenAI was still struggling to parrot the mathematical understanding of 9-year-old children in the US, our previous research system was already dealing with the mathematics and complexity of real-world business data, breezing through algebra and Excel sheets.
By January 2022 this divide had grown even further, when the research system reached its final milestone. For this milestone, the system was given advance notice that it would receive a question on policy advice from government officials in Aruba, a small island nation with reasonable complexity for such testing and an interest in the Sustainable Development Goals (SDGs). Those officials distilled their interest into the question "What steps would you take if you were governing Aruba and looking to diversify the economy, establish and integrate a trade hub, etc.?"
The system independently researched the country, region, and relevant domains. It replied to the question with a 13-page policy advice report covering half a dozen different domains, listing steps, explaining the strategy, citing sources, recommending partnerships, pointing out trade hub competitors, and advising the gathering of specific additional data.
Companies like OpenAI, Anthropic, Microsoft, and Google aren't actually advancing or leading the field of AI; they are holding it back. As media and investor attention remains focused on trashbot technology and the junk derivatives thereof, progress in the field has slowed, while the most harmful and vulnerable technology has accelerated through hyper-focus. Most of this acceleration has been thanks to open-sourcing, rather than proprietary closed-source models.
In theory, this situation could change if a single competent and/or ethical investor were located, but finding such a person has proven just as difficult as developing the industry-leading technology, if not more so. Every logical, monetary, and ethical incentive is there, and yet people waste ten times the funds on trashbot technology that funding viable technology would require.
There is no greater logical advantage than having a seat at the table with the company developing this technology. There is realistically no greater monetary reward than holding equity in the company, since only luxury goods can compete for margins. There is no greater ethical incentive than seeing the technology deployed responsibly, and sooner. Common sense is sufficient to recognize the first, and the attached documents speak to the latter two.
Attached is a single document containing the Aruba Report, the previous "PPP" Business Case using real-world data, and a secondary piece of policy advice that the system gave of its own volition to another US-based party that was aware of the project.
...
If you know any competent and/or ethical investors, please share our work with them or send them our way. The only way humanity is getting out of this mess is if one may be located sooner, rather than later.
None of the models that major tech companies or highly funded AI startups are training today are built on viable technology, GPT-5 included, but that doesn't mean that a steady stream of new shiny objects couldn't be used to distract investors until such a time as humanity goes extinct.
In February 2022, based on the consistent trajectory of AI technology and markets, I estimated that if our work wasn't funded by the end of last year, humanity would enter a "grey zone" where survival was uncertain. I had no specific knowledge of OpenAI's plans for 2022 at the time.
Factoring in the events of the past year, I expect humanity's odds of survival to drop below 50% by the end of this year if the viable technology stack needed to address major issues today still isn't funded to at least the $25m mark by then. We'll continue our work regardless, but the door is closing on humanity's future.