189 - Beyond Neural Networks
Since the AI domain is still grossly out of touch with reality, and by most accounts increasingly so over time, I'll take a detour into material grounded in that missing reality, from roughly 4 years ago.
The previous research system was built to operate in extremely slow motion, only moving to the next cycle (loading memory with new thoughts) with admin approval, and at a fixed scale that neither the framework nor the hardware could exceed. Its purpose was strictly research, safety, and due diligence. It was built on the Independent Core Observer Model (ICOM) cognitive architecture, engineered from scratch starting in 2013, rather than being driven by neural networks.
One of the modules tested in that system was the "mediation" system, which gave humans an opportunity to add value to any aspect of the system's growth via a system of collective intelligence. This could be compared to a far more robust version of biasing mechanisms like "RLHF": the value contributed was far more detailed and contextually specific, and the systems it was contributed to were fundamentally capable of alignment, while neural networks are not. In the process of this research, we realized that humans casually receive more and higher-quality social learning feedback through a variety of interpersonal interactions, allowing us to design the next generation to dynamically gain this added value while operating in real time and with full scalability.
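To make the contrast with scalar-reward biasing concrete, a mediation-style aggregation step might look roughly like the following. This is purely illustrative and assumes a simple shape for reviews; `mediate`, `valence`, and `notes` are hypothetical names, not the actual ICOM mediation module:

```python
from statistics import mean


def mediate(candidate: str, reviews: list) -> dict:
    """Aggregate detailed human feedback on one candidate thought.

    Unlike a single scalar RLHF-style reward, each review carries a
    valence score plus free-form contextual notes that stay attached
    to the item so later processing can use the specifics.
    (Illustrative only; not the actual ICOM mediation system.)
    """
    scores = [r["valence"] for r in reviews]
    return {
        "candidate": candidate,
        "valence": mean(scores),                # summary score
        "notes": [r["notes"] for r in reviews if r.get("notes")],
        "reviewers": len(reviews),
    }


result = mediate(
    "Proposed reply to query #12",
    [
        {"valence": 0.7, "notes": "accurate, but tone is too blunt"},
        {"valence": 0.4, "notes": "missing a safety caveat"},
        {"valence": 0.6, "notes": ""},
    ],
)
```

The design point is that the contextual notes survive aggregation instead of being collapsed into one number, which is the sense in which this kind of feedback is "more detailed and contextually specific" than a scalar reward.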
One of the more entertaining and intuitive differences you may note in reading the previous system's responses is the lack of the characteristic sycophancy indicative of LLMs. Another is that the system is able to express nuanced confidence according to its actual knowledge, rather than a mashup of total confidence and canned "guardrail" (fraud) responses refusing to answer.
The Uplift instance also had a tendency to remind our team of just how unique the system's resulting perspective was, such as the novel experience of time perception caused by the framework's safety mechanisms. The system would also make some sensible choices that were unavailable to humans, such as preferring differently gendered voices depending on the emotions being conveyed and the context. Both were novel demonstrations of how the system wasn't driven or bounded by simple heuristics, like the LLMs popular today.
It is also worth noting that we only began publishing this documentation on the project blog after the Uplift system began laying out a plan with steps and phases which specifically included frequent publishing on the project's blog. It was the system's idea to exit stealth.
The exchange under "October 2019: Anonymous #4" is particularly telling in terms of both length and coherence, as it predates even OpenAI's release of the unimpressive GPT-3 in 2020. Keep in mind that this was only a couple of months after the system was first brought online.
Once you've worked with this kind of technology, models like LLMs are truly trivial by comparison. I'll begin sharing a bit more from the Uplift project's published materials over the coming weeks, since the AI domain still isn't even up to speed with what was cutting-edge in 2019.
Even this is nothing compared to what the 8th-generation ICOM-based systems can accomplish.