019 - ICOM Integration
LLMs can be massively improved in both performance and efficiency when combined with ICOM-based systems. This much has already been demonstrated.
The Independent Core Observer Model (ICOM) cognitive architecture uses a different, but complementary, technology stack from that of narrow AI systems like GPTs, image generators, and agent-based systems. Rather than competing with these systems, the ICOM stack uses them as tools, improving their performance.
For example, GPT-4, according to multiple sources, is a Mixture-of-Experts (MoE) style transformer model, with 8-16 "experts", and over 1.75 trillion parameters in total. Our previous ICOM-based research system used an old prototype LM from early 2019, a time when LMs didn't exceed 10 billion parameters. That research system outperformed GPT-4:
- years earlier,
- using an LM over 100 times smaller,
- on less than 1/10,000^th^ the budget of what it cost to train GPT-4, and
- using less than 1/10,000^th^ of the volume of training data.
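To make those ratios concrete, here is a minimal back-of-the-envelope sketch in Python. The GPT-4 parameter count and the 2019-era LM ceiling come from the figures above; the ~$100M GPT-4 training-cost figure is a widely circulated external estimate, used here only for illustration.

```python
# Back-of-the-envelope check of the scale claims above.
# All figures are the estimates cited in the text or widely
# circulated public estimates -- none are measured values.

GPT4_PARAMS = 1.75e12        # ~1.75 trillion parameters (reported estimate)
PROTOTYPE_LM_PARAMS = 10e9   # 2019-era LMs topped out near 10 billion

GPT4_TRAINING_COST = 100e6   # widely circulated ~$100M estimate (assumption)

print(f"Parameter ratio: ~{GPT4_PARAMS / PROTOTYPE_LM_PARAMS:.0f}x smaller")
# -> ~175x, consistent with "over 100 times smaller"

print(f"Implied budget: under ${GPT4_TRAINING_COST / 10_000:,.0f}")
# -> under $10,000, given the "less than 1/10,000th the budget" claim
```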
Now imagine what such a system could do using GPT-4 instead.
ICOM-based systems have also demonstrated a greater aptitude for "prompt engineering" these tools than humans have, and can do so in ways that humans cannot. These systems are also able to tightly bound AI tools on both sides, input and output. This strong bounding (illustrated in the sketch after the list below) makes it possible for LLMs to be used in such a way that:
- Confabulation (sometimes called "Hallucination") may be avoided.
- Safety, Ethics, and Alignment may be achieved, maintained, and iteratively improved.*
- All data may be both explainable and transparent.
- Direct and indirect "prompt injections" are neutralized.
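As a rough illustration of what bounding an LLM "from both sides" could look like, here is a minimal sketch. Everything in it is hypothetical: the function names (`check_input`, `check_output`, `bounded_query`), the filtering rules, and the `query_llm` stub stand in for whatever the ICOM system actually does, which is not public.

```python
import re

# Hypothetical two-sided bounding of an LLM tool call.
# Names, rules, and the query_llm stub are illustrative assumptions,
# not ICOM's actual (non-public) mechanisms.

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",  # common role-override phrasing
]

def check_input(prompt: str) -> str:
    """Input-side bound: reject prompts carrying likely injection attempts."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("Prompt rejected: possible injection attempt")
    return prompt

def check_output(response: str, known_facts: set[str]) -> str:
    """Output-side bound: keep only statements traceable to known facts,
    so every surviving claim is explainable against a source."""
    vetted = [s for s in response.split(". ") if s.strip() in known_facts]
    if not vetted:
        raise ValueError("Response rejected: no verifiable content")
    return ". ".join(vetted)

def query_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; the second sentence below
    is a deliberate confabulation for the filter to catch."""
    return "Water boils at 100 C at sea level. The moon is made of cheese"

def bounded_query(prompt: str, known_facts: set[str]) -> str:
    """Full round trip: bound the input, call the tool, bound the output."""
    safe_prompt = check_input(prompt)
    raw = query_llm(safe_prompt)
    return check_output(raw, known_facts)

facts = {"Water boils at 100 C at sea level"}
print(bounded_query("At what temperature does water boil?", facts))
# -> "Water boils at 100 C at sea level" (the confabulated claim is dropped)
```

In this toy version the output bound is a simple lookup against a fact set; the point is only the shape of the pipeline, with independent checks wrapping the model on both the input and output sides.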
The same basic advantages apply to improving many different kinds of narrow AI. We look forward to testing what the new Norn.ai systems can do to improve image generators and other newer tools.
There is no greater potential competitive advantage in the AI market today. Given all of the threats to most narrow AI companies, including pending lawsuits, regulations, and competitors, this likely isn't an advantage any serious company can afford to lose.
Any serious company seeking that advantage could accelerate the process via investment, gaining a permanent advantage over its existing competitors. Likewise, private investors, VCs, or even governments could invest in, accelerate, and directly benefit from this process.
The question any company must ask itself is whether it will choose to become the next Apple, leading the market, or follow the path of Nokia.
*Note: This means that performance improves over time, rather than degrading as GPT-4's has (see the Stanford Study on ChatGPT Accuracy).
Dramatic losses in performance over time also pose a massive liability to any business that integrates such systems. Even inconsistent performance is problematic. Again, this makes the ICOM-based technology stack a vital advantage.
The massive difference in data requirements and cost also makes it feasible to apply this technology to many specific problems and domains that current systems couldn't otherwise approach. Many domains offer far less data than conventional AI requires to perform well, and LLMs remain far too expensive for practical use across many use cases.