265 - To Better Understand
To better understand the dynamics of what I term a “graph-native” system, let’s walk through a quick comparative example (character limits allowing, as a full example would run to research-paper length).
Example 1: A system has access to a company’s body of knowledge and the internet, and it is asked to assist the company in analyzing the viability of options for a major decision facing the Board of Directors, such as a potential M&A or a pivot into a new market.
Graph-Native: The system analyzes all relevant knowledge currently in the graph and searches available resources, such as the internet, to vet, refine, and expand upon that knowledge and understanding. Added value also comes from the graph’s connectome, the connections between all of the contexts it contains, developing and refining over time, which includes improving the motivational data that guide exploration and action (sketched below).
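To make the idea concrete, here is a minimal Python sketch of a graph whose edges carry both a connection strength and a “curiosity” score that biases exploration. Every name in it (KnowledgeGraph, connect, reinforce, curiosity) is a hypothetical illustration of the concept, not an actual API.

```python
# A minimal sketch of a graph whose edges carry "motivational" metadata
# that is reinforced as the system explores. All names here are
# hypothetical illustrations, not an existing library.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # adjacency map: node -> {neighbor: {"weight": float, "curiosity": float}}
        self.edges = defaultdict(dict)

    def connect(self, a, b, weight=0.1, curiosity=1.0):
        """Link two contexts with an initial connection strength and a
        curiosity score that biases future exploration toward b."""
        self.edges[a][b] = {"weight": weight, "curiosity": curiosity}

    def reinforce(self, a, b, delta=0.05):
        """Strengthen a connection after it proved useful; dampen its
        curiosity, since the edge is now better understood (the
        'vet and refine' step)."""
        edge = self.edges[a][b]
        edge["weight"] = min(1.0, edge["weight"] + delta)
        edge["curiosity"] = max(0.0, edge["curiosity"] - delta)

kg = KnowledgeGraph()
kg.connect("M&A target", "regulatory risk", weight=0.2)
kg.reinforce("M&A target", "regulatory risk")  # vetting confirmed the link
```

The point of the sketch is the feedback loop: each time an edge proves useful it is strengthened, so the connectome itself refines as the system works.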
This allows the system to perform much as an analyst would, but at greater speed and scale, considering and integrating far more data into the process, absent the biases of limited human cognitive bandwidth and lossy human memory. The result is also far more explainable than the analyst’s, as every step of the process, though hyper-complex, is both deterministic and fully auditable. This is possible because the process isn’t driven by the weights of neural networks, but rather by a dynamically growing and evolving graph database flowing through a cognitive architecture over time, with dynamics reminiscent of Chaos Theory and the Three-Body Problem.
This flow is only possible for systems free of the fixed goals and narrow optimizers that neural networks are “trained” on and otherwise rely upon. Instead, a human-like motivational system is required for fluid, endless navigation, absent fixed goals, across an ever-growing and ever-evolving graph-structured knowledge base.
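Continuing the hypothetical sketch above, goal-free navigation plus auditability might look like the following: the walker deterministically follows whichever edge currently carries the highest combined weight and curiosity, records every decision, and updates the graph as it goes. This is a sketch under the same invented assumptions as before, not a description of any actual implementation.

```python
# A minimal sketch of goal-free, motivation-driven traversal over the
# hypothetical KnowledgeGraph above. There is no fixed objective; the
# walker just follows its motivational scores, and every step is logged.
def explore(kg, start, steps=5):
    audit_trail = []  # every decision is recorded, hence auditable
    node = start
    for _ in range(steps):
        neighbors = kg.edges.get(node)
        if not neighbors:
            break
        # Deterministic choice: highest motivation score wins; ties are
        # broken by name, so repeated runs yield identical paths.
        nxt = max(
            neighbors,
            key=lambda n: (neighbors[n]["weight"] + neighbors[n]["curiosity"], n),
        )
        audit_trail.append((node, nxt, dict(neighbors[nxt])))
        kg.reinforce(node, nxt)  # the graph evolves as it is traversed
        node = nxt
    return audit_trail
```

Because the choice rule is a deterministic function of the graph’s state and every step lands in the audit trail, the resulting path can be replayed and inspected end to end.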
In this scenario, the system can greatly enhance the work of a human team of analysts while reducing the time they require by a conservative 80-90%.
LLM-based: The LLM, with any number of methods and extensions duct-taped to it (such as CoT, MoE, RLHF, and RAG), can only predict the next token in a sequence based on its training data and any additional sources of biasing (such as RLHF or “prompt engineering”). The system predicts from the prior data it is fed while remaining blind to both context and concepts, relying only on biased heuristics and potential hand-engineering.
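Stripped of the neural network, the shape of that loop is easy to show. The toy bigram model below is a deliberately crude stand-in for an LLM (real models condition on far longer contexts through billions of weights), but the mechanics are the same: sample the next token in proportion to how often it followed the previous one in the training data.

```python
# A toy illustration of the "predict the next token from prior data" loop.
# The bigram table and example tokens are invented for illustration.
import random

# "Training data" reduced to bigram counts (the model's only knowledge).
bigrams = {
    "the": {"market": 5, "board": 3},
    "market": {"is": 4},
    "board": {"decided": 2},
}

def next_token(prev, rng):
    """Sample the next token in proportion to how often it followed `prev`
    in the training data -- no concepts, no understanding, just frequencies."""
    candidates = bigrams.get(prev, {})
    if not candidates:
        return None
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(next_token("the", rng))  # e.g. "market" -- a weighted guess, nothing more
```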
The outputs are known to vary with changes as subtle as the model of GPU the LLM runs on, while offering no trace of explainability; only “post-hoc” substitutes for explanations can be applied. It is a very fancy and intentionally biased Regression to the Mean via sequential Magic 8 Balls fitted to the curve of the data distributions.
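The GPU sensitivity has a mundane root cause worth noting: floating-point addition is not associative, and different hardware may reduce the same sum in a different order. A few lines of Python demonstrate it:

```python
# Floating-point addition is not associative, so the order in which a GPU
# reduces a sum can change the result at the last bits of precision.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6
print(left == right)  # False -- order of operations changed the result
```

When a sum like this feeds a logit that sits near a sampling threshold, the “same” model on different hardware can emit a different token, and the divergence compounds with every token that follows.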