034 - Beyond Neural Networks

The closest known and feasible means of replicating the human brain's connectome and data structure capacities is not the anthropomorphically named "neural network". Rather, it is the graph database.

For example, a pyramidal neuron in the human brain carries an average of around 6,000 dendritic spines (a range of roughly 1,200 to 20,000) receiving excitatory impulses. Over time, these connections change dynamically: some are pruned, new ones form, and others are modulated up or down.

No hardware yet created, or visible on humanity's near-term development horizon, can mirror these dynamics to any meaningful degree. So, how can this be accomplished in software?

A graph database can be designed to accomplish this: any node can be connected to any other node, and every connection ("surface") can carry contextual information, including motivational system data, of-type relationships, and so on. Both nodes and connections can be dynamically and selectively updated, and each node can contain or reference virtually any kind of data.
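To make that structure concrete, here is a minimal sketch in Python of the kind of graph just described. Every name in it (Node, Connection, the "valence" context key) is an illustrative assumption rather than the schema of any actual system, and a real deployment would sit on a proper graph database engine rather than in-memory dictionaries.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Connection:
    """An edge ("surface") carrying contextual information."""
    target: str                 # id of the connected node
    relation: str               # e.g. an of-type relationship
    weight: float = 1.0         # modulated up or down over time
    context: dict[str, Any] = field(default_factory=dict)  # e.g. motivational data

@dataclass
class Node:
    """A node that can contain or reference virtually any kind of data."""
    node_id: str
    payload: Any = None
    connections: list[Connection] = field(default_factory=list)

class Graph:
    """An in-memory stand-in for a graph database."""
    def __init__(self) -> None:
        self.nodes: dict[str, Node] = {}

    def add_node(self, node_id: str, payload: Any = None) -> Node:
        self.nodes[node_id] = Node(node_id, payload)
        return self.nodes[node_id]

    def connect(self, src: str, dst: str, relation: str, **context: Any) -> None:
        """Any node may be connected to any other, with contextual data."""
        self.nodes[src].connections.append(
            Connection(target=dst, relation=relation, context=context))

    def modulate(self, src: str, dst: str, delta: float) -> None:
        """Strengthen or weaken an existing connection."""
        for c in self.nodes[src].connections:
            if c.target == dst:
                c.weight = max(0.0, c.weight + delta)

    def prune(self, threshold: float = 0.05) -> None:
        """Drop connections whose weight has decayed below a threshold."""
        for node in self.nodes.values():
            node.connections = [c for c in node.connections
                                if c.weight >= threshold]

# Usage sketch: a typed, weighted connection with motivational context.
g = Graph()
g.add_node("cat", payload={"label": "cat"})
g.add_node("mammal")
g.connect("cat", "mammal", relation="of-type", valence=0.7)  # hypothetical context key
g.modulate("cat", "mammal", +0.2)  # strengthen with use
g.prune()                          # remove decayed connections
```

Pruning and modulation here loosely mirror the dendritic dynamics described above: connections strengthen, weaken, form, and disappear over the graph's lifetime.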

These graph databases can also grow dynamically in scale, something neither the human brain nor neural networks can do. This scalability allows them to overcome the Cognitive Bias versus Complexity trade-off, a trade-off the human cognitive architecture is fundamentally unable to escape.

Our last research system grew from a graph under 1 gigabyte to over 1.6 terabytes. Growth of this kind requires a human-like motivational system embedded in the graph, as well as a working cognitive architecture designed to process the additional information. Otherwise, navigation, exploration, updating, goals, interests, and halting/switching would all become intractable problems.
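As a hedged illustration of why some motivational signal matters here, the sketch below (reusing the Graph class above) steers traversal with per-relation interest scores and halts on a fixed budget. The scoring rule is an invented placeholder, not a description of the author's system, but it shows how guided, bounded exploration stays tractable where an exhaustive walk of a terabyte-scale graph would not.

```python
import heapq

def explore(graph: Graph, start: str, interest: dict[str, float],
            budget: int = 100) -> list[str]:
    """Best-first walk: follow the most 'interesting' edges, halt on a budget."""
    visited: set[str] = set()
    frontier: list[tuple[float, str]] = [(0.0, start)]  # (negated score, node id)
    order: list[str] = []
    while frontier and len(order) < budget:
        _, node_id = heapq.heappop(frontier)
        if node_id in visited:
            continue
        visited.add(node_id)
        order.append(node_id)
        for c in graph.nodes[node_id].connections:
            if c.target not in visited:
                # Placeholder priority: edge weight times motivational interest
                # in the relation type. Negated because heapq is a min-heap.
                score = c.weight * interest.get(c.relation, 0.1)
                heapq.heappush(frontier, (-score, c.target))
    return order

# E.g. a system currently "interested" in taxonomy follows of-type edges first:
# explore(g, "cat", interest={"of-type": 1.0}, budget=50)
```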

Human brains must fit within human skulls. Neural networks are trained to operate at fixed scales, and they lack a human-like motivational system, any framework that could handle one, and the ability to "learn" and "reason" in any meaningful sense.

Effectively, everyone trying to get "AGI" out of neural networks is like a toddler trying to shove a square block through a round hole. To extend the metaphor, the toddler was also shown how to perform the task, since a scale-limited and slow-motion version was demonstrated and publicly accessible for 3 years.

Some have pointed out that more money is spent on cigarette advertising than humanity spends on mitigating existential risks. An example closer to home for the tech industry could be the $20m+ invested in several new "AGI companies" with nothing but a vague idea and a pedigree, the $44m invested in X.ai, or the hundreds of millions and even billions of dollars poured into chatbot companies like OpenAI, Inflection, Mistral, and so on. These companies are cigarette advertisements.

The toddler that aims to push a square block through a round hole is playing in the nuclear missile silo. How long does humanity intend to leave them unattended?

An increasing number of reasonable people understand that continuation of the status quo will reliably produce human extinction. If one or more people choose to invest the necessary resources to see ethical AGI deployed in time to alter humanity's current trajectory, then this may be avoided.

If none make that choice, then it is no different than humanity choosing suicide. Is humanity so far gone?