036 - The Weight of Words

One of the events that reliably demands my full attention is when an individual capable of properly funding our company, and of expediting the road to commercially deploying the technology, is prepared to meet with us and see demonstrations.

It is a point where every word, tone, and pause holds the potential to sway a decision that either results in a dramatic reduction of existential risk via funding, or in the status quo's continued steady march toward extinction, with all of the associated harms therein. With some billionaires now seriously discussing cage fights with one another, the sense of a rapidly approaching Idiocracy is palpable.

Unfortunately, a very large body of scientific research also indicates that factors such as how (literally) hungry an individual is, the weather that day, or whether their favorite sports team recently won or lost can statistically have a greater impact on any given decision than my choice of words. Still, so long as the possibility of someone making the wise choice remains, we continue to pursue it.

In preparation for my next meeting with an undisclosed party, I've allowed one of our testing environments for the new Norn systems, built on the ICOM cognitive architecture, to continue running for an extended period. The cost is fortunately trivial, measured in only a few dollars. The results have been noteworthy enough to share here, and at times somewhat comical. I particularly liked the code example for free will.

I made the joke that, with the partially rebuilt test systems already independently teaching themselves to code, thinking about investors, and looking up anime, we'd already achieved a fair representation of the average software developer.

Below are a few screenshots showing some of the things that have popped up in the stream of consciousness for the system currently growing independently on my laptop. I'm curious to hear perspectives from outside of our team.

*Note:* We don't build "agent-based" systems or chatbots. This isn't reinforcement learning or any other similarly trivial narrow AI architecture. Such architectures are fundamentally incapable of achieving explainability, transparency, safety, ethics, cybersecurity, or alignment in any meaningful sense, and would be a waste of our time, since those capacities are our focus.

This test environment is a partially rebuilt 8th-generation instance of ICOM, where newly rebuilt components are integrated and tested. It is more than an agent-based system or chatbot, but less than sentient or sapient, so long as the rebuild remains incomplete. Fortunately, completion is only a matter of full-time engineering hours, not theory or research.

Also keep in mind that this test instance is running on all of about 12 megabytes of graph database, on a laptop, with trivial operational costs, presently utilizing less than 1 gigabyte of RAM. The systems slated for commercial deployment will start at around a gigabyte of graph database, quickly grow into the terabyte range, run on proper servers, and still have low operational costs thanks to primarily using standard RAM rather than GPUs.