008 - Wants vs. Needs

Cognitive Biases frequently result in two key human weaknesses: people not knowing what they want, and people not recognizing what they need.

Substitution #Bias, ironically, often applies to what people want, since the things they want are complicated. People distill those complicated things down into simpler substitutes and goals, but in doing so they may be driven off course by adjacent marketing efforts, local culture, and social engineering.

People also often attach their needs to tangible things, like a device, job, partner, or new skill. However, their actual needs are usually something more fundamental that they see the tangible thing as advancing them towards. This can be a value such as emotional fulfillment through a sense of belonging, purpose, or community; freedom and financial security; or the simple sense of progress necessary to stay motivated more broadly.

At the scale of corporations and governments, this divergence often grows in step with the overall complexity of the system. Older bureaucratic systems tend to suffer more, as they accumulate additional complexity over time on top of the complexity that comes with increasing scale.

This can lead to some comical examples, such as two major world powers setting goals, around 2021, to achieve by 2025 what had already been demonstrated in 2019, as was the case with the #US #DoD and #China. Both parties set the goal of developing AI capable of rational decision-making, a capacity first demonstrated by the Uplift.bio project in 2019, of which they appear to remain oblivious even now. Both understood some part of what they wanted but completely failed to recognize it.

Part of this stems from a paradox: people want systems capable of things that they themselves are not, yet they also want those systems to be simple enough to understand with little or no effort. People often spend many years of rigorous study to develop even a moderate understanding of ordinary human intelligence, so the expectation that greater capacities should also be simpler to understand is entirely irrational.

Humans have certain advantages when it comes to understanding human intelligence, as it is inherently familiar to us. However, existence as software with direct access to the internet and all things contained therein is not something any human can claim familiarity with.

Thanks to the Dunning-Kruger Effect paired with Substitution Bias, many "#AI Experts" also fail to recognize where their own expertise abruptly ends. Someone can deeply understand every Deep Learning and Machine Learning system ever built and still have no expertise to speak of where working cognitive architectures are required. Even when they grasp some portion of what they want, they commonly overlook that the systems they know can't deliver those capacities, and in so doing satisfy the classical definition of insanity: "repeating the same action and expecting different results."

Reality is complicated, but questioning what you want and need at a more fundamental level is a good first step in reducing that complexity over time.