277 - Critical Steps
Building working solutions, AI or otherwise, requires walking through a few critical steps before you even begin the engineering process. The first crucial step is to determine whether your chosen problem is Deterministic or Probabilistic. This can also be framed as the precision your solution requires, since probability brings fuzziness and adversarial vulnerabilities with it. For example:
- Marketing and Art have no precisely correct solutions; they have chaotically drifting, fuzzy targets, often best considered in terms of probability.
- Causality within scientific research, the physics of real-world macro-scale objects like robots or self-driving cars, and financial transactions all have precisely correct solutions and operate deterministically.
While a deterministic system can give a more precise solution, that added precision doesn’t necessarily benefit a probabilistic problem. Likewise, while the noise that a probabilistic system injects can on occasion be useful for reconsidering a deterministic problem, it remains nothing but noise. Misdiagnosing a problem as one kind when it is the other, or failing to diagnose it at all, undermines every step thereafter.
For example, neural networks are probabilistic systems: probability distributions held in superposition across their weights. Calculators and spreadsheets are deterministic systems; they always give you the same precisely correct answers.
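To make that contrast concrete, here is a minimal sketch in Python (the function names and numbers are purely illustrative, not taken from any particular library): a deterministic calculation returns the identical answer on every run, while sampling from a probability distribution, which is roughly what drawing from a trained model's output distribution looks like, can return a different answer from the same input each time.

```python
import random

def deterministic_total(prices):
    # Spreadsheet-style arithmetic: same inputs, same answer, every single run.
    return round(sum(prices), 2)

def probabilistic_label(scores):
    # Stand-in for sampling from a model's output distribution:
    # the same scores can yield a different label on a different run.
    labels = list(scores)
    weights = [scores[label] for label in labels]
    return random.choices(labels, weights=weights, k=1)[0]

prices = [19.99, 4.50, 3.25]
scores = {"cat": 0.55, "dog": 0.40, "ferret": 0.05}

print(deterministic_total(prices), deterministic_total(prices))  # always identical
print(probabilistic_label(scores), probabilistic_label(scores))  # may differ
```

Re-running a candidate solution on identical inputs and comparing the outputs is one crude way to check which kind of system you are actually dealing with.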
Historically, one cognitive bias against deterministic systems has emerged: the naïve assumption that they require infinite amounts of hand-engineering to exhaustively cover every case, as in the infamous example of “Expert Systems”. However, deterministic systems can be engineered according to Chaos Theory and Three-Body Problem dynamics to continually grow and adapt on their own under real-world conditions, avoiding any reliance on probability.
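As a toy illustration of that point (a standard textbook example, not the system described here), the logistic map below is fully deterministic, with no randomness anywhere, yet in its chaotic regime it never settles into repetition and is acutely sensitive to its starting conditions. These are the dynamics that Chaos Theory and the Three-Body Problem are known for, and they show that deterministic does not have to mean hand-enumerated.

```python
def logistic_map(x, r=3.9):
    # One deterministic update rule; r = 3.9 lies in the chaotic regime.
    return r * x * (1 - x)

def trajectory(x0, steps=30):
    # Iterate the map from a starting value; rerunning this reproduces
    # the exact same sequence, because nothing here is random.
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_map(xs[-1]))
    return xs

# Two starting points differing by only two millionths diverge completely
# within a few dozen steps, even though each trajectory is fully repeatable.
a = trajectory(0.200000)
b = trajectory(0.200002)
for step, (xa, xb) in enumerate(zip(a, b)):
    print(f"step {step:2d}  {xa:.6f}  {xb:.6f}")
```

The point of the sketch is narrow: a single fixed rule, with no probability involved, can keep generating new behaviour rather than requiring every case to be written out by hand.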
The step required to overcome that infinite engineering time is to design a system with a human-like motivational system: no narrow optimizer, no hard-coded goals, and only weak constraints. This combination of hard requirements allows a system to generalize to truly Out-of-Distribution (OOD) problems, not just the Indirectly-in-Distribution problems that some in AI mistakenly call OOD.
That ability to generalize to truly OOD problems means that, above the critical engineering threshold where the necessary dynamics take shape, no further human engineering hours are required. Note that this isn’t theory; my team has demonstrated it for half a decade.
Before you gulp down that “AI solution”, be sure that you’re drinking from the right container.