175 - Conflicts of Interest

In the modern world, virtually anyone with expertise and a sincere passion has a conflict of interest. That is because people with those qualities tend to invest their efforts, and often their funds, in trying to solve the problems they care about, the condition N. N. Taleb calls "Skin in the Game."

The first question is whether they are sincere and actually do this; the second, equally important question is whether they take the best approach. The two are entirely separate, since sincere effort and funding can be poured into an entirely bankrupt concept, as has no doubt been the case for AI methods like "guardrails", which never had even a theoretical basis to suggest they could serve any viable function.

The first question is fairly trivial to answer, while the second requires broad and deep expertise, as well as the ability to overcome the cognitive biases that otherwise produce systematic deviations from reality. Left unchecked, the systematic nature of those deviations makes it trivially easy for bad actors to dominate a system. However, some methods can help.

When examining a proposed solution, first ask whether anything fundamentally prevents the technology and/or methods from serving the intended function. This negative-validation focus is far more practical than determining whether something can solve the problem, which is sometimes intractable to predict, because the negatives that prevent a problem from being solved form a much smaller and firmer set. You can likewise quantify the "fragility" of technologies and methods even when you can't reliably determine when a risky event is likely to occur.
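To make that last point concrete, here is a minimal sketch of one way fragility can be quantified, loosely in the spirit of Taleb's convexity heuristic: stress a system at three levels and check whether harm accelerates. The harm() function and its numbers are invented stand-ins for a real stress test, not a reference to any actual system.

```python
# Hypothetical fragility probe, loosely following Taleb's convexity heuristic.
# harm() is an invented stand-in for a real stress test; values are illustrative.

def harm(load: float) -> float:
    # Invented example: losses grow faster than linearly with load.
    return load ** 2 / 100.0

def fragility_score(stress: float, delta: float) -> float:
    """Second difference of harm around a stress level: positive means harm
    accelerates (fragile), near zero means roughly linear (robust), negative
    means harm decelerates (antifragile)."""
    return harm(stress + delta) - 2 * harm(stress) + harm(stress - delta)

if __name__ == "__main__":
    print(fragility_score(stress=50.0, delta=10.0))  # 2.0 > 0: convex harm, fragile
```

The useful property is that this says nothing about when a blow-up will occur, only how the system responds as stress grows, which is exactly the question negative validation can answer.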

Second, once you have a set of disqualifying/negative validators, you can not only rule out many non-viable "solutions" to the problem, but also determine whether any new proposed solution falls into that fuzzy grey space where the problem might actually be solved (a minimal sketch of this triage follows the third step below).

Third, with these boundaries defined, you can put your time to better use. By default, and time allowing, I recommend giving people one opportunity to educate themselves, but beyond that everyone is responsible for their own actions (regardless of determinism or any variation thereof). If someone chooses not to learn, or keeps repeating demonstrably false claims, simply block them and move on. A very large chunk of the AI domain could be blocked today on these criteria, saving a great deal of time.
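The promised sketch of the triage is below. Every name in it (Proposal, the validator functions, the example claims) is hypothetical, invented purely to illustrate the mechanism: disqualifying checks either rule a proposal out or leave it in the grey space; they never certify success.

```python
# Minimal sketch of disqualifying/negative validation. All names and example
# claims here are hypothetical illustrations, not any real framework.

from dataclasses import dataclass, field

@dataclass
class Proposal:
    name: str
    claims: set[str] = field(default_factory=set)

# Each validator returns a reason the proposal is fundamentally blocked,
# or None if this particular check cannot rule it out.
def claims_impossible_guarantees(p: Proposal) -> str | None:
    if "perfect output filtering" in p.claims:
        return "asserts a filter catches every failure mode, which nothing guarantees"
    return None

def lacks_stated_mechanism(p: Proposal) -> str | None:
    if "stated mechanism" not in p.claims:
        return "offers no mechanism by which it could serve its intended function"
    return None

DISQUALIFIERS = [claims_impossible_guarantees, lacks_stated_mechanism]

def triage(p: Proposal) -> tuple[str, list[str]]:
    """Rule out the provably non-viable; everything else lands in the grey space."""
    reasons = [r for check in DISQUALIFIERS if (r := check(p)) is not None]
    verdict = "ruled out" if reasons else "grey space: might actually work"
    return verdict, reasons

if __name__ == "__main__":
    for proposal in (
        Proposal("guardrails", {"perfect output filtering"}),
        Proposal("new method", {"stated mechanism"}),
    ):
        verdict, reasons = triage(proposal)
        notes = f" ({'; '.join(reasons)})" if reasons else ""
        print(f"{proposal.name}: {verdict}{notes}")
```

Note the asymmetry: a validator can only supply a reason to reject, never a proof of viability, which mirrors why negative validation is firmer than positive prediction.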

For all of the "What ifs" that PR doctors spin up to drive hype, the vast majority are trivial to shut down completely with disqualifying/negative validators. Everyone has conflicts of interest when it comes to the things they care about: "We're all hostages to what we love. The only way to truly be free is to love nothing. And how meaningless would that be?"

The difference is, not everyone has viable solutions to offer.