179 - Exceptions

We were all once "young and foolish", but what distinguishes us over time is whether we learn and adapt to shed that foolishness, repeat our mistakes, or double down on them.

Life is a process of both learning heuristics and recognizing the exceptions to each rule. Foolishness can stem from naivety, but it is also often caused by the vigorous application of heuristics without recognizing their exceptions. The worst offenders here are usually those who systematically discard and/or miscategorize statistical outliers, because those outliers are often the exceptions, deserving the most attention rather than the least.

It has historically been common practice in "data science" to cook the statistical books and performance benchmarks by discarding or blatantly altering statistical outliers, thus invalidating any shred of credibility those systems might otherwise have held. This carves foolishness into stone by making it a structural feature, leaving the blind application of cognitive bias to automation, like a cat $hitting on the floor and covering it with a bathmat. That surprise is still easy to smell and just waiting for you to step in it.
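As a toy illustration of how little it takes, here is a minimal sketch using hypothetical benchmark numbers and Python's standard statistics module. The slow requests are exactly the cases a user would notice, and quietly trimming them flips the story the summary tells:

```python
import statistics

# Hypothetical request latencies in milliseconds from a benchmark run;
# the two slow requests are exactly what a user would actually notice.
samples = [12, 11, 13, 12, 14, 11, 250, 12, 13, 310]

# Honest summary: the tail dominates the mean and flags a real problem.
print(statistics.mean(samples))  # 65.8

# "Cooked" summary: silently drop the slowest 20% before reporting.
kept = sorted(samples)[: int(len(samples) * 0.8)]
print(statistics.mean(kept))  # 12.25 -- looks great, hides the problem
```

Two lines of trimming turn a system with a serious tail-latency problem into one that looks healthy on paper, which is the whole trick.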

Cognitive biases evolved to be fluidly and dynamically applied by the human mind because they are frequently very useful, but when they are wrong (the exceptions to the rule), they are systematically wrong. This makes for a kind of predictable stupidity: transient for those who learn from these mistakes, persistent for those who repeat or double down on them.

When people choose to automate these tasks, the risks are often greatly multiplied and blindfolded. A well-established example is the use of "Applicant Tracking Systems" (ATS), which automate the process of discrimination based on arbitrary criteria, usually absent any rational or otherwise credible grounding. They often claim to "reduce bias", though they usually do nothing of the sort. Rather, they reduce statistical "noise", as Daniel Kahneman termed it, which any dumbfire script composed of a couple of lines of code could accomplish, since the only criterion for reducing noise is processing the data more consistently.
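A few lines of Python are genuinely enough to make the point. This sketch uses a hypothetical scoring rule and resume text: a deterministic filter is perfectly "noise-free" in Kahneman's sense, no matter how arbitrary its criteria are.

```python
# Noise vs. bias: a deterministic rule has zero noise, since identical
# inputs always get identical scores. That says nothing about whether
# the criteria themselves are rationally grounded.

def ats_score(resume_text: str) -> int:
    # Arbitrary keyword filter: perfectly consistent, entirely ungrounded.
    keywords = {"synergy", "rockstar", "agile"}
    return sum(word in keywords for word in resume_text.lower().split())

resume = "shipped a working compiler, zero agile rockstar synergy buzzwords"

# Score it a thousand times: zero variance (zero noise), full bias intact.
scores = {ats_score(resume) for _ in range(1000)}
print(scores)  # {3}
```

Consistency is trivially cheap to automate; it is the bias baked into the criteria that survives the automation untouched.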

Looking back on my own life, I can see that I would start from a position of giving a group the benefit of the doubt, then gradually shed that doubt and its associated benefits through study and experience. The first time was the hardest, as US culture inherently assigned high credibility to doctors, credibility that proved virtually bankrupt in practice. After I succeeded with self-study and 3 months of experimentation where they had systematically and miserably failed for 20 years, the divide between those default cultural expectations and reality became abundantly clear.

Shedding naivety, recognizing exceptions, and rethinking your worldview is an innately uncomfortable process, but few things pay off more in the long run.