276 - Decomposing Problems

Decomposing problems into discrete steps, with signal isolated from noise, is one of the key factors that differentiate “10x” coders and engineers from the average “1x” and the lower quartile’s many “0.5x” individuals. This difference in human talent is further multiplied by tools like AI “coding assistants”: the 10x coders benefit substantially, while the 1x and 0.5x coders often do serious damage, producing a net loss for less selective organizations.

One of the key reasons for this asymmetric force multiplier is that a veteran 10x coder will be far more specific when prompting a coding assistant, and they will spot problems in the generated code far more quickly and easily than their juniors. The “prompt” is where a seasoned engineer inserts the benefits of their expertise and experience, which generally take the form of:

  • More discrete, specific, and carefully worded problem, solution, environment, variable, and constraint descriptions.

  • More discrete processing steps, each with its own filters, signal for optimization, logs for debugging, and other forms of non-probabilistic structure serving to isolate problems (see the sketch after this list).

  • “Big picture” considerations of the system architecture necessary to satisfy long-term performance and reliability for any serious, commercially deployed system.
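
As a concrete illustration of the second point, here is a minimal, hypothetical sketch in Python (every function and name is invented for illustration, not taken from any real codebase) of what “discrete processing steps, each with its own filters, signal, and logs” can look like: each step validates its own input, reports how much noise it filtered out, and logs enough to isolate a failure to a single step rather than to one opaque end-to-end call.

    # Illustrative sketch only: a hypothetical text-ingestion task decomposed into
    # discrete steps, each with its own filter, measurable signal, and log line,
    # rather than a single opaque "end-to-end" call.
    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
    log = logging.getLogger("pipeline")

    @dataclass
    class StepResult:
        records: list[str]
        dropped: int  # signal: how much noise this step filtered out

    def load(raw: list[str]) -> StepResult:
        """Step 1: keep only non-empty lines (filter), report what was dropped (signal)."""
        kept = [r.strip() for r in raw if r.strip()]
        dropped = len(raw) - len(kept)
        log.info("load: kept=%d dropped=%d", len(kept), dropped)
        return StepResult(kept, dropped)

    def normalize(prev: StepResult) -> StepResult:
        """Step 2: lowercase and de-duplicate; duplicates are this step's noise."""
        seen: set[str] = set()
        kept = []
        for r in prev.records:
            r = r.lower()
            if r not in seen:
                seen.add(r)
                kept.append(r)
        dropped = len(prev.records) - len(kept)
        log.info("normalize: kept=%d dropped=%d", len(kept), dropped)
        return StepResult(kept, dropped)

    def validate(prev: StepResult, max_len: int = 80) -> StepResult:
        """Step 3: enforce a hard constraint so downstream code never sees bad input."""
        kept = [r for r in prev.records if len(r) <= max_len]
        dropped = len(prev.records) - len(kept)
        if dropped:
            log.warning("validate: rejected %d over-length records", dropped)
        return StepResult(kept, dropped)

    if __name__ == "__main__":
        raw = ["  Alpha ", "alpha", "", "Beta", "x" * 200]
        result = validate(normalize(load(raw)))
        print(result.records)  # ['alpha', 'beta']

The specific code does not matter; the point is that a regression in the output can be traced to one step’s log line and drop count instead of being buried inside a single probabilistic black box.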

The junior 1x and 0.5x coders want things to magically work “end-to-end”, like a layman who dreams of truly performant and reliable “no code”. That desire leads them to expect LLMs to perform many functions that they are fundamentally incapable of, a.k.a. “magical thinking”, which produces large quantities of low-quality, vulnerable, and/or broken code.

Given 10x the compute burned on calling them recursively, these systems are usually even less likely to fix those kinds of problems than they were to get the code right in the first place. No amount of data, training, or scaling can fundamentally solve that for trivially simple architectures like LLMs. At best, such efforts can contaminate models by training on the benchmark test data, giving a shallow and naïve appearance of performance.

It might be fair to treat LLMs like guns at any serious tech company. You don’t want anyone who is poorly trained or untrained walking around with one all of the time, just like you don’t want your 1x or 0.5x coders to touch “coding assistants” with a 10-foot pole for any commercial purpose.

In recent memory, 97% of organizations using Generative AI were affected by data breaches and related security issues attributable to it, with 52% of those breaches directly or indirectly costing over $50 million USD per company. And these expenses have only just begun.

Speaking practically, as I’m not a “Doomer”, the very last people on the face of the planet who should have access to “AI Agents” are those who want them the most today. Treat those systems like bioweapons, as they can do far more indiscriminate and widespread damage than guns.