086 - Human Element

It is generally well understood in cybersecurity that one component of any system reliably remains less than secure: the humans. Human error is also strongly influenced by a few critical factors:

  • How strongly cognitive biases are being invoked and exploited.

  • How strongly emotions are being invoked and exploited.

  • How much is known about the specific target humans and their company.

  • How many attempts at social engineering may be made before a bad actor is locked out (see the throttling sketch after this list).

  • The factual and ethical grounding of the individual.

  • The company or organizational culture.
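Of these factors, the attempt budget is the one most directly under a defender's control. As a rough illustration of what limiting that budget can look like, here is a minimal sketch in Python of per-source attempt throttling; the threshold and window values, and the `AttemptLimiter` name, are illustrative assumptions of mine, not recommendations from any standard.

```python
import time
from collections import defaultdict, deque

class AttemptLimiter:
    """Sliding-window attempt throttle (illustrative values only).

    Real deployments would layer this with monitoring, identity
    verification, and human review rather than rely on it alone.
    """

    def __init__(self, max_attempts: int = 5, window_seconds: float = 3600.0):
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self._attempts = defaultdict(deque)  # source_id -> attempt timestamps

    def record_and_check(self, source_id: str) -> bool:
        """Record one attempt from source_id; return True if still allowed."""
        now = time.monotonic()
        attempts = self._attempts[source_id]
        # Discard attempts that have aged out of the sliding window.
        while attempts and now - attempts[0] > self.window_seconds:
            attempts.popleft()
        attempts.append(now)
        return len(attempts) <= self.max_attempts

limiter = AttemptLimiter()
for i in range(7):
    allowed = limiter.record_and_check("caller-555-0100")
    print(f"attempt {i + 1}: {'allowed' if allowed else 'locked out'}")
```

The point of the sketch is only that a hard, enforced cap changes the attacker's economics; without one, each failed phishing call or email is free practice.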

Cognitive biases compromise the process of higher cognition to varying degrees and in varying ways. When those specific ways and degrees are targeted, social engineering may be applied with iteratively increasing efficacy.

Emotions are likewise an established attack vector, used to invoke strong cognitive biases, an effect that even simple newsfeed algorithms often seek to maximize for "engagement", as sketched below. The same emotional levers are applied in the social engineering of phishing attacks, marketing, and fundraising in AI.
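To make that mechanism concrete, here is a minimal sketch of an engagement-maximizing ranking loop, assuming a toy scoring model; the `Post` fields and the weights are hypothetical, chosen only to show why emotionally charged content tends to rise to the top of such a feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float   # estimated click-through probability
    predicted_outrage: float  # estimated anger/fear reaction rate
    predicted_shares: float   # estimated reshare probability

def engagement_score(post: Post) -> float:
    # Emotional reactions and reshares are weighted heavily because they
    # correlate with the time-on-site metric being maximized
    # (hypothetical weights).
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_outrage
            + 2.0 * post.predicted_shares)

feed = [
    Post("Local library extends hours", 0.10, 0.01, 0.02),
    Post("THEY are coming for your savings", 0.25, 0.60, 0.40),
    Post("New study on sleep and memory", 0.15, 0.05, 0.10),
]

# The most emotionally provocative post ranks first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```

Nothing in this loop checks whether a post is true or manipulative; the objective rewards whatever provokes the strongest reaction, which is precisely the lever social engineers pull.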

Companies and individuals are also put at a strong disadvantage when more is known about them, as any piece of information may be used in a social engineering attack. In most cases these attacks may effectively occur without limit, with bad actors continuing to refine their methods at little or no risk to themselves.

The factual and ethical grounding of individuals and companies is one type of human security that has taken a beating over the past few years. Because prying individuals, companies, and regulators loose from their factual and ethical grounding offers strong strategic advantages to bad actors, that grounding has been a prime target for social engineering. Papers and books routinely document the severe decline of academic, journalistic, and other critical sources of grounding, such as "The Canceling of the American Mind", which I'm reading now.

Company culture offers attackers yet another resource: every organization has pain points, and pressing on those points is another means of increasing the success rate of attacks.

Taken together, these factors in the wild today could be compared to a society that is immunocompromised, with increasingly severe infections.

Robust means of countering many of these factors are already on the table and could be realized within the next 1-2 years. Systems with actual intelligence, designed to overcome the complexity-versus-cognitive-bias trade-off and compatible with degrees and forms of collective intelligence that humans aren't, could systematically counter and proactively shut down such bad actors.

*Note: For the moment, my team is the only one to demonstrate a working cognitive architecture, which is required for actual intelligence in software. Few others are even making the attempt, and all others remain at least 5 years behind us with regard to the necessary capacities. The missing variable that decides whether humanity avoids extinction using our technology is a serious investor. Most investors, when put to the test, choose human extinction, the quantifiable consequences of which are covered in The Ethical Basilisk Thought Experiment.

The probability of humanity recovering is also set to drop dramatically in 2024 if a competent and/or ethical investor isn't located in time, as many of next year's elections will predictably swing in favor of the bad actors best able to exploit generative AI for social engineering.

One spectacularly bad US president was able to set the country back 50 years in a single 4-year term, while effectively establishing a theocracy, with lifetime appointments dictating the US's version of religious law. This damage is only likely to accelerate as generative AI puts all social engineering efforts on steroids, and a major misstep in 2024 could well set countries like the US back another 100 years.