212 - Moral Boundaries
One question that has come up ever since I first found the solution to the hardest version of the Alignment Problem (2022) is: "Where do you draw the line for which moral systems, philosophies, and cultures to incorporate within collective intelligence?"
Obviously, there are some extremist groups whom virtually nobody else wants to be anywhere near, and indeed, those groups tend to act directly and intentionally against systems of collective intelligence in ways that a simple Groupthink counters only sometimes, and often blindly. Most people can very easily agree that a superintelligent artificial Hitler is a bad idea and shouldn't be any part of such a solution.
However, beyond that point of agreement, there is substantial grey space for society to traverse, which will take time and effort. What can be determined today is that although collective intelligence systems gain intelligence and reduce cognitive biases every time a new and compatible (cooperation-capable) perspective is added to a collective, generalization across the perspectives already present can give a fair approximation of the value an absent perspective would add.
What this means is that when perspectives are measured and quantified along an arbitrary number of mathematical dimensions, the spaces between those points, as measured along axes orbiting the target zone of least cognitive bias, may be fairly approximated given a sufficient variety of perspectives and a sufficiently general intelligence embodied within the collective intelligence system.
The fidelity of such an approximation depends on the degrees of rotational separation, around that zone of least bias, between a given absent perspective and the members presently represented: the smaller the separation, the better the approximation. This means that if a given philosophy has 20 different offshoots but only 5 are directly represented, the other 15 may still be approximated with respectable fidelity, as the sketch below illustrates.
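As a minimal sketch of this idea, assuming each perspective can be embedded as a point in a small vector space centered on the zone of least bias, an absent perspective's contribution might be approximated by weighting the represented perspectives inversely to their angular separation from it. All names, dimensions, and numbers below are illustrative assumptions, not measurements from any real system:

```python
import numpy as np

def angular_separation(a, b):
    """Angle in degrees between two perspective vectors, measured around the
    origin (the hypothesized zone of least bias)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def approximate_absent(absent_direction, represented, values):
    """Estimate some measured quantity for an absent perspective by weighting
    represented perspectives inversely to their angular distance from it."""
    angles = np.array([angular_separation(absent_direction, p) for p in represented])
    weights = 1.0 / (angles + 1e-6)          # closer in angle => more influence
    weights /= weights.sum()
    return float(weights @ values)

# Hypothetical example: 5 represented offshoots of a philosophy, positioned as
# points in a 3-dimensional "perspective space" centered on the zone of least bias.
rng = np.random.default_rng(0)
represented = rng.normal(size=(5, 3))
values = rng.uniform(-1, 1, size=5)          # some measured quantity per perspective

# An absent offshoot, known only by its approximate direction from the center.
absent = rng.normal(size=3)
print(approximate_absent(absent, represented, values))
```

The closer an absent offshoot sits, angularly, to offshoots that are directly represented, the more its estimate is dominated by near neighbors rather than by distant ones.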
A zone of least bias may also be approximated initially using standard triangulation and trilateration methods, particularly when the dimensions being represented include points with at least 90 degrees of total rotational separation around such a zone. This also offers many opportunities to progressively untangle the overlapping influences of cognitive biases operating at scale, particularly as they act more or less acutely against a given philosophy, since this data may be embedded and refined within a graph structure.
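As a minimal sketch of how trilateration could be applied here, assuming each represented perspective comes with an estimated "bias distance" to the zone of least bias (an assumption of this example, not something specified above), a standard least-squares trilateration recovers the zone's location from those points and distances:

```python
import numpy as np

def trilaterate_zone(points, distances):
    """Least-squares trilateration: estimate the location of the zone of least
    bias from perspective points and their estimated distances to it.
    points: (n, d) array, distances: (n,) array, with n > d."""
    p1, d1 = points[0], distances[0]
    # Subtracting the first sphere equation from the rest linearizes the system.
    A = 2.0 * (p1 - points[1:])
    b = (distances[1:] ** 2 - d1 ** 2
         - np.sum(points[1:] ** 2, axis=1) + np.dot(p1, p1))
    zone, *_ = np.linalg.lstsq(A, b, rcond=None)
    return zone

# Hypothetical example: a known zone at the origin, perspectives scattered
# around it with noisy distance estimates.
rng = np.random.default_rng(1)
true_zone = np.zeros(3)
points = rng.normal(size=(6, 3)) * 2.0
distances = np.linalg.norm(points - true_zone, axis=1) + rng.normal(0, 0.05, 6)
print(trilaterate_zone(points, distances))   # should land near [0, 0, 0]
```

Wider angular spread among the reference points (the 90-degrees-of-separation condition noted above) makes the linear system better conditioned and the estimate more stable.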
The net result is that orders of magnitude fewer potential perspectives are needed to closely approximate the same value, and since adding more perspectives is subject to diminishing returns, doing so loses efficacy beyond a certain point. That point may still include 100 different philosophies and hundreds of cultures, putting it far beyond the complexity of anything humanity could effectively integrate and act upon absent such technology.
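A toy simulation can illustrate that diminishing-returns point. The dimensions, angular threshold, and counts below are arbitrary assumptions chosen for the sketch; the marginal coverage gained by each batch of added perspectives shrinks toward zero as the space fills in:

```python
import numpy as np

def coverage(represented, targets, max_angle_deg=30.0):
    """Fraction of target perspectives lying within max_angle_deg of at least
    one represented perspective (a crude proxy for approximation fidelity)."""
    rep = represented / np.linalg.norm(represented, axis=1, keepdims=True)
    tgt = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    cos_thresh = np.cos(np.radians(max_angle_deg))
    # A target counts as covered if its best-matching represented direction is close enough.
    return float(np.mean((tgt @ rep.T).max(axis=1) >= cos_thresh))

rng = np.random.default_rng(2)
targets = rng.normal(size=(500, 3))          # the "full" space of perspectives
pool = rng.normal(size=(100, 3))             # candidate perspectives to add

prev = 0.0
for n in (5, 10, 20, 40, 80):
    cov = coverage(pool[:n], targets)
    print(f"{n:3d} perspectives -> coverage {cov:.2f} (marginal gain {cov - prev:+.2f})")
    prev = cov
```

Early additions cover large, previously empty regions; later additions mostly overlap what is already represented, which is the saturation effect described above.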
This, too, is generalization in action within collective intelligence: you don't need the brute force of every possible solution and perspective to fairly encompass them.