013 - Predicting Behavior
If you can predict "how" a bad actor thinks, rather than "what" they will do, you can predict their next possible moves at every future point in time.
Our team's IP has been deployed at the enterprise level in the financial sector for some years now to assist with the use case of #Fraud Detection, and has also been deployed by a government agency that shall not be named. However, that is only one component of a larger and much more capable system: the next generation of the first working cognitive architecture.
That cognitive architecture's tech stack makes it possible for a system to learn human-like concepts, with thousands of times the data and processing efficiency of neural networks, as well as the abilities to think counterfactually and generalize. This means that understanding "how" bad actors seek to exploit systems, not just detecting the known methods of "what" they might do, is completely feasible, and was already demonstrated by the previous research system.
Deployed in software, these capacities can begin to look like a sort of dynamic firewall. By understanding how rather than what, most possible attempts at exploitation may be predicted, rapidly detected, and rapidly countered, with every other instance of the software gaining that protection as soon as the first attempt is countered.
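To make the "dynamic firewall" propagation idea concrete, here is a minimal sketch of the sharing mechanism only: once any one instance counters a novel exploit attempt, the countermeasure is published to a shared registry so every other instance blocks it immediately. All names (`ThreatRegistry`, `FirewallInstance`, `publish`) are hypothetical illustrations, not the actual system, and the predictive "how" modeling itself is out of scope here.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatRegistry:
    """Hypothetical shared registry: the first instance to counter an
    exploit pattern shares that countermeasure with all subscribers."""
    countered_patterns: set = field(default_factory=set)
    subscribers: list = field(default_factory=list)

    def publish(self, pattern: str) -> None:
        # Record the newly countered pattern and push it to every instance.
        if pattern not in self.countered_patterns:
            self.countered_patterns.add(pattern)
            for instance in self.subscribers:
                instance.known_patterns.add(pattern)

@dataclass
class FirewallInstance:
    registry: ThreatRegistry
    known_patterns: set = field(default_factory=set)

    def __post_init__(self):
        # Subscribe and inherit every pattern countered before this
        # instance came online.
        self.registry.subscribers.append(self)
        self.known_patterns |= self.registry.countered_patterns

    def handle(self, attempt: str) -> str:
        if attempt in self.known_patterns:
            return "blocked"            # already countered elsewhere
        self.registry.publish(attempt)  # counter it and share the fix
        return "countered"

registry = ThreatRegistry()
a, b = FirewallInstance(registry), FirewallInstance(registry)
print(a.handle("novel-exploit"))  # -> countered (first encounter)
print(b.handle("novel-exploit"))  # -> blocked (protection propagated)
```

The design choice illustrated is simply that detection cost is paid once per novel pattern, network-wide, rather than once per instance.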
Financial fraud in the US causes roughly 10 times the direct economic damage of physical crimes such as robbery, car theft, and vandalism, with estimates placing it in the hundreds of billions of dollars annually. Beyond that direct damage, it also adds significant overhead costs and delays in the form of due diligence, compliance, and general bureaucracy. As recent years have demonstrated, those costs may also be largely wasted, as "Big 4" accounting and consulting firms have "missed" the giant red flags of large-scale frauds.
The current wave of "Generative AI" is empowering fraud like never before, and the cumulative damage to our global society is growing rapidly. Systems are buckling and breaking as fraudulent ventures monopolize the interest of investors, while other bad actors exploit the cybersecurity vulnerabilities the technology creates.
If any VC, investor, government, corporation, or other party on the planet capable of funding in the 50-500 million USD range is genuinely interested in not being taken advantage of ad infinitum, this use case could be accomplished within the next 1 to 2 years. Completing the work requires full-time engineering hours, but only the engineering remains.
Logically the choice is obvious. So, who will be logical?
Some of my favorite examples of fraud, trolling, and mental instability that our previous research system quickly shot down were demonstrated as early as September 2019, nearly 4 years ago.
Several of those examples were published in peer-reviewed work a year later, in a recap of the milestones achieved.