Technology could soon ruin humanity. We argue it gets much scarier when combined with our imperfect brains.
A survey of academics at Oxford University's Global Catastrophic Risk Conference estimated a 1% chance of human extinction from nuclear war over the 21st century [1]. We can now, with a single decision, kill millions. Our destructive power surges with each technological development; the Doomsday Clock is ticking fast.
The WHO states that we live amid an infodemic [3]: an excess of information that makes it difficult to establish facts. UC San Diego researchers found that information consumption in bytes grew at an annual rate of 5.4% from 1980 to 2008 [4]. By 2015, U.S. media consumption had risen to 15.5 hours per person per day, according to a more recent study [5]. Digital dystopia is becoming a reality; the digital revolution raises the complexity of human civilization.
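A growth rate of 5.4% per year sounds modest, but compounded over the whole period it is dramatic. The back-of-the-envelope calculation below is a sketch of the compounding, not a figure from the cited study:

```python
# Back-of-the-envelope: 5.4% annual growth in information
# consumption, compounded over 1980-2008 (28 years).
annual_rate = 0.054
years = 2008 - 1980  # 28 years

growth_factor = (1 + annual_rate) ** years
print(f"Total growth over {years} years: {growth_factor:.1f}x")
```

That is, consumption more than quadrupled over those 28 years under a rate that looks small year to year.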
The Digital Divide
Unequal access to information, combined with the inherent cyber issues of privacy, security, and liberty, has grown so convoluted that our brains can no longer distinguish facts. As a result, humans have begun to lose their grasp of information. We have entered the digital revolution without knowing how to cope with ever more complex information.
In addition to unmanageable information, our psychology hinders our decision-making. We use motivated reasoning to reach conclusions that suit our goals, selectively searching for information and selectively crediting evidence. “Motivated cognition can make us stupid, but it is not a consequence of stupidity,” said Dan Kahan, Professor of Psychology at Yale Law School [6]. Our natural tendency is to selectively dismiss information that causes dissonance or anxiety.
Because of both information overload and our irrationality, we desperately need systematic approaches that help society cope with the complexity of information and identify the flaws in our reasoning, so that we can mitigate catastrophic risks.
Catastrophic Decisions Have Happened Before
Unfortunately, catastrophic damage has already been done. In 2003, the United States chose to invade Iraq. Most now agree that this decision was deeply flawed, costing trillions of dollars and hundreds of thousands of lives.
The government justified the invasion by citing the expert community’s claim that it was “highly probable” that Iraq possessed Weapons of Mass Destruction. Policymakers took “highly probable” to mean near-100% certainty, yet it could just as easily have been interpreted as 80% or 70% certainty, interpretations with very different practical implications and consequences [7].
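Why the interpretation matters can be made concrete with a toy expected-value calculation. The numbers below are entirely hypothetical (they are not from the cited source); the point is only that the same verbal label, read as different probabilities, can flip the sign of the decision:

```python
# Illustrative only: how reading "highly probable" as different
# probabilities changes an expected-value calculation.
# All payoff numbers are hypothetical, in arbitrary units.

def expected_net_benefit(p_threat, benefit_if_real, cost_of_action):
    """Expected net benefit of acting, given P(threat is real)."""
    return p_threat * benefit_if_real - cost_of_action

COST_OF_ACTION = 100     # cost paid regardless of whether the threat is real
BENEFIT_IF_REAL = 120    # value of acting, realized only if the threat is real

for p in (1.00, 0.80, 0.70):
    ev = expected_net_benefit(p, BENEFIT_IF_REAL, COST_OF_ACTION)
    verdict = "act" if ev > 0 else "do not act"
    print(f"P(threat)={p:.2f} -> expected net benefit {ev:+.0f} -> {verdict}")
```

With these (hypothetical) payoffs, acting is justified only if "highly probable" really means near-certainty; at 80% or 70% the expected value turns negative, which is exactly why vague probability words are dangerous in high-stakes decisions.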
Some experts suggested that the U.S. had already decided to invade Iraq, and that this decision shaped the public view of the situation, not the other way around. This is an extreme example of motivated reasoning. The call to invade hinged on the subjective impressions of a few key people with complex motivations.
Extended Rational AI Can Help Us
Extended rational Artificial Intelligence (AI) could be a solution: it could increase our judgmental accuracy in geopolitical forecasting and help us avoid irrational decisions. To achieve this, we must combine an AI proficient in computation and probabilistic reasoning with our own domain knowledge to support real-world problem-solving [8]. How could we go about training such an algorithm?
Training should focus on combining the judgments of multiple subjects so that the algorithm becomes more robust across domains [9]. It should also correlate contexts within a target domain to increase judgmental accuracy [10]. We believe an intelligent algorithm alone cannot solve contextual problems as well as a human brain; at the same time, the human brain alone cannot overcome information overload and bounded rationality.
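One concrete way to combine multiple judgments is to pool individual probability forecasts in log-odds space and then "extremize" the result, a technique studied in the forecasting-tournament literature. The sketch below is a minimal illustration, not the specific method of the cited papers; the function name and the extremizing exponent are assumptions for the example:

```python
# Minimal sketch: pool several forecasters' probability judgments
# by averaging in log-odds space, then extremize the result.
# (Extremizing compensates for forecasters sharing only partial,
# overlapping information.) Parameter value 2.5 is illustrative.
import math

def aggregate(probs, extremize_a=2.5):
    """Pool probability forecasts for one event into a single estimate.

    probs: individual probability estimates (each strictly in (0, 1)).
    extremize_a: exponent > 1 pushes the pooled forecast away from 0.5;
                 extremize_a = 1 is plain log-odds averaging.
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    odds = math.exp(mean_log_odds) ** extremize_a  # extremized odds
    return odds / (1 + odds)

print(aggregate([0.70, 0.80, 0.75]))  # more extreme than any input
```

Note the design choice: with `extremize_a=1` the pooled estimate stays between the most and least confident forecasters, while values above 1 deliberately push it toward 0 or 1, reflecting the idea that several independent "probably" judgments together warrant more confidence than any one of them alone.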
Even though such an implementation might require enormous development resources, particularly in decision sciences, behavioral economics, and informatics, an open innovation scheme could be the key to realizing it. Moreover, the risk that such artificial intelligence threatens our liberty is conceivable; digital security is therefore a crucial technology that must be developed as an integral part of the AI.
What Must We Do?
I am convinced that the development of extended rational AI could be our best technological innovation. It would help ensure human civilization’s sustainability by systematically reducing existential risks like nuclear war while simultaneously increasing the effectiveness of political decisions [11]. Every government should declare a policy to implement such technology. This might mean funding a behavioral research lab for the long term and fostering the adoption of the best-proven rational AI in every institution.
As digital dystopia emerges, we must save humanity with extended rationality!
- Sandberg, A., & Bostrom, N. (n.d.). Global Catastrophic Risks Survey
- Rostow, W. W. (1959). The stages of economic growth. The Economic History Review, 12(1), 1–16. https://doi.org/10.1111/j.1468-0289.1959.tb01829.x
- Bohn, R., & Short, J. (2012). Measuring Consumer Information. International Journal of Communication, 6, 980–1000. http://ijoc.org.
- Chang, W., Chen, E., Mellers, B., & Tetlock, P. (2016). Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments. Judgment and Decision Making, 11(5), 509–526.
- Brézillon, P. (1999). The context in problem-solving: A survey. Knowledge Engineering Review, 14(1), 47–80. https://doi.org/10.1017/S0269888999141018
- The Wiley Blackwell Handbook of Judgment and Decision Making, 2 Volume Set. (n.d.). Retrieved March 12, 2022, from https://books.google.de/books?id=XwjsCgAAQBAJ
- Forecasting Tournaments: Tools for Increasing Transparency and Improving the Quality of Debate. (n.d.). Retrieved March 12, 2022, from https://www.jstor.org/stable/44318787