
Extended Rational AI: Preventing existential crises caused by our confused brains!

Technology could soon ruin humanity.

We argue that it gets even scarier when combined with our imperfect brains.

A survey of academics at the Global Catastrophic Risk Conference hosted by Oxford University estimated a 1% chance of human extinction from nuclear wars over the 21st century [1]. We can now, with a single decision, kill millions. Our destructive power surges with every technological development; the Doomsday Clock is ticking fast.

The WHO states that we live amid an infodemic [3]: an excess of information that makes it difficult to establish the facts. UC San Diego researchers found that information consumption, measured in bytes, grew at an annual rate of 5.4% from 1980 to 2008 [4]. A more recent study projected that U.S. media consumption would reach 15.5 hours per person per day by 2015 [5]. Digital dystopia is becoming a reality; the digital revolution keeps raising the complexity of human civilization.

The Digital Divide

Unequal access to information, together with the inherent cyber issues of privacy, security, and liberty, has become so convoluted that our brains can no longer distinguish facts. As a result, we are losing our grip on what information means. We have entered the digital revolution without knowing how to cope with ever more complex information.

In addition to unmanageable information, our own psychology hinders our decision-making. We use motivated reasoning to reach conclusions that suit our goals, selectively searching for information and selectively crediting evidence. “Motivated cognition can make us stupid, but it is not a consequence of stupidity,” says Dan Kahan, professor of law and psychology at Yale Law School [6]. Our natural tendency is to selectively dismiss information that causes dissonance or anxiety.

Because of both information overload and our own irrationality, we desperately need systematic approaches that help society cope with the complexity of information, identify the flaws in our reasoning, and mitigate catastrophic risks.

Catastrophic Decisions Have Happened Before

Unfortunately, catastrophic damage has already been done. In 2003, the United States chose to invade Iraq. Most now agree that this decision was deeply flawed, costing trillions of dollars and hundreds of thousands of lives.

The government justified the invasion by citing the expert community’s claim that it was “highly probable” that Iraq possessed weapons of mass destruction. Policymakers took “highly probable” to mean near-100% certainty, yet it could just as easily have meant 80% or 70%, interpretations that carry very different practical implications and consequences [7].
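To make that difference concrete, here is a minimal back-of-the-envelope sketch of how the reading of “highly probable” can flip a threshold decision. The cost figures and the decision rule are purely illustrative assumptions, not estimates taken from the sources above.

```python
# Illustrative only: how interpreting "highly probable" as 99%, 80%, or 70%
# changes a simple expected-cost comparison. All cost figures are made up.

COST_OF_ACTION = 400          # fixed cost of acting (arbitrary units)
COST_IF_THREAT_IGNORED = 500  # cost incurred if the threat is real and ignored

def expected_cost_of_inaction(p_threat: float) -> float:
    """Expected cost of doing nothing, given belief p_threat that the threat is real."""
    return p_threat * COST_IF_THREAT_IGNORED

for p_threat in (0.99, 0.80, 0.70):
    inaction = expected_cost_of_inaction(p_threat)
    decision = "act" if inaction > COST_OF_ACTION else "do not act"
    print(f"P(threat) = {p_threat:.2f}: inaction costs {inaction:.0f} "
          f"vs action cost {COST_OF_ACTION} -> {decision}")
```

Under these made-up numbers, the near-100% reading justifies action while the 70–80% readings do not; the point is simply that a vague verbal label hides exactly the quantity the decision depends on.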

Some experts suggested that the U.S. had already decided to invade Iraq, and that this decision shaped the public view of the situation, not the other way around. This is an extreme example of motivated reasoning. The call to invade hinged on the subjective impressions of a few key people with complex motivations.

Extended Rational AI Can Help Us

Extended rational Artificial Intelligence (AI) could be one way to increase our judgmental accuracy in geopolitical forecasting and avoid irrational decisions. To achieve this, however, we must combine an AI proficient in computation and probabilistic reasoning with our own domain knowledge to support real-world problem-solving [8]. How could we go about training such an algorithm?

The training should focus on combining the judgments of many different forecasters so that the aggregate becomes more robust across domains [9]. It should also incorporate context from the target domain to increase judgmental accuracy [10]. We believe an intelligent algorithm alone cannot solve contextual problems as well as a human brain can; at the same time, the human brain alone cannot overcome information overload and bounded rationality.
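As a minimal sketch of what “combining multiple judgments” could look like in practice, the snippet below pools several forecasters’ probability estimates in log-odds space and then extremizes the pooled value, a technique studied in the forecasting-tournament literature. The function name, the extremizing constant, and the example numbers are illustrative assumptions rather than a description of any existing system.

```python
import math

def pool_judgments(probabilities, extremize_a=2.5):
    """Combine several forecasters' probabilities for the same event.

    Averages the judgments in log-odds space (a geometric mean of odds) and
    then multiplies by `extremize_a` to push the aggregate away from 0.5,
    counteracting the tendency of plain averaging to look underconfident.
    """
    eps = 1e-6  # clamp to avoid infinite log-odds at exactly 0 or 1
    clamped = [min(max(p, eps), 1 - eps) for p in probabilities]

    mean_log_odds = sum(math.log(p / (1 - p)) for p in clamped) / len(clamped)
    pooled_log_odds = extremize_a * mean_log_odds

    return 1 / (1 + math.exp(-pooled_log_odds))

# Example: five analysts judge the probability of the same geopolitical event.
analyst_judgments = [0.70, 0.80, 0.65, 0.75, 0.60]
print(f"Pooled probability: {pool_judgments(analyst_judgments):.2f}")
```

A real system would also need to weight forecasters by track record and feed in the domain context mentioned above; this sketch only shows the aggregation step.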

 

Even though such an implementation might require enormous development resources, particularly in the decision sciences, behavioral economics, and informatics, an open innovation scheme could be the key to realizing it. What is more, the risk that such an artificial intelligence could threaten our liberty is conceivable. Digital security is therefore a crucial technology that must be developed as an integral part of the AI.

What Must We Do?

We are convinced that the development of Extended Rational AI could be one of the most valuable technological innovations. It would help ensure human civilization’s sustainability by systematically reducing existential risks such as nuclear war while simultaneously increasing the effectiveness of political decisions [11]. Each government must declare a policy to implement such technology. This might mean funding behavioral research labs for the long term and fostering the adoption of the best-proven rational AI in every institution. It also means creating the right incentives for people to work on Extended Rational AI so that it can help us make better decisions. With this technology, humanity would be more confident of its future than ever before.

Extended rational AI is a powerful tool and must be managed responsibly. We must ensure that the data it uses is accurate and secure, while making sure the technology does not lead to unintended consequences such as discrimination or privacy violations. We must also recognize that, despite its immense potential, we cannot expect this technology to solve all of our problems; some aspects of decision-making still require human judgment and creativity. But by promoting global cooperation and research in extended rational AI, we can build a future grounded in sound decisions made with the help of these systems.

At Wasu, we strive to empower our users through the use of extended rational AI. Our goal is to create a safe and secure environment where people can make informed decisions on their own terms. We believe that the power of this technology should not be taken lightly, but rather leveraged responsibly to improve decision-making throughout society. This is why we are committed to developing innovative solutions that are designed to ensure the privacy and accuracy of data used in our algorithms, while keeping our users’ interests at heart. We hope that through our efforts, we will enable people everywhere to benefit from extended rational AI without sacrificing their freedom or safety.

Save the World with Extended Rational AI

As the digital dystopia emerges, we must save humanity with extended rationality!

References

  1. Sandberg, A., & Bostrom, N. (2008). Global Catastrophic Risks Survey. Future of Humanity Institute, Oxford University.
  2. Rostow, W. W. (1959). The stages of economic growth. The Economic History Review, 12(1), 1–16. https://doi.org/10.1111/J.1468-0289.1959.TB01829.X
  3. https://www.who.int/health-topics/infodemic
  4. Bohn, R., & Short, J. (2012). Measuring Consumer Information. International Journal of Communication, 6, 980–1000. http://ijoc.org.
  5. https://ucsdnews.ucsd.edu/pressrelease/u.s._media_consumption_to_rise_to_15.5_hours_a_day_per_person_by_2015
  6. https://www.discovermagazine.com/the-sciences/what-is-motivated-reasoning-how-does-it-work-dan-kahan-answers
  7. Chang, W., Chen, E., Mellers, B., & Tetlock, P. (2016). Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments. Judgment and Decision Making, 11(5), 509–526.
  8. Brézillon, P. (1999). The context in problem-solving: A survey. Knowledge Engineering Review, 14(1), 47–80. https://doi.org/10.1017/S0269888999141018
  9. The Wiley Blackwell Handbook of Judgment and Decision Making, 2 Volume Set – Google Books. (n.d.). Retrieved March 12, 2022, from https://books.google.de/books?id=XwjsCgAAQBAJ
  10. Forecasting Tournaments: Tools for Increasing Transparency and Improving the Quality of Debate on JSTOR. (n.d.). Retrieved March 12, 2022, from https://www.jstor.org/stable/44318787
  11. https://www.economist.com/leaders/2021/06/12/how-green-bottlenecks-threaten-the-clean-energy-business
