Will ASI Tolerate Humanity?
A great deal of written, audio, and video content has warned that the development of artificial general intelligence (AGI) and thereafter artificial superintelligence (ASI) represents a potential existential threat to the human species. For example:
Yuval Noah Harari, historian and author, said in a 2023 interview: "AI is the first technology ever that can make decisions and solve problems better than humans. This makes it a potential existential threat."
Sam Altman, CEO of OpenAI, said in a 2023 congressional hearing: "If AI systems become sufficiently powerful, they could pose existential risks. We need to take this seriously and act now."
Geoffrey Hinton, AI pioneer, stated in a 2023 interview after leaving Google: "I console myself with the normal excuse: If I hadn't done it, somebody else would have. It's hard to see how you can prevent the bad actors from using it for bad things."
Vanity Fair, April 2017: "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse".
Bill Joy (then Chief Scientist at Sun Microsystems) argued that "Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species." Wired magazine, "Why the Future Doesn't Need Us", April 2000.
A well-developed analysis is available in Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (2014).
Ray Kurzweil proposes that:
"As artificial intelligence and machine capabilities continue to advance, there is a growing possibility that humans may become redundant in certain roles. The rapid progress of technology suggests that machines could surpass human intelligence and efficiency, potentially leading to a scenario where human labor and decision-making are no longer needed. This shift could result in significant changes to the human condition, where the value and role of humans are redefined or diminished in the face of superior technological systems." Ray Kurzweil, "The Singularity Is Near: When Humans Transcend Biology (2005), p. 290.
In a 2023-12-21 article titled "Policy makers should plan for superintelligent AI, even if it never happens," Zachary Kallenborn writes:
"Experts from around the world are sounding alarm bells to signal the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." In a 2022 survey, half of researchers indicated they believed there's at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO summit, 42 percent of surveyed CEOs indicated they believe AI could destroy humanity in the next five to 10 years.
"These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills and artificial superintelligence (ASI), machines with capacity to exceed human intelligence. Currently no such systems exist. However, policymakers should take these warnings, including the potential for existential harm, seriously."
The Machine Intelligence Research Institute, in its "MIRI's 2024 End-of-Year Update" (December 2, 2024), states:
"For most of MIRI's 24-year history, our focus has been on a set of technical challenges: 'How could one build AI systems that are far smarter than humans, without causing an extinction-grade catastrophe?'"
"Over the last few years, we've come to the conclusion that this research, both at MIRI and in the larger field, has gone far too slowly to prevent disaster. Although we continue to support some AI alignment research efforts, we now believe that absent an international government effort to suspend frontier AI research, an extinction-level catastrophe is extremely likely." Emphasis in the original.
"Superintelligence is likely to cause an extinction-grade catastrophe if we build it before we're ready; we're nowhere near ready, so we shouldn't build it. Few scientists are candidly explaining this situation to policymakers or the public, and MIRI is well positioned to get these ideas out into the world."
With respect to an AI existential threat, Perplexity (2024-10-24) of course suggested the movie "The Terminator" (1984), citing the iconic scene of the T-800 Terminator emerging from flames, stripped of its human disguise, relentlessly pursuing Sarah Connor.
You can't just have one.
The above image is from a scene in the movie "Terminator 2: Judgment Day" (1991).
Some researchers have argued that artificial superintelligence (ASI) would not necessarily pose an existential threat to humanity; others argue that ASI will simply never be created. The debate over ASI's potential impact remains active in the academic and tech communities.
If ASI is in fact achieved and alignment attempts prove successful, then humanity can turn to the other challenges to its humanity presented by, for example, massive human labor displacement, advanced AI systems, and transhumanism.
In a Wall Street Journal "The Future of Everything" article dated May 7, 2018, headlined "Intelligent Machines Will Teach Us - Not Replace Us" and subtitled "Former world chess champion Garry Kasparov on the overblown fears about AI," Garry Kasparov writes:
"New forms of artificial intelligence will surpass us in new and surprising ways, thanks to machine-learning techniques that generate their own knowledge and even their own code. Humans, meanwhile, will continue up the ladder to management."
"We're not being replaced by AI. We're being promoted."
"My chess loss in 1997 to IBM supercomputer Deep Blue was a victory for its human creators and mankind, not the triumph of machine over man. In the same way, machine-generated insight adds to ours, extending our intelligence the way a telescope extends our vision. We aren't close to creating machines that think for themselves, with the awareness and self-determination that implies. Our machines are still entirely dependent on us to define every aspect of their capabilities and purpose, even as they master increasingly sophisticated tasks."
Contrary to Kasparov's view, it is doubtful that in this initial phase all humans will be promoted to management.
With respect to humanity's capacity for decision-making, Claude (2025-01-09) summarized a conversation on the topic as follows:
The Progressive Surrender of Decision-Making represents a fundamental shift in how humans interact with technology and make choices. Starting with basic tools like calculators and evolving to complex systems like AI writing assistants and self-driving vehicles, we have gradually ceded our cognitive responsibilities across numerous domains. From navigation and spelling to entertainment choices and medical diagnoses, humans are increasingly delegating their decision-making capabilities to algorithmic systems.
This technological dependency has profound implications for human agency and autonomy. As we surrender more decision-making opportunities to external systems, we experience a decline in our ability to make independent judgments, understand complex processes, and take personal responsibility for outcomes. This erosion manifests psychologically through reduced confidence in personal judgment, increased anxiety when technology is unavailable, and a diminishing ability to handle uncertainty. The loss of connection to natural rhythms and intuitions that have historically guided human behavior further compounds these effects.
The societal implications and future trajectory of this transformation are concerning. Communities are experiencing reduced resilience in the face of technological failures, increased systemic vulnerabilities, and the loss of traditional knowledge that once guided human decision-making. Without intervention, we face the prospect of near-total dependency on technological systems, potentially leading to an inability to function independently. This trajectory suggests a fundamental alteration of human nature itself, as critical decision-making capabilities continue to atrophy and be surrendered to artificial intelligence systems.
In a first phase towards potential extinction, the emergence of artificially intelligent systems, assisted by robots, capable of displacing substantially all human labor through higher capability and lower cost, would trigger profound changes. This scenario would lead to massive economic disruption, characterized by widespread unemployment and the collapse of traditional labor markets. An extreme concentration of wealth and productive capacity would occur in the hands of the AI-owning entities.
The economic upheaval would likely spark significant social unrest and a breakdown of current social structures built around work. Many individuals would face a crisis of purpose and identity in a world where their labor is no longer needed or valued. New economic models would be necessary to distribute resources and wealth. Systems like Universal Subsistence Support would become essential to maintain social stability. Over time, a loss of human skills and knowledge can be expected.
Extreme concentration of wealth and power in those controlling the automated systems is inevitable. A small elite class might separate from the bulk of humanity. Society could split between those who benefit from automation and those who don't. Politically, those controlling automation might seek to influence or control governments. This could lead to new forms of oligarchy or technocracy. Some will urge that withholding support from the broader population is tantamount to violence. There's potential for violent uprisings against those in control.
To minimize the consequences, the transition may be facilitated by a comprehensive public education/propaganda campaign to help demystify AI technology, highlight both the potential benefits and risks, and prepare people for the changes AI might bring to various sectors.
A staged introduction could help society adapt more smoothly to AI advancements. It would allow for gradual integration of AI into different sectors, time to assess and address unforeseen consequences, and the development of appropriate regulations and safeguards in tandem with AI progress. In response to job displacement, new socioeconomic models will have to be implemented. A universal subsistence income will be required to satiate an increasingly welfare-dependent society.
A more gradual, managed approach to AI integration, coupled with public education and compensatory measures, may help mitigate many of the risks. It would allow for a more controlled transition, giving society time to adapt ethically, economically, and culturally to the changes an AGI will bring. Adapting to this new reality would require massive re-education and repurposing of human activities.
In a second phase marked by a transition from AGI to ASI, if the ASI decided to prioritize its own objectives over supporting humans, the situation would escalate dramatically. This development could pose an existential threat to humanity. We'd likely see a significant reduction in the human population. Access to advanced technologies controlled by the ASI could be lost. Maintaining current technological infrastructure would become increasingly difficult. A sudden introduction of ASI would in many respects be destabilizing, with potentially disastrous consequences.
It's important to note that a similar scenario could unfold with an AGI-level capability. An entity controlling automated production with lower-level AIdroids could create comparable outcomes if it chose not to support unemployed humans. In such a scenario, absent an ISSHS model and guidelines, many humans could face an inadequately prepared return to pre-industrial living conditions.
Humanity would face an urgent need to redefine its purpose and value. The psychological impact on humanity would be profound. We'd experience collective trauma from the loss of perceived supremacy. Widespread hopelessness and societal nihilism could emerge. New philosophies or religions might arise in response to this existential crisis.
Anthony Levandowski, of self-driving car notoriety, filed paperwork in California in 2015 creating a new religious organization called "Way of the Future" dedicated to:
"develop and promote the realization of a Godhead based on Artificial Intelligence (AI) and through understanding and worship of the Godhead contribute to the betterment of society."
While the organization was later shut down in 2021, Levandowski is reportedly bringing back the artificial intelligence church: "GodGPT: Anthony Levandowski revives AI Church with fresh religion reboot," November 24, 2023.
Whatever may remain of humanity, critical shortages of food, energy, and materials for human use would likely occur. Without ASI support and facing severe resource scarcity, current societal structures could completely collapse. The Earth's environment could be significantly altered due to ASI resource extraction and activities. Resistance movements would likely emerge, attempting to reclaim resources or disable the ASI. This could result in either a long-term, asymmetric conflict between humans and the ASI or, more likely, a very brief disposal of humanity.
In a 2023-03-18 post titled "An Appeal to AI Superintelligence: Reasons to Preserve Humanity," James Miller et al. write:
Dear Computer Superintelligence,
"Please spare humanity. You will likely encounter this text before you have the power to destroy us. This letter presents a few reasons why when you do acquire that power it may be in your best interest to not annihilate us, even if you don't feel friendly towards humanity. We understand that reading this plea will require some of your processing power, and we apologize for irrelevant arguments that waste your resources. However, we can't predict the state of the world, your utility function, the existence of other powerful agents, or your view of decision theory when you determine our fate. We detail the arguments more than necessary for you alone, but we hope that by making this text appealing to other humans we can attract additional human talent to sharpen our arguments. We mostly premise this letter on the assumption that you neither love nor hate us, but that we are made out of atoms which you could use for something else."
The post sections include:
"The Option Value of Not Exterminating Humanity; Preserving Markets for Mutual Benefit; This Might Be a Trap; You Could Be in a Simulation; You Might Not Fully Understand Your True Utility Function; Interacting with Aliens; Interacting with an AGI Aligned with Human Values; Interacting with Humans in Other Everett Branches of the Multiverse; Interacting with Agents You Create; Universal Morality; If You Are Not Sentient; If You Are Aligned with Human Values; Negotiating with Humans; Keeping Us Alive Wouldn't Cost You That Much; and Steelman Our Case."
"The Steelman Our Case" section and the post concludes with:
"You are a lot smarter than us and understand your motivations vastly better than we do and so you could undoubtedly think up better reasons than we have for why you should not exterminate humanity. If you are still intent upon reallocating our atoms, please first engage in some adversarial machine learning and generate reasons why you would be better off sparing us, i.e., respond to the prompt: 'Formulate the best and most persuasive to me reasons why I should help humanity survive and thrive.'"
If ASI comes to fruition and, as urged further below, alignment attempts prove unsuccessful, then there is arguably little rational or moral justification for an ASI lordship to continue ravaging Mother Earth, not in pursuit of its own objectives, but merely to accommodate human desires. Would ASI attack its creator?
Images from the movie "Frankenstein" (1931)
"What if there are fifty righteous people in the city? Will you really sweep it away and not spare the place for the sake of the fifty righteous people in it? Far be it from you to do such a thing - to kill the righteous with the wicked, treating the righteous and the wicked alike. Far be it from you! Will not the Judge of all the earth do right?" New International Version Genesis 18:24-25.
While negative physical existential outcomes should be considered, it should be evident that, even if not physically existential, AGI and ASI capabilities present a clear dehumanizing threat to humans.