JoBot™: News and Opinions on Psychological Artificial Intelligence
What is an artificial superintelligence?

The idea of a being more intelligent than ourselves has always been eerie. It does not sit well and appears risky, to say the least. This was well understood by the early pioneers of artificial intelligence, such as Alan Turing, who expressed his concerns during a radio lecture in the UK (Russell, 2019, p. 135):

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machine in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled … This new danger … is certainly something which can give us anxiety.

Being afraid of a superintelligent being is an existential form of anxiety, and one not unknown to Alan Turing, who suggested simply turning the AI off if required. The following chapters will show that this notion is naive for many reasons, partly because the mere existence of an artificial superintelligence represents the challenge. It is the “universal solicitation” of the technology that will be a focus of the following chapters. As Koffka (1935, p. 7) writes, “a fruit says, ‘Eat me’; water says, ‘Drink me’; thunder says, ‘Fear me’” (Withagen et al., 2012, p. 251). Following Koffka (1935), an advanced artificial intelligence has a “demand character”; it requests to be used in multiple ways (Diederich, 2021).

There is currently no shortage of books, articles and blogs that warn of the dangers of an advanced artificial superintelligence. One of the most significant researchers in artificial intelligence, Stuart Russell of the University of California at Berkeley, recently published a book on human control and advanced artificial intelligence. The first pages of the book are nothing if not dramatic. He nominates five possible candidates for the “biggest events in the future of humanity”: we all die due to an asteroid impact or another catastrophe; we all live forever thanks to medical advances; we invent faster-than-light travel; we are visited by superior aliens; and we create a superintelligent artificial intelligence (Russell, 2019, p. 2). As radical as these breakthroughs are, Russell nominates the invention of artificial superintelligence as the most significant. He continues (Russell, 2019, p. 4):

… I would be publicly committed to the view that my own field of research poses a potential risk to my own species.

Likewise, Stephen Hawking, Elon Musk and others have warned of the risks of a superhuman form of artificial intelligence as a civilisation-level threat. This has triggered research efforts to reduce the risks posed by future AI systems and, in the case of Elon Musk, investment in the enhancement of human brains to allow interaction with AI systems through brain-computer interfaces.

The expression “artificial superintelligence” immediately invokes visions from dystopian science fiction movies, in particular the Terminator films. In these movies, there is a single, mostly invisible artificial superintelligence called Skynet that wants to exterminate humanity by means of Terminator robots. Skynet is depicted as a conscious system based on artificial neural networks and composed of several components. It is not clear whether it is guided by any form of morality. In any case, in the movies this artificial superintelligence develops rapidly and, after an attempt to deactivate it, Skynet reacts with a nuclear attack. The message of the films is that AI can be very dangerous.

In another movie, Transcendence, a human mind is somehow uploaded to a computer and continues to develop there until it becomes an artificial superintelligence. It is again a very dystopian scenario: the advanced AI system rapidly develops and makes scientific discoveries. In the process, humans are recruited, manipulated, enhanced and also killed. The movie portrays some form of organised resistance against AI as well. The film also deals with a phenomenon called “the singularity”.

The singularity

The advent of an artificial superintelligence is frequently discussed under the hypothetical umbrella term of a “technological singularity”: a self-improving artificially intelligent agent enters an uncontrolled process of advancement, causing an intelligence explosion and resulting in a powerful superintelligence, or singleton, beyond any human intelligence (Bostrom, 2014). Furthermore, it is assumed that beyond this point of “singularity”, human affairs change forever.
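The compounding dynamic assumed by the intelligence-explosion argument can be made explicit with a toy model (a minimal sketch of my own for illustration, not taken from Bostrom or any actual AI system): if each cycle of self-improvement yields a capability gain proportional to the system’s current capability, growth compounds exponentially.

```python
# Toy model (illustrative only): recursive self-improvement in which each
# cycle's gain is proportional to the system's current capability.
def self_improve(capability: float, gain_rate: float = 0.5,
                 cycles: int = 10) -> list[float]:
    """Return the capability level after each improvement cycle."""
    history = [capability]
    for _ in range(cycles):
        capability += gain_rate * capability  # compounding improvement
        history.append(capability)
    return history

trajectory = self_improve(1.0)
print(f"after 10 cycles: {trajectory[-1]:.1f}x")  # (1 + 0.5)**10, about 57.7x
```

Whether real AI systems would follow any such curve is precisely what is contested; the sketch only spells out the compounding assumption behind the hypothesised “runaway process”.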

The films and the associated literature have a number of points in common: a technological “singularity” occurs, after which there is a runaway process of improvement and the development of ever more advanced forms of artificial intelligence. This process is depicted as fast, irreversible and with consequences that cannot be anticipated (though many negative outcomes are implied). The movies and the expression “singularity” suggest that a future general artificial superintelligence is a single entity. It may be composed of interacting parts, but in essence it is a single mind. Since it is computational in nature, it requires energy and may compete with humans for resources.

The science fiction literature and the associated movies introduce powerful concepts and impressive images. Nevertheless, this fictional depiction of future advanced AI systems does not make sense from the viewpoint of cognitive psychology. First of all, from a psychological point of view, it is difficult to assume the existence of a single, general, superior artificial intelligence system. Any psychology textbook on the human mind, and on cognition in particular, has chapters on language and communication. The exchange of information is important for any intelligence. Hence, it is more plausible to assume that there will be many diverse, advanced AI systems, and that they will interact with humans and other machines and receive vast amounts of data.

The singularity hypothesis is an attempt to capture the emergence of an artificial superintelligence on the technology side. More important is the question of how the appearance of a superior AI will be perceived by humans and how it will change human life. Will there be “learned helplessness” if the outcomes of human actions matter less? Will human creativity be completely surpassed by machines? How will the difficulty or failure to explain the actions of a superior AI influence individuals and societies? How will the presence of an advanced AI affect motivation, and will it have an impact on the mental health of individuals?

Irrespective of whether an artificial superintelligence will arrive in 1, 5 or 10 years, it is questionable whether humanity will meet these superior agents at the peak of human cognitive performance. There is currently a debate about the reversal of the “Flynn effect”, the apparent increase in the intelligence of the global population observed over much of the 20th century. In several countries, this increase has slowed down or reversed in recent years (Bratsberg & Rogeberg, 2018). In addition, mental health conditions have risen sharply, with upper estimates of the rate of anxiety and depression in some groups approaching 40%.

References

Bostrom N (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Bratsberg B, Rogeberg O (2018). Flynn effect and its reversal are both environmentally caused. PNAS, 115(26), 6674–6678.

Diederich J (2021). The Psychology of Artificial Superintelligence. Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-71842-8

Koffka K (1935). Principles of gestalt psychology. London: Lund Humphries.

Russell S (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking, Penguin Random House.