28 February 2024

The Alienation of Artificial Intelligence

Estimated reading time: 6 minutes

In 2019, the IT Glossary of the technology and consulting firm Gartner underwent an interesting change. The definition of ‘Artificial Intelligence’, formerly ‘Technology that appears to emulate human performance typically by learning, coming to its own conclusions, appearing to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance or replacing people on execution of non-routine tasks’, was rewritten as ‘Artificial intelligence (AI) applies advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions.’

Turing’s idea was that if artificial intelligence could imitate humans to a level where it is impossible to distinguish between the two, it should be considered to be ‘thinking’. Credit: Jon Call

The obvious change is that artificial intelligence is no longer benchmarked against human cognition, as it traditionally had been since 1950, when the mathematician Alan Turing devised his famous “imitation game,” later known as the Turing test. In the classic setup of the test, a human interrogator is placed in a closed room and communicates in writing with a computer and a human, each in a separate room. The interrogator’s task is to distinguish between the human and the machine; the algorithm, of course, is programmed to appear as human as possible. Turing’s idea was that if artificial intelligence can imitate humans to a level where it is indistinguishable, it should be credited with the ability to “think.”

So far, none of today’s chatbots have passed the test. However, is this truly the most relevant question?

In 1974, philosopher Thomas Nagel posed the question, “What is it like to be a bat?” In his article, Nagel explained that due to the completely different set of sensors possessed by a bat—such as echolocation, ultrasonic hearing, and Doppler shift detection—the animal perceives the world in a way that is entirely distinct from humans. This divergence in perception challenges our understanding of consciousness and cognition.

Author Ray Nayler further explored this concept in his 2023 science fiction novel “The Mountain in the Sea”, applying it to an even more enigmatic creature: the octopus. These remarkable animals exhibit a high level of intelligence, problem-solving abilities, and tool usage, yet their cognitive processes differ significantly from those of humans. Octopuses lack a rigid skeleton, instead featuring a soft body with a nervous system distributed across their arms, which contain the majority of their neurons. Interestingly, complex brains with advanced cognitive features have primarily evolved in vertebrates—with one notable exception: soft-bodied cephalopods, such as octopuses.

The octopus’s brain evolved independently of the complex mammalian brain. Its remarkable abilities extend beyond cognition to include its unique outer appearance. Over time, speculation about octopus evolution has even led to theories suggesting extraterrestrial origins. However, scientific evidence confirms that octopuses share a common ancestor with humans—a 750-million-year-old flatworm. Despite their otherworldly appearance, octopuses remain firmly rooted in Earth’s evolutionary history. Perhaps this mysterious creature’s intelligence and appearance will continue to inspire science fiction tales, much like the 2016 film “Arrival,” based on Ted Chiang’s novella “Story of Your Life.”

Octopuses exhibit a high level of intelligence, problem-solving abilities, and tool usage.

Octopuses, despite their remarkable intelligence, have relatively short lifespans, typically ranging from 6 months to 5 years, depending on the species. Their lives are intricately tied to the process of reproduction. After mating, the last stage of an octopus’s life is called senescence, during which cellular functions break down without repair or replacement. For males, senescence begins after mating, and it may last from weeks to a few months. Females, on the other hand, experience senescence when they lay their eggs. They care for the fertilized eggs in a den until they hatch, after which the female also dies. 

This limited lifespan has intriguing parallels with science fiction. In Philip K. Dick’s novel “Do Androids Dream of Electric Sheep?”, which inspired the movie “Blade Runner”, androids (referred to as “andys” in the book) are intentionally designed with a four-year lifespan. This deliberate limitation prevents them from developing human-like emotions and empathy over extended periods. The theme of limited existence plays a crucial role in exploring empathy, identity, and the essence of humanity in the story. 

Similarly, author Ray Nayler draws upon the short lifespan of octopuses to explain why, despite their high intelligence, they have not developed their own distinct culture. Their evolutionary trajectory remains fascinating, and their unique abilities continue to inspire both scientific curiosity and creative storytelling.

AI21 Labs, an Israeli startup aiming to fundamentally change the way people read and write using generative AI, conducted a groundbreaking experiment called “Human or Not?” in April 2023. This unique social Turing game paired participants with either a human or an AI bot (based on leading language models such as Jurassic-2 and GPT-4). The game allowed users to engage in two-minute conversations and then guess whether they were chatting with a human or a machine. The experiment quickly went viral, with more than 15 million conversations conducted by over two million participants from around the world.

It provided interesting insights, such as the rate of correct guesses by country. Participants from France, Poland, and Germany scored the highest, while participants from India, Russia, and Spain scored the lowest. When we compare these findings with the Transparency International Corruption Perception Index 2022, we find a correlation coefficient of 0.33, suggesting a moderate positive relation between the absence of corruption and the ability to distinguish humans from AI deep fakes. This indicates a tendency for the two variables to move in the same direction, but the relationship is neither strong nor highly predictable.
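A coefficient like the 0.33 above is a standard Pearson correlation between two paired series: a country’s CPI score and its guess accuracy. The sketch below shows the computation in plain Python. Note that the accuracy percentages are made-up placeholder values for illustration (only the six countries named in the text are used, with approximate CPI-style scores), so the resulting coefficient will not match the reported 0.33.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustration: approximate CPI 2022 scores (higher = perceived
# as less corrupt) paired with invented guess-accuracy percentages for
# France, Poland, Germany, India, Russia, Spain.
cpi = [72, 55, 79, 40, 28, 60]
accuracy = [71, 70, 71, 63, 64, 62]  # placeholder values, not real results

r = pearson(cpi, accuracy)
print(f"Pearson r = {r:.2f}")
```

A value near +1 would mean the two series rise and fall together almost perfectly; a value around 0.33, as in the study, signals only a moderate tendency in the same direction.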

Artificial Intelligence (AI) and robots seem increasingly alive. Source: Piqsels

Jean Baudrillard, the French philosopher and cultural theorist, introduced the concept of simulacra. Simulacra refer to copies or imitations of something that either never had an original reality or no longer exists. In contemporary society, simulations—such as images, signs, and symbols—often replace the reality they are meant to represent. This phenomenon blurs the boundaries between what is real and what is simulated. Deep fakes, in this context, can be seen as examples of such simulacra.

“Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves.” (Philip K. Dick).

To mitigate this tendency, humans must grasp the concept of AI so that they can thoughtfully evaluate AI-generated information for decision-making. This understanding should also encompass transparency regarding the AI origin of content. When this transparency exists, humans and machines can collaborate in what Zann Gill aptly termed Collaborative Intelligence. In this context, collaborative intelligence characterizes multi-agent, distributed systems in which each agent—whether human or machine—autonomously contributes to a problem-solving network. As the philosopher Aristotle wisely stated: “The whole is greater than the sum of its parts.”

However, if an artificial intelligence were more akin to an octopus than to a human, its behavior and decision-making would be less comprehensible to humans. This lack of understanding extends beyond cognitive aspects and also affects empathy. Interestingly, a comparable divergence has already led to the exclusion of intelligent cephalopods from the 1966 US Animal Welfare Act (7 USC § 2132(g)), which specifically covers vertebrates (animals with backbones).

(g) The term “animal” means any live or dead dog, cat, monkey (nonhuman primate mammal), guinea pig, hamster, rabbit, or such other warm-blooded animal, as the Secretary may determine is being used, or is intended for use, for research, testing, experimentation, or exhibition purposes, or as a pet; but such term excludes (1) birds, rats of the genus Rattus, and mice of the genus Mus, bred for use in research, (2) horses not used for research purposes, and (3) other farm animals, such as, but not limited to livestock or poultry, used or intended for use as food or fiber, or livestock or poultry used or intended for use for improving animal nutrition, breeding, management, or production efficiency, or for improving the quality of food or fiber. With respect to a dog, the term means all dogs including those used for hunting, security, or breeding purposes.

Based on evolution, the law primarily focuses on our own class, assuming that other species are incapable of human-like feelings, particularly related to pain and suffering. This human bias among lawmakers often leads to an exclusion of empathy toward other organisms. Similarity plays a crucial role in eliciting sympathy, which explains why scientific research has predominantly centered around vertebrates, rather than encompassing other biological classes.

Recent studies, however, challenge this narrow perspective. Complex behavior and cognitive abilities are not exclusive to our class. For instance, octopuses, which belong to the cephalopod group, exhibit remarkable traits:

  • They demonstrate problem-solving abilities, including tool usage.
  • Their behavior reflects curiosity and playfulness.
  • Emerging evidence even suggests that octopuses are capable of experiencing pain and stress.
Human-level intelligence has come to be known as Strong AI or Artificial General Intelligence (AGI).

Masahiro Mori, in his 1970 essay titled “The Uncanny Valley,” concluded that up to a certain perceived level of human likeness, people react with sympathy toward a robot. However, beyond this point, the experience shifts, and the robot becomes perceived as creepy. On a practical level, most users find robots like ‘Pepper’ to be likable and non-offensive, while creations like ‘Sophia’ may be perceived as threatening. This understanding can guide the design of likable robots, which can assist us in various settings such as airports, restaurants, offices, and nursing homes. Interestingly, their cute outer appearance may deceive us into thinking that the included AI is fundamentally different from human intelligence. It’s crucial to consider this as we form Collaborative Intelligence by teaming up with machines. This collaboration can occur not only in workplaces and universities but also within children’s rooms.

Patrick Henz
