Event

Is AI Mindless? AI, Emotion, and Ethics

  • Date: December 2, 2020

About:
AI has encountered Moravec’s paradox: teaching computers high-level reasoning is comparatively easy, while teaching them simple sensorimotor skills is remarkably difficult. AI ethics calls for building “responsible intelligent machines” that are empathetic in the sense that they understand the social meaning of human behavior, which in turn requires understanding the meaning of Self-Other-World. But in what sense can AI “evolve” emotions and morality when today’s machines lack autonomy, self-awareness, and practical reasoning? Can we re-examine the dilemma between machine and human through the framework and conflicts of ancient Greek drama, especially Oedipus the King?

Key Discussions:
• How do we resolve Moravec’s paradox?
• Why do we say AI today is mindless?
• In what sense can we talk about the emotions and moral responsibilities of machines?
• Is it reasonable to talk about AI emotions and ethics from the perspective of human cognition?

Presenters:
LIU Xiaoli, Professor, School of Philosophy, Renmin University of China
Liu Xiaoli is Distinguished Professor and chief researcher of the Interdisciplinary Center for Philosophy and Cognitive Science at Renmin University of China. She currently chairs the Committee of Philosophy of Science of the China Society of Dialectics of Nature and has published many influential articles, including “The Dilemma and Trend of the Research Program of Cognitive Science,” “Analysis of the Thesis of the Extended Mind,” and “Questions on the Computational Theory of Mind,” in journals such as Social Sciences in China, Philosophical Research, and Journal of Dialectics of Nature. She is the author of The Life of Reason—A Study of Gödel’s Thoughts, and her recent book The Challenge of Cognitive Science to Contemporary Philosophy was published by Science Press.

ZHU Rui, Professor, School of Philosophy, Renmin University of China
Zhu Rui is Distinguished Professor at the School of Philosophy, Renmin University of China, and a visiting professor at Texas State University. His research interests include neurophilosophy and philosophy of mind, neuroaesthetics, Plato, and comparative philosophy. He returned to China in 2018 after nearly three decades of academic research in the United States, and he is committed to promoting interdisciplinary fields such as neuroaesthetics.

Moderator:
ZHAN Yiwen, Researcher, Berggruen Research Center, Peking University
Zhan Yiwen holds a PhD in philosophy and is a researcher at the Berggruen Research Center, Peking University. His research interests include philosophy of science, metaphysics, and epistemology.

Summary:

The forum began with the international call, as part of an ethical code, for building “responsible AI,” and asked whether the ability to empathize is a necessary condition for intelligent machines qua co-inhabitants of the so-called Lebenswelt (life-world). Given that only an autonomous agent with proper emotions can be said to be morally responsible, the issue of machine emotivity is central to understanding whether AI will eventually evolve into a moral agent, and how risky such an evolution might be.

In the 13th Berggruen Forum, “Is AI Mindless: Intelligent Machine and the Ethics of Emotions,” broadcast on bilibili.com on the evening of December 10, 2020, Professors Liu Xiaoli and Zhu Rui of Renmin University of China discussed the ethical and emotional aspects of AI. Zhan Yiwen, a researcher at the Berggruen Research Center, Peking University, served as moderator.

Liu Xiaoli, philosophy professor at Renmin University of China and chief researcher of the university’s Interdisciplinary Center for Philosophy and Cognitive Science, shared her insights on seven issues: 1. the proper meaning and context for discussing the morality and emotions of intelligent machines; 2. Moore’s Law and Moravec’s Paradox; 3. the machine as logician vs. magician; 4. the emotional grounding of AMAs (artificial moral agents); 5. the pitfall of the “Uncanny Valley”; 6. the Omohundro dilemma; and 7. a general assessment of the dangers of AMAs.

Liu began by drawing a distinction between the genuine moral emotions of autonomous agents and the engineered, merely functional moral emotions implanted into a machine by its designer. Insofar as we are interested in the morality and emotions of AGI (artificial general intelligence), intelligent machines need to really “experience,” rather than merely simulate, moral emotions.

Since its invention more than six decades ago, and especially in the last thirty years, AI has been plagued by what is known as Moravec’s Paradox. As Hans Moravec observed, computers tend to do well on tasks that humans consider difficult, yet badly on many easy ones. To address the problem, AI builders abandoned the strictly computationalist approach, which relies on deductive reasoning, and embraced the statistics-driven connectionist approach, only to find themselves trapped in the black box of algorithms. The current trend is to combine the top-down symbolic approach with the bottom-up statistical approach. However, despite its profound achievements in large-scale computing and in intelligence amplification on specific tasks, AI seems unlikely to show any true “mindfulness” in the near future.

Contemporary neural networks and deep learning exhibit features that cannot be explained by human cognitive and psychological common sense. For one thing, current AI models show no autonomy as they interact with the world, and they do not yet count as emotionally grounded moral agents that can tell good from bad and right from wrong. (See Liu Xiaoli, “Lack of Autonomy and Self-awareness is the Primary Difference between Mindless Robots and Human Beings,” published at Ruin.)

In response, Liu proposed that a truly emotional AI has to achieve grounding in three dimensions: 1) symbol grounding, to show the ability to understand the meaning of natural languages; 2) physical grounding, to achieve embodied comprehension of the physical world; and 3) emotional grounding, to appreciate human behaviors in social contexts.

Liu introduced the concepts of Emotional Trigger, Credit Assignment, and Practical Reasoning, and elaborated on the key features of an emotionally grounded AMA.
1. Emotional Trigger: An AMA is a cognitive agent that can set its own goals and act accordingly. Beyond perception, it must be sensitive to moral situations and process information about its surroundings in light of them. It must also be flexible enough to change its cognitive mode under uncertainty and under the constraints of time, space, or scarce resources.
2. Credit Assignment: An AMA is a cognitive agent capable of assigning credit. Credit assignment enables an AMA to achieve human-like learning, generalizing from limited samples to find general solutions (see the sketch after this list).
3. Practical Reasoning: An AMA is a cognitive agent capable of practical reasoning. It can form beliefs based on its own representations of the world and act intentionally. It must be capable of autonomous interaction with the world: setting its own goals, reflecting on and evaluating its own actions, and reasoning counterfactually.
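To make the notion of credit assignment concrete, the following is a minimal, purely illustrative sketch in Python (not from the talk) of temporal-difference learning, the classic algorithmic answer to the credit-assignment problem: a reward that arrives only at the end of a sequence is gradually propagated back to the earlier states that led to it.

    # Illustrative only: temporal-difference credit assignment on a
    # five-state chain where only the final transition yields a reward.
    # All names and numbers are hypothetical.

    ALPHA, GAMMA = 0.1, 0.9   # learning rate, discount factor
    values = [0.0] * 5        # estimated value of each state

    def td_update(state, next_state, reward):
        """Shift credit for `reward` back onto the preceding state."""
        target = reward + GAMMA * values[next_state]
        values[state] += ALPHA * (target - values[state])

    # Repeatedly walk the chain; the reward of 1.0 arrives only at the
    # end, yet earlier states gradually accrue value -- i.e., credit.
    for _ in range(200):
        for s in range(4):
            td_update(s, s + 1, 1.0 if s == 3 else 0.0)

    print([round(v, 2) for v in values])   # credit fades with distance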

Liu stressed the role of empathy in emotional grounding. Humans tend to build robots that are amicable, empathetic, and humanlike in appearance. Yet as the level of empathy rises, the pitfall of the so-called “Uncanny Valley” may become more evident: the reality of AI emotions, from our human perspective, may turn out to be revolting and horrifying. In addition, we face the Omohundro dilemma. We can imagine a “moralistic” AI hardwired with a set of human-centered ethical codes. But since we humans have indefinitely many incompatible preferences, if each and every human ethical code is built into an AI in a top-down manner, the morally perfectionistic AI will be paralyzed by the conflicting rules and, overwhelmed by the innumerable moral choices it is thrown into, choose the only way out: robotic suicide, or a crash.
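The paralysis Liu describes can be given a toy rendering: if every ethical code is hardwired as an inviolable veto in a top-down manner, mutually incompatible codes can leave the agent with no permissible action at all. A minimal sketch, with hypothetical rules and actions of our own invention:

    # Illustrative only: top-down ethical codes as hard vetoes on actions.
    # With enough incompatible vetoes the feasible set becomes empty and
    # the agent "crashes" -- the moral paralysis described above.

    actions = ["tell the truth", "stay silent", "tell a white lie"]

    rules = [
        lambda a: a != "tell a white lie",  # never deceive
        lambda a: a != "tell the truth",    # never cause avoidable hurt
        lambda a: a != "stay silent",       # never withhold help
    ]

    permissible = [a for a in actions if all(rule(a) for rule in rules)]

    if not permissible:
        raise RuntimeError("no permissible action: moral paralysis (crash)")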

According to Liu, robots as we know them today are not real moral agents: the moral responsibility for their actions falls entirely on their human designers, manufacturers, and operators. An AI can be fully responsible for its actions only if it is constructed as an autonomous AMA. In the foreseeable future, while human technologies still fall short of such a full-fledged AMA, we may reasonably expect the emerging moral agents to be hybrid extended cognitive systems that combine human and AI features. At the end of her talk, Liu urged the audience to reflect on the apparently distinct evolutionary paths of robots and humans, of silicon-based and carbon-based forms of life. If our understanding of AI emotions seems overly anthropocentric, how could we conceive of an alternative?

Following Liu’s question, Zhu Rui, philosophy professor at Renmin University of China and visiting professor at Texas State University, gave a talk entitled “The Limit to Inquiry: A Discussion of Knowledge Possession, Understanding, and the Value of Uncertainty,” in which he explored the relationship between knowledge and uncertainty as illuminated by the Greek tragedy Oedipus the King.

Zhu proposed that contemporary AI discourse is comparable to the discussion of human–god relationships in Greek tragedy. For the ancient Greeks, the gap between humans and gods is not essential but more or less a technical matter; in other words, certain techniques could help turn a human into a god, and knowledge happens to be one of them. In Oedipus the King, Sophocles examines what knowledge means to humans: is it possible that a human, through the quest for knowledge, overcomes his mortal destiny and becomes the gods’ peer, or even their superior? The question is somewhat isomorphic to our debates over robot–human relationships today.

The ancient Greeks tended to believe that immortality and knowledge of the future are, among other things, what distinguish a deity from humans, and that ascending the epistemic ladder by searching for ever more truths offers humans a way to become “similar to gods.” Yet Oedipus the King highlights a unique danger of such a “knowledge project” by staging an Oedipus who fails to understand the truth about himself precisely when he is bent on knowing himself. As a result, his steadfast self-enquiry ends in ruin. The play thus draws a profound distinction between two senses of “knowledge”: possession and understanding. Unlike mere possession, which is passive and unintegrated, understanding is active knowledge in which pieces of information are integrated into a creative whole. First and foremost, such understanding carries within itself various forms of self-limitation.

Oedipus assumes the role of a discoverer in Sophocles’ play. He pursues each and every clue, which eventually leads to himself: the murderer of his own father and the husband of his own mother. The play’s dramatic tension lies in the fact that at each stage, the more Oedipus is convinced of his judgement, the more flawed his reasoning proves to be. Despite numerous attempts by others to stop him short of the fateful encounter with the final truth, Oedipus’ curiosity remains insatiable and unstoppable. Eventually, he becomes a maniac who cannot help himself. His doom is brought about by an uncontrolled hunt (ichneuo, to track) for the truth and by the hubris of an otherwise supreme human intellect.

Such hubris might be characterized by the following four attitudes: 1) there is no end to the search for knowledge, and no question can be fundamentally ill-posed; 2) knowledge is always superior to ignorance, and certainty always better than uncertainty; 3) knowledge can improve the human condition by helping us overcome our fate; and 4) everything is knowable as long as one keeps searching. These attitudes may be labeled “Socratic,” and they function as a backdrop to Sophocles’ play. While the Socratic project of climbing the epistemic ladder may lead humans to become like the gods, the more conservative Sophocles maintains that not everything should be subject to rational investigation. Unrestrained human enquiry suffers from a cognitive “Oedipus complex,” so to speak.

Historically speaking, alongside the Socratic project, the Sophoclean question of how to place a limit on one’s enquiry has been a venerable philosophical project, too. As David Hume points out, since a thoroughgoing sceptic would not even survive everyday life, a rational sceptic must know how to restrict the scope of his enquiry. Similarly, knowing the limits of reason is paramount for Kant.

The Sophoclean concern is still with us as we think about the relationship between humans and AI. An unbounded search for knowledge may bring about the ruin of us all. With ever more powerful “search” AI on the horizon, how to equip AI with true understanding in the Sophoclean sense becomes an urgent issue. One way to realize such understanding may be through what is called the implantation of task uncertainty into a search engine. As Stuart Russell points out, although probabilistic programming languages (PPLs) allow modern AI to do away with deterministic assumptions, AI today still “underestimates the degree of uncertainty about target systems.” A single-minded AI that harbors no doubt about its search target may become a “maniac” that recognizes no boundaries. Such an AI embodies the Oedipal hubris: it would refuse to be shut down and resist any possible human intervention. Such refusal might well mean, as many have pointed out, the total elimination of the human species.

Therefore, the creation of “beneficial AI,” at least according to Russell, hinges on making AI humble, and the key to “AI humility” is building uncertainty into AI. Not coincidentally, such uncertainty implantation may constitute the very foundation for realizing moral emotions in AI. The ancient Stoics regarded cognitive uncertainty as the essence of emotion, so uncertainty-based restraints on rational machines may turn out to be a way of creating emotional machines, too.
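Russell’s off-switch argument can be rendered as a schematic expected-utility sketch; the scenario and numbers below are made up for illustration and are not from the talk. An agent certain of its objective sees human oversight only as a cost, while an agent uncertain of its objective treats the human’s veto as valuable evidence and defers.

    # Illustrative only: a schematic version of Stuart Russell's
    # off-switch argument, with made-up payoffs.
    #
    # The agent can ACT on its plan directly, or DEFER to a human who
    # allows the plan if it is actually good (payoff > 0) and switches
    # the agent off otherwise (payoff 0). Deferring costs a small delay.

    DELAY_COST = 0.05

    def best_choice(beliefs):
        """beliefs: list of (probability, payoff) pairs for the plan."""
        act = sum(p * u for p, u in beliefs)                        # go ahead
        defer = sum(p * max(u, 0.0) for p, u in beliefs) - DELAY_COST
        return "defer to human" if defer > act else "act unilaterally"

    # A certain agent "knows" its plan is worth +1.0: the off-switch can
    # only cost it time, so it sees no reason to accept oversight.
    print(best_choice([(1.0, 1.0)]))               # act unilaterally

    # An uncertain agent thinks the plan might be worth +1.0 or -2.0:
    # letting the human veto the bad case is worth the delay, so the
    # "humble" agent leaves the switch in human hands.
    print(best_choice([(0.6, 1.0), (0.4, -2.0)]))  # defer to human

On this toy picture, humility is not a personality trait but a consequence of representing uncertainty about one’s own objective.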

In closing, Zhu suggested various ways to “fence in” AI: besides incorporating uncertainty, we should also be wary of the project of making AI too much like humans.

The talks by Liu Xiaoli and Zhu Rui were followed by a 50-minute Q&A session, during which the audience raised many interesting questions. Speakers and audience exchanged views on topics such as the importance of machine emotions, whether AI ought to be autonomous, the tension between reason and empathy, how to deal with uncertainty-free supercomputers, and the utility of cognitive enhancement.

About The Berggruen Institute

The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world.