Bringing Ethical Thinking into Reality | AI Ethical Governance and Social Robots
The practical applications of AI are growing exponentially, while the human community that looks to AI to improve its well-being finds itself in a global crisis of cultural values. What problems does AI ethical governance need to confront?
"Social robots" include robot care, robot companions, robot police, robot colleagues, etc. They have practical action capabilities and autonomy, and their impact on human society will be far greater than the current large language models or AI image generation. What moral and emotional procedures should social robots have to force them to make choices that conform to the universal values and moral systems of human beings?
To explore these cutting-edge issues, East China Normal University held an academic salon, "AI Gathering" (Lectures 134-135 of the Jisi and Wen Lectures), on June 5, 2024, on the topics of AI ethical governance and social robots. The salon was co-organized by the Institute of Modern Thoughts and Culture of East China Normal University and the Institute of Humanities and Social Sciences of East China Normal University, and hosted by the Center for Applied Ethics of the Department of Philosophy of East China Normal University together with the Shanghai Comparative Research Association of Chinese and Western Philosophy and Culture.
The forum was divided into two parts: a thematic seminar and a roundtable discussion. The thematic seminar featured keynote speeches by Professor Liu Jilu of California State University, Fullerton, and Professor Wang Xiaohong of Xi'an Jiaotong University, with Professor Chen Yun of the Institute of Modern Thoughts and Culture of East China Normal University and Professor Pan Bin of the Department of Philosophy of East China Normal University presiding. The roundtable discussion was chaired by Professor Zhang Rongnan of the Center for Applied Ethics of the Department of Philosophy, East China Normal University.
1. Social robots
Professor Liu Jilu delivered a keynote speech entitled "How to Design the Basic Moral Qualities of Social Robots: Confucian Virtues and Their Influence." She first clarified the conceptual scope of AI ethics and robot ethics, explaining that her work mainly concerns two questions within robot ethics: how to design, apply, and treat robots, and the morality of robots themselves. She noted that her research focuses on social robots, which require intensive interaction with humans and may in the future even possess independent capacities for thought and action. Given the rapid development of existing robots and the difficulties that may arise in the future, how to design robots' moral qualities has become an important issue. On the one hand, Professor Liu reviewed past choices of the ethical theories that artificial moral agents should follow; on the other, she put forward her own conception, drawing on Confucian virtue ethics and the theory of moral emotions: rather than relying on the action itself, she starts from the agent, in order to build robots with a continuous moral "personality."
Professor Liu then explained her conception in detail with reference to B. F. Malle's proposal, advocating that we first pursue the goal of "moral competence" in robots and only afterwards consider how to build their moral subjectivity. She cited a Columbia University professor's claim that we should adopt an attitude of steering rather than fearing toward the development of AI and robots. On Malle's account, "moral competence" comprises six key elements: moral vocabulary, moral norms, moral cognition, moral emotion, moral decision-making and action, and moral communication. Professor Liu stressed in particular that moral norms should not be confined to a single region or standard, but should form a universal yet pluralistic system of value norms. On this basis, she explained in detail five reasons for choosing Confucian ethics to ground robot virtue: (1) Confucian ethics emphasizes gradations of closeness and distance, which suits the establishment of intimate relationships between robots and humans; (2) Confucian ethics emphasizes lifelong learning, which resonates with the open-ended, never-finished character of machine learning; (3) Confucian virtue ethics enumerates rich and nuanced virtues that are not merely remedies for human shortcomings but can also apply to the moral makeup of robots; (4) Confucianism's emphasis on handling relationships between people helps to build a community with moral integrity; and (5) Confucian virtue ethics is an ethics of moral excellence, whose goal is not merely to build anthropomorphic robots but to create gentlemen and sages among robots.
As for which of the many Confucian virtues to select, Professor Liu advocated incorporating the eight virtues of "benevolence, righteousness, propriety, wisdom, loyalty, trustworthiness, sincerity, and respect" into robot ethical design as the top-down basic structure of robot moral construction. She also emphasized that robots must acquire moral cognition from the bottom up, learning from the cases of virtuous people and absorbing human value choices through interaction with people. She holds that the value construction of robots requires "humans in the loop," and she used questionnaires to examine people's expectations of virtuous robots. She believes that the questionnaire she conducted in 2022 and its resulting data confirm the usefulness of the Confucian virtues.
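To make this hybrid design concrete, the following minimal Python sketch is offered purely as an illustration; it is not Professor Liu's implementation, and the action profiles, numbers, and feedback rule are all hypothetical. Top-down, a designer seeds weights for the eight virtues used to score candidate actions; bottom-up, a human-in-the-loop step nudges those weights after feedback from human exemplars.

```python
# Illustrative sketch only: a toy hybrid of top-down virtue weights and
# bottom-up human-in-the-loop learning. All names and numbers are hypothetical.

VIRTUES = ["benevolence", "righteousness", "propriety", "wisdom",
           "loyalty", "trustworthiness", "sincerity", "respect"]

# Top-down: designer-specified initial importance of each virtue.
weights = {v: 1.0 for v in VIRTUES}

def score(action_profile: dict[str, float]) -> float:
    """Sum how strongly an action expresses each virtue (0..1), weighted."""
    return sum(weights[v] * action_profile.get(v, 0.0) for v in VIRTUES)

def choose(actions: dict[str, dict[str, float]]) -> str:
    """Pick the candidate action with the highest weighted virtue score."""
    return max(actions, key=lambda name: score(actions[name]))

def human_feedback_update(action_profile: dict[str, float],
                          approved: bool, lr: float = 0.1) -> None:
    """Bottom-up: nudge virtue weights toward (or away from) the profile
    of an action that a human exemplar approved (or rejected)."""
    sign = 1.0 if approved else -1.0
    for v in VIRTUES:
        weights[v] = max(0.0, weights[v] + sign * lr * action_profile.get(v, 0.0))

# Toy usage: two candidate ways for a care robot to respond.
candidates = {
    "comfort_patient": {"benevolence": 0.9, "propriety": 0.6, "sincerity": 0.8},
    "report_only": {"wisdom": 0.5, "loyalty": 0.7},
}
best = choose(candidates)                               # -> "comfort_patient"
human_feedback_update(candidates[best], approved=True)  # human-in-the-loop step
print(best, weights)
```

The point of the sketch is the division of labor: the virtue list and initial weights come from ethical theory, while their relative importance is refined through interaction with people.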
Finally, Professor Liu Jilu reiterated that moral emotions are not optional for robots but necessary: a robot must have certain moral emotions in order to make the right moral choices. Not every human positive and negative emotion needs to be designed into robots; what she chose are what Mencius called the "four beginnings," the four moral emotions that make morality possible. On her view, these moral emotions are grounded not in the body's biological reactions but in moral judgments; they are cognitive emotions, and in conjunction with them one may also select natural emotions such as love, sympathy, and regret. Of course, at this stage we can only expect robots to have "moral competence" and to correctly make moral judgments, communicate morally, and carry out moral choices and moral actions. How, in the future, to move from appearance to substance and build genuine moral subjectivity in robots remains a debatable and challenging task. Professor Liu hopes that more Confucian scholars will join this research.
Professor Liu Jilu, California State University, Fullerton
Xiao Yang, a professor at Kenyon College in the United States and a visiting professor in the Department of Philosophy at East China Normal University, commented on two dimensions of virtue, emotion and wisdom, as well as on how cases are described in questionnaire methods. Professor Xiao strongly agrees with Professor Liu Jilu's view that ethical research should focus not only on actions and rules of action but on virtue. One consequence, however, is that this poses many serious challenges for the design of social robots, challenges that an ethics focused only on actions and principles does not face.
Professor Xiao Yang emphasized that virtue has at least two dimensions that challenge the design of virtuous robots. First, virtue involves emotion: it is not enough merely to perform the right actions; one must also have the right attitudes and emotions. The first thing a virtuous robot must be is an emotional robot, a robot with "emotional intelligence." On the problem of how to endow robots with emotions, Professor Xiao elaborated further, pointing out that most philosophers hold that emotions involve propositional content, and few, like the Stoics, hold that emotions can be completely reduced to and equated with propositional content. Later empirical research by psychologists showed that only a very small number of emotions can be completely reduced to propositional content. That is to say, there is always something in emotion that is closely tied to the body and cannot be expressed in propositions, and therefore cannot be programmed. This is the greatest challenge in designing emotional robots.
Besides emotion, virtue also involves the dimension of practical wisdom. Professor Xiao first quoted Feng Qi's famous dictum "turn theory into virtue," and then suggested that Mr. Feng would probably agree with one of its corollaries, namely that "virtue should not be turned back into theory." This is because practical wisdom cannot be reduced to theories or principles. When Bernard Williams was once asked, "What is wrong with a person who acts on principles?", his answer was: because he acts like a machine. John McDowell has likewise argued that virtue cannot be codified.
Regarding the cases in Professor Liu Jilu's questionnaire research, Professor Xiao mentioned Cora Diamond's article on experimental philosophy, "What if x isn't the number of sheep? Wittgenstein and Thought-Experiments in Ethics." Diamond argues that current researchers show little self-awareness or critical reflection about how the cases in their questionnaires are described, and in particular fail to notice that their description of a case is already permeated by various values and presuppositions. Drawing on his own teaching experience, Professor Xiao mentioned that in discussions of "euthanasia," some students' positions changed when the very same case was redescribed from "helping a person escape pain" to "murder." He suggested that future questionnaires might try varying the description of a case, or, better, offer several different descriptions of the same case. The differences between such descriptions are often very subtle, and teaching robots to recognize and understand them is another huge challenge in designing a virtuous robot.
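As a purely hypothetical illustration of this framing worry, the sketch below pairs one underlying scenario with two value-laden descriptions and randomly assigns participants to one of them; the case and both wordings are invented, and any real survey instrument would be far more careful.

```python
# Hypothetical sketch of a framing manipulation in a questionnaire study.
# The case and both wordings are invented for illustration only.
import random

FRAMES = {
    "relief": "A doctor helps a suffering, terminally ill patient escape unbearable pain.",
    "killing": "A doctor deliberately ends the life of a terminally ill patient.",
}

def assign_frame(participant_id: int) -> str:
    """Randomly (but reproducibly) assign each participant one description."""
    return random.Random(participant_id).choice(sorted(FRAMES))

def present(participant_id: int) -> str:
    frame = assign_frame(participant_id)
    return f"[{frame}] {FRAMES[frame]} Is this act morally permissible? (1-7)"

for pid in range(4):
    print(present(pid))
```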
Xiao Yang, professor at Kenyon College in the United States and visiting professor of philosophy at East China Normal University
Professor Gou Dongfeng of the Department of Philosophy of East China Normal University commented on methodology, the universality of Confucianism, and the question of being "like a human." Starting from methodology, Professor Gou affirmed the importance of exploring moral competence and pointed out that clarifying the object of research is a crucial task; Professor Liu's clear focus on social robots is of great significance. Professor Gou believes that the question of how humans and robots can live in harmony involves our understanding of the human, and the Confucian account of the human is very rich. He quoted Mencius: "Yao and Shun had it as their nature; Tang and Wu returned to it; the Five Hegemons borrowed it. But having borrowed it for so long without returning it, how could one know that they did not possess it?" He used this to explain the Confucian understanding of the human and how we should deal with robots.
Next, Professor Gou discussed the universality of Confucianism. Citing Liang Shuming's three problems of life, he pointed out that Confucianism pays particular attention to relationships between people and advocates resolving them in a moral way, namely through loyalty and reciprocity (zhongshu), an approach that can then be extended to the resolution of all other problems. Professor Gou emphasized that the Confucian idea of governance, "govern people by what is human in them, and stop when they reform," together with the fundamental spirit of loyalty and reciprocity, offers important inspiration for the interaction between humans and robots.
Finally, Professor Gou discussed the question of being "like a human." He mentioned Masahiro Mori's "uncanny valley" effect, and from there the different feelings with which Chinese and Western philosophy regard robots that are highly similar to humans. Confucianism views heaven and earth, ghosts and spirits, and humans from the perspective of kinds, without a strong duality between the human and the non-human. On this basis, Professor Gou raised three further issues concerning robot morality. First, existing robots may be indistinguishable from humans when completing specific tasks, but they are still only playing specific roles and do not necessarily possess a robot "benevolence." Second, before strong artificial intelligence appears, how should we treat objects that appear human but are not? This bears on robot design: top-down design may produce immoral robots, which is unpredictable. Third, once strong artificial intelligence appears, we will no longer be able to design it; how should we then handle the moral issues of such intelligences?
Gou Dongfeng, Professor of the Department of Philosophy at East China Normal University
Professor Chen Yun of the Department of Philosophy of East China Normal University said that with the rapid development of artificial intelligence technology, the question of whether robots can possess human qualities such as legal awareness, moral awareness, a sense of history, and empathy demands serious attention. The human-machine question is in fact a discussion of "what kind of being we hope to build robots into" and "what kind of robots we can actually create." Whether a management system can be established in practice to accommodate robots as members of human society, and whether future robots can have wisdom, emotion, and consciousness, or even invent new systems of linguistic symbols, remain open questions. Professor Chen believes that although artificial intelligence research has made significant progress, a gap remains between reality and our ideal goals, and bringing machines to the same level of complexity as humans is still a huge challenge. Nevertheless, humans can continue to think deeply about and explore this field.
Professor Chen Yun from the Department of Philosophy, East China Normal University
2. AI ethical governance
Professor Wang Xiaohong delivered a keynote speech, "Three Major Problems in AI Ethical Governance and the Interpretation of Philosophical Principles." His report covered three parts: the problem of cultural consensus; the moral responsibility of AI composite systems; and the conceptual analysis and clarification of several AI ethical issues. Professor Wang mentioned that his perspective on the cultural consensus problem is based on the online platform "Linking Artificial Intelligence Principles Platform (LAIP)," which aims to collect, integrate, analyze, and promote artificial intelligence principles worldwide, along with their social and technical practices. The platform's word-cloud map shows that the concept of "people-oriented" AI is valued by most countries, indicating a trend toward cultural consensus on AI governance. In different cultural contexts, however, the abstract criterion of "people-oriented" must still be implemented in specific ethical principles.
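As a toy illustration of the kind of cross-document aggregation that underlies such a word cloud (not the LAIP platform's actual method), one might count how often candidate principle terms recur across national AI guidelines; the snippets below are invented stand-ins.

```python
# Toy illustration of aggregating principle keywords across documents.
# The snippets below are invented stand-ins for national AI guidelines.
from collections import Counter

documents = {
    "country_A": "human-centered fairness transparency accountability",
    "country_B": "human-centered privacy safety transparency",
    "country_C": "human-centered accountability safety",
}

counts = Counter(word for text in documents.values() for word in text.split())
# A word cloud would size each term by frequency; here we print the ranking.
for term, n in counts.most_common():
    print(term, n)
```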
On how to achieve cultural consensus, Professor Wang proposed three strategies from a philosophical perspective. First, transcend cultural "conceptual pigeonholes": avoid forcing the facts of another culture into the conceptual pigeonholes of the culture one is familiar with when explaining different cultural phenomena. Second, respect the diverse values accumulated through different histories, under principles conducive to human progress. Third, the ethical embedding of AI models, whether top-down or bottom-up, requires analyzing and clearly presenting the semantic implications of moral philosophy. Professor Wang also shared an example of cultural difference to supplement these consensus strategies.
In addition, Professor Wang shared his laboratory's recent work, in particular a project modeling ancient Chinese philosophical texts. Professor Wang said that the abstractness of philosophical expression and the ambiguity of Chinese philosophical terms are what make it difficult to transform moral philosophy into AI algorithms. The LDA topic modeling done by the laboratory can uncover the overall semantic structure of a huge corpus; so far it has mostly been used in literary and historical research, has seen little practice in philosophy, and its usefulness there remains to be examined. Through close cooperation with scholars of Chinese philosophy, he hopes the LDA model can be applied to philosophical corpora and then focused on specific texts, revealing the semantic features of smaller texts and the semantic relations between texts, thereby making philosophical discoveries and providing empirical support for certain philosophical propositions, so as to achieve research breakthroughs in this field. Finally, Professor Wang briefly discussed the problems of interpreting topic models, mapping semantic expressions to vector representations, and constructing a meta-ethics.
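For readers unfamiliar with LDA, here is a minimal sketch of topic modeling with the gensim library on an invented, pre-tokenized toy corpus; it illustrates the general technique, not the laboratory's actual pipeline or corpus.

```python
# Minimal LDA topic-modeling sketch on an invented toy corpus.
# Requires: pip install gensim
from gensim import corpora, models

# Pre-tokenized toy "documents" standing in for classical Chinese passages;
# real corpora would need careful segmentation of unpunctuated classical text.
texts = [
    ["benevolence", "righteousness", "ritual", "wisdom", "benevolence"],
    ["heaven", "nature", "destiny", "heaven", "way"],
    ["benevolence", "ritual", "filial", "respect"],
    ["way", "nature", "heaven", "sage"],
]

dictionary = corpora.Dictionary(texts)                # token -> integer id
bow_corpus = [dictionary.doc2bow(t) for t in texts]   # bag-of-words vectors

# Fit a 2-topic model; `passes` is the number of training sweeps over the corpus.
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      passes=20, random_state=0)

# Each topic is a probability distribution over the vocabulary...
for topic_id, top_words in lda.print_topics(num_words=4):
    print(topic_id, top_words)

# ...and each document is, in turn, a distribution over topics.
print(lda.get_document_topics(bow_corpus[0]))
```

Discovering "the overall meaning structure of a corpus," in this idiom, means reading off which terms cluster into which topics and how documents distribute over them.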
Professor Wang Xiaohong from Xi'an Jiaotong University
Professor Pan Bin of the Department of Philosophy of East China Normal University pointed out that Professor Wang's report carried an important message, namely the proposal to establish a database of philosophical topics. The report showed how big data can be used to conduct philosophical research, which is of great significance for the verification of philosophical texts, academic research, and the history of ideas. He believes that if a high-quality, large-scale philosophical database can be established, it will have a revolutionary impact on digital research in philosophy. Professor Pan added that building such a corpus is a long process, but once completed it will greatly advance the digital transformation of philosophical research. Professor Wang's work has opened up this direction and sets an important example; it is to be hoped that more scholars will join research in this field and jointly advance the digitalization of philosophical research.
Professor Pan Bin, Department of Philosophy, East China Normal University
Professor Fu Changzhen of the Department of Philosophy of East China Normal University took the three major problems raised by Professor Wang as an entry point to explain, along three dimensions, the premises of building an AI ethical governance framework. First, governance of the specific ethical problems arising from the use of AI is governance of consequences. Second, governance that itself conforms to ethics is the realization of good governance, at the level of principles and methods. Finally, forward-looking governance that takes ethics as a value goal concerns the future interaction between humans and AI, a mission that ethics must shoulder.
On how to achieve the goals of AI ethical governance, Professor Fu proposed working out the implementation of principles from practice. Ethical principles must satisfy feasibility, acceptability, and desirability, forming a "tent"-style theoretical framework that can both provide practical safeguards for the development of AI and meet the requirements of different cultural contexts, in keeping with the ideas of "technology for good" and "people-oriented" AI. Professor Fu believes this requires not only the joint efforts of philosophers and engineers but also the participation of the public; it means not only that AI must conform to humanity's existing values, but also that the harmonious coexistence of AI and humans should be promoted, giving rise to values better aligned with the direction of social progress. On this basis, Professor Fu explored the transformation of the ethical paradigm and advanced his own proposition, namely an ethical turn grounded in practical wisdom in the context of information civilization.
Professor Fu Changzhen, Department of Philosophy, East China Normal University
Professor Zhu Jing of the Department of Philosophy of East China Normal University shared her thoughts on two keywords, "foresight" and "empirical research." Professor Zhu mentioned that as early as around 2000, Professor Wang had already begun to study how to use computer programs to explore scientific discovery. Now that "AI for science" is becoming a craze, Professor Wang's research has clearly shown a high degree of foresight.
Professor Zhu emphasized the importance of empirical research. She pointed out that Professor Wang's empirical work provides valuable experience for exploring cutting-edge issues in AI such as ethics, interpretability, verifiability, and predictability. To take part in tasks usually regarded as the province of computer scientists or data scientists, such as running LDA models, a researcher in the philosophy of science needs the corresponding skills. Professor Zhu observed that the methodological issues behind ethical surveys raised by Professor Xiao and the survey research advanced by Professor Liu Jilu in the seminar both reveal the wide application of empirical methods and are pushing philosophical research to the frontier of empirical inquiry. Notably, the methodological doubts philosophers face when conducting empirical research are especially serious: because empirical results are concrete and direct, they are easily challenged over problems such as flawed design or unrepresentative samples. Professor Zhu Jing concluded that Professor Wang's report shows how empirical research can advance philosophy and how much room there is to expand it; despite the many challenges, its broad prospects are exciting.
Professor Zhu Jing, Department of Philosophy, East China Normal University
3. Roundtable discussion
Professor Zhang Rongnan of the Department of Philosophy of East China Normal University offered suggestions on the risks of cultivating robot morality. Professor Zhang pointed out that Confucian virtues are cultivated mainly in relational and contextual processes, and most human emotions are bound up with embodied activity. How long a robot would need to learn, and how many relationships it would need, in order to cultivate virtue is therefore a question worth pondering. Furthermore, the learning process of robots is uncertain, and we cannot guarantee that robots will learn only from the good; a robot may thus end up developing into what Confucians would call a "petty person" rather than a "gentleman." Professor Zhang suggested that it may be worth adding certain rules and ethical constraints beyond Confucian virtue ethics as a basic bottom line, to help robots better learn and apply moral norms.
Professor Zhang Rongnan, Department of Philosophy, East China Normal University
Assistant Professor Fang Chao of the University of Liverpool, drawing on his sociological background and his work on aging and palliative care, raised the relationship between AI ethical governance and law. He mentioned that in recent years the Netherlands has seen new developments in the legalization of euthanasia, involving comprehensive considerations ranging from the physical to the psychological: an individual who feels that life is meaningless may also apply to end his or her own life, which clearly differs from the current legal position of other countries in this area. Professor Fang believes that AI ethical governance and the regulation of robots likewise need to take into account the application of different laws and situations.
Assistant Professor Fang Chao, University of Liverpool
Liang Yihong, a young researcher at the Simian Institute for Advanced Studies in Humanities at East China Normal University, proposed that the Song-Ming Neo-Confucian idea of "one principle, many manifestations" (li yi fen shu) could play a role in robot ethical design. He believes that "one principle, many manifestations" has universal value. For Confucianism, benevolence is the highest principle, and benevolence manifests differently on different occasions; for Wang Yangming, the concept of "conscience" (liangzhi) is supreme, and it likewise displays different qualities, such as "filial piety" and "trustworthiness," in different situations. In this sense, the theory of "one principle, many manifestations" may be expected to play a role in robot ethical design. He also asked about the theoretical source of the claim in Professor Liu's speech that "moral emotions arise after moral judgment."
Liang Yihong, young researcher at the Simian Institute for Advanced Studies in Humanities, East China Normal University
Associate Professor Hong Cheng of the Department of Philosophy of East China Normal University discussed the complexity of human nature and sociality. Professor Hong pointed out that for robots, sociality involves both social behavior among robots and interactions between robots and humans. Professor Liu holds that Confucian virtue ethics is universal, but Western scholars may understand sociality differently: Aristotle, for example, grounded sociality in friendship between people, while Hobbes emphasized self-love and self-preservation. Whether robot ethics must also consider robots' self-love and desire for self-preservation is therefore a question worth discussing. In addition, Professor Hong mentioned the disconnect between theory, questionnaires, and real life, pointing out that robot ethics needs to be more closely connected to actual life.
Associate Professor Hong Cheng, Department of Philosophy, East China Normal University
Professor Liu Liangjian of the Department of Philosophy of East China Normal University pointed out that philosophy and AI interact in two ways. One is "philosophy for AI," that is, thinking from a philosophical perspective about the problems encountered in the development of AI; the other is "AI for philosophy," that is, exploring the possibilities of AI for advancing philosophical research. Professor Liu raised two levels of discussion around the emotions of robots. On the one hand, if we discuss the importance of emotions at the level of the agent, we need to consider whether a machine can really feel joy, anger, sorrow, and happiness. On the other hand, the concept of "emotion" can adjust its connotation when confronted with counterexamples so as to accommodate them; the same holds for concepts such as "heart-mind" and "human." On the ethical embedding of large models, Professor Liu said it is necessary to reflect on the presupposed dichotomy of fact and value, and on whether the issue should be analyzed at the level of pragmatics rather than pure semantics.
Professor Liu Liangjian, Department of Philosophy, East China Normal University
Professor Zhu Cheng of the Department of Philosophy of East China Normal University pointed out that since ancient times technological development has served to improve human life. On this basis, AI may be more disruptive at the technical level, but its ultimate goal is still to help humans build a better life. However, the rapid deployment of AI in modern society has already led to some occupations being replaced by artificial intelligence; the end result is that people cannot keep up with the rapid development of technology, and such a dilemma calls for a renewed moral examination.
Professor Zhu Cheng, Department of Philosophy, East China Normal University