AI Ethics

Bringing Ethical Thinking into Reality | Applied Ethics Explorations: Frontier Issues in the Ethics of Artificial Intelligence

The first session of the Applied Ethics Workshop, initiated by the Department of Philosophy of East China Normal University, examined care for the elderly from an ethical perspective, discussing the dignity, rights, vulnerability, and agency of the elderly. Besides promoting dialogue and integration across disciplines and between academia and industry, these discussions also advanced a shift in the paradigm of ethics research.

Following the lively discussion of the first session, the second session of the Applied Ethics Workshop was held on November 9, 2023, in the Feng Qi Academic Achievement Showroom of East China Normal University under the theme "Frontier Issues in Artificial Intelligence Ethics". Professors from Zhejiang University, Tongji University, East China Normal University, and Nanjing University of Information Engineering discussed ethical review, ethical norms, the "black box" metaphor, the ethics of multiple intelligent agents, and the ethics of human-machine relationships. The discussion both reflected on the current state of ethical review in artificial intelligence and sketched a future ethical picture of human-machine relationships, aiming to offer ethical reflection and suggestions for university researchers working in the field of artificial intelligence.

Workshop photo

1. Current Situation and Challenges of Artificial Intelligence Ethics Review

With the accelerated evolution of the new round of scientific and technological revolution and industrial innovation, the emergence of phenomenal artificial intelligence products in the Industry 4.0 era has profoundly affected everyone, and the ethical problems that arise with them have become common challenges facing all humanity. Against this backdrop, Professor Du Yanyong from the School of Humanities of Tongji University gave a report covering three aspects: the current state of ethical review in artificial intelligence, its challenges and dilemmas, and paths to resolution.

The first aspect is the current state of ethical review in artificial intelligence. First, taking the "Ethical Norms for New Generation Artificial Intelligence" issued by China in September 2021 and the "Measures for the Ethical Review of Science and Technology (Trial)" released in September this year as examples, the state has now made the ethical review of artificial intelligence a key task. However, Du Yanyong observed that current ethical review of artificial intelligence is concentrated mainly in medicine, where it still applies traditional ethical standards from the life sciences and lacks review standards and procedures tailored to artificial intelligence. Second, in academia, some international conferences now ask authors to attach a statement of the possible social impact of their study when submitting a paper. Finally, regarding ethical review in technology companies, giants such as Google and Microsoft have established ethics committees in recent years, but some have proved a flash in the pan. Du Yanyong noted that although corporate ethics committees have not operated as successfully as those of research institutions, many Chinese and foreign companies have at least demonstrated a formal emphasis on ethical review. However, essentially no company makes the details of its ethical review public, such as its review procedures, content, and operating methods.

The second aspect is the challenges and dilemmas of ethical review in artificial intelligence. First, researchers as a whole do not attach enough importance to ethical review. Some lack an understanding of its importance and regard it as a "tightening spell" on research, believing that ethical review may hinder their innovation. Second, the construction of ethics committees in universities remains far from sufficient, and ethical review procedures and standards with distinctively artificial-intelligence characteristics are not yet mature. Finally, the professional competence of ethics reviewers urgently needs improvement. At present, reviewers mainly serve part-time, with little professional education or training. Reviewers must both keep up with rapidly developing technology and remain fully alert to the social risks that emerging technologies may bring, which places higher demands on them.

The third aspect is the path to resolving the ethical review of artificial intelligence. Du Yanyong believes the focus should be on strengthening researchers' ethical training to raise their ethical awareness and scientific-ethical literacy; improving and building consensus on ethical review through academic organizations and expanding its scope; and improving the system, promoting institutional review, and establishing incentive mechanisms that reward attention to ethical review.

After Professor Du Yanyong's report, Professor Yan Qingshan from the Department of Philosophy of East China Normal University shared his views based on his experience participating in ethical review in the life sciences. First, medical and life-science research is indeed more mature in ethical review, owing to the rigid requirements of policies, journals, and conferences. However, because of the unpredictability of artificial intelligence itself, its ethical review can often only proceed from results, which requires that review be long-term and sustained. Second, the ethical review of artificial intelligence should be appropriately relaxed and grounded in the principle of the public good, so as to support the intrinsic value of scientists' knowledge-seeking activities. In addition, Professor Yan Qingshan argued that while universities and research institutions set up artificial intelligence ethics committees, a larger review agency should be established at the national level. Citing the case of He Jiankui's gene-edited babies, he pointed out that research-ethics issues with particularly serious impacts should be followed up by a national science and technology ethics committee, which can play an early-warning role. Finally, Professor Yan Qingshan suggested that review committees need not rush into formal ethical review; they can first conduct ethical observation, that is, regularly discuss and collect issues with researchers and then hand them to the ethics committee for timely follow-up. This can keep research proceeding in an orderly way while ensuring ethical safety.

2. Reflection on the ethical norms of "people-oriented" artificial intelligence

Professor Pan Enrong from the School of Marxism of Zhejiang University started from the increasingly close human-machine relationship and critically examined the "people-oriented" ethical norm. In his view, "people-oriented" has become a high-frequency keyword in the ethical norms of artificial intelligence, yet artificial intelligence, though a human tool, has far surpassed humans in some respects and is in this sense a "superhuman" entity, while "people-oriented" implies being centered on mortal humans. Professor Pan thus raised a question: in the human-machine relationship, can "superhuman" machines (artificial intelligence) really be made to center on (mortal) humans?

Professor Pan reviewed how, since the Enlightenment, the Western world has established the human being as a dignified, rational natural subject, a status that gave birth to the "people-oriented" ethical norm. However, the Western "people-oriented" norm not only inherits "Western-centrism" and "individual liberalism" but also expresses a certain "human-species-centered" sentiment toward artificial intelligence. On this basis, Professor Pan keenly pointed out that the "people-oriented" ethical principle advocated by Western society has two limitations. The first is a logical paradox. Western society strives to regulate artificial intelligence through ethical norms, yet the "superhuman" character of artificial intelligence may shake or even overturn the master-servant order of the human-machine relationship; the ethical norms Western society proposes would therefore move toward self-negation. The second is a loophole in human nature. From the standpoint of the "rational economic man", humans may take the risk of accepting a "master-slave reversal" with artificial intelligence machines, and inventing and using artificial intelligence under the logic of capital easily goes to extremes, which in turn can lead to irreversible social events. One extreme Professor Pan mentioned is that, tempted by high profit margins, "rational economic men" dare to "make a deal with the devil", accept machine substitution with a "master-slave reversal", and let artificial intelligence dominate human fate, as in the Boeing crash case. The other extreme is that "rational economic men" keep implementing "machine substitution" successfully and finally arrive at a large-scale artificial intelligence industry, which may lead to the differentiation or even disintegration of human society.

In this analysis, the relationship between the two limitations of the Western "people-oriented" norm is that between "artificial intelligence itself and the capitalist application of artificial intelligence". From the perspective of Marx's thought, this is essentially a reappearance, in the dimension of artificial intelligence, of the relationship between "machines and the capitalist application of machines". Based on Marx's thought, Professor Pan therefore holds that there are three paths for addressing the two limitations of the Western "people-oriented" ethical norms of artificial intelligence.

First, according to the basic principle of historical materialism that social existence determines social consciousness, the scope of "people-oriented" ethical norms and their practice can only be "weak artificial intelligence". The social reality is that, for a long time now and into the future, artificial intelligence is only "weak" rather than "strong" artificial intelligence. The worries about "strong artificial intelligence" in the first half of the logical paradox are thus a false social consciousness without a basis in social reality; although meaningful, they are not what current "people-oriented" ethical norms should address. Second, based on human freedom, the "people-oriented" ethical norms and practice of artificial intelligence should welcome artificial intelligence with ethical capabilities. In the current context of large-scale social application, people concerned with "reality" cannot develop the human-machine relationship merely to achieve the proliferation of capital and expanded reproduction; they must also develop, in an all-round way, the relationships between humans and humans, humans and machines, and machines and machines. This can both give play to the individual pursuits of those who control artificial intelligence and help the public share the opportunities and benefits its development brings. Finally, according to the idea of dialectical development, the "people-oriented" ethical norms and practice of artificial intelligence need to make reasonable use of the characteristics of human nature. The way to treat the symptoms is to channel those characteristics reasonably in inventing and using artificial intelligence; the way to treat the root cause is to "reconstruct individual ownership", fundamentally eliminating the tension between the large-scale production of artificial intelligence and private individual ownership.

Professor He Jing from the Department of Philosophy of East China Normal University found Professor Pan's claim that "weak artificial intelligence is essentially a tool, but the 'people-oriented' idea is no longer sufficient for our understanding of artificial intelligence" highly inspiring, and raised several questions. First, whether artificial intelligence is regarded as a tool or a subject remains the key to understanding the human-machine relationship; although human and machine intelligence are similar in some respects, what distinguishes artificial intelligence from human consciousness? Second, although brain-computer interface technology is mainly used for healing, it often has an "enhancing" effect; this is not the original intention, but it is a by-product that must be addressed, and we still have a long way to go in drawing the boundary between treatment and enhancement. Third, the ethical capability of artificial intelligence can be considered on two levels: when we treat machines as tools whose limitations mean they do not always obey our wishes, do we really need artificial intelligence to have ethical capabilities? And must an ethical artificial intelligence be grounded in self-awareness? If not, how do we define it as ethical?

3. Ethical considerations of artificial intelligence based on interpretability

Professor Pan Bin from the Department of Philosophy of East China Normal University reported on how the black-box metaphor gives rise to the problem of interpretability, then sorted out the concept of transparency to arrive at the ethical principle of "limited transparency", and finally proposed "the transparency of opacity" as the cognitive norm of the coming intelligent era.

Professor Pan Bin began with a classic issue of cybernetics: the black-box problem. The inside of a closed black box need not be fully recognized or understood; establishing a causal relationship between input and output is enough to achieve the cognitive purpose. Correspondingly, a white box means a complete understanding of the internal operating structure, while a gray box is an intermediate state between certainty and uncertainty. The black-box metaphor simplifies the cognitive process into a "stimulus-response" model, ignoring its complexity. The primary function of the black-box model is to understand complex things through a simple model; secondarily, it also exerts the effect of a "veil of ignorance". At the epistemological level, the black-box effect in fact sacrifices the principle of "transparency" to achieve efficient operation.
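The cybernetic stance described here, knowing a system only through its input-output behavior, can be illustrated with a minimal sketch. All names here (`black_box`, `probe`, `infer_linear_law`) are hypothetical, invented purely for illustration:

```python
# Treat an unknown system as a black box: the observer may only call it,
# never inspect its internals.
def black_box(x):
    # Internals deliberately hidden from the "observer" code below.
    return 3 * x + 1

def probe(box, inputs):
    """Record stimulus-response pairs: the only knowledge a
    black-box observer is allowed to gather."""
    return [(x, box(x)) for x in inputs]

def infer_linear_law(pairs):
    """From two observations, hypothesize a linear input-output law
    (slope, intercept) without ever seeing the box's code."""
    (x1, y1), (x2, y2) = pairs[0], pairs[1]
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

observations = probe(black_box, [0, 1, 2, 5])
law = infer_linear_law(observations)
# The inferred law predicts unseen behavior without "opening the box".
prediction = law[0] * 10 + law[1]
```

The cognitive purpose is achieved (the law predicts new outputs correctly) even though nothing about the box's internal structure has been recognized, which is exactly the trade of transparency for efficiency noted above.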

Professor Pan Bin then drew out the interpretability dimension contained in the black-box problem. First, there is an essential difference between interpretability and intelligibility. Intelligibility refers to understanding the original model and source code, while interpretability refers to constructing a cognitive interface between humans and models, with the goal of ensuring that humans and intelligent models can understand each other through some interface. In essence, the black box is an irreducible inherent feature of artificial intelligence. As the intelligent process deepens and accelerates, the risks caused by opacity grow ever more profound, and reflection from the perspective of ethical review is urgently needed.
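The contrast between reading a model's internals (intelligibility) and building a cognitive interface to it (interpretability) resembles the post-hoc explanation techniques used in practice, where a simple surrogate model approximates an opaque one locally. The following is a minimal sketch under that analogy; `opaque_model` and `linear_surrogate` are hypothetical names:

```python
def opaque_model(x):
    # Stand-in for a complex model whose code we pretend not to read.
    return x * x

def linear_surrogate(model, center, h=0.01):
    """Build an interpretable 'interface': approximate the model near
    `center` with a linear rule, estimating the slope by finite
    differences on the model's outputs alone."""
    slope = (model(center + h) - model(center - h)) / (2 * h)
    intercept = model(center) - slope * center
    return slope, intercept

slope, intercept = linear_surrogate(opaque_model, center=3.0)
# Near x = 3 the opaque model behaves like y = 6x - 9: a rule a human
# can read, obtained without any access to the model's internals.
```

The surrogate does not make the model intelligible (its code stays unread); it supplies an interface at which human and model "meet", which is the sense of interpretability described above.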

On this basis, Professor Pan Bin argued that the issue of transparency is the key to penetrating the black-box metaphor. In terms of conceptual origin, "transparency" has roots in psychology and is also used prominently in architecture and painting. Professor Pan Bin then summarized three basic ethical principles of limited transparency: first, the principle of responsibility, a bottom-line ethics defending the transparency requirements of intelligent systems; second, situational ethics, the ethical wisdom for understanding the transparency of intelligent systems; third, positive ethics, an ethical breakthrough for rebuilding trust in the intelligent era. Finally, Professor Pan Bin proposed that the transparency of opacity is the cognitive norm for the future.

Responding to Professor Pan Bin's talk, Professor Zhang Rongnan of the Department of Philosophy of East China Normal University asked whether the opacity Professor Pan described is inherent in the technology itself or a result of the limits of human cognitive ability. A lack of transparency leaves us unable to explain how an artificial intelligence system works, which in turn makes it difficult to trust the results it gives. In medical imaging diagnosis, for example, can we trust a machine's judgment? What if a misjudgment means a patient's disease is not treated in time? A more reliable approach may be human-machine collaboration. There are already cases of cooperative human-machine diagnosis: machines can see what human experts have missed, and human experts can correct machine misjudgments through experience.

Another question concerns the algorithmic black box. Algorithmic models require training data, and if the existing data is discriminatory, an algorithm trained on it can hardly be fair. How can we overcome the algorithmic black box to meet human demands for fairness? How can the tension between the black box and controllability be resolved? These questions bear on the scope and scenarios in which artificial intelligence technology is applied.
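How discrimination in data propagates into a trained model can be shown with a minimal sketch. The data and functions here are entirely hypothetical, constructed only to make the mechanism concrete: a model that merely learns historical decisions reproduces their group-level disparity.

```python
from collections import defaultdict

# Hypothetical historical decisions: group "A" was mostly approved,
# group "B" mostly rejected.
historical = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def train_majority(data):
    """'Learn' the majority historical decision for each group --
    a caricature of a model fitting biased data."""
    counts = defaultdict(lambda: [0, 0])
    for group, label in data:
        counts[group][label] += 1
    return {g: int(c[1] >= c[0]) for g, c in counts.items()}

def approval_rate(data, group):
    labels = [y for g, y in data if g == group]
    return sum(labels) / len(labels)

model = train_majority(historical)
# Demographic-parity gap already present in the data; the trained rule
# (approve A, reject B) reproduces and hardens it.
gap = approval_rate(historical, "A") - approval_rate(historical, "B")
```

Nothing inside the "black box" needs to be malicious for the outcome to be unfair; the disparity enters through the data, which is why fairness demands cannot be met by inspecting the algorithm alone.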

Three ethical principles were proposed at the end of the report, but the logical relationship among them, how they play an ethical role in specific cases, and how situational ethics or positive ethics can guide the allocation of responsibility among multiple responsible subjects all need further explanation.

4. The future of an intelligent society

Associate Professor Cui Zhongliang of the School of Marxism, Nanjing University of Information Engineering, spoke on "The Future of Intelligent Society: The Evolution of Multi-Agent Ethics and the Emergence of an Ethical Community". Starting from the assumption that artificial intelligence will become a central subject, he discussed, from the perspective of intelligent society, the evolution of multi-agent ethics and the possibility of an emerging ethical community.

Associate Professor Cui Zhongliang first discussed the concept of the agent and took it as the core issue of future intelligent-ethics research. The emergence of artificial intelligence subjects has brought the core concepts of the traditional ethical subject to the brink of disintegration or replacement. Accelerating research on artificial intelligence at the levels of emotion, consciousness, and perception has continually expanded the core of the ethical subject, and the human-machine ethical relationship is evolving from fusion toward symbiosis, giving rise to a human-machine ethical community. The future community society will exhibit global ethical characteristics that remedy the allocation problems faced by traditional distributed morality, and artificial intelligence and humans will deeply integrate and evolve together. Finally, Associate Professor Cui Zhongliang proposed the hypothesis that artificial intelligence has virtues and may be able to develop virtues on its own.

Associate Professor Yu Feng from the Department of Philosophy of East China Normal University raised several questions. First, a global moral model may make it impossible to allocate moral responsibility within an abstract moral community; in the field of driverless cars, for instance, the distributed conception of morality is still generally adopted to balance responsibility among multiple subjects, and a global moral model is difficult to implement in reality. Second, the word "agent" covers many different concepts, including the "moral agent" and the artificial moral agent, which presuppose moral notions at the levels of emotion and perception, and these concepts are not entirely on the same dimension. In addition, the analogy between artificial intelligence's perceptual operations and human emotion is questionable. Finally, in discussions of machine morality there are two paths, the top-down generation of human-machine morality and the bottom-up alignment of machines with human values; are there better paths beyond these two?

Associate Professor Cui Zhongliang responded that, under a global moral model, this expansion of scope brings factors such as situation and context into comprehensive consideration, which may allow moral responsibility to be allocated more reasonably. On whether artificial intelligence perception is real, he pointed out that the traditional view debates "whether or not it exists", whereas at the current level our judgment should consider "how much or how little"; on that question of degree, we should grant the artificial subject some moral characteristics. Finally, on the opacity of human-machine fusion, once artificial intelligence is integrated with humans some intractable problems may arise, but so may possibilities of a different kind. The core values that humans retain are the key to the distinction between humans and machines.

5. Roundtable discussion

After the professors' thematic presentations, the teachers present held a lively discussion.

Professor Fu Changzhen from the Department of Philosophy of East China Normal University pointed out that current thinking on the human-machine relationship basically draws its conceptual resources from Western traditions. For Chinese designers and developers, what we can consider is whether different ethical resources of emotion, especially Confucian resources, can be activated, exploring the relationship between traditional Chinese ethical wisdom and artificial intelligence ethics to help us re-understand the new ethical relationship of human-machine symbiosis, grasp the present, care for humanity, and face the future.

Professor Cai Zhen of the Department of Philosophy of East China Normal University proposed that many discussions of the relationship between artificial intelligence and humans involve whether artificial intelligence systems can enjoy moral standing. These discussions concern how we understand moral agents and morality, that is, what kind of moral actor counts as a member of the moral community. There are, after all, different definitions of "morality", and accordingly different thresholds for moral membership. The definition of the identity of artificial intelligence systems is therefore fundamentally tied to how "morality" is understood.

Dr. Chen Hai, a young researcher at the Institute of Humanities, found Professor Fu Changzhen's search for resources in traditional Chinese ethical thought a distinctive perspective, but suggested a possible dilemma: the state pursued in Chinese thought may be one that artificial intelligence can hardly attain. When we use Chinese ideas to guide the ethics of artificial intelligence, will there then be a gap between ideal and reality?

Professor Liu Liangjian, director of the Department of Philosophy at East China Normal University, first expressed strong interest in the operation of ethical review: what are its specific indicators, how are its operating procedures carried out, and on what basis do reviewers judge? Second, regarding the ethical standing of AI, most teachers treated strong artificial intelligence as a long-term vision in the discussion; in what sense, then, can weak artificial intelligence become an ethical subject? If we distinguish by autonomy, does autonomy need to be divided into levels, and at which level should the standing of the robot be defined? In addition, on Professor Fu's proposal to think about human-machine ethics through Chinese philosophy: although it can provide new discourses and perspectives, the difficulty is that these discourses are relatively weak at the analytical level, hard to form into operable norms, and may be limited in use.

Associate Professor Yu Feng held that the concepts we invoke, such as strong AI, weak AI, artificial general intelligence (AGI), and conscious, perceptual, or autonomous artificial intelligence, are highly controversial and extremely hard to define. When we first tried to understand AI we thought of it as problem-solving, but as the technology developed we came to feel that AI also needs independent consciousness. Yet consciousness is itself a complex, independent problem: does independent consciousness require perception? Does perception necessarily imply independent consciousness? Does a robot's iterative upgrading count as consciousness? Among such questions, it is difficult to define one concept by means of another.

Professor Pan Enrong agreed with Professor Yu Feng on the problem of mixed concepts. He pointed out that discussions of artificial intelligence suffer from a large knowledge gap: in the field of artificial intelligence, scientists believe the perception problem has been solved, because in their view the representation of emotion is computation, which deviates from the concepts discussed in philosophy. Therefore, in addition to general ethics, we may need to change our thinking and establish an ethics of technology itself, so that we can engage with the issues scientists are actually discussing; this is also the real significance of applied ethics as a discipline.

Professor Zhang Rongnan offered a two-directional perspective on whether to grant robots moral standing. One direction, following traditional thinking, grants artificial intelligence moral status according to whether machines possess certain human characteristics, such as rationality and perception. The other is the approach of virtue ethics: granting artificial intelligence moral status does not depend on whether machines truly possess human characteristics; as long as machines interact with us in a human-like way, we should treat them well out of the requirements of virtue.

In his summary, Professor Fu Changzhen proposed that, in the discussion of mixed concepts, the usefulness of concepts lies precisely in their inclusiveness; the conceptual space opened up by philosophical discussion may offer researchers new directions and ideas and expand the scope of scientific innovation.
