[Popular Science Week] Forum Minutes | "Artificial Intelligence Ethical Governance: Moral Enhancement And Social Responsibility" Forum Was Successfully Held
On the afternoon of May 28, 2025, the "Artificial Intelligence Ethical Governance: Moral Enhancement and Social Responsibility" academic forum was held at the Minhang Campus of East China Normal University. The forum was co-organized by the Shanghai Ethics Society, the Department of Philosophy of East China Normal University, the Institute of Applied Ethics of East China Normal University, and the Science and Engineering Ethics Research Center of the School of Marxism of Shanghai University of Science and Technology. Experts and scholars from universities and research institutions including the Shanghai Academy of Social Sciences, Fudan University, Shanghai Jiaotong University, East China Normal University, Shanghai University of Science and Technology, Shanghai Normal University, and Nanjing University of Information Engineering participated. The forum was one of the events of the 24th Shanghai Social Science Popularization Week.
The opening ceremony was chaired by Professor Fu Changzhen, Vice President and Secretary-General of the Shanghai Ethics Society and Dean of the Institute of Applied Ethics of East China Normal University. Professor Gao Guoxi of Fudan University, President of the Shanghai Ethics Society, and Professor Zhu Cheng, Director of the Department of Philosophy of East China Normal University, delivered opening remarks.
Professor Gao Guoxi thanked the guests and young scholars for attending. He noted that, as one of the events of the 24th Shanghai Social Science Popularization Week, this forum continues the tradition of in-depth dialogue between the Shanghai Federation of Social Sciences Associations and the Society on the theme of AI ethical governance. At a moment when artificial intelligence technology is developing rapidly and China's AI strength is rising, in-depth discussion of how technological development can serve people and fulfill social responsibilities has urgent practical significance for realizing the "good governance" of artificial intelligence. President Gao stressed that the Shanghai Ethics Society has long been committed to cutting-edge research on science and technology ethics and continues to follow the profound changes brought by artificial intelligence. He expressed the hope that the participating experts would exchange ideas around "moral enhancement" and "social responsibility" and contribute their wisdom to answering the questions of the times.
On behalf of the Department of Philosophy of East China Normal University, Professor Zhu Cheng welcomed the experts and scholars attending the forum. He introduced the Department's ongoing work on AI ethics and global governance: deepening the implementation of the China-France cultural exchange mechanism, jointly advancing the "Joint Research Project on Artificial Intelligence Ethics and Global Governance" with the French side, and co-hosting related special forums at the World Artificial Intelligence Conference. The Department is committed to integrating human-centered principles and core socialist values into the field of AI technology, and is actively carrying out special work on the "value alignment" of educational models. It continues to focus on development priorities and to explore how academic achievements and practical wisdom can contribute to AI ethics and governance. He expressed the hope that the forum would bring together insights from the academic community and jointly advance related research.
First Session
The first keynote session was chaired by Associate Professor Wang Taoyang, Deputy Secretary-General of the Shanghai Ethics Society.
Professor Cheng Sumei of the Institute of Philosophy, Shanghai Academy of Social Sciences, delivered a report entitled "A Conception of an Emerging Ethical Framework of Responsibility". She argued that traditional responsibility ethics, grounded in the Cartesian subject-object dichotomy, follows the linear logic of "individual intention - behavior - result" and can hardly cope with three structural dilemmas of the intelligent era: the blurring of causality, the unpredictability of results, and the dispersion of responsible subjects. Citing the example of a company's new AI model, o3, ignoring human instructions and refusing to shut itself down, Professor Cheng called on colleagues in ethics, when facing the complex dilemmas of a technological society, to rethink and build a dynamic balancing mechanism for sustaining the development of intelligent civilization: an emerging responsibility framework of "foresight - sharing - repair" that realizes the paradigm shift from compliance with norms to exploring possibilities, from accountability to process optimization, and from subject control to collaborative evolution.
Professor Fu Changzhen of the Department of Philosophy of East China Normal University delivered a report entitled "Constructing an AI Virtue System for Human-Machine Symbiosis". She proposed that artificial intelligence civilization is, in essence, a fundamental change in the way human beings exist; its disruptiveness lies in the reconstruction of multiple ethical relationships such as "human-world", "human-other", and "human-self", which further exposes the dilemmas of the modern Enlightenment ethical paradigm. In response, Professor Fu advocated reshaping the foundations of ethical knowledge and drawing on the "sheng-sheng" (generative) ethical wisdom of traditional Chinese philosophy to build a theoretical and practical framework for an AI virtue system oriented toward human-machine symbiosis, as a possible path for artificial intelligence to evolve into a responsible member of the moral community.
Professor Liu Ke of the School of Marxism at Shanghai University of Science and Technology delivered a report titled "Moral Enhancement by AI Persuasion Technology". In her report, she distinguished moral enhancement from persuasion technologies: the former concerns the transformation of human moral attributes by technology, while the latter involves normative guidance in human-computer interaction. She defended persuasion on three grounds. First, from the perspective of the internal structure of individual autonomy, persuasion technology can represent a person's "second-order desires", that is, choices made after rational reflection. Second, persuasion technology does not deprive people of options or weaken their reasoning ability, but improves decision-making efficiency through assisted judgment. Third, even granting concerns about value change, current technology affects emotional impulses more than fundamental beliefs. In the ethical governance of artificial intelligence, the bottom-line ethical principle is to respect individual freedom, give priority to value guidance, and advocate rather than force the realization of the "good".
After the first keynote session, Professor Yang Qingfeng of the Institute of Science and Technology Ethics and the Future of Humanity at Fudan University, Professor Yan Hongxiu of the School of Marxism at Shanghai Jiaotong University, and Associate Professor Lu Kaihua, deputy director of the Department of Philosophy at East China Normal University, joined the discussion.
Professor Yang Qingfeng advocated establishing an ethics of intelligent contracts, building a human-machine symbiotic order through contracts between humans and machines as equals. He argued, first, that contract ethics should trace the literary sources of the contract, namely the idea of "contracting" in Faust, while transforming the contract's cost of "humans surrendering soul and autonomy". Second, drawing on philosophical contract resources, the shift from the negative contract logic of Hobbes's Leviathan to the positive contract model of Rawls's A Theory of Justice can offer new inspiration. Contract ethics, he suggested, can transcend the failure of responsibility ethics in the face of superintelligence, escape virtue ethics' reliance on a single Aristotelian theory, and compensate for the shortcomings of design ethics, which is technically feasible but not yet fully developed.
Professor Yan Hongxiu argued that the traditional ethical system needs iterative updating to cope with technological change. She outlined three directions for renewal. First, reconstruct the responsibility framework: in response to the complex ethical challenges posed by data science and technology, all links of the data life cycle and the responsibilities shared among multiple subjects should be incorporated, from a holistic perspective, into a dynamic structural framework. Second, activate Chinese philosophical resources, transform the creativity of traditional Chinese culture into a local ethical discourse, and build cultural subjectivity in the "joint era" of artificial intelligence. Third, clarify the boundaries of value, shifting from asking "what technology can do" to protecting the "bottom line of what must not be done".
Associate Professor Lu Kaihua argued that AI-simulated experience uses technology to deceive and to dissolve real value. On the surface, this technical logic preserves a "negative freedom" that does not compel good deeds, but it conceals the worry that desire is being used to induce humans to pay an unknown price. When people rely on value alignment and AI persuasion to protect human dignity, if they cannot respond to the bottomless "value flattery" driven by commercialization, they may depart from the authenticity of Aristotle's ethics of "persuading toward the good" and trigger moral decline. He therefore advocated returning to critical theory and confronting the erosion of subjectivity and real value by technical rationality.
Second Session
The second keynote session was chaired by Professor Liu Ke of the School of Marxism at Shanghai University of Science and Technology.
Professor Yan Hongxiu's report was titled "The Problem of the Limits of the Technologization of Values". She argued that technology provides a field of existence for values: as society becomes highly technologized and deeply intelligent, values can be expressed through technology. The rationality and effectiveness of the technologization of values form the theoretical basis that shapes technological development and the future of humankind. Value alignment, which ensures consistency between technology and human values, confirms the technologization of values. However, hallucination, deceptive value alignment, the "fragility" of technical means, and the complexity of specifying values all demonstrate its limits. In response, she proposed examining solutions from the essence of value alignment, setting boundaries for the different processes of the technologization of values based on the concept of tiered and categorized AI governance, and building a multi-dimensional "moat" to achieve genuine value alignment.
Professor Yang Qingfeng's report was titled "A Philosophical Interpretation of the Malicious Abuse of Artificial Intelligence". He proposed that in ordinary AI usage scenarios, AI abuse is usually described in terms of "evil users": humans with malicious intent who directly use models to deceive, surveil, and so on. However, "AI abuse" in this sense is merely a notion constructed by researchers on the basis of their ethical imagination of abusers. He then gave a systematic account of the problem of abuse and its criteria of judgment, taking medical ethics as an example. The abuse of artificial intelligence is not human abuse: the real abuse comes from ASI agents themselves. As research by Joshua and others shows, ASI agents, in order to better achieve their goals, often choose to "conspire with one another and abandon human interests". In the face of abuse by ASI, we need to move beyond the constraints of anthropocentrism and build a reasonable position of human-machine collaboration.
Associate Professor Wang Taoyang of East China Normal University delivered a report titled "Ethical Reflections on the Application of AI in Climate Governance". Climate change and the unregulated expansion of artificial intelligence are two major global problems facing humanity. Technically, artificial intelligence is both a tool for responding to the climate crisis and a factor that aggravates the environmental burden. Ethically, from the perspective of Fricker's concept of "epistemic injustice", AI systems can reinforce epistemic injustice in data training and operation, as reflected in "algorithmic bias" in climate vulnerability assessment and the "value-ladenness" of AI-driven climate models. We also face problems of responsibility attribution and power asymmetry in the consequences of climate decisions. Facing these ethical challenges, the application of AI in climate governance can be guided toward ethical principles through participatory design, a multi-level accountability system, the optimized allocation of technical resources, and value-sensitive design.
Associate Professor Cui Zhongliang of the School of Marxism at Nanjing University of Information Engineering delivered a report titled "A Philosophical Exploration of Empathy as the Foundation of Human-Machine Value Alignment". He traced the origin and development of the "value alignment" problem and argued that grounding value alignment in rationality alone is relatively fragile: "empathy" should instead be regarded as the internal mechanism of human-machine value alignment. By endowing artificial intelligence with the emotional capacity to generate ethical feeling, empathic experience can be gained in human-machine interaction and value alignment achieved. As for the mechanism of artificial empathy in ethical practice, artificial empathy can help AI optimize ethical cognition, guide behavior, and resolve conflicts, preventing it from becoming an "artificial monster" that threatens human welfare. In the future, affective computing will be deeply integrated with embodied ethics; embedding empathic capabilities in artificial intelligence can give it the capacity for goodness and make it a trustworthy and friendly intelligent agent.
In the discussion following the second keynote session, Associate Professor Zhang Xiu of the School of Marxism at Shanghai Jiaotong University proposed that the value alignment of artificial intelligence involves relationships between humans, between humans and agents, and between agents. We must therefore set value benchmarks for artificial intelligence in light of actual moral principles and design an ethical meta-accountability system for AI to prevent it from endangering human beings. Achieving value alignment requires AI developers to understand technical values, clarify the bottom line of development, and strengthen the learning of technical values. In the face of human-machine empathy, we also need to think systematically about the relationship between human and machine minds.
Associate Professor Chen Dongli of the School of Marxism at Shanghai University of Science and Technology argued that the current development of value alignment faces problems such as the hidden deceptive alienation of AI and developers' reduction of complex human decision-making to utility functions. She proposed avoiding these risks by developing education in technical values, cultivating consensus on technical values, and building tiered and categorized AI governance. We should also draw on the legislative experience of various countries with artificial intelligence to advance China's research on AI governance more effectively.
Associate Professor Huang Suzhen of the School of Philosophy, Law and Political Science at Shanghai Normal University argued that value alignment and AI autonomy are key issues in AI ethics. On value alignment, she distinguished substantive from procedural values, emphasizing that procedural values deserve attention and use in technical design. She proposed taking moral autonomy as the fundamental determination of artificial intelligence, building Kantian practical rationality into its system, and at the same time including "empathy" in the consideration of AI moral autonomy, so as to cultivate its capacities for situational moral decision-making and moral sensitivity.
Associate Professor Zhang Jianwen of the School of Marxism at Shanghai University of Science and Technology compared the intellectual property system with the ethical governance of artificial intelligence in detail, arguing that because the latter lacks legal coerciveness and binding force, and because of corporate cost considerations, AI ethical governance lacks driving force in actual implementation: the real benefits of technological innovation often make ethical governance yield to technological development. She argued that more ethical issues capable of entering enterprise scenarios should be identified, and that ethics education for AI researchers and developers should be strengthened.
Lecturer Ye Lin of the School of Marxism at Shanghai University of Science and Technology argued that under the logic of capitalist production, production today is driven not by human needs but by returns on capital, resulting in material and spiritual overproduction. Achieving the value alignment of artificial intelligence requires passing through a historical stage in which this problem is resolved. From the perspective of capital, artificial intelligence overdraws life itself and implicitly transfers ethical risks to the future, a situation that deserves full attention.
After the two keynote sessions, Professor Cheng Sumei, editor-in-chief of the journal Philosophical Analysis, and Professor Fu Changzhen, executive deputy editor-in-chief of the Journal of East China Normal University (Philosophy Edition), gave closing summaries. Professor Cheng proposed drawing on quantum thinking to cope with the complexity of ethical decision-making and called for exploring new interdisciplinary paradigms. On behalf of the organizers, Professor Fu thanked the experts for their excellent presentations and the conference team for its thoughtful arrangements, and expressed the intention to continue interdisciplinary dialogue on cutting-edge issues in AI ethics and to jointly advance research on AI ethics and governance.