AI Ethics

Ethical Path To Build A Responsible Artificial Intelligence Governance System

Author: Liu Jing (Associate Professor, School of Philosophy, School of Marxism, Northeast Normal University)

The surging wave set off by the rapid development of artificial intelligence technology has profoundly changed how human beings exist and live, bringing great convenience to mankind. At the same time, it has broken the boundaries of the traditional human-machine relationship and posed new ethical challenges and moral dilemmas for human existence and interaction. How to respond to and cope with the uncertainties brought about by changes in artificial intelligence technology is an important question we must face. General Secretary Xi Jinping has pointed out: "Technology is a weapon for development, but it may also become a source of risk. We must proactively study and assess the rule conflicts, social risks, and ethical challenges brought about by scientific and technological development, and improve the relevant laws and regulations, ethical review rules, and regulatory frameworks." The "New Generation Artificial Intelligence Governance Principles: Developing Responsible Artificial Intelligence," issued by the National New Generation Artificial Intelligence Governance Professional Committee, explicitly calls for "developing responsible artificial intelligence," providing an important ethical path for the development and governance of artificial intelligence. Further elucidating the ethical stance and ethical principles of responsible artificial intelligence governance has therefore become a necessary topic in applied ethics research.

Ethical challenges and moral dilemmas of artificial intelligence

Artificial intelligence is triggering a technological revolution and social transformation of the first importance. It is also transforming human beings themselves and changing human behavior. With the biotechnology revolution and the rise of artificial intelligence, humanity is undergoing a dual technologization of body and mind and faces the risk of being replaced by technology, raising new ontological questions. In particular, generative artificial intelligence (AIGC), a new revolution in artificial intelligence technology, has broken the boundaries of the traditional human-machine relationship, manifesting itself as the "anthropomorphization of artificial intelligence" and the "mechanization and algorithmization of human beings," and posing enormous challenges to traditional ethical relationships and ethical systems.

With the advent of the artificial intelligence era, the questions of "future people" and "distant strangers" have gradually entered humanity's ethical horizon. Traditional ethics emphasizes an "ethics of proximity," stressing direct interaction between people: both actors are familiar with each other and co-present, and moral time and space are close at hand. The deep intervention of science and technology has changed humanity's moral space and the nature of human action. New objects, such as artificial intelligence, have gradually entered the domain for which we are responsible and about which we have moral concern. As a new technology, artificial intelligence has brought profound changes to humanity's ethical world and moral space, establishing a new relationship with humans in a virtual and anthropomorphic way. Ethical relationships have thus expanded from "human and human," "human and animal," and "human and nature" to "human and artificial intelligence."

The ethical responsibility relationship between humans and artificial intelligence is difficult to explain in terms of traditional ethical relationships among humans, and the human-machine ethical relationship faces a threefold moral dilemma. The first is the dilemma of good and evil. The traditional human-machine relationship is a relationship between persons and things, with technology used by humans as a tool. As technology continues to innovate, modern technology is no longer a simple tool, and it is difficult for humans to judge whether its use is good or evil: artificial intelligence can bring enormous benefits to mankind, but it can also be abused to cause great harm. The second is the dilemma of dignity. With the development of artificial intelligence, and especially the application of generative artificial intelligence, humanity's over-reliance on technology and artificial intelligence may weaken human subjectivity and challenge human dignity. The third is the dilemma of responsibility. Although artificial intelligence does not yet possess true autonomy and subjectivity, it does exhibit a degree of autonomous behavior. Such autonomous behavior brings ethical risks and uncertainties: the boundaries of responsibility and the distribution of ethical risk are unclear, triggering debate over the "responsibility gap" in artificial intelligence applications and leaving a vacuum in the traditional system of responsibility ethics. Resolving these moral dilemmas by developing "responsible artificial intelligence" is undoubtedly a feasible ethical path.

Adhere to a people-oriented ethical stance

Although the rise of artificial intelligence has broadened the horizon of ethical concern, extended traditional ethical relationships, and brought certain challenges and risks, it is certain that humans themselves are the decision makers and users in the development of artificial intelligence, and that humans are, fundamentally, the true bearers of responsibility. "Responsibility" should run through the research, development, and application of artificial intelligence. The "New Generation Artificial Intelligence Governance Principles: Developing Responsible Artificial Intelligence" states that the principles of "harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, security and controllability, shared responsibility, open collaboration, and agile governance" must be followed. This undoubtedly provides a governance framework and guide to action for resolving the threefold dilemma. "Developing responsible artificial intelligence" means that humans develop and use AI in a safe, reliable, and ethical way. "Responsible" here encompasses inclusiveness, fairness, people-centeredness, transparency and explainability, security, respect for privacy, and accountability, so as to ensure the responsible operation and use of artificial intelligence systems. "Responsible artificial intelligence" is not about building artificial moral machines endowed with a sense of morality and responsibility that can think independently, make moral decisions, and answer for their own actions. On the contrary, "responsible artificial intelligence" means above all that humans must be responsible: humans should shoulder responsibility for the artificial intelligence they create and develop it responsibly so that it better serves mankind. Human beings themselves are the key to the development and governance of artificial intelligence, and we must therefore adhere to a human-centered ethical stance.

Adhering to a human-centered ethical stance is the value foundation for promoting the development and governance of responsible artificial intelligence. General Secretary Xi Jinping attaches great importance to the development, security, and governance of artificial intelligence and has put forward the important concept of "people-oriented, intelligence for good." This not only provides a new perspective for the development and governance of artificial intelligence but also embodies the concept of a community with a shared future for mankind in the field of artificial intelligence. Putting people first carries deep ethical value and is an important principle to be followed in artificial intelligence governance. The human-centered ethical stance requires that the research, development, and application of artificial intelligence always adhere to the idea that "people are the end," respecting human dignity and rights. As a technology, no matter how far it develops, artificial intelligence should never shed its nature as a tool; it is essentially a product of human practical activity, that is, an artifact. It is worth emphasizing that this people-centered view is not a strong anthropocentrism that merely asserts humanity's dominant and superior status; rather, it respects human personality and dignity, regards people as ends, and emphasizes human responsibilities and obligations.

To better develop responsible artificial intelligence while adhering to a human-centered ethical stance, we must further establish the principles of "intelligence for good," "dignity," and "responsibility" in response to the threefold moral dilemma facing the human-machine ethical relationship. First, intelligence for good is the fundamental goal of developing responsible artificial intelligence. While artificial intelligence has brought great benefits to mankind, it has also brought unprecedented challenges and risks. What we need is not the neutrality of technology but the aim of benevolence: injecting humanistic and ethical dimensions into the development of technology and artificial intelligence and steering artificial intelligence toward science and technology for good. This purpose cannot stop at enhancing capabilities and satisfying the needs of particular subjects; it should aim at improving the well-being of all mankind. Second, the principle of dignity is an inherent requirement of developing responsible artificial intelligence. Dignity is an important concept in ethics and the core of people-centeredness. Artificial intelligence is researched, developed, and applied in order to better realize that "people are the end" and to reflect human dignity and value, not to reduce people to tools. The principle of dignity, as an inherent requirement and ethical bottom line of people-centeredness, therefore demands that the development of artificial intelligence must not harm or infringe upon human dignity and rights; it must respect human autonomy, protect human privacy, and avoid algorithmic discrimination and intelligent harm. Finally, the principle of responsibility is an important guarantee for developing responsible artificial intelligence. The principle of responsibility is one of the most critical principles in contemporary applied ethics and the most basic ethical principle to be upheld in the development and application of artificial intelligence. The "people-centeredness" emphasized by this ethical stance is reflected not only in serving mankind and improving human welfare but also in the fact that people can and should be responsible; this is precisely where the people-centered humanistic spirit lies. In the research, development, and application of artificial intelligence, we must always adhere to the principle of responsibility, integrate responsibility into every aspect of artificial intelligence development, establish and improve the necessary accountability mechanisms, ensure the traceability of responsibility, and protect the basic rights and interests of human beings.

Build a responsible artificial intelligence governance system

In a world where humans and machines coexist, new ethical issues and moral dilemmas have gradually emerged, and how to build a responsible artificial intelligence governance system has become a key issue in the development of artificial intelligence and in the governance of science and technology ethics. We must adhere to a people-centered ethical stance, jointly build a responsible global artificial intelligence governance system, form a responsibility mechanism of collaboration and co-governance, cultivate human AI literacy and sense of responsibility, and jointly promote the progress of human civilization.

Building a responsible artificial intelligence governance system requires jointly building a global artificial intelligence governance system. The global governance of artificial intelligence is a common issue facing all countries and concerns the destiny of all mankind. In this new technological era, as a community with a shared future for mankind, all countries should participate responsibly in global artificial intelligence cooperation and governance, narrow the intelligence gap between countries, and jointly improve the global governance mechanism. Above all, a responsible global artificial intelligence governance system must be grounded in the concept of a community with a shared future for mankind, take the common well-being of all humanity as its goal, and, on the premise of safeguarding human safety and rights, ensure that artificial intelligence develops in the direction of the progress of human civilization. As a responsible major country in artificial intelligence, China has always actively embraced intelligent transformation, vigorously promoted the innovative development of artificial intelligence, and, facing the opportunities and challenges artificial intelligence brings to mankind, persisted in being an active promoter, participant, and contributor in global artificial intelligence governance. China attaches great importance to the security governance of artificial intelligence and has implemented a series of pragmatic measures, such as releasing the Global AI Governance Initiative, which emphasizes adhering to "people-oriented" development and "intelligence for good" and offers the world a Chinese approach to artificial intelligence governance grounded in the concept of a community with a shared future for mankind.

To build a responsible artificial intelligence governance system, we need to build a responsibility mechanism of collaboration and co-governance. The "responsible party" refers not only to a single responsible subject but also to a responsible body composed of multiple subjects; it is inseparable from the joint participation and collaborative governance of governments, enterprises, industries, users, and other actors. The construction of a responsibility mechanism cannot simply attribute responsibility to the developers and users of artificial intelligence; it requires the participation of all responsible subjects. It is necessary to clarify the subjects, levels, distribution, and boundaries of responsibility, and to explore a shared responsibility mechanism that resolves the problems of the responsibility gap and the ethical vacuum. Although artificial intelligence currently participates in human action with relatively low autonomy and does not have the status of a human moral subject, it joins with us in constituting human-machine interactive action. We therefore still need to integrate it into a broader ethical framework and accountability mechanism. The responsible party here is a joint human-machine responsibility body composed of multiple responsible subjects together with artificial intelligence, and the responsibility mechanism of collaboration and co-governance must be continuously improved and perfected.

Building a responsible artificial intelligence governance system requires cultivating human AI literacy and sense of responsibility. Although building a global artificial intelligence governance system and responsibility mechanism is very important, we cannot rely solely on systems and mechanisms; these alone are far from enough. We must also cultivate and improve the moral quality and sense of responsibility of the responsible parties, and extend the ethical identity of the responsible party to everyone. As members living together in a community with a shared future for mankind, everyone must share the consensus that "everyone is a responsible party." In the era of artificial intelligence, human beings need not only to improve their technological AI literacy but also to cultivate moral AI literacy and a sense of responsibility. Such cultivation cannot stop at learning the knowledge of science and technology ethics; it must also attend to the cultivation of the heart. This is precisely where artificial intelligence cannot truly replace human beings.

The era of artificial intelligence is a field of life in which humans and machines coexist, and artificial intelligence has deeply intervened in our lifeworld. In such a world, we must adhere to a human-centered ethical stance, uphold the principles of "intelligence for good," "dignity," and "responsibility," and jointly build a responsible artificial intelligence governance system. Although the future is full of risks and uncertainties, we should never forget that humans are the true bearers of responsibility and must always shoulder the weighty responsibility and mission of developing responsible artificial intelligence.

"Guangming Daily" (page 15, January 6, 2025)