AI Ethics

Analysis of the Ethical Problems of Generative Artificial Intelligence from the Perspective of Moral Agents

By Li Yang and Liang Zheng

Abstract:

The status of artificial intelligence as a moral subject has long been a contested question in AI ethics, and the emergent capabilities of generative artificial intelligence have intensified concerns about its moral attributes. This paper explores, from the perspective of moral agency, the possibility that generative artificial intelligence can carry moral load and express values. On this basis, it examines three ways of responding to its ethical problems: value embedding, responsibility correspondence, and ethical evaluation. It then proposes a governance framework built on public opinion clarification, systemic balance, key breakthroughs, institutional tools, and international governance.

Keywords:

Generative artificial intelligence; moral agents; value embedding

Introduction

While the intelligent society is actively transforming human society, it also brings negative effects. Artificial intelligence (AI) is an inevitable product of the development of science and technology and the result of the objectification of humanity's essential powers [1]. It therefore extends not only the good in human beings but also the evil. Many uncertainties remain in constructing the interactive relationship between humans and artificial intelligence. In particular, the rapid advance of generative artificial intelligence in recent years has inevitably provoked deep concern about AI ethics across all sectors of society.

Machines were first invented to free people from heavy physical labor. As times progressed, this substitutive intention expanded from manual to mental labor, with intelligent machines used to solve complex problems of computation, planning, and decision-making. Today, multimodal generative artificial intelligence built on large models is steadily redrawing humanity's imagined boundaries of AI's future. This forces us to ask: what is the purpose of developing artificial intelligence? If we use it to replace every kind of activity, will human life progress or regress? If artificial intelligence one day evolves sufficient intelligence, what role will humans play? Facing these questions, the development of artificial intelligence should stay within a society's acceptable "degree" rather than blindly pursue speed [2]. At the same time, whether AI development needs to be held to a level of "weak overall, strong in single domains", and whether certain fatal intellectual defects of AI should deliberately be retained [3], remain to be discussed jointly by policymakers, academia, and industry.

Can artificial intelligence be a moral agent?

The great strides of generative artificial intelligence have led people to doubt their own status as subjects, even to feel deep unease. Discussion of machines' independent decision-making and subject status has grown ever more heated, deepening with continuing breakthroughs in natural language understanding, long-text interaction, and multimodal applications. The moral pressure that advancing intelligent machines place on human society has drawn more and more scholars into the debate over the future development of AI technology.

Can artificial intelligence hold the status of a moral subject? A moral subject is a moral agent that is self-aware, that can regulate behavior at the level of moral cognition through reasoning, judgment, and constraint, that can make moral decisions, and that bears moral responsibility for the corresponding behavior. Can artificial intelligence, as an artifact, meet these conditions? What is the relationship between artificial intelligence and moral agency? James H. Moor classified artificial moral agents into four kinds: ethical-impact agents, implicit ethical agents, explicit ethical agents, and full ethical agents [4]. However, the moral-subject status of intelligent machines cannot be settled by simple classification; it must be analyzed on multiple levels.

The core of a person's moral subjecthood is the freedom of the subject, and the formation of subject freedom is a long and complex process. First, through long evolution in nature, humans cultivated the capacity for independent learning and, in interacting with the world, continuously deepened their understanding of world and self. The development of self-cognition promotes the formation of self-consciousness and yields a sense of the subject's freedom. The basic technical logic of generative artificial intelligence remains a probabilistic model; although human feedback and supervised fine-tuning have improved machine learning results, current practice shows no artificial method, apart from natural ones, that can endow machines with subjecthood. Second, once humans formed societies through language, communication, and organization, self-cognition grew more complex. In the interaction between individual and society, humans further realized that they are free; subjectivity advanced to self-knowledge of freedom, and human social behavior came under the sway of free will. Human free will is the product of the reflection of social consciousness; for machines, such complex free will lies at a still more advanced stage. Finally, social progress deepened people's self-awareness of freedom: humans not only recognize freedom but realize that they ought to be free, giving freedom a moral connotation. From the perspective of subject freedom, then, generative artificial intelligence does not appear to possess the status of a moral subject.
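
To make the "probabilistic model" point concrete, the sketch below shows, in schematic Python, how an autoregressive model produces text: it assigns probabilities to candidate next tokens and samples one. The vocabulary and probabilities are invented for illustration and do not come from any real model.

```python
import random

# Toy next-token distribution: a language model assigns probabilities
# to candidate continuations and samples one. All tokens and weights
# here are made up for illustration.
next_token_probs = {
    "good": 0.45,
    "neutral": 0.30,
    "unclear": 0.20,
    "harmful": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token according to the model's probability distribution."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # e.g. "good"
```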

This criterion of subject freedom may, however, be too stringent. Analytical philosophy and phenomenology each offer new intellectual resources for thinking about subjectivity.

Analytical philosophy holds that the human mind comprises "intelligence", which runs the programs that process objective information, and "consciousness", the subjective feeling accompanying those programs [5]. The subject's capacity for internal representation is "intrinsic intentionality": it connects the semantics of specific content with logical form to produce understanding, which is what intelligence means here. Consciousness serves as the identifier of action and grounds the subject's responsibility for it. Intelligence and consciousness thus become two prerequisites of subjectivity. Deborah G. Johnson pointed out that because artificial intelligence cannot satisfy "internal states of will, belief, and other inclinations, together with the intention to act", it can be regarded only as a moral entity, not a moral agent [6]. From this analytical perspective, although generative artificial intelligence can process objective information and produce outputs that surpass the Turing test, it has not yet achieved conscious recognition of its actions or responsibility for them; this view, in turn, has been criticized as anthropocentrism.

The philosopher of information Luciano Floridi has proposed that subject consciousness, free will, and internal states are not necessary conditions for artificial intelligence to hold moral-subject status. As long as it has interactivity (the agent interacts with its environment), autonomy (it can change its state without direct external stimulus), and adaptability (interaction can change the internal state so as to adjust its transition rules), artificial intelligence can become an artificial agent (AA) [7]. This pattern of thought is constructivist, and it does not rule out the possibility that artificial intelligence becomes a moral subject in the course of being constructed. However, since the concept of the artificial agent shifts with situations and conditions, the view inevitably invites the charge of relativism.
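
Floridi's three criteria are abstract but easy to schematize. The toy class below is our own illustrative reading of interactivity, autonomy, and adaptability, not code from [7]; every name and number in it is hypothetical.

```python
class ArtificialAgent:
    """Schematic agent meeting Floridi and Sanders' three criteria.

    An illustrative sketch of the concepts, not an implementation
    from the cited paper.
    """

    def __init__(self) -> None:
        self.state = 0.0          # internal state
        self.sensitivity = 1.0    # transition rule the agent can revise

    def interact(self, stimulus: float) -> float:
        """Interactivity: the agent responds to its environment."""
        return self.state + self.sensitivity * stimulus

    def step(self) -> None:
        """Autonomy: the state can change without external input."""
        self.state += 1.0

    def adapt(self, feedback: float) -> None:
        """Adaptability: interaction revises the transition rule itself."""
        self.sensitivity *= (1.0 + feedback)


agent = ArtificialAgent()
agent.step()            # autonomous state change
agent.adapt(0.5)        # interaction reshapes its own rules
print(agent.interact(2.0))  # 4.0
```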

Phenomenology holds that the subject is a process of self-discovery, standing in a "symbiotic" relation with the world. The subject's self-cognition is achieved by withdrawing, through perceptual capacity, from the situation in which it lives with the world. Post-phenomenology abandons subject-object dualism altogether, using different "human-technology relations" ① to understand humans and technology as a unity, entangled with and mutually constituting each other. The Dutch philosopher of technology Peter-Paul Verbeek calls this technological mediation, and on this basis discusses the possibility of the "materialization of morality". From the perspective of "moral materialization", generative artificial intelligence clearly has the conditions to become a moral subject, since artificial intelligence and humans are to some extent mutually constructed, entangled, and shaped. One may judge that no theoretical obstacle stands in the way of artificial intelligence growing into a moral subject. For humans, however, current artificial intelligence is usually treated as a tool in a subordinate position, and regarding it as a moral subject will require gradual emotional identification [8].

The American scholar Joseph Emile Nadeau holds a more radical view: it is not humans but machines that can be genuine moral subjects. Influenced by Kant, Nadeau argued that an action is free if and only if it is based on the agent's rational deliberation. People often judge and decide emotionally, so humans cannot be absolutely rational; a machine built on formal logic, by contrast, can achieve absolute rationality and therefore has the capacity of a full moral subject [9]. This position has been called a super-subject conception of evolutionary thinking [10], and it verges on science fiction detached from reality.

In short, the moral-agent status of artificial intelligence is no longer confined to philosophy; in practice it has become an actionable technological schema. Generative artificial intelligence rests on the infrastructure of large models, big data, and strong computing power, and uses reinforcement learning from human feedback and supervised instruction fine-tuning, from which the new idea of "superalignment" has developed. On one hand, its interaction with users constitutes an important dataset and learning corpus, and human feedback lets it "learn" continuously; in this process, users' values exert a shaping influence on the large model. On the other hand, human-computer interaction supplies meaningful parameters for the model's further updates, and designers practice the principle of ethical alignment through fine-tuning methods such as annotation and proofreading, ensuring consistency between the values the model expresses and human goodness. This interaction among designers, users, and large models confirms the practical path of post-phenomenology and the theory of moral materialization, with relational ontology at its core: the relation between generative large models and humans may likewise be morally co-constituted and mutually shaped. The core question is how humans should participate in the design and use of artificial intelligence, and how to embed value factors such as fairness and goodness into large models more effectively so as to produce positive moral consequences.
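
As a rough illustration of how human feedback can embed values, the following sketch compresses the preference-learning idea behind RLHF to its bare logic: a toy "reward model" scores outputs on value-relevant features and is nudged whenever it disagrees with a human's pairwise preference. The features, texts, and update rule are simplified assumptions, not the actual training procedure of any system.

```python
# Minimal sketch of value embedding via human feedback: a reward model
# learns from pairwise human preferences, then scores candidate outputs.
# All features and numbers are illustrative.

def reward(text: str, weights: dict[str, float]) -> float:
    """Score an output by weighted value-relevant features (toy features)."""
    features = {
        "polite": 1.0 if "please" in text.lower() else 0.0,
        "toxic": 1.0 if "idiot" in text.lower() else 0.0,
    }
    return sum(weights[name] * value for name, value in features.items())

def update_from_preference(weights: dict[str, float],
                           preferred: str,
                           rejected: str,
                           lr: float = 0.1) -> dict[str, float]:
    """Nudge weights so the human-preferred output scores higher."""
    if reward(preferred, weights) <= reward(rejected, weights):
        # Illustrative update: reward politeness, penalize toxicity.
        weights["polite"] += lr
        weights["toxic"] -= lr
    return weights

weights = {"polite": 0.0, "toxic": 0.0}
weights = update_from_preference(
    weights,
    preferred="Could you rephrase that, please?",
    rejected="That is an idiot question.",
)
print(weights)  # politeness now scores higher, toxicity lower
```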

Responding to the ethical problems of generative artificial intelligence: value embedding, responsibility correspondence, ethical evaluation

Although the academic community still disputes whether generative artificial intelligence can, as a moral agent, fulfill corresponding responsibilities and obligations, it has become a general consensus that it can embed, express, and shape values. The technological, ethical, industrial, social, and political risks that generative artificial intelligence may cause have many sources. First, misuse of the technology: the generative features of artificial intelligence give users, through human feedback, a degree of participation in design, increasing the possibility of misuse. Second, loss of control over the technology: emergence in generative artificial intelligence raises the possibility that humans lose control of intelligent technology, with incalculable consequences. Third, loss of control over applications: no one can easily estimate the consequences of generative artificial intelligence's widespread application, or whether it will cause major social problems [11].

Responding to the ethical issues of generative artificial intelligence is an arduous task, broad in scope, rich in content, and wide in impact. We must regulate the use of the technology not only by ethical means but also by engineering-technical means and governance planning. The ethics and governance of artificial intelligence therefore require more comprehensive system-building: starting from value embedding at the design stage, responsibility correspondence at the use stage, and ethical evaluation and adjustment, and constructing a mechanism of coordination across multiple subjects, levels, and means to achieve effective governance.

The ethical response to generative artificial intelligence must begin with design. An excellent intelligent product should combine economy, functionality, and value. Introducing moral and ethical elements into design, and writing value factors into intelligent technology, has increasingly become a core requirement of quality in modern technology. The superalignment principle now being advocated provides theoretical methods and practical cases for embedding value at the design stage. The interaction of generative artificial intelligence with designers and users reflects the complex entanglement of human-technology relations, and to a certain extent it has the conditions to become a moral agent. Treating it as a "quasi-moral subject", value embedding at the design stage is therefore reasonable. Value-sensitive design and responsible innovation research, which have emerged in recent years, provide theoretical support for this claim. From Ihde's theory of "human-technology relations", through Verbeek's theories of technological mediation and "moral materialization", to the superalignment method, the moral embedding of artificial intelligence is not only theoretically grounded but practically feasible.

In recent years, Delft University of Technology in the Netherlands has set off a wave of research on "Design for Values" that has attracted global attention. "Design for Values" focuses on implementing ethical values in technical research and development, design, and application. Its practice is reflected in three respects. First, ethics and ethics-related courses have been introduced into the engineering curriculum as compulsory courses for every engineer; such courses cultivate value sensitivity in front-line technical experts, making them realize that their work is not merely a technical or engineering problem but also a problem of value and ethics. Second, ethics researchers participate in the practical training of engineers, emphasizing the possibility and feasibility of integrating technology and ethics in practical operation, with science and the humanities interacting throughout learning and application. Third, a community of engineering scholars and ethics scholars is being built: engineers and humanities scholars form a research community of communication and mutual understanding, which on one hand makes engineers' optimism more inclusive and prudent, and on the other keeps humanities scholars' concerns from remaining vague futurology.

Building a mechanism of responsibility correspondence is a hard problem in responding to the ethical issues of generative artificial intelligence. The ethics of responsibility requires answering five questions: who is responsible, to whom, for what, in what way, and how responsibility is to be evaluated.

Who is responsible concerns the subjects of responsibility (schematized in the sketch after this list). Government should assume regulatory responsibility, steering the direction of technological development at the macro level; enterprises should assume design responsibility, staying alert to the negative side of value embedding in model training; universities should assume responsibility for teaching and research, cultivating responsible engineers; the public should assume supervisory responsibility, actively exercising its duty of oversight over government, enterprises, universities, and other groups; and "artificial intelligence agents" should bear "technical responsibility", remaining a benign technology through good interaction between designers and users.

To whom concerns the direction of responsibility. Each responsible party should first answer for its own conduct and should also be answerable to the other responsible parties, forming an organic, closed-loop system of responsibility.

For what concerns the content of responsibility. The responsibility that enterprises should bear, for example, is to ensure the security of generative artificial intelligence applications, avoid risks, and eliminate harm; the content of responsibility differs across responsible parties.

In what way concerns how the content of responsibility is realized. The ethics of responsibility runs through motivation, behavior, and result, so a methodology of responsibility must be constructed at the levels of consciousness, institution, and behavior.

How responsibility is evaluated concerns its assessment. Whether an effective mechanism for evaluating the ethics and governance of generative artificial intelligence can be built bears directly on the ultimate effect of practicing the ethics of responsibility.
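
To make the correspondence concrete, the schematic below encodes the actor-to-responsibility mapping described above as a simple lookup structure; the entries paraphrase the text, and any real allocation would of course be fixed by regulation rather than code.

```python
# Schematic responsibility map for the actors named above.
# The entries paraphrase the article's allocation of duties.

RESPONSIBILITY_MAP = {
    "government": "regulatory oversight; steering the direction of development",
    "enterprises": "design responsibility; auditing values embedded in training",
    "universities": "teaching and research; cultivating responsible engineers",
    "public": "supervision of government, enterprises, and universities",
    "ai_agents": "technical responsibility: remaining a benign technology",
}

def accountable_for(actor: str) -> str:
    """Look up an actor's duty in the closed-loop responsibility system."""
    return RESPONSIBILITY_MAP.get(actor, "actor not in the responsibility loop")

print(accountable_for("enterprises"))
```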

The effectiveness of the response to ethical issues depends on evaluation and adjustment with multi-party participation. In 2019, the EU High-Level Expert Group on Artificial Intelligence summarized seven requirements for trustworthy intelligent technology: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. ② First, consensus on the evaluation system must be reached through multi-party consultation; only with standards in place can evaluation be targeted. Second, the ethical evaluation of generative artificial intelligence should not be left to after-the-fact assessment but should be introduced into the R&D process: the community of technical experts and humanities scholars should follow the progress of AI projects throughout, discovering and solving problems as early as possible and handling emerging problems through technical governance. Finally, government departments should introduce corresponding governance measures, including laws, regulations, and codes of conduct, to support ethical assessment. China's "Opinions on Strengthening the Governance of Science and Technology Ethics", "Interim Measures for the Management of Generative Artificial Intelligence Services", and "Measures for Science and Technology Ethics Review (Trial)" can all serve as instruments of ethical evaluation and adjustment.
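
One minimal way to operationalize such guidelines as an evaluation instrument is a scored rubric. The sketch below encodes the seven EU requirements as a checklist and flags weak areas; the numeric scale and threshold are our own hypothetical choices, since real assessments are qualitative and multi-party.

```python
# Hedged sketch: the EU High-Level Expert Group's seven requirements
# as an assessment rubric. Scores and threshold are hypothetical.

EU_REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def assess(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the requirements scoring below threshold (toy 1-5 scale)."""
    return [req for req in EU_REQUIREMENTS if scores.get(req, 0) < threshold]

example_scores = {req: 4 for req in EU_REQUIREMENTS}
example_scores["transparency"] = 2  # flag a weak area
print(assess(example_scores))  # ['transparency']
```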

Framework ideas for generative artificial intelligence governance

Public opinion clarification: avoid demonizing the technology. Although disorder, unemployment, and loss of control constitute the main anxieties about generative artificial intelligence, technological development is the precondition of governance. In particular, we must avoid demonizing generative artificial intelligence and prevent overwhelming waves of public opinion from obstructing technological development. We should uphold a correct orientation of public opinion, using expert consensus to communicate actively with the public and to clarify, discuss, and solve problems.

Systemic balance: handle well the relationship between technological innovation and regulatory governance. Without technological development there is no technology to govern, so a certain tolerance for the uncertainty of emerging technologies is warranted, and too many obstacles should not be erected during a technology's rapid-growth stage. The global layout of the technology should be deepened, and excessively high thresholds that harm industrial development should be avoided. The evolution of large models requires a large flow of data from user feedback with which to improve themselves; if governance thresholds are set too high, some enterprises' large-model applications will never launch, hindering technological progress.

Key breakthroughs: actively develop the ethical alignment of artificial intelligence. Ethical alignment is an important direction for the future of generative artificial intelligence: it can make AI outputs more consistent with human values and bring the distinct capabilities of humans and AI into play in human-machine collaboration. In-depth research is needed on the theories of technical governance, responsible innovation, value-sensitive design, moral materialization, human-machine relations, and the status of moral agents, combining technology and theory to achieve breakthroughs in ethical alignment.

Institutional tools: establish a model evaluation system and build a risk assessment platform. A large-model evaluation system should be established, evaluations carried out, and release standards formulated on the basis of evaluation data. The probabilistic working logic of artificial intelligence makes 100% accuracy unattainable, so release standards for large models can be set through prioritized, staged evaluation schemes, with a fault-tolerance mechanism and time-cycled correction and improvement to keep them operating healthily. As open sourcing deepens and the scope of use expands, security vulnerabilities and risks inevitably grow, making security risk assessment especially important; to this end, a risk assessment platform should be established to promote the trustworthiness and reliability of large models.
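
A staged evaluation scheme with a fault-tolerance window, as proposed here, can be pictured as a simple release gate. In the sketch below, the stage names and accuracy bars are invented placeholders, not an existing standard.

```python
# Illustrative release gate with staged evaluation and a correction
# cycle. Stage names and thresholds are assumptions for illustration.

STAGES = [
    ("internal evaluation", 0.90),   # toy accuracy bar per stage
    ("limited pilot", 0.93),
    ("public release", 0.95),
]

def release_decision(accuracy_by_stage: dict[str, float]) -> str:
    """Advance through stages while each bar is met; otherwise pause and fix."""
    for stage, bar in STAGES:
        if accuracy_by_stage.get(stage, 0.0) < bar:
            return f"hold at '{stage}': below {bar:.0%}, enter correction cycle"
    return "cleared for public release"

print(release_decision({"internal evaluation": 0.94, "limited pilot": 0.91}))
# hold at 'limited pilot': below 93%, enter correction cycle
```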

International governance: the digital, decentralized, open-source, and networked character of artificial intelligence technology means its impact is hard to confine within geographic or national boundaries. The content security, value conflicts, risks of loss of control, and human-machine relationship problems raised by generative artificial intelligence are largely global challenges, even common risks for humanity, which no single country can address alone. They require coordination and cooperation among different stakeholders and countries, and establishing a global governance framework, governance mechanisms, and related rules is crucial.

Conclusion

The greatest threat that generative artificial intelligence poses to humanity is neither unemployment nor a power strong enough to eliminate humankind, but the challenge it mounts to the governance system and to the traditional conceptual frameworks humanity built on the basis of industrial civilization: its values, conceptions of development, employment, and wealth, and its systems of distribution [12]. When the environment changes too quickly for the existing conceptual framework and value system to keep up, that is the real risk everyone must face. Intelligent technology, intelligent industry, and intelligent society are forming a new "ecology". As Floridi put it, the Fourth Revolution concerns, negatively, our loss of "uniqueness" (we are no longer at the center of the infosphere) and, positively, the new ways in which we understand ourselves as informational entities. Interpreting ourselves as informational organisms rests not on some sweeping phenomenon of the "externalized mind" but on our integration with everyday technologies; and this self-understanding differs from the posthumanist, futurist vision of genetically transforming humans, of mastering genetic information and thereby mastering future embodiment, however technically (safe and feasible) or morally (acceptable) it is presented [13].

In short, as human living standards continue to improve, we are no longer preoccupied with the traditional questions of whether and how to develop, but should actively face the new questions of how to develop better and more valuably.

Generative artificial intelligence is a product of this era. We should keep pace with the times and change our thinking, exploring new ethical problems and responding to new governance challenges amid constant change, so as to achieve the good creation, good use, and good governance of generative artificial intelligence.

① Don Ihde's post-phenomenology divides the "human-technology relation" into the embodiment relation, the hermeneutic relation, the alterity relation, and the background relation. Verbeek developed this theory, adding the cyborg relation and the composite relation.

② See "Ethics Guidelines for Trustworthy AI", released by the EU High-Level Expert Group on Artificial Intelligence on April 8, 2019. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

References

[1] Yuan Wei. Artificial intelligence: ruling or serving human beings? Reflections based on historical materialism [J]. Journal of Dialectics of Nature, 2021(2): 1-9.

[2] Pan Enrong, Sun Zhiyan, Guo Yi. Intelligent integration and reflexive capital restructuring: an analysis of the development momentum of the new industrial revolution in the age of artificial intelligence [J]. Studies in Dialectics of Nature, 2020(2): 42-47.

[3] Zhao Tingyang. The "near worries" and "distant concerns" of the artificial intelligence "revolution": an ethical and ontological analysis [J]. Philosophical Trends, 2018(4): 5-12.

[4] Moor J H. The nature, importance, and difficulty of machine ethics [J]. IEEE Intelligent Systems, 2006, 21(4): 18-21.

[5] Chalmers D J. The Conscious Mind: In Search of a Fundamental Theory [M]. New York: Oxford University Press, 1996: 25-32.

[6] Johnson D G. Computer systems: moral entities but not moral agents [J]. Ethics and Information Technology, 2006, 8(4): 195-204.

[7] Floridi L, Sanders J W. On the morality of artificial agents [J]. Minds and Machines, 2004, 14(3): 349-379.

[8] Wu Tongli. Is artificial intelligence qualified to be a moral subject? [J]. Philosophical Trends, 2021(6): 104-116.

[9] Nadeau J E. Only androids can be ethical [M]//Ford K M, Glymour C, Hayes P J, et al. Thinking about Android Epistemology. Menlo Park: AAAI Press, 2006: 241-248.

[10] Cheng Peng, Gao Siyang. Philosophical reflection on the moral status of general artificial intelligence [J]. Studies in Dialectics of Nature, 2021(7): 46-51.

[11] Chen Xiaoping. Artificial intelligence ethical system: infrastructure and key issues [J]. Journal of Intelligent Systems, 2019(4): 605-610.

[12] Cheng Sumei. The intelligence revolution and human prospects [J]. Journal of Shandong University of Science and Technology (Social Science Edition), 2019(1): 3-5.

[13] Floridi L. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality [M]. Oxford: Oxford University Press, 2014: 95-96.
