Analysis of Ethical Issues in Generative Artificial Intelligence from the Perspective of Moral Actors
Generative artificial intelligence is a product of this era, and we should keep pace with the times and change our thinking accordingly.
Abstract:
The moral-subject status of artificial intelligence has long been a hot topic in AI ethics, and the emergent characteristics of generative artificial intelligence have intensified underlying concerns about its moral attributes. This article explores, from the perspective of the moral agent, whether generative artificial intelligence can be morally laden and express values. On that basis, it examines how to address ethical issues such as value embedding, responsibility correspondence, and ethical assessment, and proposes a governance framework built on clarifying public opinion, balancing the system, making key breakthroughs, deploying institutional means, and pursuing international governance.
Keywords:
Generative artificial intelligence; moral actors; value embedding
Introduction
While the intelligent society is actively transforming human society, it also brings negative aspects. Artificial intelligence (AI) is an inevitable product of the development of science and technology and the result of the objectification of humanity's essential powers. As such, it not only carries forward what is good in human beings but also extends what is evil. There are many uncertainties in how the interactive relationship between humans and artificial intelligence will be constructed. In particular, the rapid development of generative artificial intelligence in recent years has inevitably raised deep concerns about AI ethics across all walks of life.
We first invented machines to free people from heavy manual labor. As the times advanced, this desire for substitution expanded from physical labor to mental labor, with intelligent machines used to solve complex problems of computation, planning, and decision-making. Today's multi-modal generative artificial intelligence, built on large models, is step by step reshaping humanity's imagination of the future boundaries of artificial intelligence. This forces us to ask ourselves: Why are we developing artificial intelligence? If it were used to replace us in every form of activity, would human life progress or regress? And if artificial intelligence were to evolve sufficient intelligence in the future, what role could humans still play? Faced with such questions, the development of artificial intelligence should hold to a socially acceptable "degree" rather than blindly pursue speed. At the same time, whether its development should be kept at a level that is weak overall but strong in individual areas, and whether artificial intelligence should deliberately be left with decisive intellectual limitations, remain matters for the policy community, academia, and industry to discuss.
Can artificial intelligence serve as a moral agent?
The rapid progress of generative artificial intelligence has made people doubt their own subject status, and even feel deeply uneasy. Discussions about machine autonomous decision-making and subject status have flared up again. This concern has deepened as generative artificial intelligence continues to make breakthroughs in natural language understanding, long-text interaction, and multi-modal applications. The advance of intelligent machines exerts a sense of moral pressure on human society, drawing more and more scholars into the discussion of the future development of artificial intelligence technology.
Can artificial intelligence have moral-subject status? A moral subject is a moral actor that is self-aware, can regulate behavior at the level of moral cognition through reasoning, judgment, and restraint, and can make moral decisions and assume moral responsibility for the corresponding behaviors. Can artificial intelligence, as an artifact, meet these conditions? What is the relationship between artificial intelligence and moral actors? James H. Moor classified artificial moral agents into four types: ethical-impact agents, implicit ethical agents, explicit ethical agents, and full ethical agents. However, the moral-subject status of intelligent machines cannot be settled by simple classification; it must be analyzed at multiple levels.
The core of the human being's moral-subject status is the freedom of the subject, and the formation of subject freedom is a long and complex process. First, through long-term evolution in nature, human beings developed the ability to learn independently and continually deepened their understanding of the world and of themselves while interacting with the world. The development of self-perception promotes the formation of self-awareness and gives rise to a free subjective consciousness. The basic technical logic of generative artificial intelligence is still the probabilistic model. Although human feedback and supervised fine-tuning have improved machine learning, practical results so far suggest that no artificial method, as opposed to natural evolution, can endow a machine with subject consciousness. Second, once human beings formed societies through language, communication, and organization, self-perception began to grow more complex. In the interaction between humans and society, human beings further realized that they are free; their subjectivity progressed to a self-understanding of freedom, and their various social behaviors came to be governed by free will. Human free will is the product of the reflection of social consciousness. For machines, however, this sophistication of free will is a still more advanced stage. Finally, social progress has deepened people's self-understanding of freedom: human beings not only realize that they are free but also that they ought to be free, which gives freedom a moral connotation. From the perspective of subject freedom, therefore, generative artificial intelligence does not appear to have the status of a moral subject.
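To make the point concrete, here is a minimal sketch (in Python, with an invented vocabulary and invented scores; it represents no particular model) of the probabilistic core mentioned above: a generative model assigns probabilities to candidate next tokens and samples one, a mechanical process in which nothing resembling subject consciousness is involved.

import numpy as np

# Toy illustration: a generative language model scores candidate next
# tokens, converts the scores into a probability distribution, and samples.
vocabulary = ["good", "evil", "neutral"]
logits = np.array([2.1, 0.3, 1.2])               # hypothetical model scores

probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax -> probabilities
next_token = np.random.choice(vocabulary, p=probs)

print(dict(zip(vocabulary, probs.round(3))), "->", next_token)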
The view of subjective freedom may be somewhat solipsistic. Analytical philosophy and phenomenology respectively provide new ideological resources for subjectivity.
Analytical philosophy holds that the human mind is composed of "intelligence", a program that processes objective information, and "consciousness", the subjective feeling that accompanies the program. The subject's capacity for internal representation, as "internal intentionality", connects the semantics of specific content with logical forms to produce understanding; this is what is meant by intelligence. Consciousness serves as the recognizer of actions and plays the role by which the subject takes responsibility for them. Intelligence and consciousness thus become the two prerequisites of subjectivity. Deborah G. Johnson pointed out that because artificial intelligence cannot satisfy "the internal states of will, belief and other tendencies, and the intention to act", it can only be regarded as a moral entity rather than a moral agent. From the perspective of analytical philosophy, although generative artificial intelligence can process objective information and return results capable of passing the Turing test, conscious recognition of actions and responsibility for them have not yet been achieved. This view has in turn been criticized as "anthropocentric".
The information philosopher Luciano Floridi proposed that characteristics such as subjective consciousness, free will, and inner states are not necessary conditions for artificial intelligence to have moral-subject status. As long as it has interactivity (the agent and its environment act upon each other), autonomy (the agent can change its own state without external stimulus), and adaptability (interaction can change the internal rules by which the agent transitions between states), artificial intelligence can count as an artificial agent (AA). This mode of thinking is constructivist, and it does not rule out the possibility of artificial intelligence becoming a moral subject in the course of being so constructed. However, because the concept of the artificial agent shifts with situations and conditions, it inevitably invites the criticism of relativism.
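Floridi's three criteria are abstract, but they can be illustrated schematically. The toy Python class below is our own illustration under assumptions, not anything from Floridi's text; each method stands in for one criterion.

class ArtificialAgent:
    """Toy sketch of Floridi's criteria for artificial agents (AAs)."""

    def __init__(self):
        self.state = 0    # internal state
        self.rule = 1     # rule governing state transitions

    def interact(self, stimulus):
        # Interactivity: the agent and its environment act on each other.
        return self.state + self.rule * stimulus

    def act_autonomously(self):
        # Autonomy: the agent changes state without an external stimulus.
        self.state += self.rule

    def adapt(self, feedback):
        # Adaptability: interaction changes the transition rule itself.
        self.rule += feedback

On this deflationary reading, whether such an agent deserves moral status is decided by how it is constructed and situated, not by any inner experience.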
Phenomenology holds that the subject is a process of self-discovery that stands in a "symbiotic" relationship with the world; the subject achieves self-recognition by relying on the capacity of perception to withdraw itself from its situation of coexistence with the world. Postphenomenology abandons the dualism of subject and object altogether, recognizing humans and technology as a unity, entangled with and mutually constituting each other through different expressions of the "human-technology relationship". The Dutch philosopher of technology Peter-Paul Verbeek called this technological mediation and, on this basis, discussed the possibility of the "materialization of morality". From the perspective of "moral materialization", generative artificial intelligence clearly has the conditions to become a moral subject, because artificial intelligence and humans constitute each other to a certain extent, entangled and mutually shaping. One may judge that there is no theoretical obstacle to artificial intelligence growing into a moral agent. For humans, however, current artificial intelligence is usually regarded as a tool in a subordinate position, and treating it as a moral subject will require gradual emotional acceptance.
The American information-technology scholar Emile Nadeau holds a more radical view: it is not humans but robots that are intelligent subjects. Influenced by Kant, Nadeau believes that an action is free if and only if it is based on the subject's rational deliberation. People often rely on emotions to judge and decide, so people cannot be absolutely rational; a machine based on formal logic, by contrast, can be absolutely rational and therefore has the capacity of a complete moral agent. This view, a super-subject conception born of evolutionary thinking, amounts almost to science fiction and ignores reality.
All in all, the status of artificial intelligence as a moral agent is no longer confined to philosophy; it has become an operational technical model in practice. Generative artificial intelligence rests on an infrastructure of large models, big data, and strong computing power, and uses reinforcement learning from human feedback and supervised instruction fine-tuning, developing the new idea of "superalignment". On the one hand, its interaction with users constitutes an important data set and learning library, so that it continuously "learns" from human feedback; in this process, user values in fact exert a shaping influence on the large model. On the other hand, human-computer interaction provides highly meaningful parameters for the further updating of generative artificial intelligence: designers practice the principle of ethical alignment through fine-tuning methods such as annotation and proofreading, ensuring that the values expressed by generative artificial intelligence are consistent with human goodness. This interaction among designers, users, and large models confirms the practical path of postphenomenology and the theory of moral materialization, with relational ontology at their core; that is, generative large models and human beings may morally constitute and shape one another. The core question is how humans should participate in the design and use of artificial intelligence, and how value factors such as fairness and kindness can be embedded more effectively into large models so that they produce positive moral consequences.
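The feedback loop just described can be sketched in a deliberately simplified form. The Python fragment below is not an actual RLHF pipeline; the "reward model" here is a hand-written stand-in for the neural networks that are, in practice, trained on human preference comparisons. It shows only the logical shape of the mechanism: human preference signals score candidate outputs, and generation is steered toward the value-aligned one.

def reward_model(text: str) -> float:
    # Stand-in scoring rule (invented): real reward models are learned
    # from large numbers of human preference labels.
    return -1.0 if "insult" in text else 1.0

candidates = [
    "a helpful and harmless reply",
    "a reply containing an insult",
]
scores = [reward_model(c) for c in candidates]
best = max(zip(scores, candidates))[1]   # choose the higher-reward output
print(best)  # the human feedback signal selects the aligned output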
Responding to ethical issues in generative artificial intelligence: value embedding, responsibility correspondence, and ethical assessment
Although academic circles still dispute whether generative artificial intelligence can fulfill the corresponding responsibilities and obligations of a moral actor, it has become a general consensus that it can embed, express, and shape values. There are many reasons why generative artificial intelligence may create technical, ethical, industrial, social, and political risks. The first is technical misuse: the generative features of artificial intelligence give users, to a certain extent, a way to participate in design through human feedback, increasing the possibility that the technology is misused. The second is technological loss of control: the emergent nature of generative artificial intelligence increases the possibility that humans lose control of the technology, with incalculable consequences. The third is application loss of control: it is difficult for anyone to estimate the consequences of applying generative artificial intelligence at large scale, or whether doing so will cause major social problems.
Dealing with the ethical issues of generative artificial intelligence is an arduous task that involves a wide range of topics, covers much content, and has wide-ranging effects. It requires not only regulating the use of technology through ethical means but also employing engineering and governance-planning means. The ethics and governance of artificial intelligence therefore require comprehensive system construction: value embedding in the design stage; responsibility correspondence, ethical assessment, and adjustment in the use stage; and a coordination mechanism with multiple subjects, levels, and means, so as to achieve effective governance.
Ethical responses to generative artificial intelligence start with design. An excellent intelligent product should combine economy, functionality, and value. Introducing moral and ethical elements into design, and incorporating value factors into intelligent technology, has increasingly become a core requirement of high quality in modern technology. The advocated superalignment principles provide a theoretical approach and practical example of embedding values at design time. The interaction of generative artificial intelligence with designers and users reflects the complex, intertwined relationships of human-technology interaction, and to a certain extent it has the conditions to become a moral actor. For such a "quasi-moral subject", value embedding in the design stage of intelligent technology is reasonable. Research on value-sensitive design and responsible innovation in recent years provides theoretical support for this proposition. From Ihde's theory of the "human-technology relationship" and Verbeek's theories of technological mediation and "moral materialization" to the superalignment method, the moral embedding of artificial intelligence does not remain merely theoretical but is feasible in practice.
Delft University of Technology in the Netherlands has in recent years launched a wave of research on "Design for Values" that has attracted global attention. "Design for Values" emphasizes implementing ethical values in technological research and development, design, and application. Its practice is mainly reflected in three aspects. First, ethics-related courses have been introduced into the engineering curriculum as required courses for every engineer; such courses cultivate the value sensitivity of front-line technical experts, helping them realize that their work is not only a technical or engineering issue but also a value and ethical issue. Second, ethics researchers participate in the practical training of engineers, underscoring the possibility and feasibility of integrating technology and ethics in practice, and the positive interaction between the sciences and the humanities in mutual teaching, learning, and application. Third, a community of engineering scholars and ethics scholars is being built. Engineers and humanities scholars should form a research community of dialogue and communication that promotes mutual understanding: on the one hand, the optimism of engineers can become more inclusive and prudent; on the other, the concerns of humanities scholars need no longer be utopian futurology.
Constructing a responsibility-correspondence mechanism is a thorny issue in handling the ethical problems of generative artificial intelligence. Responsibility ethics requires answering five questions: who is responsible, to whom, for what, how to be responsible, and how to take responsibility well.
"Who is responsible" corresponds to the subject of responsibility. The government should assume regulatory responsibility, controlling the direction of technological development at the macro level; enterprises should assume design responsibility, staying alert to the negative values that may be embedded during model training; universities should assume teaching and research responsibility, cultivating responsible engineers; the public should assume supervisory responsibility, actively exercising its duty of oversight toward governments, enterprises, universities, and other groups; and "artificial intelligence entities" should bear "technical responsibility", remaining good technologies, which requires good interaction between designers and users.

"To whom" corresponds to the direction of responsibility. Each responsible subject should first be responsible for its own actions and should also be responsible to the other responsible subjects, thereby forming an organic, closed-loop system of responsibility.

"For what" corresponds to the content of responsibility. For example, the responsibility of enterprises is to ensure the safety of generative artificial intelligence applications, avoid risks, and eliminate hazards. The content of responsibility differs across responsible subjects.

"How to be responsible" corresponds to the methodology of responsibility, that is, how the content of responsibility is realized. Responsibility ethics runs through motivation, behavior, and results, so responsibility methodologies are needed at the levels of consciousness, institutions, and behavior.

"How to take responsibility well" corresponds to the evaluative dimension of responsibility. Whether an effective evaluation mechanism for the ethics and governance of generative artificial intelligence can be built bears directly on the final effect of the practice of responsibility ethics.
The effectiveness of ethical responses depends on evaluation and adjustment by the multiple parties involved. In 2019, the EU High-Level Expert Group on Artificial Intelligence summarized seven key requirements for trustworthy intelligent technologies: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.② First, a consensus on the evaluation system must be reached through multi-party consultation; only with agreed standards can evaluation be targeted. Second, the ethical evaluation of generative artificial intelligence should not stop at after-the-fact assessment; evaluation should be introduced into the research and development process, with the community of technical experts and humanities scholars following the progress of artificial intelligence projects in a timely manner, identifying and solving problems as early as possible, and handling emerging problems through technical governance. Finally, government departments should introduce corresponding governance measures to support ethical assessment, including laws, regulations, and codes of conduct. The "Opinions on Strengthening the Governance of Science and Technology Ethics", the "Interim Measures for the Management of Generative Artificial Intelligence Services", and the "Measures for the Review of Science and Technology Ethics (Trial)" promulgated in China can all serve as means of ethical assessment and adjustment.
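As one hedged illustration of how an agreed evaluation system might be operationalized, the Python sketch below encodes the EU expert group's seven requirements as a scored checklist; the scores and the threshold are invented for the example.

EU_REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def assess(scores, threshold=0.7):
    # Return the requirements falling below the consensus threshold.
    return [r for r in EU_REQUIREMENTS if scores.get(r, 0.0) < threshold]

example = {r: 0.8 for r in EU_REQUIREMENTS}
example["transparency"] = 0.5      # flagged for follow-up governance
print(assess(example))             # -> ['transparency']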
Framework ideas for generative artificial intelligence governance
Clarification of public opinion - prevent the technology from being demonized. Although disorder, unemployment, and loss of control are the main concerns raised about generative artificial intelligence, technological development is a prerequisite of governance. In particular, we must avoid demonizing generative artificial intelligence and prevent a one-sided public opinion from forming and hindering technological development. We should uphold correct guidance of public opinion and use the consensus views of experts to communicate actively with the public, clarifying, discussing, and solving problems.
System balance - properly handle the relationship between technological innovation and regulated governance. Without technological development there is no technology to govern. There should therefore be a certain tolerance for the uncertainty of emerging technologies, and not too many obstacles should be erected during a technology's rapid-growth stage. The global deployment of the technology should be deepened, and thresholds should not be set so high as to harm industrial development. The evolution of large models depends on a flow of data from large amounts of user feedback, which drives self-improvement; if the governance threshold is set too high, some enterprises' large-model applications will be kept from going online, hindering technological progress.
Key breakthroughs - actively develop the ethical alignment of artificial intelligence. Ethical alignment is an important future direction for generative artificial intelligence: it can make the outputs of artificial intelligence more consistent with human values and bring the complementary capabilities of humans and machines into play in human-machine collaboration. In-depth research is needed on theories such as technocracy, responsible innovation, value-sensitive design, moral materialization, the human-machine relationship, and the subject status of intelligent agents, combining technology and theory to achieve breakthroughs in ethical alignment.
Institutional tools - establish a model evaluation system and build a risk assessment platform. A large-model evaluation system should be established, evaluations conducted, and launch standards formulated on the basis of evaluation data. Because artificial intelligence works on probabilistic models, it cannot achieve 100% accuracy; we can therefore establish large-model release standards through prioritized, staged evaluation plans, build a fault-tolerance mechanism, and provide time for correction and improvement to ensure healthy operation. As open sourcing deepens and the scope of use expands, security vulnerabilities and risks will inevitably increase, making security risk assessment especially important. To this end, a risk assessment platform should be established to promote the credibility and reliability of large models.
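One possible shape for such a launch standard is sketched below; every dimension name, threshold, and tolerance margin is invented for illustration. The point is the fault-tolerance mechanism just mentioned: clear failures block release, while near misses enter a correction-and-retest cycle rather than being rejected outright.

THRESHOLDS = {"safety": 0.95, "factuality": 0.90, "robustness": 0.85}
TOLERANCE = 0.03   # near-miss margin -> correct and retest, don't block

def release_gate(scores):
    failed = [k for k, t in THRESHOLDS.items() if scores.get(k, 0.0) < t]
    if not failed:
        return "release", []
    near_miss = all(
        scores.get(k, 0.0) >= THRESHOLDS[k] - TOLERANCE for k in failed
    )
    return ("correct-and-retest" if near_miss else "block"), failed

print(release_gate({"safety": 0.96, "factuality": 0.89, "robustness": 0.9}))
# -> ('correct-and-retest', ['factuality'])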
International governance - the digital, decentralized, open-source, and networked characteristics of artificial intelligence mean that its impact can hardly be confined within geographical and national boundaries. The content-security, value-conflict, loss-of-control, and human-machine-relationship issues raised by generative artificial intelligence are to a large extent global challenges, even common risks for humankind, that no single country can handle alone. They require coordination, collaboration, and cooperation among different stakeholders and countries, making it crucial to establish a global governance framework, governance mechanisms, and related rules.
Conclusion
The greatest threat that the development of generative artificial intelligence poses to humankind is neither unemployment nor a power so great that it could eliminate humanity, but its challenge to the governance systems and traditional conceptual frameworks, values, development concepts, employment concepts, wealth concepts, and distribution systems that humans have formed on the basis of industrial civilization. When the environment changes too fast and the inherited conceptual framework and value system cannot keep up with the ongoing changes, that is the real risk everyone should face squarely. Intelligent technology, intelligent industry, and intelligent society are forming a new "ecology". As Floridi put it, the fourth revolution, negatively, concerns our loss of "uniqueness" (we are no longer at the center of the infosphere) and, positively, concerns the new ways in which we understand ourselves as informational organisms. To interpret ourselves as informational organisms is not to invoke some broad "extrapsychic" phenomenon but to recognize how deeply we integrate ourselves into everyday technology; this new form also differs from genetically modified humans who would master their genetic information and hence their future embodiments, a posthumanism that remains a futuristic prospect both technically (in safety and feasibility) and morally (in acceptability).
In short, as human living standards continue to improve, we are no longer preoccupied with the traditional questions of whether and how to develop, but should actively face the new question of how to develop better and more valuably.
Generative artificial intelligence is a product of this era. We should also keep pace with the times, change our thinking, explore new ethical problems, respond to new governance challenges under the ever-changing situation, and realize the good creation, good use and good governance of generative artificial intelligence.
① Don Ihde's postphenomenology divides the "human-technology relationship" into the embodiment relation, hermeneutic relation, alterity relation, and background relation. Verbeek developed this theory, adding the cyborg relation and the composite relation.
② See Ethics Guidelines for Trustworthy AI, released by the EU High-Level Expert Group on Artificial Intelligence on April 8, 2019: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.