The Ethical Stance And Governance Of Generative Artificial Intelligence Applications: Taking ChatGPT As An Example
Ethical issues bear on the future technological direction, rule-making, and public acceptance of artificial intelligence; they are the primary issues to be resolved in the development of generative artificial intelligence. Generative artificial intelligence, as represented by ChatGPT, has powerful language understanding and text generation capabilities, but it only assimilates digital code from the virtual world and remains far from interacting with the objective world. We should be wary of the ethical dilemmas that shake the status of the human subject, clarify the humanistic stance in the application of generative artificial intelligence, construct ethical rules for its governance, and, starting from the organizational mechanisms and normative mechanisms of artificial intelligence ethical governance, seek appropriate ethical solutions for a society in which humans and machines coexist.
At present, international organizations focus mainly on advocacy-style ethical norms, which aim to build consensus on artificial intelligence ethics and are largely declarative in character. After leading the breakthrough in generative artificial intelligence technology, the United States has paid greater attention to its ethical issues than before. Multiple documents propose ensuring that artificial intelligence is ethical and trustworthy and that citizens' basic rights, such as personal information, privacy, fairness, and freedom, are protected. This is reflected in the formulation of federal policies and legal frameworks, and an ethical governance model that advocates and regulates in parallel is taking shape. In regulating generative artificial intelligence, the United Kingdom focuses on maintaining market order and protecting the legitimate rights and interests of consumers, emphasizing the maintenance of basic ethical order and the protection of basic rights. China is in the early stages of constructing ethical standards for artificial intelligence and is exploring a balanced approach to safe development and collaborative participation. Legislation such as the "Interim Measures for the Management of Generative Artificial Intelligence Services" and the "Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations" sets out clear requirements that artificial intelligence should respect social morality and ethics, adhere to core socialist values, prevent discrimination, and respect the legitimate rights and interests of others. In the future, this framework will need to be improved in terms of organizational structure, procedural design, unification of standards, and assumption of responsibility.
Multiple ethical dilemmas in generative artificial intelligence applications
First, it weakens the value of the human subject. Human beings are endowed with certain inherent values or value priorities; the ethical order revolves around human beings, and the core of ethical construction is that "human beings shape the world." In a society where humans and machines coexist, however, artificial intelligence is becoming an agent with a certain capacity for autonomous action. Through the conclusions it outputs, it can easily analyze, identify, and shape human concepts and cognitive systems, producing the result of "artificial intelligence shaping humans" and dealing a setback to human subjectivity. Furthermore, because of differences in individual abilities, the emergence of an "artificial intelligence divide" may intensify divisions within the community: vulnerable groups will face elimination or become victims of algorithmic bullying, shaking the universal humanistic stance.
Second, it intensifies algorithmic bias and discrimination. For one thing, because big data mirrors human society, it contains deep-rooted biases: artificial intelligence can learn stereotyped associations from data and inherit biases from training data sets, causing some groups to suffer unfair treatment. For another, generative artificial intelligence obtains and analyzes human responses during human-computer interaction, feeding them back for self-reinforcing learning to improve accuracy. If users spread false or inaccurate data, or respond with personal values tinged with bias or discrimination, harmful output will be generated.
Third, over-reliance on the output of the pseudo-environment. Generative artificial intelligence accustoms people to relying on its output to understand the things around them and breeds trust in its results. Transmitted experience then replaces practical experience, and the pseudo-environment becomes an intermediary between people and the world. What the machine ultimately presents to the public is therefore not entirely objective fact about the real world; it may produce a simulated "symbolic reality." Once users accept these simulated facts and redistribute them through mass communication channels, they become a generalized social reality, serving as a standard for human action and a basis for understanding the world.
Fourth, value alignment is difficult. If artificial intelligence cannot understand human intentions, a machine given multiple goals may make wrong choices, and its output may fail to satisfy those intentions. When human and machine values cannot be aligned, machines will pursue goals that humans do not want. If human values cannot be upheld, we may lose control over artificial intelligence, ultimately placing machine applications above humans.
A humanistic ethical stance for generative artificial intelligence applications
Focusing on the coexistence of humans and machines, we urgently need to adhere to a humanistic stance in order to locate the relative positions of the two in the social spectrum. First, clarify the priority of humans over machines while also drawing on the advantages of human-machine collaboration. Second, clarify the subordinate, auxiliary nature of machines, and always regard artificial intelligence as a tool for achieving human goals: its task is to assist rather than replace humans, let alone to manipulate or dominate them, alienating humans into tools. Third, both traditional tools and artificial intelligence are valuable objects or activities of social practice created by humans; they must follow the ethical stance that "people are the end" and cannot change their human-centered character.
The author holds that, on the basis of a humanistic ethical stance, the principles of well-being, dignity, and responsibility should be upheld. The principle of well-being is the fundamental goal of people-centeredness; the principle of dignity is its prerequisite and inevitable requirement; and the principle of responsibility is an important guarantee of its realization. These principles aim to ensure that the development and application of generative artificial intelligence bring positive ethical effects to human society.
Humanistic ethical governance mechanism for generative artificial intelligence applications
The law lags behind technology, so scientific and technological innovation must consider ethics first. While the future risks of artificial intelligence are not yet fully understood and regulatory tools such as risk assessment and cost-benefit analysis are not yet fully effective, China should adhere to a humanistic stance and explore a path of technology for good that suits China's realities.
First, construct human-oriented organizations for artificial intelligence ethical governance. China's artificial intelligence ethical governance organizations should proceed from a humanistic stance, conduct regular, scenario-based assessments of the ethical risks and impacts of generative artificial intelligence, and formulate ethical safety standards, technical specifications, and strategies for risk prevention, control, and response. Ethics institutions should cooperate and coordinate with one another, communicate regularly, divide their labor clearly, and build a collaborative organizational framework for ethical supervision. They should adhere to the precautionary principle, intervene promptly in the ethical risks of generative artificial intelligence, plan its direction and strategy of development, formulate safety standards and norms, and improve the capacity to manage risks. Ethics institutions should also attend to how citizens' basic rights can be effectively protected in generative artificial intelligence applications.
Second, improve humanistic ethical normative mechanisms for artificial intelligence. The first is to clarify the fairness mechanism for technology application: attend to the discrimination and bias arising from generative artificial intelligence applications and improve fairness safeguards; resolve design-ethics issues, broaden the criteria for evaluating the quality of technical systems, and develop algorithms that accord with users' value orientations; form a normative system for the authenticity of generative artificial intelligence applications and promote the interpretability of the application process. The second is to improve the early-warning mechanism for ethical risks: strengthen the safety assurance of artificial intelligence technology, establish intelligent monitoring, early-warning, and control mechanisms for ethical risks, and improve the safety and transparency of generative artificial intelligence. The third is to supplement the accountability mechanism for ethical review. Ethical responsibility for the questions, answers, and information output by generative artificial intelligence should ultimately rest with the subject who plays the actual role in each link of the process. Generative artificial intelligence remains an "impersonal" fitting tool grounded in the analysis of parameters and data; it has no subjective consciousness or emotions and, strictly speaking, no ethical awareness or responsibility. At this stage, therefore, the ethical responsibilities of providers, users, designers, and other parties should be clarified from a humanistic standpoint, a responsibility-bearing mechanism should be implemented, and the responsibility system improved.
A mechanism should also be established to monitor the full life cycle of generative artificial intelligence systems, monitoring and auditing algorithms, data, and system operation, with attention to making generative artificial intelligence applications traceable and auditable. (Professor Feng Zixuan, Southwest University of Political Science and Law)
(Original article published in "Journal of East China University of Political Science and Law" Issue 1, 2024)