Research on Ethical Risk Prevention of Artificial Intelligence in Western Academic Circles
The intelligent revolution is profoundly affecting and shaping human production, ways of life, and the development of social civilization. How to deal with the ethical issues and challenges brought about by the development and application of artificial intelligence technology has become a major question that all humankind must face in the era of intelligence. International academic circles have conducted many studies on the ethical issues that new technologies may raise and on how to prevent them, and these deserve our attention.
Views on whether machines can become moral agents
Artificial intelligence clearly goes beyond the scope of traditional tools: it can learn, make decisions, and adjust its behavior in response to environmental changes, thereby producing corresponding ethical consequences. How, then, should the identity and status of artificial intelligence in human society be determined or defined? Can it become a moral or legal subject, bear responsibility for its own actions, or receive incentives? Foreign academic circles believe that discussion of these questions must ultimately return to the issues of self-awareness and free will. From Turing's (A. M.) idea of the "Turing Test" in 1950, to Searle's (J. R.) "Chinese Room" thought experiment, to Dreyfus's (H.) "What Computers Still Can't Do: A Critique of Artificial Reason", the early pioneers of artificial intelligence, reasoning from the nature of artificial intelligence and its difference from human intelligence, generally held that artificial intelligence does not possess human-like consciousness.

In recent years, with the exponential development of new technologies represented by artificial intelligence, whether autonomous machines can become moral subjects has become an unavoidable topic. Most scholars believe that machine intelligence relies on algorithmic programs and can hardly derive self-awareness and free will as humans do, making it difficult for machines to become moral subjects. On this view, the human mind is composed of two parts: a computational consciousness that understands, grasps, and transforms the object world through formal logic and natural causal laws, and a social-emotional consciousness that confirms the essence and meaning of the subjective world through object-directed activity and communication. The autonomous consciousness displayed by machines is only a simulation of human computational intelligence. For example, Boden (M.) argues that it is difficult for humans to design general artificial intelligence because artificial intelligence focuses only on intelligent rationality while ignoring social-emotional intelligence, and so lacks wisdom. Žižek emphasized that the computer should not be imagined as a model of the human brain; rather, the human brain should be imagined as a "computer constructed of flesh and blood", and the human brain cannot be completely reduced to a computer's structural model. Some futurists, however, believe that machines will in the future derive a consciousness different from humans' and an intelligence beyond humans'. Once super artificial intelligence appears, it will be difficult for humans to communicate with it and make it comply with human moral rules. For example, Kurzweil (Ray) proposed the theory of the "technological singularity" in 2005, holding that human subjectivity will be challenged by machines in the future. Anderson (M.) and Anderson (S. L.) edited the book "Machine Ethics", which opened up a research approach to machine ethics that treats machines as responsible subjects. With the exponential development of artificial intelligence technology, whether machines can break through the limitations of the law of causality and derive active consciousness in the future requires continued theoretical follow-up.
The debate among Western scholars over whether machines can become moral subjects prompts us to refocus on and examine questions such as "what is a human being", "how should humans be treated", and "what are the nature and limits of technology" amid the rise of artificial intelligence.
Moral and ethical risks arising from artificial intelligence
Artificial intelligence technology has been inextricably linked to humans since the beginning of its development. As early as 1950, Wiener (Norbert), the founder of American cybernetics, held that whether robotic technology would be used for good or evil was highly uncertain, and that robots replacing human labor might cause the "devaluation" of the human brain. Western scholars have conducted relatively in-depth and systematic research on the moral and ethical risks that artificial intelligence may produce.
First, discussions of worker unemployment caused by artificial intelligence technology and of the formation of new social injustices and technological divides. Many Western scholars believe that artificial intelligence will bring risks such as mass unemployment and a widening gap between rich and poor. For example, Harari (Yuval) believes that as technology evolves, most people will be reduced to a "useless class" because their jobs will be replaced by artificial intelligence; only a small elite with technology and resources will evolve into superhumans, and social classes will become solidified and polarized. As for how to better protect people's rights to survival and development, scholars such as Hughes (James) have proposed establishing a universal basic income system, funded through taxation and the public ownership of wealth, to deal with the unemployment and social injustice caused by intelligent technology.
Second, debates over the ethical risks arising from the uncertainty of artificial intelligence technology. "Who should be responsible for machine behavior" has become an increasingly serious ethical question of responsibility. Some scholars advocate that designers, manufacturers, programmers, and users should control and supervise the social consequences of robots, emphasizing the ethical responsibilities of robotics engineers. Others advocate a morally embedded approach to algorithm design, so that machines become moral agents with built-in ethical systems, preventing ethical risks from arising at the design and application stages of artificial intelligence. In 2009, the American scholars Wallach (Wendell) and Allen (Colin) offered a systematic analysis of how to design moral machines in their co-authored book "Moral Machines: Teaching Robots Right from Wrong". However, ethical algorithms face value choices and conflicts: human society contains many kinds of moral norms and ethical principles, the various systems are difficult to reconcile, and which moral norms should be used to design algorithms becomes a problem. Moreover, designers' ethical demands are not uniform, and how to make value trade-offs when designing a moral machine is also a problem. On this basis, scholars such as Bryson (J.) have discussed how to rank values, resolve value conflicts, and seek universal ethical consensus as a theoretical framework for designing moral machines; they generally take the principle that machines must be harmless and friendly to humans as the primary ethical principle.
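To make the idea of ranking values concrete, the sketch below resolves a conflict between candidate actions by lexicographic priority over an ordered list of values, with harmlessness ranked first. It is a minimal illustration in Python: the value names, candidate actions, and scores are hypothetical, not drawn from Bryson's actual framework, and real moral conflicts resist such simple formalization.

```python
# Toy sketch: resolving a value conflict by a fixed ranking of values.
# All value names, actions, and scores below are hypothetical illustrations.

# Values ordered from highest to lowest priority; harmlessness comes first,
# echoing the principle that machines must above all not harm humans.
VALUE_PRIORITY = ["avoid_harm", "honesty", "helpfulness"]

# Candidate actions scored against each value (1 = satisfies, 0 = violates).
CANDIDATE_ACTIONS = {
    "warn_patient":  {"avoid_harm": 1, "honesty": 1, "helpfulness": 0},
    "stay_silent":   {"avoid_harm": 1, "honesty": 0, "helpfulness": 1},
    "false_comfort": {"avoid_harm": 0, "honesty": 0, "helpfulness": 1},
}

def choose_action(actions: dict) -> str:
    """Pick the action whose scores are best in lexicographic (ranked) order."""
    return max(actions, key=lambda a: tuple(actions[a][v] for v in VALUE_PRIORITY))

print(choose_action(CANDIDATE_ACTIONS))
# 'warn_patient': the two harmless options tie, and honesty breaks the tie.
```

The design choice worth noting is that a lexicographic ranking never trades a higher-ranked value away for any amount of a lower-ranked one, which is one simple way of encoding "harmlessness first"; it also illustrates why choosing and ordering the values remains the genuinely hard problem.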
Third, concerns that artificial intelligence technology is beginning to break through the boundaries of traditional human morality and ethics. Beyond the issues above, some scholars have raised the following concerns: people's over-reliance on intelligent technology can easily lead to technological hegemony and technological enslavement, producing risks and crises of social uncertainty; the application of care robots risks objectifying the elderly and young children and weakening or infringing upon their dignity, freedom, and privacy; and the application of autonomous combat robots risks breaking the laws and norms of international society, increasing the possibility of regional conflicts and wars, and causing mass destruction.
Ideas for preventing ethical risks in artificial intelligence
Regarding the various ethical issues that artificial intelligence may raise, Western scholars generally believe that its ethical risks should be prevented and avoided through multiple approaches, such as ethically value-oriented machine design, industry standard-setting, and legislation.
The machine design ideas of top-down moral coding and bottom-up moral learning have been highly influential in the international academic community. The former advocates embedding the moral rules of human society into algorithms as program code, so that machines make moral decisions through computation and reasoning. The latter holds that human moral behavior is learned in specific moral situations and in interaction with others, so no pre-coding is needed: machines become moral actors by observing moral cases and learning through interaction with other moral agents.
Both designs have their limitations. The former faces the problem of choosing which ethical values to embed and of how to handle complex moral scenarios. The latter lacks an ethical guidance system: relying solely on machine learning, it is uncertain what results feeding morally sensitive data into the machine will produce. Coeckelbergh (Mark) argues that current machine learning is essentially a process of statistical processing, so it is difficult to train machines into complete moral actors; he advocates designing moral machines from the perspective of human-machine interaction.
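To make the contrast concrete, the following minimal Python sketch places the two design ideas side by side. The rule set, feature encoding, and labeled "moral cases" are toy assumptions invented for illustration; neither fragment comes anywhere near an actual moral agent.

```python
# Toy contrast between the two design approaches. All rules, features,
# and labels below are hypothetical illustrations.

# --- Top-down: moral rules pre-coded into the program ---
FORBIDDEN_ACTIONS = {"deceive_user", "harm_human"}  # hypothetical rule set

def top_down_permits(action: str) -> bool:
    """Decide by checking the action against explicitly embedded rules."""
    return action not in FORBIDDEN_ACTIONS

# --- Bottom-up: moral judgment learned from observed cases ---
# Each case: a feature vector [causes_harm, is_deceptive] plus a human
# judgment (1 = permissible, 0 = impermissible).
TRAINING_CASES = [
    ([0, 0], 1),
    ([1, 0], 0),
    ([0, 1], 0),
    ([1, 1], 0),
]

def bottom_up_permits(features):
    """Decide by imitating the most similar judged case (1-nearest neighbor)."""
    nearest_label = min(
        TRAINING_CASES,
        key=lambda case: sum((a - b) ** 2 for a, b in zip(case[0], features)),
    )[1]
    return nearest_label == 1

print(top_down_permits("deceive_user"))  # False: an embedded rule forbids it
print(bottom_up_permits([0, 1]))         # False: resembles a case judged impermissible
```

The sketch also makes the limitations above visible: someone must still choose the top-down rule set, while the bottom-up learner simply reproduces whatever pattern its labeled cases happen to contain.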
Countermeasures that prevent and avoid the ethical risks of artificial intelligence through industry standards. In recent years, the European Robotics Research Network (EURON), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Korean Ministry of Trade, Industry and Energy have guided artificial intelligence ethics research at the national level. Industry bodies have acted as well: the British Standards Institution (BSI) has issued its "Guide to the Ethical Design and Application of Robots and Robotic Systems", and the Institute of Electrical and Electronics Engineers (IEEE) has proposed its "Ethically Aligned Design" (EAD) specifications to address issues such as algorithmic discrimination and social injustice caused by designers' cognitive biases.
Approaches that resolve the ethical risks of artificial intelligence through institutional norms. In April 2021, the European Commission put forward its legislative proposal for an "Artificial Intelligence Act", which classifies artificial intelligence systems by function, use, and other factors into four levels of unacceptable risk, high risk, limited risk, and minimal risk, and proposes a clear and specific tiered system of governance and supervision for each.
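As a rough illustration of tiered governance of this kind, the sketch below maps example uses to risk levels and attaches obligations to each level. The example systems and the wording of the obligations are simplified assumptions made for illustration, not quotations from the proposal's legal text.

```python
# Illustrative sketch of a four-tier risk classification in the spirit of the
# EU proposal. Example systems and obligations are simplified assumptions.

# Hypothetical mapping from example AI uses to risk tiers.
RISK_TIER = {
    "social_scoring_by_governments": "unacceptable",  # banned outright
    "credit_scoring": "high",                # strict requirements before use
    "customer_service_chatbot": "limited",   # must disclose that it is an AI
    "spam_filter": "minimal",                # essentially unregulated
}

# Hypothetical obligations attached to each tier.
OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, risk management, human oversight",
    "limited": "transparency obligations",
    "minimal": "no specific obligations",
}

def governance_for(use_case: str) -> str:
    """Look up the tier for a use case and return its obligations."""
    tier = RISK_TIER.get(use_case, "unclassified")
    return OBLIGATIONS.get(tier, "requires case-by-case assessment")

print(governance_for("credit_scoring"))  # high-risk obligations apply
```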
The international academic community's exploration of the feasibility of "ethically designed" moral machines provides methodological guidance for developing and designing trustworthy artificial intelligence. At present, and in a future in which the pandemic remains unpredictable, humanity shares a common destiny, and the accelerating, deep integration of artificial intelligence into every field, forming a pan-intelligent global village, is the general trend. It is therefore necessary, urgent, and possible to set aside disputes, seek global consensus, design moral machines on the greatest common denominator, and jointly resist future uncertain risks. However, these scholars have focused their research on how to avoid technological risks while neglecting the polarizing ethical risks caused by the absence of humanistic values in technological application. To solve these problems, we should always adhere to a people-centered law of scientific and technological development, and always regard safeguarding human dignity and caring for human values as the fundamental goal and premise of the development and application of technology.
(Author’s affiliation: School of Marxism, South China University of Technology; Institute of International Studies, Guangdong Academy of Social Sciences)