AI Ethics

Research On The Prevention Of Ethical Risk Of Artificial Intelligence In Western Academic Circles

The intelligent revolution is profoundly reshaping human production, ways of life, and the development of social civilization. How to address the ethical problems and challenges brought about by the development and application of artificial intelligence technology has become a major issue facing all of humanity in the intelligent era. The international academic community has conducted extensive research on the ethical issues that new technologies may raise and on how to prevent them, and this work deserves our attention.

Views on whether machines can become moral subjects

Artificial intelligence clearly goes beyond the scope of a traditional tool: it can learn, make decisions, and adjust its behavior in response to environmental changes, and it thereby produces ethical consequences. How, then, should the identity and status of artificial intelligence in human society be determined or defined? Can it become a moral or legal subject that bears responsibility for its own behavior or receives incentives? Scholars abroad hold that the discussion of these issues ultimately turns on the questions of self-awareness and free will. From the "Turing Test" proposed by Alan Turing in 1950, to John Searle's "Chinese Room" thought experiment, to Hubert Dreyfus's What Computers Still Can't Do: A Critique of Artificial Reason, the early explorers of artificial intelligence generally held, on the basis of the nature of artificial intelligence and its differences from human intelligence, that artificial intelligence does not possess human-like consciousness.

In recent years, with the exponential development of new technologies represented by artificial intelligence, whether autonomous machines can become subjects has become an unavoidable topic. Most scholars believe that machine intelligence relies on algorithmic programs and can hardly derive self-awareness and free will as humans do, so it is difficult for machines to become moral subjects. On this view, the human mind comprises two parts: a computational consciousness that understands, grasps, and transforms the object world through formal logic and the natural law of cause and effect, and a social-emotional consciousness that confirms the essence and meaning of the subject world through object-oriented activity and communicative activity. The autonomous consciousness displayed by machines is merely a simulation of human computational intelligence. For example, Margaret Boden holds that it is difficult for humans to design general artificial intelligence, because artificial intelligence focuses only on intelligent rationality and, lacking wisdom, ignores social and emotional intelligence. Slavoj Žižek stresses that the computer should not be imagined as a model of the human brain; rather, the human brain should be imagined as a "flesh-and-blood computer", and the human brain cannot be fully reduced to computer-built models. Some futurists, however, believe that machines will in the future derive a consciousness different from humans' and an intelligence surpassing humans'; once superintelligent AI appears, it will be difficult for humans to communicate with such machines and make them abide by human moral rules. For example, Ray Kurzweil put forward the "technological singularity" theory in 2005, arguing that human subjectivity will in the future be challenged by machines. Michael Anderson and Susan Leigh Anderson compiled the book Machine Ethics, which opened up a research approach to machine ethics that treats the machine as the responsible agent.
With the exponential development of artificial intelligence technology, whether machines can in the future break through the constraints of causal law and derive active consciousness is a question that theory must continue to track.

The debate among Western scholars over whether machines can become moral subjects has prompted us to re-examine, under the trend of artificial intelligence, the questions of "what is a human being", "how should we treat human beings", and "what are the essence and limits of technology".

Moral and Ethical Risks Arising from Artificial Intelligence

Artificial intelligence technology has been closely bound up with humanity from its inception. As early as 1950, Norbert Wiener, the American founder of cybernetics, held that whether robot technology would do good or evil was deeply uncertain, and that robots taking over human work might cause the human brain to "depreciate". Western scholars have since conducted in-depth and systematic research into the moral and ethical risks that artificial intelligence may give rise to.

First, discussion of how artificial intelligence technology may put workers out of jobs and create new social injustice and technological divides. Many Western scholars believe that artificial intelligence brings risks such as large-scale unemployment and a widening gap between rich and poor. For example, Yuval Noah Harari argues that as technology evolves, most people will become a "useless class" because their work will be replaced by artificial intelligence, while the few elites who master technology and resources will evolve into superhumans, entrenching and polarizing social classes. On how to better protect people's rights to survival and development, scholars such as James have proposed establishing a comprehensive basic income system through taxation and public ownership of wealth to cope with the unemployment and social injustice caused by intelligent technology.

Second, debate over the ethical risks arising from the uncertainty of artificial intelligence technology. "Who should be responsible for machine behavior" has become an increasingly pressing question of responsibility ethics. Some scholars advocate that designers, manufacturers, programmers, and users should control and supervise the social consequences of robots, emphasizing the ethical responsibility of robotics engineers. Others advocate designing algorithms by way of moral embedding, making machines moral agents with built-in ethical systems so as to forestall ethical risks at the design and application stages. In 2009, the American scholars Wendell Wallach and Colin Allen offered a relatively systematic analysis of how to design moral machines in their co-authored book Moral Machines: Teaching Robots Right from Wrong. Moral algorithms, however, face value choices and value conflicts: human society contains many moral norms and ethical systems that are difficult to reconcile with one another, so which moral norms should be used to design algorithms becomes a problem. Moreover, designers' own ethical demands are not monolithic, and how to make value choices when designing moral machines is likewise a problem. On this basis, scholars such as Joanna Bryson have discussed how to rank values, resolve value conflicts, and seek universal ethical consensus as a theoretical framework for designing moral machines; they generally take the machine's being harmless and friendly to humans as the primary ethical principle.

Third, concerns that artificial intelligence technology is beginning to break through the boundaries of traditional human morality and ethics. Beyond the issues above, some scholars have voiced the following concerns: excessive human dependence on intelligent technology can easily lead to technological hegemony and technological enslavement, creating risks and crises of social uncertainty; the application of care robots risks objectifying the elderly and young children and weakening or infringing upon their dignity, freedom, and privacy; and the application of autonomous combat robots risks breaking the laws and norms of the international community, increasing the likelihood of regional conflicts and wars and the risk of mass destruction.

Ideas for preventing ethical risk of artificial intelligence

In response to the various ethical issues that artificial intelligence may cause, Western scholars generally hold that its ethical risks should be prevented and avoided through multiple channels, such as value-oriented design of machine ethics, industry standard-setting, and legislation.

The most influential design ideas in the international academic community are top-down moral coding and bottom-up moral learning. The former advocates embedding the moral rules of human society into algorithms programmatically, so that machines make moral decisions through calculation and reasoning. The latter holds that human moral behavior is learned in concrete moral situations and in interaction with others, so no pre-coding is needed: machines become moral actors by observing moral cases and by interacting with and learning from other moral agents.

Both designs have their limitations. The former faces the problems of which ethical values to embed and how to handle complex moral scenarios. The latter lacks a guiding ethical system: with machine learning alone, what results will come from feeding the machine morally sensitive data remains an open problem. Mark argues that current machine learning is essentially a process of statistical processing, so it is difficult for machines to be trained into full moral actors; he advocates designing moral machines from the perspective of human-machine interaction.

A second channel sets countermeasures against the ethical risks of artificial intelligence through industry standards. In recent years, the European Robotics Research Network (EURON), NASA, the U.S. National Science Foundation, and the South Korean Ministry of Trade, Industry and Energy have guided research on artificial intelligence ethics at the national level. Industry bodies have acted as well: the British Standards Institution (BSI) has issued a Guide to the Ethical Design and Application of Robots and Robotic Systems, and the Institute of Electrical and Electronics Engineers (IEEE) has proposed its Ethically Aligned Design (EAD) specification to address algorithmic discrimination and social injustice caused by cognitive biases in design.

A third channel resolves the ethical risks of artificial intelligence in practice through institutional norms. In April 2021, the European Commission put forward a legislative proposal for an Artificial Intelligence Act, which distinguishes unacceptable risk, high risk, limited risk, and low risk according to dimensions such as the functions and uses of artificial intelligence systems, and proposes a clear and specific system of corresponding tiered governance and supervision.

The international academic community's exploration of the feasibility of "ethically designed" moral machines provides methodological guidance for our development and design of trustworthy artificial intelligence. Amid the unpredictable present and future of the epidemic, sharing a common destiny for humankind is the general trend: artificial intelligence is accelerating its deep integration with every field and turning the world into a pan-intelligent global village. It is therefore necessary, urgent, and possible to set aside disputes, seek global consensus, design moral machines on the basis of the greatest common denominator, and jointly resist uncertain future risks. These scholars, however, tend to focus their research on how to avoid technological risks while neglecting the polarizing ethical risks posed by the absence of humanistic values in technological applications. To solve these problems, we should always adhere to a people-centered path of scientific and technological development and always uphold human dignity and care for human values in the development and application of technology.

(Author’s unit: School of Marxism, South China University of Technology; Institute of International Studies, Guangdong Academy of Social Sciences)