AI Ethics

Guo Rui: The Ultimate Dilemma Of Artificial Intelligence Ethics

Article | Guo Rui (School of Law, Renmin University of China)

Source | Sanhui Xuefang

In the field of artificial intelligence, the crisis of the created order is the fundamental reason for discussing ethical issues. The most immediate case for an ethics of artificial intelligence comes from the human fear of things that are unknown and powerful.

Knowledge of good and evil is ultimately attributed either to the subject or to the objective world, and these two attributions define the ethical paths of deontology and consequentialism respectively. Take the deontology represented by Kant's ethics as an example: it makes the freedom of the subject the final ground of decision, and people must follow a series of principles that come from the subject itself. Consequentialists (utilitarians) do the opposite: the results that human actions produce in the objective world become the ultimate criterion for judging human behavior. [1]

Is there a path beyond the weighing of interests and abstract principles? Perhaps we should adopt a genuine humility and respect the created order of this world. The tradition of ancient Greek ethics rests on the premise that human order is part of the cosmic order. Perhaps we should recognize that our existing principles may not be sufficient to legislate for the world. Utilitarianism, and the economic analysis built upon it, has been criticized in recent years; this too tells us that the rational conclusion is to acknowledge humbly that we cannot predict the changes our actions bring about in the objective world and cannot accurately measure gains and losses. As Proverbs says: reverence for the Creator is the beginning of wisdom.

In the field of artificial intelligence, the crisis of the created order is the fundamental reason for discussing ethical issues. The most immediate case for an ethics of artificial intelligence comes from the human fear of things that are unknown and powerful. The image of Frankenstein's monster, a creature born of thunder and lightning fused with machinery, is an artistic expression of this fear. People feel fear before forces that are powerful but not good. We are afraid that what we create will ultimately bring about our destruction. With the atomic bomb, humanity gained for the first time the power to destroy the entire race; in the era of cold weapons, even large-scale wars could not wipe out humanity as a whole. The emergence of the atomic bomb played an enormous role in human history and also prompted an ethical reflection on technology. Will artificial intelligence turn out like Frankenstein's monster? Will we create a technology that eventually destroys us? Standing at the threshold of a technological revolution, what mankind fears is the subversion of order: Adam and Eve in the Garden of Eden were tempted by the devil with the fruit of the tree of the knowledge of good and evil, and God's order was thereby subverted. If humanity's fear of artificial intelligence comes not from machines making decisions unfavorable to humans, but from machines replacing human decision-making itself, then we may see that the ultimate dilemma of artificial intelligence ethics is the subversion of the created order. Today, facing the possibility of being betrayed by the technology it has created, humanity must rethink a series of problems: personal self-understanding, interpersonal relationships, changes in personal and economic organization, and the relationship between the individual and the state.

"Frankenstein" movie poster

Based on this reflection on the created order, we see two major problems in artificial intelligence ethics: (1) the causal connection problem: artificial intelligence is entrusted with the power to make decisions about human affairs, but its ability to judge the ethical significance of the results of those decisions is insufficient; (2) the ultimate criterion problem: lacking an ultimate moral code to guide its work, artificial intelligence has difficulty weighing conflicting decisions against one another. The fundamental cause of both difficulties is the limitation of artificial intelligence relative to humans.

The causal connection problem arises from the limits of artificial intelligence's grasp of the ethical significance of the consequences of its decisions. In other words, it is difficult for artificial intelligence to establish a causal connection between a given decision and its wider social impact, and under the conditions of superintelligent AI this could lead to catastrophe for human beings. Tegmark offers the example of humans and ants: people do not deliberately set out to step on ants, yet they often do, because they do not care about the interests of ants. Applied to the relationship between humans and artificial intelligence, the causal connection problem is that artificial intelligence does not "care" about human interests. Although the goal of an artificial intelligence's decision may not be to harm people directly, it can easily cause indirect damage, and that damage can be fatal. Seeing the world from the ant's perspective resembles the fear of machines after the Industrial Revolution: a worker operates a machine that cannot be stopped in time once started, and if the worker's hand is accidentally drawn into the machine running at high speed, the result is injury. People have a deep-rooted fear of machines: a machine simply repeats its instructions, and once started it cannot stop, producing unpredictable consequences (unpredictable whether because of the designer's negligence or because of the limits of human foresight), such as harm to human limbs. Here we see that what distinguishes machines from humans is that they lack the ability to make ethical judgments at any moment, and by the time people intervene it is too late.

In the present day, artificial intelligence is usually applied to a specific problem, can only make decisions on the basis of limited existing data, and often cannot understand the broader social and ethical context as humans do. For example, a person might give an instruction to obtain food, and the artificial intelligence might carry it out by killing a pet. Artificial intelligence cannot fully understand the ethical significance of the result, and so it executes instructions in ways people did not expect. It is therefore entirely understandable that we fear artificial intelligence's lack of awareness of the ethical significance of the consequences of its decisions. When artificial intelligence technology (machine learning) itself lacks transparency (the black-box problem), we have even more reason to worry about the absence of ethical judgment over decision results. Because of algorithmic reasons (as in machine learning) and the limits of computing power in specific situations, it is often impossible to trace the specific mechanism by which the machine reached a decision. This inability to trace back limits our capacity to predict consequences in advance and make corrections afterwards, which leads us to hesitate over whether to apply artificial intelligence technology at all.

The ultimate criterion problem arises because artificial intelligence has no known ethical code to guide its decisions, which makes it difficult to compare different decision outcomes. When artificial intelligence becomes powerful enough, it has the potential to become a new "God": AI becomes a participant in and influence on every human decision. Yet the dilemma described above means that the "God" we have created is unable to care for the world. The trolley problem vividly reflects the dilemma of machine decision-making: in an accident, whatever decision an autonomous vehicle makes, there is no existing human consensus behind it, yet the result of its decision will inevitably determine the life and death of many parties (such as the passengers in the car and the pedestrians on the road). This dilemma frightens us: while developing a powerful technology, we cannot find an ultimate scheme for controlling it, and this places an unbearable responsibility on those who control the technology.

These two problems give rise to two concerns about the practical impact of artificial intelligence technology: (1) artificial intelligence technology may worsen existing problems (such as discrimination and wealth inequality) by expanding their scope and deepening their degree; (2) artificial intelligence technology may bring new problems (such as making some people more intelligent than others and thereby creating new sources of inequality). After the Industrial Revolution, when machines entered human life, people were also worried. But the worries brought by artificial intelligence technology are far greater than those of the past. For the first time, a technology has made machines not only stronger than humans but also capable of making decisions independently, without relying on humans. If the problem were merely that people use machines to do bad things, we could guard against it through education and accountability; but if machines may "decide for themselves" to do bad things, how are we to respond?

Faced with the ultimate dilemma of artificial intelligence ethics, if we are truly to seek answers with humility, we should return to the background of historical and social change to find a new starting point. Whenever people begin to talk about ethics, the background is always a fear of ethical collapse in society. The ancient Greek historian Thucydides described the historical background against which Greek philosophy began to explore ethics. In the second year of the Peloponnesian War, a great plague broke out in Athens, and it had a profound effect on the morals of the Athenians. Thucydides wrote:

Supplications in the temples, consultations of oracles, and similar measures were of no help. In the end people were so overwhelmed by their suffering that they had no strength left to devote to such things... They died with no one to attend them; indeed, in many houses the occupants passed away quietly without anyone noticing... The bodies of the dead lay upon one another, and half-dead patients could be seen staggering in the streets or gathering around the springs in their thirst. The temples were full of corpses; people died in the temples themselves. Because the disaster was so overwhelming, people who did not know what would happen to them next became indifferent to every rule of religion and law... Neither the fear of the gods nor the laws of men had any restraining power. As for the gods, it seemed to make no difference whether one worshipped them or not, when people saw good fortune and misfortune fall upon people indiscriminately. [2]

Faced with the daily threat of death, many Athenians chose to stop obeying tradition and custom and began to indulge in pleasure all day long, for even the rich and the powerful might die at any moment. Honor was no longer attractive to the Athenians, and the law could no longer restrain them. This is what led Greek thinkers to begin asking: what exactly is ethics?

Marcus Aurelius (reigned 161–180 AD)

Similarly, the two great plagues of the Roman Empire were a critical period in the formation of the Western ethical tradition. In 165 AD, during the reign of Marcus Aurelius, a devastating plague swept across the Roman Empire. At its height, caravans and four-wheeled carts continuously carried the bodies out of the city. A century later, Rome suffered another great plague. Against this background, Roman society was quickly influenced by Christian ethical thought, and the final result was that Constantine the Great, following the popular sentiment, converted to Christianity. Cyprian, Bishop of Carthage, wrote:

This plague and disaster, which seems so terrible and deadly, searches out the justice of each and every one and examines how fit and ready the human heart is: whether the sick are properly cared for, whether relatives dutifully look after their kin as they should, whether masters show compassion for their sick slaves, whether physicians do not abandon the afflicted... Although this mortality is of no benefit to anything else, it is of benefit to Christians and to the servants of God: as we learn not to fear death, we begin gladly to seek the path of martyrdom. This is for us an exercise, not a death; it gives the soul the glory of salvation; by despising death, it prepares for the crown... Our brethren who have been freed from this world by the Lord's summons should not be mourned, for we know that they are not lost but have only gone before; in departing they lead the way, as travelers are accustomed to do, and they should not be lamented... And it is on this account that the pagans' accusations against us are unfair and unjust: that we grieve for those who, we ourselves say, are still alive. [3]

In the West, early Christianity's ethical reflection on death, expressed in literature and art, greatly shaped the emphasis in ethics on a transcendent foundation. Christian ethical thinking began the great tradition of "treating human beings as human beings." Similarly, when Charles Taylor traced the emergence of modern social concepts, he discovered the distinctiveness of natural law ideas among many diverse and complex strands of thought. The international jurists Grotius and Pufendorf, who had received little attention in the history of Western thought, became in Taylor's eyes the real beginning of modernity: although the natural law idea they advocated was at first nothing more than a novel defense of the order of royal rule, it soon became a source of thought for criticizing the old order. Taylor's genealogy provides a fresh perspective on how to understand the uniqueness of Western modernity. [4] Only when the idea of natural law establishes personal rights on a transcendent basis can the possibility of modern social life be found. In the Western world, starting from natural law, people came to imagine society as (1) an economy that achieves common benefit through the mutual exchange of goods and services, (2) a public space in which strangers discuss topics of common concern, and (3) a state formed through certain secular practices without needing to be sanctioned by transcendent principles. Natural rights and natural law thus became the ethical cornerstones of Western modernity.

In the context of natural law and natural rights described above, people attribute questions of good and evil, good and bad, either to the subject or to the objective world. This is the process by which the subject was constructed in the making of modernity. [5] Today, as the technological revolution arrives again, many scholars compare artificial intelligence technology to the steam engine and believe it marks the beginning of another great social transformation. Have we discovered a new framework for ethical thinking? What kind of theory will become the new starting point for ethics? What is the "natural law" of the era of artificial intelligence? This book is a preliminary reflection on these questions.

If we look back at the development of human society, we can see clearly that the industrial revolutions and the growth of modern machine industry in the 19th and 20th centuries led people to reflect on the impact of technology on human society. On the one hand, modern technology has helped mankind conquer "external enemies" such as floods and wild beasts, drought and hunger; on the other hand, at the material level, technological development has broken the primitive balance between man and nature and brought environmental and ecological crises that envelop the whole world. At the spiritual level, technology's success in helping humans conquer nature has triggered an over-expansion of instrumental rationality. Instrumental rationality focuses on impersonal logical relationships; it pursues mainly calculative efficiency and rejects the intervention of value considerations. The reflection offered by social theorists, including Marx, is that as instrumental rationality gradually becomes the dominant model of modern society, people themselves gradually lose their subjectivity and become objectified.

Artificial intelligence technology poses an even greater challenge to human subjectivity: it brings people even more thoroughly under the technical rule of algorithmic decision-making. Will this worsen existing problems and create new ones in society? We can already see this in current technological development and application. As artificial intelligence technology develops and its applications expand and deepen in fields and scenarios such as autonomous vehicles, medical care, media, finance, industrial robots, and Internet services, it brings, on the one hand, improvements in efficiency and reductions in cost that can help solve social problems such as unemployment and poverty; on the other hand, the autonomy of artificial intelligence systems gradually replaces human work, and this replacement sometimes not only fails to solve existing problems but worsens existing difficulties and even brings new challenges, raising many questions of social justice, security, and the allocation of responsibility. How to maintain human moral subjectivity while the scope of machine decision-making grows ever wider and deeper is the key question we must answer.

Notes

[1] For criticism of deontology and utilitarianism, see [Germany] Bonhoeffer: Ethics, trans. Hu Qiding, Shanghai Century Publishing Group, 2005 edition, pp. 39–41.

[2] See [Ancient Greece] Thucydides: History of the Peloponnesian War, trans. Xu Songyan and Huang Xianquan, Guangxi Normal University Press, 2004 edition, pp. 137–144.

[3] See [US] Rodney Stark: The Rise of Christianity, Shanghai Ancient Books Publishing House, 2005 edition, Chapter 4.

[4] See [Canada] Charles Taylor: Modern Social Imaginaries, trans. Lin Manhong, Yilin Publishing House, 2014 edition, p. 14.

[5] See [Canada] Charles Taylor: Sources of the Self: The Making of the Modern Identity, trans. Han Zhen et al., Yilin Publishing House, 2012 edition, pp. 34–40.
