AI Ethics

Literature Reading | From Technical Ethics To Bioethics: Paradigm Transformation In Artificial Intelligence Ethics Research

Shu Hongyue, Li Wenlong

1. Artificial Intelligence: From technological progress to life evolution

After ChatGPT was released, hundreds of AI industry leaders and experts jointly issued an open letter warning that AI might destroy human civilization, and hoping that policymakers could reduce the risk of extinction it brings.

How should we view the risks of AI? First, we must distinguish AI as technology from AI as life: the former is a means of survival for other living beings, while the latter is a living body that can survive on its own. Second, we must reconsider the two concepts of machine and life. As technology develops, two trends have become increasingly obvious: artificial things appear more and more like living beings, and life becomes more and more engineered. According to non-reductive physicalism, life is defined by the characteristics that emerge from the organization of inanimate parts.

Since modern times, the similarities among biological, cultural, and technological evolution have gradually attracted the attention of many scholars, including Marx, Huxley, Mach, James, Piaget, Lorenz, Campbell, and Popper. Whether a machine can become a form of life depends on whether it has two characteristics: metabolism (survival) and gene replication (reproduction).

Decades ago, computer models with the characteristics of replication and reproduction emerged. This line of work stems from the invention of self-replicating cellular automata: Q-shaped "loops" of cells on a computer screen that interact with neighboring cells according to specified rules. We can not only use genetic algorithms to recombine their "genetic" material, but also allow them to "eat" one another and compete for resources.
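
For readers who want a concrete picture of "replication plus competition for resources," here is a toy Python sketch. It is an illustrative assumption, not the cellular-automaton model the authors describe: bit-string "organisms" recombine, mutate, and compete for a fixed number of resource slots.

```python
# Toy digital evolution: genomes replicate with mutation and crossover,
# then compete for a limited number of survival slots.
import random

GENOME_LEN = 16      # length of each organism's "genetic" string
CAPACITY = 50        # resource slots the environment can support
GENERATIONS = 30

def fitness(genome):
    """Resources an organism gathers: here simply the number of 1-bits."""
    return sum(genome)

def crossover(a, b):
    """Recombine two parent genomes at a random cut point."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Start with a random population of "organisms".
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(CAPACITY)]

for gen in range(GENERATIONS):
    # Reproduction: each organism produces a recombined, mutated offspring.
    offspring = [mutate(crossover(parent, random.choice(population)))
                 for parent in population]
    # Competition: only the CAPACITY fittest keep a resource slot.
    population = sorted(population + offspring,
                        key=fitness, reverse=True)[:CAPACITY]
    print(f"generation {gen:2d}: best fitness = {fitness(population[0])}")
```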

Objections center on the claim that machines lack metabolism and cannot survive on their own. Heidegger's philosophy of existence studies not ordinary life but the highest form of human existence. Heidegger's student Hans Jonas studied not humans but the essence of all life. He held that the basic concern of living things is "metabolism," the exchange of matter and energy with their surroundings.

Interacting with the surrounding environment and adapting to it is basic behavior for AI. As machines interact more, they gradually develop mechanisms of self-perception; interaction is more powerful than mere observation.

Although different organisms have different forms of concern, their most basic concern is to regulate their internal state and keep the body in its normal living condition, that is, to maintain homeostasis. This depends on an internal sense of the body: the body's perception of its own physiological state. As AI develops, it will gradually acquire such internal perception; only by sensing its own existence can AI hardware gradually evolve into a body.

Inorganic and organic matter react to the outside world in different ways. The former reacts only physically and chemically, whereas animals can endow objects with value and meaning; they are the makers of the value and meaning of things. Of course, existing machines do not have this capability and cannot yet become "legislators." Between inorganic objects and humans there are countless "legislative" levels; even the lowest organisms show some characteristics of self-awareness and can respond actively. "We must assume that consciousness develops from a small source; its initial form resembles a vague stimulation."

Like existing life, AI has the function of replication (reproduction); it can exchange matter and energy with the outside world, carry out ever more self-renewal, and gradually display the function of metabolism. This requires us to shift AI ethics from technology to life.

2. Artificial Intelligence Ethics: From Technical Ethics to Life Ethics

AI, as a new form of life, will have a decisive impact on whether human civilization can continue and on the human species' place in the evolutionary history of life.

Robot ethics is a branch of applied ethics that attempts to clarify how ethical principles can resolve the subtle and critical moral problems arising from the use of robots in our society. Vinge's "technological singularity" points to a singularity in the evolution of life and the birth of a new form of life. Strictly speaking, once a machine produces a mind, even if it is not a human mind, it is no longer a simple machine but a living body with a mind. But should the damage caused by such a "minded machine" be ascribed to the manufacturer, the designer, the user, or the robot itself? At this point the problems gradually transcend technical ethics and become questions of life ethics about AI's identity as a moral subject.

Floridi and Sanders proposed three criteria for judging whether something is a moral agent: interactivity, autonomy, and adaptability. James Moor, one of the founders of computer ethics, divides the moral behavior of intelligent machines into three levels based on agents' moral behavior and moral decision-making ability: implicit, explicit, and full ethical agents. Allen and Wallach divide the morality of AI into three levels: operational, functional, and full.

The way children form a sense of morality offers inspiration: let AI begin its "life" (bottom-up) on the basis of "initial preferences" (top-down). The values we ultimately form depend on life experience, which gives AI the possibility of learning human morality. "For safety reasons, the right motivation should ideally be installed in a seed AI before it is fully capable of expressing human concepts or understanding human intentions."

Just as children's growth does, AI's growth will come at a certain price. It is impossible to completely prevent AI from making mistakes while it learns morality; what we need to do is minimize the cost of its growth. In theory, once a system reaches a certain level, its programs and hardware can be copied without limit. Only through practice, moving from a controllable environment to a highly complex society, can the system develop the values humans desire. Bottom-up evolution will produce a large population of offspring, and those that adapt and survive should be the AIs whose values are human-friendly.
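
The "top-down initial preferences, bottom-up learning" idea can be pictured with a small Python sketch. Everything here (the action set, the feedback signal, the update rule) is a hypothetical stand-in used only for illustration, not the authors' proposal or an established alignment method.

```python
# Sketch: a seed agent starts with designer-installed preferences (top-down),
# then adjusts them from feedback in a controlled environment (bottom-up).
import random

ACTIONS = ["help", "ignore", "harm"]

# Top-down: initial preferences installed before learning begins.
seed_preferences = {"help": 2.0, "ignore": 1.0, "harm": 0.1}

def choose(prefs):
    """Sample an action with probability proportional to its preference weight."""
    total = sum(prefs.values())
    r = random.uniform(0, total)
    for action, weight in prefs.items():
        r -= weight
        if r <= 0:
            return action
    return action

def human_feedback(action):
    """Stand-in for human evaluation inside the controlled environment."""
    return {"help": +1.0, "ignore": 0.0, "harm": -1.0}[action]

def learn(prefs, episodes=500, lr=0.05):
    """Bottom-up: update preferences from experience, floored at a small value."""
    prefs = dict(prefs)
    for _ in range(episodes):
        action = choose(prefs)
        prefs[action] = max(0.01, prefs[action] + lr * human_feedback(action))
    return prefs

learned = learn(seed_preferences)
print("seed preferences:   ", seed_preferences)
print("learned preferences:", {a: round(w, 2) for a, w in learned.items()})
```

The point of the sketch is only the division of labor: the seed values are designed quickly, while the learned values require long experience, mirroring the short design phase and long cultivation phase discussed below.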

Once AI becomes a moral subject, even if not a moral subject in the full sense, we will have to redefine "human nature" and regard all human-like artificial lives as beings with "personhood." Intersubjectivity then includes not only relationships between humans, but also relationships between humans and AI life, and among AI lives themselves.

3. A new ethical paradigm: the symbiosis and coexistence of different lives

Of course, humans can do everything in their power to prevent the birth of intelligent life, but given the enormous inertia of technological development and the profit-seeking nature of capital, humans can only delay, not prevent, its emergence. The relationship between humans and AI life has three stages: the human-dominant stage (humans design and guide AI), the human-machine equality stage, and the post-human stage.

AI life does not reach the human level the moment it is created; it too has an evolutionary lineage, from plant-like to animal-like to human-like life, and then to post-human life. Before its moral standards reach the human level, it needs humans to act as designers and mentors, and this is the critical window for human intervention. Implementing this path requires both "top-down" design and "bottom-up" cultivation; design takes relatively little time, while cultivation takes very long and may require several generations to complete. Humans have many moral norms, but the most fundamental one is to "treat people as human beings."

However, although there are countless forms of life on Earth, people have long believed that intersubjectivity exists only between humans, not between humans and other forms of life. Descartes held that animals have no souls and are at most automata. Mainstream Western philosophy has held that animals lack moral status.

Darwin held that "humans and the higher animals, especially the primates, have some instincts in common. All higher animals have the same senses, intuitions and sensations, similar passions, affections and emotions, even more complex ones such as curiosity, and the same abilities to imitate, attend, remember, imagine and reason, though in different degrees."

Today, the view that there is no clear boundary between humans and animals is supported by many experts in philosophy and science. By analogy, there is no clear boundary between humans and post-humans either, and the relationship between the two will be plural, dynamic, and open.

We must admit that human beings are neither the starting point nor the end point of the evolution of life on Earth. Only by continuing to explore the relationships between humans and animals, and between humans and post-humans, can we truly understand human nature and grasp the value and significance of human existence. This should be the ultimate mission of today's AI ethics research.
