Wang Guoyu, Fudan University: Artificial Intelligence Ethics--Looking For Feasibility Of Artificial Intelligence
People today are far more concerned about the power and potential of robots than ever before. The worry is not that the humanoid robots of science fiction films and novels always come to hate humans, but that real robots may come to possess a broader scope of action and greater capability.
All walks of life, at home and abroad, are paying attention to this topic. At the "Double Ma Dialogue" held at the Artificial Intelligence Conference two months ago, Elon Musk and Jack Ma debated the future of AI and humanity. And at the China National Computer Congress (CNCC), the annual event in the field of computing technology that opens today, "The Moral Boundaries of Artificial Intelligence Development" is likewise an important forum theme.
Focusing on the much-discussed relationship between AI, morality, and humanity, we interviewed Professor Wang Guoyu, director of the Center for Biomedical Ethics Research and the Center for Applied Ethics Research at Fudan University. She attended CNCC and delivered a talk entitled "Artificial Intelligence Ethics: From Possibility Inference to Feasibility Exploration."
Figure | Wang Guoyu presenting "Artificial Intelligence Ethics: From Possibility Inference to Feasibility Exploration" at CNCC (Source:)
Wang Guoyu developed her views on artificial intelligence ethics from three aspects: why artificial intelligence is an object of ethics, the research paths of artificial intelligence ethics, and the collaborative governance of AI's ethical issues. She believes that artificial intelligence ethics originates in people's fear of and concern about the risks of AI technology, and that AI's ethical problems are not isolated technical or algorithmic problems but arise from the interaction between technical systems and the systems of human social life. Accordingly, artificial intelligence ethics should move from possibility speculation to feasibility exploration.
"Will artificial intelligence once gain self-awareness 'from slave to general'? I think if we don't prepare early, it is theoretically possible from the perspective of human dependence on robots," Wang Guoyu believes, "Artificial Arts The governance of intelligent ethical issues is not only the matter of ethicists, but also the matter of scientists and engineers, and requires more interdisciplinary cooperation."
The following is the interview content:
Q: You have long focused on issues of technology ethics, and you must have witnessed the rise of many emerging technologies and the ethical issues that accompany them. In your view, what is special about the ethical issues of artificial intelligence compared with those of other technologies?
Wang Guoyu: I think artificial intelligence has special ethical problems of its own as well as ethical problems shared with other emerging technologies. Compared with other emerging technologies, current AI algorithms and systems have a higher degree of autonomy and are more likely to profoundly change humanity's future, so their ethical issues attract more social attention. In my view, artificial intelligence is not a single technology but an important branch of the emerging technology system and a key enabling technology driving the new industrial revolution. To be implemented, artificial intelligence must be combined with other technologies: combined with automobile manufacturing, it yields autonomous or driverless driving; combined with automatic control, remote sensing, mechanical manufacturing, and other technologies, it yields intelligent robots; combined with medical technology, it yields intelligent diagnosis and treatment. At present, the core algorithms of artificial intelligence rely heavily on being data-driven; it can be said that without big data technology there would be no AI technology as we know it. In this sense, artificial intelligence is an advanced systematic technology, or a technical system.
Starting from this definition, we can say that the ethical issues of artificial intelligence are not singular. It has special ethical problems caused by its high degree of autonomy, such as algorithmic ethics, as well as ethical problems common to other emerging technologies, such as privacy and the attribution of responsibility. Like other technologies, algorithms carry a value load: which parameters are chosen in algorithm design, and which values and interests are given priority, are closely tied to the designer's moral sensitivity and value orientation. At the same time, an algorithm amplifies these choices and even makes choices through its own logical operations. The designer cannot fully control how the algorithm runs, cannot predict its outcomes, and cannot guarantee that its results are morally acceptable. This leads to the most controversial question: can an intelligent agent be regarded as a moral agent? The problem will become more prominent as algorithmic capabilities grow, and it will also affect the attribution and identification of responsibility.
Take the frequently discussed question of who should be responsible when an autonomous vehicle hits a person. This is not quite the same as an ordinary determination of responsibility. On the one hand, autonomous vehicles are capable and autonomous; the steering wheel and brakes are controlled by an intelligent machine. On the other hand, the vehicle itself is not a moral agent and cannot bear moral responsibility. So we have to ask: who should be responsible for the accident? The designer, or the owner of the vehicle? Is it the responsibility of engineers, of enterprises, or of governments and regulators? All of this relates to what is particular about artificial intelligence, but it also touches a more general question of technology ethics: how technology ethics faces technological uncertainty, or how we deal with technological uncertainty.
Photo | Huang, 38, who died in a crash while using Tesla's autonomous driving feature (Source: BBC)
Q: What do you think of the view that "once artificial intelligence robots develop self-awareness, robots will rule humans"? And why do you think people are more worried about and afraid of this issue than before?
Wang Guoyu: The first half of this question is a hypothesis, namely that artificial intelligence robots will develop self-awareness; the second half is speculation, namely that "robots will rule humans." It is a typical speculative "if-then" assertion. It reminds me of the plot of the Czech writer Karel Čapek's play R.U.R. (Rossum's Universal Robots), from which the now-familiar word "robot" originated. The word derives from the Czech robota, meaning forced labor or drudgery. The robots in the play initially had no consciousness or feelings, and they took over all human labor. Later, an engineer in the robot factory secretly gave a robot a "soul," enabling it to feel pain, and it gradually became self-aware. Unfortunately, the robots' first thought upon awakening was to resist and attack. This may be the earliest work of science fiction about robots dominating humans, and many similar works followed. Such narratives naturally instill in people fear of and concern about artificial intelligence.
Technology ethics arises from people's fear of and concern about technology. More than half a century ago, Isaac Asimov proposed using ethical rules to guide the behavior of intelligent robots: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey human orders, except where such orders would conflict with the first law; and a robot must protect its own existence, so long as doing so does not conflict with the first or second law. He later added a "zeroth law" (1985): a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Figure | The Three Laws of Robotics were first proposed by Isaac Asimov in his 1942 story "Runaround" and became a thread running through the codes of conduct and plots of robots in many of his novels. Robots are required to comply with these laws, and violating them causes a robot irrecoverable psychological damage. Robots in the science fiction of other authors have also abided by the three laws, and scholars have founded the emerging discipline of "machine ethics" on their basis, aiming to study the relationship between humans and machines (Source:)
So, is it possible for artificial intelligence robots to gain self-awareness? Advocates of strong artificial intelligence think it is certain, only a matter of time. Ray Kurzweil predicted that by 2045 humans and machines will be deeply integrated and artificial intelligence will surpass human beings, reaching a "singularity." I cannot predict whether artificial intelligence will necessarily develop self-awareness, whether humans would be happy to see or allow it to do so, or when that might happen. But once artificial intelligence acquires self-awareness, will it go "from slave to general"? I think it would be possible in theory if we did not prepare early. Of course, I am not envisioning a country ruled by robots, like a science fiction novelist; I am speaking from the perspective of human autonomy and the loss of freedom.
In fact, we can already feel today that while artificial intelligence enhances people's abilities and freedom of action, it also makes people increasingly dependent on it. This dependence can also be read as a loss of human independence. As Hegel pointed out in the Phenomenology of Spirit, master and slave are defined by each other: on the one hand, the master becomes master because of the existence of the slave; on the other hand, once the master obtains the identity of ruler, he acquires not independent consciousness but dependent consciousness. He does not thereby establish his true self, but instead prompts the slave to surpass the master. Take a common example: the intelligent navigation we use today lets us safely avoid and bypass roads in poor condition, and we follow the route it gives us to reach our destination. In this sense it is an empowering technology. We trust that the information comes from a navigation system that grasps all traffic dynamics in real time and knows everything on the road. But the moment we worry that ignoring the information will mean a detour, and believe that accepting it will get us there quickly, we begin to depend on the information, and at that moment the "control" of information over people appears.
As for the latter question, why does the public feel more worried and fearful about this issue than before? I think, first of all, it is because of the rapid development of artificial intelligence itself in recent years, which has made people strongly feel how AI technology systems are changing the world. Especially after AlphaGo defeated the human Go player Lee Sedol in March 2016, people marveled at the power of artificial intelligence, cheering on the one hand while fearing and worrying, on the other, about how it will change the world and our lives. Predictions about the threat of artificial intelligence from celebrities such as Hawking, Bill Gates, and Musk have also intensified people's anxiety. Hawking, for example, believed that "once robots reach the critical stage of being able to evolve themselves, we cannot predict whether their goals will still be the same as ours." Some media outlets spread and exaggerate the progress of artificial intelligence and favor sensational titles such as whether AI is a devil or an angel. Such narratives have also deepened people's concerns about the evolving technology.
Q: The face-swapping technology that went viral some time ago sparked a wave of enthusiasm, both abroad and domestically with ZAO. Do you think the fact that today's AI applications can spread so widely in such a short time is partly because technology is developing rapidly and the threshold is being lowered? Technology without constraints is dangerous, so how can we release the innovativeness of technology as much as possible within moderate constraints? And how should conflicts between ethics and science and technology policy be coordinated?
Wang Guoyu: I think this type of face-swapping technology spreads rapidly partly because the threshold of the technology itself is relatively low, and partly because of people's curiosity. I have seen similar things in my WeChat Moments and group chats; not face-swap videos, but a face-swap app. You just upload a photo or take one on the spot, and you can get pictures of yourself in different eras and different guises. I see many people happily playing this game out of curiosity. But what most people may not realize is that in the process our "faces" are saved as data in the background. We generally do not know how this data will be processed or used, and we are rarely informed. If such data is used maliciously, it can do great harm to ourselves, to others, and to society. We know that many banks, and even entrance doors, now use faces to verify a person's identity. Facial data, like fingerprints, mark an important part of a person's identity; once it falls into the hands of criminals, the consequences would be unimaginable.
Picture | A ZAO poster. ZAO is an application that uses AI technology to perform face swaps: users upload a frontal face photo to create emoticons and swap themselves into characters in movies and TV series (Source: Apple Store)
The problem now is that we are still somewhat slow in legislation and policy regulation. Technological innovation does not mean you can do whatever you want. The purpose of technological innovation is a better life. If an innovation does not bring a better life but instead destroys the order a better life requires, endangers others, and harms the public interest, then it is not something we should accept and embrace. Therefore, we should adhere to responsible innovation, which attends not only to the economic benefits of innovation but also to its social benefits and to the social responsibility of enterprises. Correspondingly, the relevant government departments should promote relevant research, issue laws and regulations as soon as possible, classify and manage such technologies, reasonably guide technological innovation, and resolutely stop acts that violate the public interest and infringe on the rights of others.
Q: In the effort to make "technology for good" a reality, what efforts do you think are needed at different levels of society?
Wang Guoyu: According to Aristotle, every art aims at some good. However, one characteristic of modern technology is its uncertainty, which runs through the entire process from purpose to method to results. In the face of this uncertainty, responsibility falls not only on scientific and technological workers, enterprises, and the competent government departments, but also on humanities scholars, the media, and the public.
Scientists and engineers engaged in research and development are the creators and producers of technology. They are the most familiar with its characteristics and potential risks, and they have a responsibility to explain transparently and promptly inform the public of those risks and uncertainties. The departments and policymakers that manage research and development, including politicians and entrepreneurs, have a responsibility to work with scientific and technological workers, humanities scholars, and others to conduct risk prediction and ethical assessment of technological development. Since scientific and technological activity now penetrates directly into political activity, decisions about science and technology are also political decisions. Science and technology policy is a catalyst for development: the trajectory of technological development can be adjusted through appropriate policies, and warnings can be issued when necessary. Humanities and social science scholars should pay attention to the social impact of frontier developments and provide support and grounds for government policies the public can accept. Mainstream and public media should deeply understand the characteristics of the technology itself, rather than operate merely to manufacture news. Finally, the development of science and technology cannot be separated from public support; the public's active participation in the social governance of science and technology is one of the most important forces ensuring that modern science and technology serve the good.
Q: The title of your talk at this year's CNCC is "Artificial Intelligence Ethics: From Possibility Inference to Feasibility Exploration." Can you share what motivated you to choose this topic? Besides the ethics content, which forums or speakers at this conference interest you most?
Figure | Wang Guoyu attended CNCC and delivered a talk entitled "Artificial Intelligence Ethics: From Possibility Inference to Feasibility Exploration" (Source: CNCC Official Website)
Wang Guoyu: Thank you very much for giving me this opportunity to share some of my thoughts on artificial intelligence ethics. Artificial intelligence ethics belongs to the category of technology ethics. Although it has its special features, as we noted earlier, I think that, like other branches of technology ethics, the core problems it must face are issues of security, freedom, and justice arising from the uncertainty of artificial intelligence technology systems. My original intention in choosing this topic was to discuss with everyone how, while artificial intelligence is in full swing, we can find paths and methods that promote its benefits to humanity while effectively avoiding its negative consequences.
As I mentioned earlier, the governance of artificial intelligence's ethical issues is not only a matter for ethicists but also for scientists and engineers, and it requires more interdisciplinary cooperation. In this regard, our Center for Science and Technology Ethics and Social Governance at Fudan University particularly looks forward to learning from artificial intelligence experts and, together with them, exploring in greater depth the methods and paths for governing AI's ethical problems in multiple ways and on all fronts. So during the conference I will visit as many different forums as I can to learn.