To Make Artificial Intelligence "Benevolent", the Sooner Ethics Intervenes, the Better
Today, the Chinese Academy of Engineering's Academician Consultation and Academic Activity Center in Shanghai held an academician salon on "Ethical and Moral Construction in the Era of Artificial Intelligence". Experts noted that ethics research is not a "rope" that restrains the development of artificial intelligence, but a "rein" that steers its rapid growth.

As artificial intelligence grows ever more human-like, its ethics have become a leading concern for the technology industry, regulators, and the public.
At present, two opposing attitudes dominate discussions of artificial intelligence: extreme enthusiasm for its commercial value, and deep anxiety about an ethical collapse. On the one hand, businesses keep painting the convenience and promise of the artificial intelligence era; on the other, many film and television works warn of its potential dangers. Experts at the salon stressed that research on artificial intelligence ethics must be put on the agenda as soon as possible.
"Ethical dilemma" hinders artificial intelligence research and development
From news feeds that increasingly "understand you" to cars beginning to drive themselves on closed roads, products built on artificial intelligence have entered public life over the past few years. According to Si Xiao, director of Tencent Research Institute, there are more than 2,000 artificial intelligence startups worldwide, and Tencent alone has four major laboratories working on AI-related projects.
In fact, some commercial applications of artificial intelligence have long failed to materialize not because of the technology, but because of unresolved ethical questions. Si Xiao gave an example: statistics from the United States in 2016 showed that self-driving cars are significantly safer than human drivers. Yet self-driving faces a famous ethical dilemma: when a collision is unavoidable, should the car steer toward the side with more people or the side with fewer? Until this question is answered, self-driving cars cannot take to the road.
Wang Guoyu, a professor in the Department of Philosophy at Fudan University, pointed out that the ethical dilemmas of the panopticon and data discrimination also deserve vigilance. The panopticon means that cameras can assemble a complete map of your movements and search engines can build a profile of you; under strong artificial intelligence, it means someone is watching your every move. Data discrimination means that the process of converting data into information people can understand is shot through with human interpretation: if the data show that people in a certain area change jobs more often, that can hurt the job prospects of everyone from that area.
Ethics and science should be viewed as a unified whole
One common view holds that we can develop artificial intelligence first and correct errors as problems are discovered; in the worst case, just pull out the wires. Zhang Zheng, a professor of computer science at New York University Shanghai, disagrees.
"One stereotype that must be broken is the idea that ethical supervision is a 'wall' blocking the development of science and technology. The relationship between the two should be viewed from a more comprehensive perspective; I think they can be regarded as one 'new city'," Zhang Zheng said.
This somewhat mind-bending idea won Wang Guoyu's approval. She explained that in recent years, industrial, domestic, and aerospace robots alike have grown ever more sophisticated in "personality" elements such as autonomy, intentionality, and emotion. Artificial intelligence, one could say, is breaking out of the category of tools and becoming a ubiquitous intermediary. The more powerful artificial intelligence becomes, the weaker its "sense of presence", until it disappears entirely into our lives. "In that case, we cannot just 'pull out the wires'; we should study the two as a whole," Wang Guoyu said.
Viewed as a whole, ethical standards should be built in as early as possible alongside the technology. Si Xiao noted that when Microsoft's chatbot was "fed" vulgar language data, the machine learned to swear profusely. Strong regulation and "ethics by design" must be involved from the earliest stages of development to ensure that artificial intelligence remains fair and controllable.
The humanities should contribute to ethical research
The ethics of artificial intelligence has drawn worldwide attention. In 2016, the British Standards Institution released the robot ethics standard "Ethical Design and Application of Robots", and Microsoft proposed six principles of artificial intelligence, both of which have advanced the field to a certain extent. In China's "New Generation Artificial Intelligence Development Plan", the term "artificial intelligence ethics" appears as many as fifteen times, showing that the formulation of ethical norms for artificial intelligence is urgent.
What kind of ethics does artificial intelligence require? Si Xiao said the field naturally sits between technology and the humanities: it needs contributions from mathematics, statistics, computer science, and neuroscience, as well as the participation of philosophy, psychology, cognitive science, law, and sociology.
Wang Guoyu said that to advance theoretical research on artificial intelligence, the Netherlands established the Joint Experimental Center for Artificial Intelligence Value Design. Its approach is to pool the efforts of engineers, philosophers, jurists, and ethicists to identify society's mainstream values, translate those values into computer language, and embed them into artificial intelligence. "Of course, artificial intelligence ethics is not fixed; it should be an open, amendable system."
Yang Shengli, an academician of the Chinese Academy of Engineering, said that Shanghai's strength in the humanities gives it an opening: it might as well seize the moment to vigorously develop artificial intelligence while setting up an ethics committee to steer that development toward the "good".
On artificial intelligence and ethics, Li Feifei, director of the Stanford Artificial Intelligence Laboratory and the Stanford Visual Learning Laboratory, has also expressed her own views in an interview.

Li Feifei believes that human-centered artificial intelligence rests on three core ideas that should guide thinking about its future. The first revolves around next-generation artificial intelligence technology inspired by humans. Exciting as artificial intelligence is, it remains a young field, only about sixty years old, and its potential is vast. We should therefore continue to unleash human creativity in developing it, especially at its intersection with brain, cognitive, and behavioral science.
The second core turns on one word: "enhancement", not "replacement". Many people worry that automation will displace human jobs. Li Feifei considers this a very important topic, and believes artificial intelligence can be used in many ways to enhance human capabilities, protect humans from harm, and improve labor productivity. "Take healthcare as an example. I recently spent several months in the hospital with my mother after she had major surgery. The experience was quite interesting, because I have been doing AI research in healthcare for six years, but this was my first time as a patient's family member. Doctors and nurses need to keep constant watch on the patient's condition, and now assistive technology can help with this. We can use AI to keep humans out of harm's way in rescue situations, to enhance personalized education, and to make fairer and more effective decisions," Li Feifei said.
The third core is what Li Feifei calls "the social impact of artificial intelligence". "It is crucial that we realize every aspect of human society will be affected by this technology," she said. That means involving people from all walks of life, including social scientists, economists, legal scholars, ethicists, historians, and philosophers, to understand the technology's impact, decide what policies to recommend, guide it so that it remains unbiased, and protect privacy in the AI era. These problems cannot be solved by technical experts alone; they require dialogue and joint effort from the whole of society.