Promoting Artificial Intelligence to "Do Good": The Earlier Ethics Intervenes, the Better
As artificial intelligence becomes more and more human-like, AI ethics has become one of the issues of greatest concern to the technology industry, regulators, and the public.
At present, extreme enthusiasm for the commercial value of artificial intelligence and deep anxiety about an ethical collapse form two opposing narratives: on one hand, commercial institutions constantly depict the convenience and promise of the AI era; on the other, many films and television works remind people of its potential dangers. Today, the Shanghai Academician Consulting and Academic Activities Center of the Chinese Academy of Engineering held an academician salon on "Ethics and Morality in the Era of Artificial Intelligence." Experts argued that ethics research is not a "rope" that restrains the development of artificial intelligence but the "reins" that help it grow quickly and safely, and that research on AI ethics must be put on the agenda as soon as possible.
"Ethical Dilemma" hinders the development of artificial intelligence
From news feeds that "understand you" better and better to cars being tested for autonomous driving on closed roads, products built on artificial intelligence have entered public life over the past few years. According to Si Xiao, director of Tencent Research Institute, there are more than 2,000 artificial intelligence startups worldwide; within Tencent alone, four large laboratories are working on AI-related projects.
In fact, some commercial applications of artificial intelligence have been delayed not because the technology is lacking but because the ethical questions remain unresolved. Si Xiao gave an example: statistics from the United States in 2016 show that autonomous driving is significantly safer than human driving, yet it faces a famous ethical dilemma. When a collision is unavoidable, should the car hit the side with more people or the side with fewer? Until this question is answered, autonomous vehicles cannot be put on the road.
Wang Guoyu, a professor in the Department of Philosophy at Fudan University, argued that the ethical dilemmas of the "panopticon" and data discrimination also deserve vigilance. The panopticon refers to the way cameras can map your every movement and search engines can build a profile of you; under strong artificial intelligence, this amounts to someone watching your every move. Data discrimination arises because converting data into knowledge about people is a process full of human interpretation: if the data show that people in a certain area change jobs more frequently, that will affect the job prospects of everyone from that area.
Ethics and science should be regarded as a unified whole
Some argue that we can develop artificial intelligence first and correct problems as they appear; at worst, we can simply pull the plug. Zhang Zheng, a professor of computer science at NYU Shanghai, disagrees.
"One stereotype that must be broken is that ethical supervision is to paint a "wall" for the development of science and technology. The relationship between the two should be viewed from a more comprehensive perspective. I think they can be regarded as 'a new city'." Zhang Zheng said.
This somewhat mind-bending idea won Wang Guoyu's approval. She explained that in recent years, industrial robots, household robots, and aerospace robots alike have become increasingly lifelike in "person-like" qualities such as autonomy, intentionality, and emotion. Artificial intelligence, one could say, is leaving the category of tools and becoming a universal intermediary. The stronger it becomes, the less noticeable its "presence," until it disappears entirely into our lives. "In that situation we cannot simply 'pull the plug'; we must study technology and ethics as a whole," Wang Guoyu said.
Treating them as a whole means that ethical norms must enter the technology as early as possible. Si Xiao noted that when Microsoft's chatbot was "fed" data full of vulgar language, the machine learned to swear. Strong rules are needed so that "ethics by design" is built in from the earliest stages of development, ensuring that artificial intelligence remains fair and controllable.
The humanities should contribute to ethics research
The ethical issues of artificial intelligence have drawn attention worldwide. In 2016, the British Standards Institution issued the robot ethics standard "Guide to the Ethical Design and Application of Robots," and Microsoft proposed its six principles of artificial intelligence, both of which have promoted the development of AI to some extent. In China's "New Generation Artificial Intelligence Development Plan," the term "artificial intelligence ethics" appears as many as fifteen times, a sign of how urgent it is to formulate ethical norms for AI.
What kind of ethical norms does artificial intelligence require? Si Xiao said that the field naturally straddles science and the humanities, requiring contributions from mathematics, statistics, computer science, and neuroscience as well as the participation of philosophy, psychology, cognitive science, law, and sociology.
Wang Guoyu said that to advance research on artificial intelligence ethics, the Netherlands has established a joint experimental center for value design in artificial intelligence. Its approach is to bring together engineers, philosophers, jurists, and ethicists to identify mainstream social values, translate those values into computer language, and embed them in artificial intelligence systems. "Of course, AI ethics is not set in stone; it should be an open, revisable system."
Yang Shengli, an academician of the Chinese Academy of Engineering, said that Shanghai has a strong humanities tradition and should seize the opportunity presented by its vigorous development of artificial intelligence to establish an ethics committee and steer artificial intelligence toward "doing good."
On the question of artificial intelligence and ethics, Li Feifei, director of the Stanford Artificial Intelligence Laboratory and the Stanford Vision and Learning Lab, has also shared her views in an interview.
Li Feifei believes that human-centered artificial intelligence rests on three core ideas that should guide thinking about its future. The first centers on the next generation of AI technologies inspired by humans. However exciting it may be, artificial intelligence is still a young field with only about 60 years of history, and its potential remains enormous. We should therefore continue to draw on human creativity to develop it, especially at the intersections with brain science, cognitive science, and behavioral science.
The second core focuses on one word: "augment," not "replace." Many people worry that automation will take over human work. Li Feifei believes this is a very important topic, and that artificial intelligence, as a technology, can be used in many ways to augment human capabilities, protect humans from harm, and improve labor productivity. "Take health care as an example. I recently spent several months with my mother in the hospital after she had major surgery. The experience was quite interesting, because I have been doing AI research in health care for six years, but this was my first time as a patient's family member. Doctors and nurses need to watch the patient's condition at all times, and assistive technology can now help with that. We can use AI to avoid harm in rescue situations, to enhance personalized education, and to make fairer and more effective decisions," Li Feifei said.
The third core is what she calls "the social impact of artificial intelligence." "We need to realize that every aspect of human society will be affected by this technology, which is crucial," Li Feifei said. That means involving people from many fields, including social scientists, economists, legal scholars, ethicists, historians, and philosophers, to understand what impact the technology will have, what policy recommendations should be made, how to keep the technology free of bias, and how to protect privacy in the AI era. These problems cannot be solved by technical experts alone; they require dialogue and joint effort across the whole of society.