AI Ethics

Challenges Of AI’s Ethical Governance: What Do Experts From China, The United States, Europe, Japan And The United Kingdom Think?


What values should we let artificial intelligence learn? How can machines make judgments about moral dilemmas that humans themselves find difficult to resolve?


The roundtable forum following the guest speeches. From left: van den Hoven, professor at Delft University of Technology; the research director of the Future Association; and a policy researcher at the Center for Artificial Intelligence Governance at Oxford University. Image provided by Zhiyuan.

Written by | Xia Zhijian

"What kind of artificial intelligence should we develop?" has undoubtedly become one of the questions of greatest concern to AI developers and stakeholders today.

On May 25, 2019, the Beijing Zhiyuan Artificial Intelligence Research Institute, which aims to advance AI research and technology in China, jointly released the "Beijing Consensus on Artificial Intelligence" with Peking University, Tsinghua University, the Institute of Automation of the Chinese Academy of Sciences, and other universities and research institutions. Within just two days of the release of this document on the ethics of AI research and development, use, and governance, Baidu recorded as many as five million searches for this Chinese statement of AI ethics.

More than five months later, on October 31, the Beijing Zhiyuan Artificial Intelligence Research Institute held the "2019 Beijing Zhiyuan Conference" at the National Convention Center. At that afternoon's "Special Forum on Artificial Intelligence Ethics, Security and Governance," AI experts from the European Union, the United Kingdom, the United States, Japan and China shared their views on the ethical issues surrounding artificial intelligence.

Who will be responsible for the consequences of AI?

The director of the Center for Life Ethics at Yale University, a member of the World Economic Forum's Artificial Intelligence Committee, pointed out that a major issue in AI's development today is the mismatch between the pace of technological progress and the pace of the corresponding ethical development, and the governance difficulties this brings. Keeping rapidly developing technology running well "requires better predictions, better planning, and better processing and management methods."

"Our scientific discoveries and technologies are implemented far faster than ethics develops. Moreover, few legislators can fully understand the latest technology," he said. "Technology is easier to manage in its early stages, but in those early stages we do not yet know what problems will arise. And by the time we find problems, the technology has already penetrated every aspect of society and is difficult to manage."

The rapid development of technology challenges not only regulators but also developers and users. van den Hoven, a professor at Delft University of Technology in the Netherlands and a member of the European Ethics Committee, believes that when designing AI, developers should actively think about how to avoid problems and help vulnerable groups, shouldering the duty of "taking responsibility for others."

“AI governance is a design about taking responsibility for others,” he said. “Our designers need to be deeply aware that their products are the result of human choice.”

For AI users, van den Hoven emphasized the need to avoid becoming "slaves of computers" and letting AI's judgments replace their own. "A user of a technology must take moral responsibility for using this intelligent technology. If you can't make the right decision, don't use it," he said.

This requires users to fully understand an AI system before using it. But in the real world, if an AI makes an unethical decision, who should be responsible: the AI, the developer, or the user? Countries and societies with different cultural backgrounds answer this question differently. Some hold that because the AI makes decisions autonomously, the AI itself should be held responsible; in others, developers and users bear a certain share of the responsibility.


Speaking at the roundtable forum. Image provided by Zhiyuan.

Cultural differences in AI ethical governance

"We have to think about the differences between cultures and societies. No matter how advanced the technology you are using, there needs to be a standard behind it. We want the technology to be strong enough, and also to ensure that people use it in the right way. That is our ultimate goal," said Danit Gal, a researcher at the Centre for the Future of Intelligence at the University of Cambridge.

"These globally shared AI ethical principles have the same theoretical basis, but they play out differently in different countries. Having a globally shared concept of AI ethics does not mean every country should apply the technology in the same way; each country should consider its own circumstances," Gal added.

When it comes to cultural differences around technology, Japan stands in sharp contrast with Western societies. The country is more tolerant of and friendly toward the development of artificial intelligence: Japan now has robot hotels, robot restaurants, and even virtual AI companions.

But even in Japan, the development of AI is not always welcomed. Unemployment caused by AI-driven robots worries many people there.

"What kinds of work can be replaced by machines, and what kinds should not be, is a major issue, and Japan is in the midst of such discussions," said Arisa Ema, assistant professor at the University of Tokyo and a member of the Ethics Committee of the Japanese Society for Artificial Intelligence. "We need to understand how humans and machines can interact better, and think further about how to let both machines and people play their roles well. We now face many emotional, economic, social, ethical and governance issues. We need to think further about what role experts can play, and about whether we can trust machines."

Trust in machines is ultimately trust in people, or trust in developers. This brings us back to the original question: "What kind of artificial intelligence should we develop?"


The director of the Yale University Center for Life Ethics (left) and Zeng Yi (right), director of the Center for Artificial Intelligence Ethics and Security at the Zhiyuan Research Institute. Image provided by Zhiyuan.

New challenges to human values

From a practical standpoint, the problem is imminent: in today's globalized world, it is not easy for large multinational companies to make their AI systems withstand scrutiny in countries with different cultures, especially as competition between the two major powers in the AI field grows increasingly fierce.

"The United States and China are competing, and the world's leading countries and well-known AI companies are competing. We see a lot of this competition as zero-sum games, which brings great risks," said the research director of the Future Association in his speech.

To control these risks, "we need to reshape the model so that the United States, China and some of the leading AI companies work together rather than compete with each other. Top countries, researchers and companies need to cooperate to govern AI, and they need to make sure AI is used to solve the big challenges we face," he said.

But cooperation can also bring values into collision, whether those of humans or of machines.

The Beijing Zhiyuan Artificial Intelligence Ethics and Security Research Center, established the same day the Beijing Consensus on Artificial Intelligence was released, had earlier proposed to "carry out research on intelligent autonomous learning models that conform to human ethics and morality, align the behavior of artificial intelligence with human values, and verify this in simulated environments and real scenarios." A key question remains, however: What values should we let artificial intelligence learn? How can machines make judgments about moral dilemmas that humans themselves find difficult to resolve?

On the morning of October 31, in an interview with The Intellectual, Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences and director of the Center for Artificial Intelligence Ethics and Security at the Zhiyuan Research Institute, said: "Human values are not unified. We have different perspectives on human values. These perspectives do not conflict; rather, they complement one another in what makes humans human. I think what artificial intelligence learns should be those human values recognized, with statistical significance, by most people. Problems like the trolley problem, which humans themselves have not yet solved, may need to be left for humans to discuss clearly first. Of course, if artificial intelligence develops to the stage of superintelligence, perhaps it can help people solve such problems."

Zeng Yi also said: "We should in fact reflect deeply: if artificial intelligence keeps evolving, what should we humans do? There is a term for this, the post-human era. Although we call ourselves human beings, from a philosophical perspective humanity is itself still in a process of continuous evolution. So in which direction will human beings go, and do we need to re-examine human values? I think this is a new question that technological progress poses for the changes in human society."

Layout Editor | Pipi Fish
