AI Ethics

Beyond the “Three Laws of Robotics”, Artificial Intelligence Awaits New Ethics

Xinhua News Agency, Beijing, March 18

Xinhua News Agency reporter Yang Jun

The ethical principles of artificial intelligence have drawn wide attention recently. Audrey Azoulay, Director-General of UNESCO, said at the "Global Conference on Promoting Humanized Artificial Intelligence" held in early March that there is still no international ethical framework that applies to the development and application of all artificial intelligence.

The industry has long recognized the limitations of the famous "Three Laws of Robotics" that science fiction writer Isaac Asimov devised last century to keep robots from going out of control. "The Three Laws of Robotics made a great contribution historically, but it is no longer enough to stop at the Three Laws," experts told reporters in recent interviews.

New situation calls for new ethical principles

Scientists have long hoped to use the simplest possible rules to ensure that artificial intelligence, typified by robots, poses no threat to human beings. The "Three Laws of Robotics" stipulate that a robot may not harm a human; that it must obey humans; and that it must protect itself. A "Zeroth Law" was later added: a robot may not harm humanity, nor, through inaction, allow humanity to come to harm.
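To make the laws' structure concrete, here is a toy sketch (ours, not from the article) of their fixed precedence, in which the Zeroth Law outranks the First, and so on down. The `Action` type and its boolean flags are hypothetical placeholders; deciding their values in the real world is exactly the hard part.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    # Hypothetical flags; a real robot would need a rich world model to set them.
    harms_humanity: bool
    harms_human: bool
    disobeys_human: bool
    endangers_self: bool

# The laws in priority order: a higher law always outranks a lower one.
LAWS: list[tuple[str, Callable[[Action], bool]]] = [
    ("Zeroth: do not harm humanity", lambda a: a.harms_humanity),
    ("First: do not harm a human", lambda a: a.harms_human),
    ("Second: obey humans", lambda a: a.disobeys_human),
    ("Third: protect yourself", lambda a: a.endangers_self),
]

def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action violates, or None if permitted."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

print(first_violation(Action(False, True, False, False)))  # First: do not harm a human
```

Note how every predicate above compresses an open question into a single boolean; that compression is precisely the vagueness the experts quoted below are pointing at.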

"Robots cannot harm people, but in what way can robots harm people? How big is this harm? In what form may harm occur? When might it happen? How to avoid it? These issues either need to be refined today, or more specific principles need to be put in place to prevent them, and we cannot stay at the level of understanding 70 or 80 years ago." said Professor Chen Xiaoping, head of the AI ​​Ethics Professional Committee of the Chinese Society of Artificial Intelligence and director of the Robotics Laboratory of the University of Science and Technology of China.

Oren Etzioni, CEO of the well-known Allen Institute for Artificial Intelligence, called for updating the "Three Laws of Robotics" in a speech two years ago. He proposed that an artificial intelligence system must comply with all laws that apply to its human operator; that it must clearly disclose that it is not human; and that it must not retain or disclose confidential information without the explicit permission of the source of that information. At the time, these proposals sparked heated discussion in the industry.

The "AI Benefit Movement" promoted by Tegmark, a well-known American scholar in the field of artificial intelligence ethics and a professor at the Massachusetts Institute of Technology in recent years, proposes that new ethical principles are needed to ensure that the goals of artificial intelligence and humans are consistent in the future. This activity has received support from many of the world's top scientists, including Hawking, and well-known IT companies.

"Artificial intelligence will become more and more powerful driven by the multiplier effect, leaving less and less room for human trial and error," Tegmark said.

People-oriented, global exploration of new ethics

Currently, research on new ethics for artificial intelligence is becoming increasingly active around the world. Much of the public's wariness toward artificial intelligence stems from the uncertainty created by its rapid development, and "protecting humanity" has become the foremost concern.

"We must ensure that artificial intelligence develops in a human-centered direction." Azoulay called.

Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute in the United States, proposed the concept of "friendly artificial intelligence," arguing that "friendliness" should be built into a machine's intelligent system from the very start of its design.

New ethical principles continue to be proposed, but the emphasis on putting people first remains unchanged. Baidu founder Robin Li proposed "Four Principles of Artificial Intelligence Ethics" at the 2018 China International Big Data Industry Expo; the first principle is that AI should be safe and controllable.

The Institute of Electrical and Electronics Engineers (IEEE) has proposed that artificial intelligence should give priority to maximizing the benefits to humans and the natural environment.

The formulation of new ethical principles is also on the agenda of governments around the world.

When the Chinese government released the "New Generation Artificial Intelligence Development Plan" in July 2017, it stated that it would establish a system of laws, regulations, ethical norms, and policies for artificial intelligence, and build up the capability to assess and control AI safety. In April 2018, the European Commission issued the communication "Artificial Intelligence for Europe," proposing that an appropriate ethical and legal framework be considered in order to give legal protection to technological innovation. On February 11 this year, US President Trump signed an executive order launching the "American Artificial Intelligence Initiative"; one of the initiative's five key focuses is formulating ethics-related governance standards for artificial intelligence.

Concerns and cross-sector discussions on new issues

Chen Xiaoping called on AI academia and industry, together with ethics, philosophy, law, and other social disciplines, to take part in formulating the principles and to cooperate closely in order to avoid ethical and moral risks in the development of artificial intelligence.

He believes that although there is no evidence of major short-term risks from artificial intelligence, problems such as privacy leaks and technology abuse already exist, and ethical principles for driverless vehicles and service robots must also be discussed and formulated as soon as possible.

The Institute of Electrical and Electronics Engineers also states that, in order to settle questions of fault and avoid public confusion, AI systems must be accountable at the program level: they must be able to show why they operated in a particular way.
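As a minimal illustration of what program-level accountability can look like in practice (our sketch under assumed conventions, not an IEEE specification), a system can log every decision together with the inputs, model version, and rationale that produced it, so the "why" can be reconstructed afterwards. All names and fields below are hypothetical.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    # Hypothetical schema: just enough context to reconstruct "why" later.
    timestamp: float
    inputs: dict
    model_version: str
    decision: str
    rationale: str

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> None:
    """Append one decision as a JSON line, building an append-only audit trail."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Usage: a hypothetical lane-change decision by a driverless-car planner.
log_decision(DecisionRecord(
    timestamp=time.time(),
    inputs={"obstacle_ahead": True, "left_lane_clear": True},
    model_version="planner-v1.2",
    decision="change_lane_left",
    rationale="obstacle ahead and adjacent lane clear",
))
```

An append-only record of this kind addresses the fault-attribution question: after an incident, investigators can replay what the system saw and why it acted.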

"In the long term, we cannot wait until serious problems arise before formulating measures to prevent them." Chen Xiaoping believes that artificial intelligence has different characteristics from other technologies, and many artificial intelligence technologies are autonomous. For example, autonomous robots have the ability to physically move in real society. If there are no appropriate preventive measures, serious problems may occur, which may cause greater harm.

Chen Xiaoping hopes that more people will take part in research on artificial intelligence ethics, especially scientists and engineers, because it is they who best know the root causes of problems, how dangers might arise, the technical details, and the relevant technological progress. "If they don't speak up, outsiders can only guess, and it is difficult to reach correct judgments and appropriate responses."

He also cautioned that, while guarding against risks, a balance must be kept, lest excessive restrictions or ill-chosen risk-reduction methods have the side effect of stifling the industry's development.
