AI Ethics

Tech Economics Observation | Putting a "Tightening Curse" on Artificial Intelligence: Countries Around the World Are Actively Introducing Ethical Rules for AI

The future development of artificial intelligence is full of uncertainty. Problems such as algorithmic discrimination, data abuse, and technological unemployment may follow, posing unprecedented challenges to human society. How can we avoid the major risks artificial intelligence may bring while using the technology to benefit society? Only by putting ethics first, placing a "tightening curse" on seemingly omnipotent artificial intelligence, can we steer the technology in the right direction and make it a powerful assistant in humanity's exploration of the future.

Why is “artificial intelligence ethics” important?

Science and technology are neutral: they can be used to benefit humanity or to harm it. As science and technology advance, their power grows, and so do the potential consequences, which are global in scope and affect all of humanity. Every major technological invention in history, such as electricity, the steam engine, and the Internet, has brought new risks and challenges to society, including problems of unemployment, privacy, and security. Scientific and technological progress has raised ethical problems more than once. What exactly will technology bring to humanity: progress or destruction? This question will be debated endlessly.

Marx pointed out at the beginning of the second industrial revolution that the machine system is different from ordinary tools: it is a "human brain organ created by human hands," an alien and powerful organism driven by the "general intellect" accumulated and shared by the whole of society. In the era of weak artificial intelligence, humans were clearly more pleased than worried about machines: machines can carry heavy loads, drive vehicles without a human at the wheel, and calculate at great speed. With the rapid advance of technology, artificial intelligence can gradually achieve a kind of computable perception, cognition, and behavior, functionally simulating human intelligence and action and beginning to substitute for human thought, the core capacity that distinguishes humans from other living things. Artificial intelligence is no longer a simple tool. It has begun to blur the boundary between the physical world and the individual, to reshape human cognition and social relationships, and to raise complex ethical, legal, and safety issues.

As early as more than 70 years ago, Asimov proposed in the short story "Runaround" (1942) that robots could be made to obey moral laws through a built-in "machine ethics regulator." He conceived the Three Laws of Robotics:

First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: a robot must obey human orders, except where such orders would conflict with the First Law.

Third Law: a robot must protect its own existence, provided this does not conflict with the First or Second Law.

Later, to overcome the limitations of the First Law, he added a Zeroth Law with higher priority: a robot may not harm humanity as a whole, or, by inaction, allow humanity as a whole to come to harm.

Returning to the actual development of artificial intelligence: with the spread of applications such as drones, autonomous driving, industrial robots, and lethal autonomous weapons, a large number of new social problems have emerged, while the future of the technology still holds many possibilities and uncertainties. Legal regulation lags behind and tends to be conservative, making it difficult to supervise artificial intelligence effectively. Only by establishing complete ethical standards for artificial intelligence in advance can we reap more of its dividends and allow the technology to benefit humanity. The current mainstream view in the scientific community is that machines must be given ethics; otherwise, the consequences are unimaginable. As Professor Rosalind Picard, director of the Affective Computing Research Group at MIT, put it: "The freer the machine, the more it needs a moral code."

Countries around the world are actively introducing ethical rules for artificial intelligence

The British philosopher David Collingridge observed in his book "The Social Control of Technology" that the social consequences of a technology cannot be predicted early in its life; yet by the time undesirable consequences are discovered, the technology has often become so embedded in the economic and social fabric that controlling it is extremely difficult. This is the dilemma of control, also known as the Collingridge dilemma.

To prevent the governance of artificial intelligence from falling into the Collingridge dilemma, early assessment and advance planning are necessary. Although opinions differ on where artificial intelligence is headed, imposing ethical constraints on it has become a basic consensus.

On June 6, 2016, the Ethics Committee of the Japanese Society for Artificial Intelligence discussed a draft code of ethics for researchers. The draft acknowledges the possibility of AI harming human society and calls for measures to eliminate threats and prevent malicious use. It notes that artificial intelligence is pervasive and potentially autonomous, and may affect humans in ways researchers have not foreseen, to the detriment of human society and the public interest. The draft warns that "things created by humans must not destroy humanity's own happiness." It further states that threats to human safety from AI should be eliminated, that society should be alerted to the technology's potential dangers, and that provisions should be formulated to prevent malicious use.

In September 2016, the British Standards Institution (BSI) released the industry's first public standard on the ethical design of robots, the "Guide to the Ethical Design and Application of Robots and Robotic Systems," which aims to ensure that intelligent robots can be integrated into the ethical norms of human society.

In September 2016, five major technology giants, including Amazon, Google, Facebook, Microsoft, and IBM, announced the establishment of a non-profit organization, the Partnership on AI (full name: Partnership on AI to Benefit People and Society). Its goals include: first, setting an example for artificial intelligence research in areas including ethics, fairness, and inclusiveness; transparency, privacy, and interoperability; cooperation between people and AI systems; and the reliability and robustness of the technology. Second, promoting public understanding of artificial intelligence from a professional perspective and regularly sharing progress in the field. Third, providing researchers in artificial intelligence with an open platform for discussion and participation, making communication among them easier.

In November 2016, the Institute of Electrical and Electronics Engineers (IEEE) released the world's first draft guidance on the ethical design of artificial intelligence, "Ethically Aligned Design." The document is intended to help the technology industry build AI and autonomous systems that benefit humanity, and to counter the notion that ethics need not be a concern.

In January 2017, at a conference held in Asilomar, California, nearly a thousand experts in artificial intelligence and related fields jointly signed the well-known 23 Asilomar AI Principles, intended to guide future AI researchers, scientists, and legislators in ensuring that the technology remains safe, ethical, and beneficial.

In November 2017, to ensure that artificial intelligence develops ethically and with social awareness, the IEEE announced three new ethics standards for artificial intelligence: a standard for ethically driven nudging by robotic, intelligent, and autonomous systems; a standard for fail-safe design of autonomous and semi-autonomous systems; and a standard for well-being metrics for ethical artificial intelligence and autonomous systems.

On April 25, 2018, the European Commission released the communication "Artificial Intelligence for Europe," proposing the EU's path for artificial intelligence. The document states that the EU must strive for international competitiveness in the field, that all EU member states should keep pace with this digital transformation, and that EU values should form the basis for new technologies. Building on European values, the European Commission proposes a three-pronged approach to the development of AI: increasing public and private investment, preparing for the socio-economic changes brought about by AI, and establishing an appropriate ethical and legal framework.

In April 2018, the UK House of Lords Select Committee on Artificial Intelligence issued a report stating that ethics must be placed at the core of the development and application of artificial intelligence to ensure that the technology benefits humanity. The report proposes a cross-sector "AI code" covering five principles: artificial intelligence should serve the common good of humanity; artificial intelligence should operate on principles of intelligibility and fairness; artificial intelligence should not be used to diminish the data rights or privacy of individuals, families, or communities; all citizens should have the right to an education that enables them to adapt mentally, emotionally, and economically to the development of artificial intelligence; and artificial intelligence should never be granted any autonomous power to hurt, destroy, or deceive human beings.

In April 2019, the European Commission released the final version of its "Ethics Guidelines for Trustworthy AI," proposing a framework for achieving trustworthy artificial intelligence across its full life cycle. According to the official explanation, "trustworthy AI" has two necessary components: first, it should respect fundamental rights, applicable regulations, and core principles and values; second, it should be technically safe and reliable, avoiding unintentional harm caused by technical deficiencies.

How should China make efforts to improve the ethics of artificial intelligence?

In recent years, China's artificial intelligence industry has developed rapidly and attracted worldwide attention. However, discussion of artificial intelligence ethics in China has yet to gain real momentum, and there is a clear gap with Europe and the United States, and even with Japan and South Korea. In the long run, the absence of ethical rules for artificial intelligence will inevitably erode the competitiveness of Chinese AI products and China's voice in setting AI standards. It is urgent to build ethical norms for artificial intelligence that are grounded in global values and suited to Chinese conditions. The modernization and intelligent transformation of Chinese society requires not only the impetus of advanced technologies such as artificial intelligence but also the influence of a modern ethical spirit.

First, attach great strategic importance to the ethical supervision of cutting-edge technologies. Like artificial intelligence, frontier technologies such as robotics, gene editing, big data, 3D printing, and nanotechnology can trigger ethical and moral crises, and the issues involved, including data abuse, personal privacy, and discrimination, are similar. If these crises cannot be anticipated and effectively resolved, they will seriously hinder the development of these technologies. We should therefore treat the ethical supervision of cutting-edge technologies as a strategic priority, coordinate the resources of relevant departments, and actively explore ethical principles suited to their development.

Second, encourage people from all walks of life to participate in formulating ethical rules for artificial intelligence. To avoid ethical risks in AI development, academia and industry, together with disciplines such as ethics, philosophy, and law, should participate in drafting the principles and work closely together. Existing research on AI ethics often lacks the participation of senior R&D personnel from industry, which can make ethical rules difficult to implement. Only with broad participation from all sectors can ethical rules for artificial intelligence be universal and operable, and deliver greater social value.

Third, actively pursue international exchange and build global consensus. As global cultures and ethical values become increasingly diverse, the ethical standards and values of artificial intelligence need international consensus. At present, research on AI ethics is dominated by the United States and Europe, with few voices from developing countries and Eastern cultures. China should therefore actively engage internationally, help build global consensus, strengthen its voice in the formulation of AI standards, and seize opportunities for future development.

Author | Wei Yinggong Xueyuan

Introduction to the Institute

The International Institute of Technology and Economics (IITE) was established in November 1985. It is a non-profit research institution affiliated with the Development Research Center of the State Council. Its main functions are to study major policy, strategic, and forward-looking issues in China's economic, technological, and social development, to track and analyze world trends in technology and the economy, and to provide decision-making consulting services to the central government and relevant ministries and commissions. "Global Technology Map" is the official WeChat account of the institute, dedicated to delivering cutting-edge technology information and insights on technological innovation to the public.

Address: Building A, No. 20, Xiaonanzhuang, Haidian District, Beijing

Tel: /6558
