AI Ethics

Wuzhen Observation丨How to Solve the Ethical Risks of Artificial Intelligence? An Urgent Need for a "Combination Punch" of Technology and Rules

Reported by 21st Century Business Herald reporters Guo Meiting and Zhu Weijing from Wuzhen

As early as 2016, AlphaGo defeated the professional nine-dan Go player Lee Sedol, and later beat the world champion Ke Jie, then ranked number one in the world. In recent years, machine learning has made considerable progress in autonomous driving and medical diagnosis, and even in writing, painting, and poetry.

A new round of technological and industrial transformation, represented by artificial intelligence, is reshaping the global innovation landscape and the global economic structure. It not only creates broader development opportunities for countries around the world, but also brings risks and challenges in areas such as safety and ethics.

On November 10, the final sub-forum of the 2022 World Internet Conference, "Artificial Intelligence and Digital Ethics," was held in Wuzhen, Zhejiang. Under the theme "Industrial Direction and Ethical Orientation of Artificial Intelligence," the forum focused on security risk management and the construction of digital ethics across AI technology, applications, and industrial development.

The governance of artificial intelligence is urgent. How can we promote the research and development of safe, trustworthy, controllable, reliable, and scalable AI technology? How can we advance the construction of an AI ethics system and reach international consensus? How can we ensure that AI always develops healthily in a direction that is fair, just, and beneficial to all humankind? The answers to these questions still depend on further exploration.

“Long-term worries” have become “near-term worries”

In 1942, before the birth of artificial intelligence, the American science fiction writer Isaac Asimov first proposed in his fiction that the development of robots could carry ethical risks, and laid out safeguards to avoid them, known as the "Three Laws of Robotics."

That warning presupposes machine intelligence that has surpassed humans. So far, however, compared with human intelligence, artificial intelligence still resembles the tortoise trailing far behind in "The Tortoise and the Hare."

Zhang Bo, honorary dean of the Institute for Artificial Intelligence at Tsinghua University and an academician of the Chinese Academy of Sciences, pointed out at the forum that artificial intelligence research is still in an exploratory stage, with slow progress and many hard-to-solve problems. Building superhuman machines is not easy, and whether general artificial intelligence can ever lead to superintelligence remains controversial.

However, the impact of artificial intelligence on ethics and traditional norms appeared much earlier, and the "long-term worries" have become "near-term worries." "At the beginning of this century, when deep learning based on big data emerged in artificial intelligence, people began to feel keenly that the ethical risks of artificial intelligence were right in front of them, and that governance was urgent," Zhang Bo said.

On the one hand, this stems from the inherent shortcomings of artificial intelligence technology at this stage. Ni Xingjun, chief technology officer of Ant Group, pointed out that the new generation of artificial intelligence is data-driven and relies on machine self-learning rather than on human-supplied information and knowledge. This can produce systems that are not robust, not explainable, and unreliable. In fields closely tied to social life, such as medical care, manufacturing, and finance, the consequences of incorrect predictions by machine learning models are often unacceptable.

On the other hand, these defects also create openings for the malicious abuse of artificial intelligence technology. For example, attackers can exploit an algorithm's vulnerabilities to make the AI system built on it fail, or even behave in destructive, contrary ways; deep learning can also be used to mass-produce realistic fake news, fake videos, and fake speech, disrupting social order and framing innocent people.

Zhang Bo believes that both the unintentional or mistaken misuse of artificial intelligence technology and its deliberate abuse need to be governed, but the two differ in nature. The former calls for strict scientific assessment and whole-process supervision of AI research, development, and use through corresponding guidelines, with remedial measures specified for when problems arise; the latter relies on legal constraints and public oversight, and carries a mandatory character.

"Fundamentally speaking, the research and development of artificial intelligence must be human-centered, starting from the ethical principles of fairness and justice to build responsible artificial intelligence. To this end, we need to establish an explainable and robust theory of artificial intelligence. Only on that basis can we develop safe, trustworthy, controllable, reliable, and scalable AI technology, and ultimately promote applications of artificial intelligence that are fair, just, and beneficial to all humankind. This is our idea for developing the third generation of artificial intelligence," Zhang Bo said. He advocates that people from different fields around the world take part in the research and governance of artificial intelligence and, through global cooperation, jointly develop a set of standards that serve the interests of all humankind.

The "combination boxing" of rules and techniques

Artificial intelligence governance relies on rules and technology working together as a "combination punch." On the rules side, in addition to laws and regulations, building systems of digital ethics and AI ethics is an important lever.

Jiang Bixin, deputy chairman of the Constitution and Law Committee of the National People's Congress, summarized the necessity of ethical construction as "five sources": the dual nature of digital technology and artificial intelligence; the borderless, transnational, specialized, and opaque character of digital networks; the limitations of legal norms; the indispensability of ethics and morality to human nature; and the dependence of network security and development on human morality.

In his view, the construction of an ethical system should start from basic human consensus and the pursuit of goodness, truth, and beauty; set the concepts right; involve all actors; cover all elements, such as algorithms and data; carry out full-chain, full-cycle standardized design; and be safeguarded with the full toolkit of industry standards, technical guidelines, policy guidance, and technical breakthroughs.

There are already relevant practices around the world. In November 2021, UNESCO released its Recommendation on the Ethics of Artificial Intelligence, which, from the standpoint of a United Nations organization, provides a foundational document of an international normative character for the governance of artificial intelligence.

Gong Ke, former president of the World Federation of Engineering Organizations and executive director of the China New Generation Artificial Intelligence Development Strategy Research Institute, pointed out that this is so far the only international normative document on artificial intelligence issued by the United Nations system. Grounded in international law and drafted with broad global participation, it embodies a very important global consensus and, above all, lays an ethical foundation for the governance of artificial intelligence as a whole.

Enterprise self-discipline also plays a vital role. This year, for example, Alibaba Group put forward six basic principles for technology ethics governance: a people-oriented value orientation, privacy protection, safety and reliability, inclusiveness and integrity, trustworthiness and controllability, and openness and co-governance.

On the technology side, Zhu Shiqiang, director of Zhijiang Laboratory and vice chairman of the China Artificial Intelligence Industry Alliance, proposed "using technical standards to frame the future direction." For the new generation of artificial intelligence, he suggested changing its opaque, black-box status quo through the dual drive of technology and knowledge; developing tools such as quantum-era cryptography and data privacy platforms to curb harmful applications; and breaking through key technologies for network security to build a fully dimensional, definable network with an evolvable infrastructure and flexibly definable capabilities.

In addition, Latif Ladid, chairman of the Global IPv6 Forum, raised concerns about unemployment and uneven wealth distribution caused by artificial intelligence. He recommended rethinking and improving the education system so that people can take on more challenging tasks and more innovative work.
