AI Ethics

Kong Jun: Artificial Intelligence Urgently Needs To Strengthen International Ethical Norms

At present, a new round of scientific and technological revolution and industrial transformation is in full swing, and artificial intelligence has become a keyword among emerging technologies. In the military field, intelligentization is seen as the next trend in equipment development after mechanization and informatization, and AI technologies such as facial recognition have already appeared in regional armed conflicts.

Meanwhile, more and more people are concerned about the moral and ethical risks arising from the military use of artificial intelligence. How to regulate the military application of artificial intelligence at the ethical level and ensure that AI technology does not deviate from the principle of "AI for good" has become an important frontier issue facing all countries.

The ethical issues raised by the military application of artificial intelligence have a long history; at their core is how to view and handle the relationship between humans and machines. In recent years, autonomous weapon systems such as robot dogs fitted with automatic rifles and automated sentry posts used for border patrol have gradually entered public view. Some scholars have even begun to discuss military applications of the metaverse, yet the relevant laws and ethical rules are far from complete. How can such new weapons be prevented from killing civilians indiscriminately? Does an autonomous weapon system require human authorization before firing? Who bears responsibility for the consequences of using such a weapon? As the United States and other Western countries accelerate the development of technologies such as drones, deepfakes, and social robots, the international community's concerns about the ethical risks of military applications of artificial intelligence are growing by the day.

In fact, although the current international legal system imposes no binding constraints specific to artificial intelligence technology, the application of AI in the military and other fields is not unlimited. Under international humanitarian law, countries must assess whether a new weapon complies with international law before using it and whether it follows the principles of necessity, distinction, and proportionality. Weapon systems equipped with artificial intelligence modules must also meet these requirements, but they face many difficulties in practice. At the current technical level, the "black box" nature of artificial intelligence means its outputs are often difficult to explain; once an accident occurs, it is easy to fall into a dilemma over accountability.

As the intelligence level of weapons improves markedly and human-machine fusion technologies such as brain-computer interfaces and mechanical exoskeletons continue to emerge, the relationship between humans and machines is facing fundamental change. In the future, machines may even replace humans as the actual decision makers and executors of military operations, triggering a deeper ethical crisis. Some even predict that if the military applications of artificial intelligence are not restrained in time, the killer robots of the science fiction film "Terminator" may become a reality.

China is taking action on the problem of artificial intelligence governance. As China joins the ranks of innovative countries, the international community generally expects it to contribute wisdom and solutions to the construction of a global governance system for artificial intelligence, promote exchanges, cooperation, and experience sharing, and help artificial intelligence technology better empower global sustainable development.

China attaches great importance to artificial intelligence governance, adheres to the concept of "people-centered, AI for good", and has released guiding documents such as the "Governance Principles for the New Generation of Artificial Intelligence" and the "Ethical Norms for the New Generation of Artificial Intelligence". At the beginning of this year, the Chinese government issued the "Opinions on Strengthening the Governance of Science and Technology Ethics", which set out five basic principles: advancing human well-being, respecting the right to life, upholding fairness and justice, reasonably controlling risks, and maintaining openness and transparency. It also emphasized strengthening research on ethics legislation in cutting-edge fields such as artificial intelligence and promptly elevating important ethical norms into national laws and regulations. At the international level, China has organized experts to participate actively in the drafting of important documents such as the WHO's guidance on the ethics and governance of artificial intelligence and UNESCO's "Recommendation on the Ethics of Artificial Intelligence". In the United Nations discussions on lethal autonomous weapons systems, China has consistently called on all parties to pay attention to ethical governance and firmly opposes the development of autonomous weapons systems that are completely beyond human control. China was also the first permanent member of the Security Council to put forward a proposal on regulating the military application of artificial intelligence under the framework of the United Nations Convention on Certain Conventional Weapons, a move that has been widely and positively received.

Artificial intelligence technology represents the future of mankind, but it is also a "double-edged sword" that urgently needs stronger international governance and guidance. China has always advocated that only by adhering to true multilateralism and promoting the democratization of international relations can global governance develop in a more just and reasonable direction. The same holds for international AI governance, and improving international ethical norms for artificial intelligence is imperative. All parties should approach the issue from the strategic height of building a community with a shared future for mankind and, proceeding from the common values of peace, development, fairness, justice, democracy, and freedom, work toward an ethical governance plan for artificial intelligence that serves the common interests of all mankind. History will eventually prove that only by holding the ethical bottom line can we effectively avoid security risks and ensure that artificial intelligence technology always benefits mankind. (The author is an observer of international issues)
