Kong Jun: Artificial Intelligence Urgently Needs To Strengthen International Ethical Norms
Currently, a new round of technological revolution and industrial transformation is booming, and artificial intelligence has become a keyword for emerging technologies. In the military field, intelligence is seen as another trend in equipment development after mechanization and informatization. Artificial intelligence technologies such as facial recognition have also gradually emerged in regional armed conflicts.
At the same time, more and more people are concerned about the moral and ethical risks arising from military applications of artificial intelligence. How to regulate these applications at the ethical level, and how to ensure that artificial intelligence technology does not deviate from the principle of "intelligence for good", have become important frontier issues facing all countries.
The ethical questions raised by military applications of artificial intelligence are long-standing; at their core is how to understand and manage the relationship between humans and machines. In recent years, autonomous weapons systems such as robot dogs fitted with automatic rifles and robot sentries on border patrol have come into public view, and some scholars have even begun to explore military applications of the metaverse. Yet the relevant laws and ethical rules remain far from complete. How can this new class of weapons be prevented from killing civilians indiscriminately? Should an autonomous weapons system require human authorization before opening fire? Who bears responsibility for the consequences of its use? As the United States and other Western countries accelerate the development of drones, deepfakes, social robots, and related technologies, the international community has grown increasingly concerned about the ethical risks of military artificial intelligence.
In fact, although the current international legal system places no hard restrictions on artificial intelligence technology itself, its application in the military and other fields is not unconstrained. Under international humanitarian law, before fielding a new weapon a state must assess whether it complies with international law and must observe the principles of necessity, distinction, and proportionality. Weapon systems incorporating artificial intelligence modules must meet the same requirements, but doing so in practice is difficult. At the current level of technology, the "black box" nature of artificial intelligence often makes its outputs hard to interpret, so that when an accident occurs, assigning accountability becomes a dilemma.
As the intelligence of weapons improves dramatically and human-machine fusion technologies such as brain-computer interfaces and powered exoskeletons continue to emerge, the relationship between humans and machines faces fundamental change. In the future, machines may even replace humans as the actual decision-makers and executors of military operations, triggering a deeper ethical crisis. Some even predict that if military applications of artificial intelligence are not restrained in time, the killer robots of the science-fiction film "Terminator" could become reality.
On the question of artificial intelligence governance, China is taking action. As China joins the ranks of innovative countries, the international community broadly expects it to contribute wisdom and solutions to the construction of a global artificial intelligence governance system, to promote exchange, cooperation, and the sharing of experience, and to help artificial intelligence technology better empower global sustainable development.
China attaches great importance to the governance of artificial intelligence and has consistently adhered to the concept of "people-oriented, intelligence for good". It has issued guidance documents such as the Governance Principles for New Generation Artificial Intelligence and the Ethical Norms for New Generation Artificial Intelligence. At the beginning of this year, the Chinese government issued the Opinions on Strengthening the Governance of Science and Technology Ethics, which set out five basic principles: enhancing human well-being, respecting the right to life, upholding fairness and justice, reasonably controlling risks, and maintaining openness and transparency. The document emphasized strengthening research on ethics legislation in cutting-edge fields such as artificial intelligence and, in due course, elevating important ethical norms into national laws and regulations. At the international level, China has organized experts to participate actively in the drafting of important documents such as the WHO's guidance on the ethics and governance of artificial intelligence and UNESCO's Recommendation on the Ethics of Artificial Intelligence. In the discussions of the United Nations Group of Governmental Experts on lethal autonomous weapons systems, China has consistently called on all parties to attend to ethical governance and firmly opposes the development of autonomous weapons systems that are entirely beyond human control. China was also the first permanent member of the Security Council to propose regulating the military application of artificial intelligence within the framework of the United Nations Convention on Certain Conventional Weapons, a move that has been widely praised.
Artificial intelligence technology represents the future of humanity, but it is also a "double-edged sword", and its international governance and guidance urgently need strengthening. China has always maintained that only by adhering to true multilateralism and promoting the democratization of international relations can global governance move in a fairer and more reasonable direction; the same holds for the international governance of artificial intelligence. Improving international ethical norms for artificial intelligence is imperative. All parties should proceed from the strategic height of building a community with a shared future for mankind and from the common values of peace, development, fairness, justice, democracy, and freedom, and promote the establishment of an ethical governance framework for artificial intelligence that serves the common interests of all humanity. History will prove that only by holding to the ethical bottom line can we effectively avoid security risks and ensure that artificial intelligence always benefits humanity. (The author is an observer of international issues)