The United States Releases Military Ethical Principles, Opening The Floodgates For AI Militarization | Beijing News Column
On November 1, the Defense Innovation Board under the U.S. Department of Defense released the final version of its ethical principles for the military use of artificial intelligence. The document had in fact been in the works for several years: in July 2018, the board put forward a draft version and gathered opinions from hundreds of industry insiders through public hearings. The final version was approved unanimously in the board's vote.
The Defense Innovation Board is an agency established by the U.S. Department of Defense in 2016. Its main purpose is to bring together leaders of the American science and technology industry to advise on major strategic decisions in U.S. defense technology. Because nearly all of its members are prominent figures in American industry, the documents it issues, though not mandatory or legally binding, carry wide influence in American society.
The ethical principles fit militarized artificial intelligence with a "fuse"
In recent years, the US military has significantly accelerated its adoption of artificial intelligence, seeking to use this disruptive new technology to fundamentally transform how the US military fights. As early as February 2019, the Department of Defense released the US military's first artificial intelligence strategy document, calling for artificial intelligence to be used to advance US security and prosperity and for AI-enabled capabilities to be introduced into specific military tasks.
The US military's rapid introduction of artificial intelligence has also raised widespread concerns both inside and outside the United States. Some observers worry that the pace of adoption may create new security risks. The Defense Innovation Board's ethical principles are meant to address these potential risks and to clarify the limits on the US military's use and development of artificial intelligence.
The Defense Innovation Board proposed five core principles: responsibility, fairness, traceability, reliability, and controllability. Of these, fairness and controllability are the more controversial.
Fairness, as the board defines it, means that when developing and deploying artificial intelligence equipment, the U.S. military should avoid unintended bias that could inadvertently harm people. This standard is very difficult to meet. By its nature, artificial intelligence may "learn" in the course of deployment, and some of its "self-protective" behaviors may run counter to its designers' original intent. In effect, the standard implies a one-vote veto over any "accident": a single unintended harm could disqualify an AI system the US military has adopted.
Controllability attempts to build a "fuse" into artificial intelligence technology. The board recommends that the U.S. military equip its artificial intelligence systems with "automatic or manual" shutdown switches, so that combat and command personnel can immediately shut a system down once it deviates from the intent of its users and designers.
This principle means that artificial intelligence cannot fully replace the judgment of US military combatants. Personnel must monitor the operation of AI equipment in real time and may even have to make key decisions by remote control. Under this premise, it will be difficult for the US military to apply strong artificial intelligence in actual combat.
The ethical principles lift the biggest taboo on US AI militarization
The ethical principles proposed by the board appear strict, but they in fact lift the biggest taboo on the US military's application of artificial intelligence.
The five principles neither prohibit the US military from applying artificial intelligence to lethal weapons nor restrict the targets that AI-based weapons may attack. This means the United States has entirely abandoned the ideal of "keeping artificial intelligence out of the military" and is bringing the technology onto the battlefield. It also supplies the US military with a favorable argument and a legal basis for future applications of artificial intelligence, helping American society accept an AI-enabled US military.
On the strength of this report, the United States is likely to become the first country in the world to issue ethical standards for the militarization of artificial intelligence.
Globally, ethical standards are the leading edge of artificial intelligence governance. That artificial intelligence is discussed in moral and ethical terms shows that the development and use of this technology may profoundly affect human society, perhaps even changing its basic form. The moral and ethical norms that human society currently relies on cannot yet constrain the developers and users of artificial intelligence, so new ethical standards must be proposed as the technology develops and changes.
Ethical standards are also the most widely discussed topic in international artificial intelligence governance. American technology giants such as Google, national governments, and non-governmental organizations in the technology industry have all put forward their own AI ethics frameworks. The contest and cooperation among these parties on this issue has only just begun.
In this international battle over the rules for artificial intelligence, the United States has opened a new front: the militarization of artificial intelligence. The standards it proposes differ in places from the principles advocated in the United Nations discussions on lethal autonomous weapons systems, and also from the basic positions of the EU, China, and others opposing the "militarization of artificial intelligence".
In the next stage, the United States will try to turn its ethical standards into the foundational principles of international rules in this field and to secure the "final right of interpretation" over the militarization of artificial intelligence.
□ Li Zheng (Institute of American Studies, China Institutes of Contemporary International Relations)