The United States Releases Military Ethics Principles, Opening The Floodgates To The Militarization Of AI | Beijing News Column
According to the committee's report, the United States is likely to become the first country in the world to issue ethical standards for the militarization of artificial intelligence.
On November 1, the Defense Innovation Board, an advisory body to the U.S. Department of Defense, released the final version of its "Ethical Standards for Artificial Intelligence." The standards had been in the works for several years: in July 2018 the board circulated a draft and gathered feedback from hundreds of industry insiders through public hearings. The final version passed a board vote with unanimous support.
The Defense Innovation Board is a new body established by the U.S. Department of Defense in 2016. Its main purpose is to bring together leading figures in the American science and technology community to advise on major strategic decisions in U.S. defense science and technology. Because its members are almost all prominent names in American industry, the documents it issues, though not binding, carry broad influence in American society.
Ethical principles provide a "fuse" for the militarization of artificial intelligence
In recent years, the U.S. military has significantly accelerated its adoption of artificial intelligence, seeking to use this disruptive new technology to fundamentally change its combat model. In February 2019, the Department of Defense issued the U.S. military's first artificial intelligence strategy document, calling for artificial intelligence to be used to promote U.S. security and prosperity and for AI-enabled capabilities to be introduced into specific military tasks.
The U.S. military's rapid adoption of artificial intelligence has also caused widespread concern both inside and outside the United States. Some observers worry that the military is moving too fast and may create new security risks. The Defense Innovation Board's ethical standards are intended to address these potential risks and clarify the limits on the U.S. military's development and use of artificial intelligence.
The Defense Innovation Board proposed five core principles: responsible, fair, traceable, reliable, and controllable. Of these, fairness and controllability are the most controversial.
The fairness the board proposes means that the U.S. military must avoid unintended bias when developing and deploying artificial intelligence systems, bias that could inadvertently harm personnel. This standard is very difficult to meet: by its nature, an artificial intelligence system may "self-learn" during deployment, and some of its "self-protective" behaviors may violate the designer's original intent. In effect, the standard means a single "unexpected incident" could be enough to disqualify an artificial intelligence system the U.S. military uses.
The concept of controllability attempts to add a "fuse" to artificial intelligence technology. The board recommends that the U.S. military build "automatic or manual" shutdown switches into artificial intelligence systems and equipment, so that once a system behaves contrary to the intent of its user and designer, combat and command personnel can shut it down immediately.
This concept means that artificial intelligence cannot fully replace the judgment of U.S. combatants. Combatants must monitor artificial intelligence equipment in real time and even retain remote control over key decisions. Under this premise, it will be difficult for the U.S. military to apply strong artificial intelligence in military practice.
Ethical standards lift the biggest taboo on US AI militarization
The ethical standards proposed by the board may seem strict, but they lift the biggest taboo on the U.S. military's use of artificial intelligence.
The five ethical standards neither prohibit the U.S. military from applying artificial intelligence to lethal weapons nor restrict the targets such weapons may attack. This means the United States has completely abandoned the ideal of "demilitarized artificial intelligence" and fully introduced the technology into the combat field. It also gives the U.S. military a rhetorical and legal basis for future applications of artificial intelligence, helping American society accept an AI-enabled U.S. military.
Based on the committee's report, the United States is likely to become the first country in the world to introduce ethical standards for the militarization of artificial intelligence.
From a global perspective, ethical standards are the leading edge of artificial intelligence governance. Examining artificial intelligence through a moral and ethical lens reflects the view that developing and using this technology may profoundly affect human society and may even change its basic form. The moral and ethical norms human society currently relies on do not sufficiently constrain the developers and users of artificial intelligence; new ethical standards must be proposed as the technology develops and changes.
Ethical standards are also the most widely discussed topic in international artificial intelligence governance. American technology giants such as Google, national governments, and non-governmental organizations in the technology community have all proposed their own artificial intelligence ethics. The contest and cooperation among these parties on this issue has only just begun.
The United States has opened a new front in this contest over international rules for artificial intelligence: its militarization. The standards the United States proposes differ somewhat from the principles the United Nations advocates for lethal autonomous weapons systems, and also from the basic stance of the European Union, China, and others against the "militarization of artificial intelligence."
In the next stage, the United States will try to turn its own ethical standards into the fundamental principles of international rules in this field and secure the "final right of interpretation" over the militarization of artificial intelligence.
Li Zheng (Institute of American Studies, China Institutes of Contemporary International Relations)