The U.S. Military Launches the Ethical Principles of Artificial Intelligence

According to the US "Intelligence and Reconnaissance" website on May 4, the Pentagon has officially announced the five major ethical principles of artificial intelligence, namely responsible, fair, traceable, reliable and controllable. Both combat and non-combat artificial intelligence systems must comply with the above principles, otherwise the US Department of Defense will not deploy them.

"Responsible" means that developers and users are responsible for the development, deployment, use and results of artificial intelligence systems. According to the official guidance documents, during the development stage, developers need to determine reasonable activity boundaries for artificial intelligence systems. Taking the autonomous driving system as an example, the upper and lower boundaries of its vigorous movement must be determined to avoid accidents during use. During the deployment stage, users should establish a clear command chain for the artificial intelligence system and clarify the identity of the operator and commander to avoid chaos.

"Fair" means that the Pentagon should minimize the impact of artificial bias on artificial intelligence systems. As we all know, the process of machine learning is the analysis of manual input data. If the inputer has a bias, the input data will inevitably be incomplete, and the artificial intelligence system will lose a fair learning environment. For example, if the facial recognition system wants to improve the recognition rate, it needs to compare and judge a large number of facial images. If the input facial images are limited to a certain gender or a certain race, it will inevitably lead to a decrease in the accuracy of the facial recognition system.

"Tracable" refers to the transparency of an artificial intelligence system during its use cycle, that is, accurate records must be left in every link from development, deployment to use, such as R&D methods, data sources, design programs, usage archives, etc. Technical experts or users can study this. Once a failure occurs, they can find the source from the R&D database or document, and eliminate it in a timely and quickly manner to prevent the artificial intelligence system from getting out of control.

"Reliable", the Pentagon defines "safe, stable and effective". Specifically, the reliability of an artificial intelligence system is based on "traceability", that is, when the system experiences instability or efficiency decreases, technicians should backtrack the system database and sensor records, find out the differences from the normal state and process it. For example, when there are signs of insecurity or instability in the autonomous driving system, technicians must immediately review the activity boundaries set by the developer and compare them with the actual boundaries to correct the changing factors in a timely manner. Under normal circumstances, a reliable AI system will communicate with the operator or the rear database if it is impossible to make a judgment to determine the final result.

"Controllable" refers to the ability of artificial intelligence systems to detect and avoid accidents. The Pentagon believes that controllable artificial intelligence systems should have sufficient anti-interference capabilities and stop working automatically or manually in the event of accidents or other malicious behaviors against humans. This capability must be embedded in the artificial intelligence system during the design stage. For example, an autonomous vehicle does not require a steering wheel, but the designer will still install the steering wheel to ensure that the occupant can switch to manual driving in the event of an accident.