How To Prevent AI From Doing Evil? Experts Interpret Artificial Intelligence Ethical Safety Risk Prevention Guidelines
"Big data" price discrimination against loyal customers, food delivery riders trapped in the system, a 94-year-old carried into a bank to complete facial recognition... With the rapid popularization of artificial intelligence applications, a series of ethical and security issues have surfaced.
On January 5, the National Information Security Standardization Technical Committee (hereinafter the "Information Security Standardization Committee") officially released the "Cybersecurity Standard Practice Guidelines – Artificial Intelligence Ethical Security Risk Prevention Guidelines" (hereinafter the "guidelines").
Several experts involved in drafting the guidelines told Nandu reporters that the document was issued precisely to help all parts of society fully understand the relevant risks and to ensure that artificial intelligence develops in a safe and controllable way.
Setting out basic requirements for risk prevention
The guidelines group the ethical and security risks of artificial intelligence into five categories: out-of-control risk, social risk, infringement risk, discriminatory risk and liability risk.
"Now everyone is discussing the risks of artificial intelligence, but what risks are there? The answer to this question is actually not that clear. Therefore, the guidelines summarize five types of risks, which basically cover the most important matters currently related to artificial intelligence." Jia Kai, associate professor at the School of Public Administration of the University of Electronic Science and Technology of China, who participated in the preparation of the guidelines, told Nandu reporters.
As described in the guidelines, out-of-control risk refers to the risk that the behavior and impact of artificial intelligence go beyond what the relevant organizations or individuals have preset, understood and can control, negatively affecting social values and other areas.
Social risk refers to the risk that unreasonable use of artificial intelligence, including abuse and misuse, negatively affects social values and other areas.
Infringement risk refers to the risk that artificial intelligence infringes on or otherwise negatively affects people's basic rights, such as personal rights, privacy and property.
Discriminatory risk refers to the risk that artificial intelligence's subjective or objective bias against specific groups of people undermines fairness and justice, causing rights infringements or other negative effects.
Liability risk refers to the risk that improper behavior by the parties involved in artificial intelligence, or an unclear division of responsibilities among them, negatively affects social trust, social values and other areas.
In response to these risks, the guidelines put forward six basic requirements for risk prevention, including conforming to China's social values, complying with laws and regulations, and taking the promotion of sustainable development as a goal.

[Chart: the basic requirements for risk prevention proposed in the guidelines.]
It is worth noting that the basic requirements stress that "individuals' basic rights, including personal rights, privacy and property, should be respected and protected, with special attention paid to protecting vulnerable groups."
Cases of vulnerable groups suffering "algorithmic discrimination" are already common. According to media reports, a US technology company once offered an intelligent candidate-screening service that used "personality tests" to weed out people with mental illness; Amazon used an artificial intelligence system to screen résumés, but because of bias in the training data it favored male candidates over female ones, and the project team was eventually disbanded.
"The negative impact of technology on people is especially likely to be magnified on vulnerable groups. Vulnerable groups often lack corresponding feedback channels, and are less likely to be discovered when their interests are harmed." Guo Rui, an associate professor at the School of Law at Renmin University of China and a researcher at the Future Rule of Law Institute who participated in the preparation of the guidelines, told Nandu reporters that the guidelines put forward this basic requirement in the hope of attracting all parties to pay attention to vulnerable groups and pay attention to the protection of their rights and interests.
Emphasis on legal compliance and users’ right to choose
In addition to setting out basic requirements for risk prevention, the guidelines divide artificial intelligence activities into four categories, "research and development," "design and manufacturing," "deployment and application" and "use by users," and offer targeted risk-prevention recommendations for each type of activity.
Nandu reporters noted that these recommendations mainly concern the explainability, reliability and transparency of artificial intelligence, as well as malicious applications, emergency response to accidents and risk mitigation.
"Unlike commodities such as cars and furniture, artificial intelligence systems are more complex and often involve the entire chain of research, products and services. Therefore, the guidelines divide artificial intelligence activities into four categories, hoping to include all stakeholders." Jia Kai told Nandu reporters that the guidelines try to provide operational and implementable guidance and specifications for various types of activity entities, so they also take into account the uniqueness of different entities.
Taking emergency response to accidents as an example, he explained that for researchers and developers the guidelines require recording key decisions and establishing a review mechanism so that necessary early warning, communication and response can be carried out; this leaves a degree of freedom for scientific research and balances risk prevention against innovation. For designers, manufacturers and deployers, the requirements are stricter, including setting up manual emergency intervention mechanisms and ensuring timely response.
Overall, the guidelines place the most demands on deployers. Jia Kai believes this is because deployers are more diverse and more dispersed, and face users directly; organizations and individuals of different sizes and natures differ in their capacity to prevent risks, so the guidelines spell out more detailed provisions for them.
Nandu reporters found that the guidelines respond to many issues of strong public concern. Take forced use, which frequently triggers controversy: the guidelines propose that deployers provide users with a mechanism to refuse or stop using artificial intelligence-related systems, products or services, and that once users refuse or stop, non-artificial-intelligence alternatives be offered to them as far as possible.
The guidelines also recommend that deployers give users a channel for complaints, questions and feedback, together with a response mechanism, including manual service, for handling them and providing necessary compensation.
It is worth noting that, compared with the draft released for public comment in November, the guidelines add a new provision on deployment and application: when artificial intelligence is used as a direct basis for decision-making and affects individual rights, there should be a clear, unambiguous and verifiable basis in laws, regulations or other requirements.
Guo Rui said many artificial intelligence applications significantly affect individual rights, yet some current applications lack a clear legal basis. Adding this provision is meant to remind organizations and individuals to follow the principle of lawful compliance and to prevent infringement of individual rights.
It should be noted that the guidelines are not a national standard and are not mandatory. The "Cybersecurity Standard Practice Guidelines" series to which they belong consists of standards-related technical documents issued by the Information Security Standardization Committee, intended to publicize cybersecurity-related standards and knowledge and to provide guidance on standardized practice. This means the guidelines serve more as a directional reference and a guide to value choices.
Guo Rui believes that although the guidelines lack the binding force of legal provisions, they can play a role the law finds hard to play. "The guidelines can make researchers, developers, designers, manufacturers and deployers recognize the existence of risks as early as possible, respond in time and manage those risks, thereby reducing them," he pointed out.
Jia Kai believes that governing artificial intelligence requires a system, and the guidelines can be one part of that system. Before laws and standards mature, their primary role is to draw the attention of all parts of society and clarify what can and should be done. "Although the guidelines are not mandatory, the safe development of artificial intelligence is not only a matter of business compliance; it also concerns a company's long-term development. So I think responsible companies will be willing to take part."