How to Prevent AI from Doing Evil? Experts Interpret the Guidelines for Preventing Ethical Security Risks of Artificial Intelligence
Big data "price discrimination" against loyal customers, delivery riders trapped in the algorithm's system, a 94-year-old man carried into a bank to pass facial recognition... With the rapid spread of artificial intelligence applications, a series of ethical and security issues has surfaced.
On January 5, the National Information Security Standardization Technical Committee (hereinafter referred to as the "Information Security Standards Committee") officially issued the "Practice Guidelines for Cybersecurity Standards - Guidelines for Preventing Ethical Security Risks of Artificial Intelligence" (hereinafter referred to as the "Guidelines").
Several experts who took part in drafting the document told Nandu reporters that the Guidelines were issued to help all parties in society fully understand the relevant risks and to ensure the safe and controllable development of artificial intelligence.
Setting out basic requirements for risk prevention
The Guidelines classify the ethical security risks of artificial intelligence into five categories: loss-of-control risks, social risks, infringement risks, discrimination risks, and liability risks.
"Now everyone is discussing the risks of artificial intelligence, but what are the risks? The answer to this question is actually not that clear. Therefore, the guidance summarizes five types of risks, basically covering the most important matters about artificial intelligence at present." Participation Jia Kai, associate professor at the School of Public Administration of the University of Electronic Science and Technology of China, told Nandu reporters.
According to the Guidelines, loss-of-control risk refers to the risk that the behavior and impact of artificial intelligence exceed the scope that the relevant organizations or individuals have preset, can understand, or can control, with negative effects on social values and other aspects.
Social risk refers to the risk that unreasonable use of artificial intelligence, including abuse and misuse, negatively affects social values and other aspects.
Infringement risk refers to the risk that artificial intelligence infringes upon or negatively affects people's basic rights, such as the rights to person, privacy, and property.
Discrimination risk refers to the risk that artificial intelligence, owing to subjective or objective bias against specific groups of people, undermines fairness and justice, infringes rights, or causes other negative impacts.
Liability risk refers to the risk that misconduct by the parties involved in artificial intelligence activities, or an unclear allocation of responsibility among them, negatively affects social trust, social values, and other aspects.
In view of the above risks, the Guidelines put forward six basic requirements for risk prevention, including conforming to China's social values, abiding by laws and regulations, and aiming to promote sustainable development.
The basic requirements for risk prevention put forward by the Guidelines.
It is worth noting that the basic requirements emphasize that "the basic rights of individuals, including the rights to person, privacy, and property, should be respected and protected, and special attention should be paid to the protection of vulnerable groups."
At present, it is not uncommon for vulnerable groups to suffer "algorithmic discrimination." According to media reports, some US technology companies have offered intelligent screening services that used "personality tests" to weed out job applicants with a history of mental illness. Amazon likewise used an artificial intelligence system to screen resumes, but biases in the training data led the system to favor male candidates over female ones, and the project team was eventually disbanded.
"The negative impact of technology on people is particularly easy to amplify on vulnerable groups. Vulnerable groups often lack corresponding feedback channels, and their interests are even less likely to be discovered after being harmed." The Renmin University of China Law, which participated in the guidance preparation work, was held. Guo Rui, associate professor at the college and researcher at the Institute of Future Rule of Law, told Nandu reporters that the guidance put forward this basic requirement is to attract the attention of all parties to the vulnerable groups and pay attention to their rights protection.
Emphasizing a legal basis and users' right to refuse
In addition to clarifying the basic requirements for risk prevention, the Guidelines divide artificial intelligence activities into four categories: "research and development," "design and manufacturing," "deployment and application," and "use," and put forward targeted risk-prevention suggestions for each type of activity.
Nandu reporters found that these suggestions mainly concern the explainability, reliability, and transparency of artificial intelligence, as well as malicious applications, emergency responses to incidents, and risk mitigation.
"Unlike goods such as cars and furniture, artificial intelligence systems are more complex and often involve full-chain research, products and services. Therefore, the guidance divides artificial intelligence activities into four categories, hoping to include all stakeholders." Jia Kai told Nandu reporters that the guidance attempts to provide operational and implementable guidance specifications for various activity subjects, and therefore special consideration is given to the uniqueness of different subjects.
He took emergency response to incidents as an example. For researchers and developers, the Guidelines require recording key decisions and establishing a traceability mechanism, and carrying out necessary early warning, communication, and response; this leaves a degree of freedom for scientific research and balances risk prevention with innovative development. For designers, manufacturers, and deployers, the Guidelines are stricter, including requirements to set up manual emergency intervention mechanisms and to ensure timely responses.
Overall, the Guidelines place the most demands on deployment and application. Jia Kai believes this is because deployment scenarios are more diverse and more dispersed and face users directly, and because organizations and individuals of different sizes and natures differ in their capacity for risk prevention, so the Guidelines make more detailed provisions for this category.
Nandu reporters found that the Guidelines respond to many issues about which the public has voiced strong complaints. Take forced use, a frequent source of controversy: the Guidelines propose that deployers provide users with mechanisms to refuse or stop using artificial intelligence systems, products, or services, and that, after a user refuses or stops, deployers should provide non-artificial-intelligence alternatives as far as possible.
The Guidelines also recommend that deployers provide users with mechanisms for complaints, questions, and feedback, together with response mechanisms, including manual service, for handling them and providing necessary compensation.
It is worth noting that, compared with the draft for comment published in November, the Guidelines add a new provision on deployment and application: when artificial intelligence is used as a direct basis for decision-making and affects individual rights, there should be a clear, explicit, and verifiable basis in laws, regulations, and other sources.
Guo Rui said that many artificial intelligence applications significantly affect individual rights, yet some current applications lack a clear legal basis. This provision was added in the hope of reminding organizations and individuals of all kinds to follow the principle that "there must be laws to go by" and to prevent infringement of individual rights.
It should be pointed out that the Guidelines are not a national standard and carry no binding force. The "Practice Guidelines for Cybersecurity Standards" series to which they belong consists of standard-related technical documents issued by the Information Security Standards Committee, intended to publicize cybersecurity standards and knowledge and to provide standardized practical guidance. This means the Guidelines serve mainly to point the direction and to offer a reference for value trade-offs.
Guo Rui believes that although the Guidelines do not have the binding force of law, they can play a role the law finds hard to reach. "The Guidelines can help researchers and developers, designers and manufacturers, and deployers recognize the existence of risks as early as possible, and respond to and manage those risks in a timely manner, thereby reducing them," he pointed out.
Jia Kai believes that the governance of artificial intelligence requires a system, and the Guidelines can serve as part of that system. Before laws and standards mature, the primary role of the Guidelines is to draw the attention of all sectors of society and to clarify what can be done and what should be done. "Although the Guidelines are not mandatory, the safe development of artificial intelligence is not just a compliance issue for businesses; it also bears on their long-term development. So I think responsible companies will be willing to take part."
Text/Feng Qunxing, a researcher at the Nandu Artificial Intelligence Ethics Research Group