AI Ethics

Prevent Facial Recognition From Being Abused! Notes On The Draft Guidelines For Artificial Intelligence Ethics: Users Should Be Given A Mechanism To Opt Out

Facial recognition has been controversial because of its deployment in residential communities, zoos, shopping malls and other settings. The latest draft guidelines on artificial intelligence ethics, now open for public comment, appear to offer an answer.

According to the official website of the National Information Security Standardization Technical Committee, on November 9 the committee's secretariat released the "Practice Guide to Cybersecurity Standards - Guidelines for Artificial Intelligence Ethics (Draft for Comment)" for public comment.

Addressing the ethical issues artificial intelligence may raise, the draft flags security risks and sets out norms for activities such as AI research and development, design and manufacturing, and deployment and application. Under the draft, deployers should provide users with a mechanism to refuse or stop using artificial intelligence and, as far as possible, offer non-AI alternatives.

Regarding AI-related activities, the draft identifies five categories of risk: loss of control, social harm, infringement of rights, discrimination, and unclear liability.

Specifically, loss-of-control risk is the risk that the behavior and impact of an AI system exceed the scope preset, understood and controlled by its researchers, designers and deployers, with negative consequences for social values. Social risk is the risk that unreasonable use of AI, including abuse and misuse, harms social values and causes systemic social problems. Infringement risk is the risk that AI harms or negatively affects basic human rights, including rights of the person, privacy and property. Discrimination risk is the risk that AI produces subjective or objective bias against particular groups of people, infringing their rights or causing other negative consequences. Liability risk is the risk that the boundaries of responsibility among the parties involved in AI are unclear or unreasonable, leading to misconduct by those parties and damaging social trust and social values.

Introducing its ethical norms, the draft states that parties carrying out AI-related activities should not only comply with laws and regulations and commit to safe and controllable AI, but should also respect and protect individuals' basic rights, including rights of the person, privacy and property, paying special attention to the protection of vulnerable groups. Vulnerable groups here refers to those at a disadvantage in living conditions, employment, channels for voicing concerns, or the ability to defend their legitimate rights and interests. Parties should also recognize the ethical security risks of AI, conduct the necessary risk analysis, and keep AI-related activities within a reasonable scope.

For researchers and developers, the draft says they should avoid application scenarios that damage people's basic rights, including rights of the person, privacy and property, and should reduce the possibility of AI being maliciously exploited. Research and development of autonomous AI capable of self-replication or self-improvement should be carried out cautiously, with assessment of possible loss-of-control risks. In addition, the interpretability and controllability of AI should be continuously improved; key R&D decisions should be recorded and a traceability mechanism established; and matters involving AI ethical security risks should be communicated and responded to as necessary.

For designers and manufacturers, the draft states that AI systems, products or services that harm the public interest or personal rights should not be designed or manufactured. The functions, limitations, security risks and possible impacts of such systems, products or services should be explained to deployers in a timely, accurate, complete, clear and unambiguous manner. Emergency response mechanisms for accidents should be built into systems, products or services, along with mechanisms for tracing accident information, such as the "black box" used to trace accident data in autonomous driving. Designers should also establish the protection mechanisms needed to address AI ethical security risks so that losses can be remedied, for example by purchasing insurance to fund necessary relief.

For deployers, the draft requires that in fields such as public services, financial services, health, welfare and education, unexplainable AI used in important decisions should serve only as an aid to decision-making, not as the direct basis for a decision. "Unexplainable" here means that it is difficult to provide an explanation, evidence or argument for the process behind, or the cause of, a specific decision or behavior.

Deployers should also explain the functions, limitations, risks and impacts of AI-related systems, products or services to users in a timely, accurate, complete, clear and unambiguous manner, including the relevant application processes and results; provide users with a clear and convenient mechanism to refuse or stop using such systems, products or services; and, after a user refuses or stops use, offer non-AI alternatives as far as possible.

For users, the draft suggests that AI should be used for good purposes, so that its positive value is fully realized, and should not be used maliciously to damage social values or individual rights. Users should also actively learn about, and give feedback on, the ethical security risks of AI.