"New Generation Artificial Intelligence Ethics Code" Released: Integrating Ethics into the Full Life Cycle of Artificial Intelligence, with a Focus on Data Privacy, Security, and Algorithm Ethics
On September 26, the National Professional Committee for the Governance of New Generation Artificial Intelligence issued the "Ethical Code for New Generation Artificial Intelligence" (hereinafter referred to as the "Code"), which aims to integrate ethics and morality into the entire life cycle of artificial intelligence and provide ethical guidance for natural persons, legal persons and other relevant institutions engaged in artificial intelligence-related activities.
As early as January 5 this year, the National Information Security Standardization Technical Committee issued the "Network Security Standard Practice Guidelines — Artificial Intelligence Ethical Security Risk Prevention Guidelines" (hereinafter referred to as the "Guidelines"), stating that relevant organizations and individuals should fully identify, prevent, and control artificial intelligence ethical security risks when carrying out research and development, design and manufacturing, deployment, application, and other related activities.
Compared with the "Guidelines", the "Code" released this time provides a more detailed treatment of privacy protection and data security, and also addresses technology ethics issues such as algorithmic bias.
Focusing on data and privacy security, emphasizing algorithm ethics
The "Code" puts forward six basic ethical requirements: enhancing human welfare, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. Data and privacy security run through the Code's specific ethical requirements for activities such as artificial intelligence management, research and development, and supply.
The "Code" states that all types of artificial intelligence activities should fully respect individuals' rights to know about and consent to the handling of their personal information, process personal information in accordance with the principles of legality, legitimacy, necessity, and good faith, and ensure personal privacy and data security without harming individuals' legitimate data rights and interests. Personal information must not be collected or used illegally through theft, tampering, or leakage, and personal privacy rights must not be infringed.
In terms of management, it is necessary to fully respect and protect the privacy, freedom, dignity, security and other legitimate rights and interests of relevant subjects, and prohibit improper exercise of power from infringing upon the legitimate rights and interests of natural persons, legal persons and other organizations.
In terms of research and development, it is necessary to improve data quality, strictly abide by data-related laws, standards and norms in data collection, storage, use, processing, transmission, provision, disclosure and other aspects, and improve the integrity, timeliness, consistency, standardization and accuracy of data.
In terms of supply, it is necessary to strengthen quality monitoring and use evaluation of artificial intelligence products and services, so as to avoid harm to personal safety, property safety, or user privacy caused by design flaws or product defects. Products and services that do not meet quality standards must not be operated, sold, or provided.
Algorithm ethics is another focus of the Code. In its research and development provisions, the "Code" states that bias and discrimination should be avoided in data collection and algorithm development: ethical review should be strengthened, differentiated needs should be fully considered, potential data and algorithm biases should be avoided, and artificial intelligence systems should strive to be inclusive, fair, and non-discriminatory. It also calls for improving safety and transparency: enhancing transparency, explainability, understandability, reliability, and controllability in algorithm design, implementation, and application; strengthening the resilience, adaptability, and anti-interference capability of artificial intelligence systems; and gradually achieving verifiability, auditability, supervisability, traceability, predictability, and trustworthiness.
Technology ethics and data security become the focus of market supervision
In recent years, ethical governance of artificial intelligence has attracted increasing attention. Data security and algorithmic ethics compliance have long been a focus of China's artificial intelligence industry, and the relevant institutional framework is being continuously improved.
On July 14, 2021, the "Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations (Draft)", drafted by the Organization Department of the Standing Committee of the Shenzhen Municipal People's Congress, required artificial intelligence companies to "set up ethical risk positions" and "perform ethical review and risk assessment responsibilities", and clearly prohibited a series of behaviors in artificial intelligence research and application activities, such as infringing on personal privacy and personal information protections, harming national security and the public interest, and algorithmic discrimination.
On July 28, 2021, the Ministry of Science and Technology issued the "Guiding Opinions on Strengthening Science and Technology Ethics Governance (Draft for Comment)", which clarified basic requirements such as ethics first and agile governance, and proposed five science and technology ethics principles, requiring "Institutions engaged in science and technology activities such as life sciences, medicine, artificial intelligence, etc., if the research content involves sensitive areas of science and technology ethics, should establish a science and technology ethics (review) committee."
In addition, the Data Security Law of the People's Republic of China and the Personal Information Protection Law of the People's Republic of China were promulgated in June and August of this year, giving data and information security unprecedented prominence. Against this policy backdrop, AI companies are paying ever more attention to data security and algorithm ethics compliance when financing and going public.
In the past year, "AI concept stocks" such as Yitu, Megvii, Yuncong, Haitian Ruixiang, and Yuntian Lifei have flocked to IPOs, with data compliance and technology ethics a focus of review. In May this year, during Megvii Technology's IPO process, the Shanghai Stock Exchange made its first-ever inquiry into technology ethics, requiring Megvii to disclose its organizational structure, core principles, internal controls, and implementation of artificial intelligence ethics. Yuncong Technology's issuance and listing review was once suspended because its financial documents had expired, while Yitu Technology withdrew its application and its review has now been terminated.
Against the backdrop of intensified supervision, SenseTime, one of the "Four Little Dragons of AI", recently submitted a prospectus for a Hong Kong listing that devotes a dedicated chapter to privacy and AI governance, explaining how it protects data privacy and personal information in its business and stating that it will follow three artificial intelligence ethics principles: sustainable development, people-orientation, and technological controllability.
Wang Qiongfei, founding partner of Zhejiang Kenting Law Firm, told 21st Century Business Herald reporters in an earlier interview that, for the AI industry and technology companies, data compliance and AI governance will consume considerable energy and cost in the short term. In the medium term, however, proactive disclosure of personal privacy and AI governance practices by listed technology companies will serve as a model and guide for other companies, easing data standardization and reducing the workload of data governance. In the long term, this will also accelerate the healthy development of the AI industry and gradually strike a balance between artificial intelligence innovation and supervision on the premise of respecting and protecting personal privacy.