Jun Legal Review丨AI Ethics Review Formally Enters Regulation: Analysis of the Draft "Measures for the Ethics Management Service of Artificial Intelligence Technology" (Part 2): Ethical Principles, International Legislative Developments, and Outlook
In this Part, the authors trace the evolution of the governance principles for ethics review in Chinese legislation and survey related international legislative developments and their outlook, so as to provide a reference for institutions and enterprises implementing these requirements.
Zhang Xuyao, Chen Chao, Shi Xiaoyu, Rao Jiacheng
Recap
Previously, we introduced and interpreted the background and specific provisions of the "Measures for the Ethics Management Service of Artificial Intelligence Technology (Trial) (Draft for Public Comment)" (hereinafter the "AI Ethics Measures").
Under the ethics-review governance and regulatory framework established by the AI Ethics Measures, what content should be reviewed, along which dimensions, and against which standards an ethics review should be conducted and completed will be the most critical questions for institutions and enterprises engaged in AI development and application to consider during implementation. As AI technology and practice continue to develop, we believe these questions will keep evolving: this is at once an academic and research topic and a practical one.
In this Part, we trace the evolution of the governance principles for ethics review in Chinese legislation, survey related international legislative developments and their outlook, and thereby offer a reference for institutions and enterprises implementing these requirements.
Principles of Ethics Review: Evolution under Chinese Law
Article 3 of the AI Ethics Measures establishes eight ethical principles for AI technology activities: "enhancing human well-being, respecting the right to life, respecting intellectual property rights, adhering to fairness and justice, reasonably controlling risks, maintaining openness and transparency, ensuring controllability and trustworthiness, and strengthening accountability." These principles reflect the state's basic position on AI development and are further embodied in the five key review points stipulated in the AI Ethics Measures.
Earlier, in the "Governance Principles for the New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence" released in 2019, the National New Generation Artificial Intelligence Governance Professional Committee proposed eight governance principles: "harmony and friendliness; fairness and justice; inclusiveness and sharing; respect for privacy; security and controllability; shared responsibility; open collaboration; and agile governance." Since that document primarily addresses "all parties involved in the development of artificial intelligence," its governance principles are likewise framed from the perspective of all parties, hence formulations such as harmony and friendliness, inclusiveness and sharing, and open collaboration.
Subsequently, the "Ethical Norms for New Generation Artificial Intelligence" (hereinafter the "AI Ethics Code"), released in 2021, further set out in a formal document the goals and purposes of AI development, proposing that AI activities should "adhere to a people-centered approach and follow the common values of humanity" and "promote the fair sharing across society of the benefits brought by AI, and promote social fairness, justice, and equal opportunity." This has also been reflected and implemented in the "Science and Technology Ethics Review Measures (Trial)" (hereinafter the "Technology Ethics Measures").
Building on the above rules, the AI Ethics Measures add three further principles: respecting the right to life, reasonably controlling risks, and maintaining openness and transparency.
Under the guidance of these eight principles, Article 15 of the AI Ethics Measures stipulates five key review priorities for the ethics review of artificial intelligence technology.
Ethics Review: International Legislative Developments
EU's "Credited Artificial Intelligence Ethics Guide"
In 2019, the High-Level Expert Group on Artificial Intelligence set up by the European Commission released the "Ethics Guidelines for Trustworthy AI" (hereinafter the "Ethics Guidelines"), establishing the three components of trustworthy AI ("lawful, ethical, and robust") and the four ethical principles of "respect for human autonomy, prevention of harm, fairness, and explicability," and further setting out seven key requirements that AI governance needs to achieve: "human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability."
1. Three components of trustworthy AI
2. Four major ethical principles
3. Seven key requirements and representative review questions
Building on the four ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability, the Ethics Guidelines set out seven key requirements for implementing AI governance and design a specific list of assessment questions for each requirement, providing a reference for enterprises conducting ethics reviews.
(1) Human agency and oversight
This covers fundamental rights, human agency, and human oversight: AI systems should support human autonomy and decision-making. The Ethics Guidelines set out representative review questions under this requirement.
(2) Technical robustness and safety
This covers resilience to attack and security, fallback plans and general safety, and accuracy, reliability, and reproducibility. AI systems should be developed with a preventive approach to risk and should behave reliably as intended, minimizing unintentional and unexpected harm and preventing unacceptable harm; human physical and mental integrity should also be safeguarded. The Ethics Guidelines likewise provide representative review questions here.
(3) Privacy and data governance
This covers respect for privacy, the quality and integrity of data, and access to data, and is likewise accompanied by representative review questions.
(4) Transparency
This includes traceability, explainability, and communication. Representative review questions include: to what extent the decisions made by, and results produced from, the AI system can be understood; whether it is ensured that users can be told why the system made a specific choice or produced a specific result, in an explanation that is easy to understand; whether it has been assessed whether the training and test data can be analyzed; and whether it has been assessed whether explainability can be tested after model training and development is completed (one such test is sketched below).
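By way of illustration only: one concrete way to test explainability after training, as the last question above contemplates, is model-agnostic permutation importance. The sketch below uses scikit-learn's permutation_importance on a public dataset; the dataset, model, and method are our assumptions chosen for demonstration, not anything prescribed by the Ethics Guidelines.

```python
# Illustrative sketch: testing explainability after model training with
# permutation importance (a model-agnostic technique). The dataset and
# model are placeholders; the Ethics Guidelines prescribe no method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# large drops mark the features the model's decisions actually depend on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```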
(5) Diversity, non-discrimination and fairness
The review here focuses on preventing bias and discrimination and promoting diversity in the user population. AI providers should examine whether strategies or processes have been put in place during development, training, and algorithm design to avoid creating or reinforcing unfair bias and discrimination in the AI system, and should strive to make the system accessible and usable by a broad population, accommodating a wide range of individual preferences and abilities. Representative review questions include whether the diversity and representativeness of users in the data have been assessed, whether testing has been conducted for potentially problematic populations or usage scenarios, and whether technical tools, processes, and feedback mechanisms have been evaluated (a minimal representativeness check is sketched below).
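As a minimal sketch of how the first question (data diversity and representativeness) might be operationalized, the snippet below compares each group's share of a training set against a reference population share. The group labels, reference shares, and flagging threshold are all illustrative assumptions, not anything the Ethics Guidelines mandate.

```python
# Minimal sketch: compare each demographic group's share of the training
# data against a reference population share. Group labels, reference
# shares, and the flagging threshold are illustrative assumptions.
from collections import Counter

training_groups = ["A"] * 9 + ["B"] * 1   # stand-in for real data
reference_share = {"A": 0.6, "B": 0.4}    # e.g., from census or user base

counts = Counter(training_groups)
total = sum(counts.values())

for group, expected in reference_share.items():
    observed = counts.get(group, 0) / total
    # Flag groups represented at less than half their expected share.
    status = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: observed {observed:.0%} vs expected {expected:.0%} -> {status}")
```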
(6) Societal and environmental well-being
The Ethics Guidelines point out that throughout an AI system's life cycle, broader society, other sentient beings, and the environment should also be regarded as stakeholders; AI systems should therefore be encouraged to be sustainable and to bear ecological responsibility, in furtherance of sustainable development goals. Ideally, AI systems should benefit all human beings, including future generations. Representative review questions under this requirement involve assessing the impact of developing, deploying, and using AI models on the environment, human society, and the wider circle of other stakeholders, and taking necessary responses: for example, whether mechanisms have been established to measure environmental impact (such as the type of energy used by data centers) and measures taken to reduce negative environmental effects, and whether the social impact of AI (such as risks of unemployment or declining labor skills) has been assessed and addressed.
(7) Accountability
This covers auditability, minimization and reporting of negative impacts, trade-offs, and redress. It requires mechanisms to be established across the AI system's entire life cycle so that the necessary traceable records are kept before and after development, deployment, and use, ensuring that responsibility along the whole chain is traceable and accountable.
The European Union Artificial Intelligence Act
Although the Ethics Guidelines are a non-binding guidance document issued by the High-Level Expert Group on Artificial Intelligence, the ethical principles they propose have been recognized to a certain extent in the EU Artificial Intelligence Act. The AI Act itself, however, does not directly lay down specific provisions on ethics review.
According to Recital 27 of the EU Artificial Intelligence Act, the ethical principles in the Ethics Guidelines should, as far as possible, be translated into practice in the design and use of AI models, and they should serve as a basis for drawing up codes of conduct under the Act.
The EU encourages all stakeholders (including industry, academia, civil society, and standardization organizations) to take these ethical principles into account, as appropriate, when developing voluntary best practices and standards. Accordingly, Article 95 of the AI Act, which governs codes of conduct, further provides that the EU AI Office and the Member States should facilitate the drawing up of codes of conduct for the voluntary application of specific requirements, and that actors such as AI deployers may apply those requirements to all AI systems accordingly. The key elements to be considered when drawing up such codes of conduct include the applicable elements of the Ethics Guidelines. Given that codes of conduct can serve as an important tool for enterprises to demonstrate compliance under the EU AI Act's legislative scheme, the ethics rules on trustworthy AI will thereby also enter the compliance review matters required under the Act.
Further, the "basic rights impact assessment" requirement for high-risk AI systems is stipulated in the EU's Artificial Intelligence Act (Right, basic rights impact assessment), aiming to protect individuals' basic rights from the possible adverse effects of the deployment of AI systems. Its role includes: identifying specific risks that may pose to affected individuals or groups and developing preventive measures to effectively reduce these risks.
The obligation to conduct a fundamental rights impact assessment applies to the high-risk AI systems expressly listed in Annex III (with the exception of item 2 of Annex III).
Under Article 27 of the EU Artificial Intelligence Act, certain deployers must conduct a fundamental rights impact assessment before deploying a high-risk AI system. The Act does not prescribe a specific methodology for the assessment, but it does require certain core elements to be covered, notably including the risks of harm to affected persons and the human oversight measures to be implemented.
These elements, especially the risk assessment and human oversight requirements, can also be seen as a concrete implementation of the EU's trustworthy-AI ethics rules.
USA
In 2021, the US National Artificial Intelligence Initiative Act of 2020 (hereinafter the "Initiative Act") came into effect. For the first time at the federal level, it incorporated AI ethics norms into national strategic planning, and it required the US National Institute of Standards and Technology (hereinafter "NIST") to develop a voluntary risk management framework for AI systems incorporating ethical assessment standards.
The "AI Plan" released in July 2025 is a programmatic document formulated by the Trump administration in response to global competition in artificial intelligence, emphasizing that federal R&D resources will be concentrated on breakthroughs in interpretability, controllability and robustness of artificial intelligence to provide more predictable and verifiable artificial intelligence technology.
1. Fairness and justice
The Initiative Act expressly notes that a lack of gender and racial diversity at the research stage may cause AI systems to have a disproportionately adverse impact on vulnerable groups.
To this end, NIST, as directed by the Initiative Act, formulated the core document of the US AI ethics review system, the Artificial Intelligence Risk Management Framework (hereinafter the "Risk Management Framework"). It defines AI fairness as "actively managing bias to promote equality and equity," the core of which is identifying and mitigating harmful bias and discrimination. Given that fairness is perceived differently across cultures, the Risk Management Framework emphasizes that multicultural considerations and differing demographic characteristics must be built into the path toward fairness.
The Risk Management Framework groups the biases AI may produce into three categories:
(1) Systemic bias: inequities carried by training data, institutional rules, or even social norms, such as algorithms entrenching stereotyped associations between gender and particular occupations. This is not intentional discrimination but the entrenchment of prejudice embedded in historical data;
(2) Computational and statistical bias: technical deviations arising from algorithm execution or data processing, for example when unrepresentative training data causes the characteristics of specific groups to be excessively amplified (a minimal check of this kind is sketched after this list);
(3) Human cognitive bias: implicit preferences of developers or users that lead to selective interpretation of AI outputs, implanting bias into decision-making across the AI life cycle.
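The sketch below illustrates one simple way to screen for the second category by comparing positive-outcome rates across groups in a model's predictions (a demographic-parity-style check). The column names, toy data, and the four-fifths threshold are illustrative assumptions, not requirements of the Risk Management Framework.

```python
# Minimal sketch of a statistical-bias screen: compare positive-outcome
# rates across groups in a model's predictions (demographic parity).
# Column names, toy data, and the 0.8 threshold are illustrative.
import pandas as pd

def demographic_parity_ratios(df: pd.DataFrame,
                              group_col: str = "group",
                              pred_col: str = "prediction") -> pd.Series:
    """Each group's positive-prediction rate divided by the highest rate."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates / rates.max()

# Toy predictions standing in for a scored population.
scored = pd.DataFrame({
    "group":      ["A"] * 6 + ["B"] * 6,
    "prediction": [1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0],
})

ratios = demographic_parity_ratios(scored)
# A ratio below ~0.8 (the informal "four-fifths rule") flags a group whose
# positive-outcome rate lags far behind the best-served group.
print(ratios)
print("Groups to review:", list(ratios[ratios < 0.8].index))
```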
2. Trustworthy, transparent, explainable
The Risk Management Framework further constructs a system of characteristics for trustworthy AI, under which an AI system should be valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. To this end, the Risk Management Framework calls on AI-related companies to organize regular reviews of AI-related indicators and document them in written reports, and to solidify the risk management process through transparent policies and procedures so that results are traceable and responsibility can be traced back.
The Risk Management Framework also divides the AI system life cycle into six stages: plan and design; collect and process data; build and use model; verify and validate; deploy and use; and operate and monitor. Each stage is paired with corresponding control objectives and testing requirements, providing a standardized framework for implementing staged risk control and compliance audits and thereby supporting trustworthy, transparent, and explainable AI.
3. Freedom
Beyond the globally shared consensus on AI ethics, the United States has also put forward the following ethics requirements rooted in its own institutional tradition:
(1) Freedom of speech: the Artificial Intelligence Action Plan proposes that AI systems must protect freedom of speech, must not spread misinformation, and must not become tools for promoting particular positions or instilling particular values. This requirement stems from the strong protection of free speech under the First Amendment to the U.S. Constitution and translates, in the AI context, into a constraint of technological neutrality.
(2) Freedom of use: the Artificial Intelligence Action Plan emphasizes that the federal government should support open-source models; prevent large models from being monopolized by a few technology companies; protect the rights of small and medium-sized enterprises, academia, and the public to use and scrutinize them; guard against closed models forming market monopolies; safeguard innovation; and prevent technology oligopolies from controlling public information infrastructure. At the same time, the United States treats over-regulation itself as an ethical risk to be guarded against.
Driven by geopolitical competition, the Artificial Intelligence Action Plan also seeks to strengthen the United States' voice in the global AI field. It directs NIST to revise the Risk Management Framework by deleting specific expressions such as "climate change" and "diversity" and focusing instead on "freedom of speech" and "objective truth," adjusting the ethical framework to reinforce core American values, and it sets a federal procurement requirement of "ensuring that systems are free of top-down ideological bias," so that the AI used by the federal government is bound to those values. The Action Plan also directs the Department of Commerce to promote the export of large models embodying American core values so as to shape globally coordinated ethics rules. Through internal regulation and external promotion of AI ethics requirements, the United States hopes to set global AI standards and secure a leading competitive position.
For specific industries, certain institutions have also put forward AI ethics requirements. For example, the "Artificial Intelligence Ethics Guide" released by the American higher education information technology association proposes that when applying AI technology, higher education institutions should uphold the core academic values of fairness and transparency while working to reduce the risks of technological bias, privacy leakage, and technology abuse, and it puts forward eight core ethical principles: beneficence, justice, respect for autonomy, transparency and explainability, accountability and responsibility, privacy and data protection, non-discrimination and fairness, and risk and benefit assessment.
The American Medical Association has likewise issued its "Principles for Augmented Intelligence Development, Deployment, and Use," proposing that in the medical field AI should be subject to oversight, emphasize transparency, attend to privacy and security, reduce bias, and ensure traceable accountability.
Future Outlook
The ethical principles and review priorities proposed in the AI Ethics Measures impose implementation requirements for ethics review on enterprises carrying out AI technology activities. As AI technology continues to develop, the implementation of ethics review will become ever more important to the sustainable development and compliance of AI products themselves. Enterprises developing and using AI technology may need to plan ahead: whether an ethics committee is needed; internal systems and processes for ethics review; question lists and evaluation criteria; and whether situations requiring expert review may arise.
For AI products intended for overseas markets, it is also worth establishing reusable mechanisms that can flexibly track and satisfy the regulatory requirements of the principal jurisdictions in which the product will be offered.
