
Draft and Explanation of the Legislative Framework for the Judicial Artificial Intelligence Governance Ethics Committee - Judicial AI Governance Series (VI)

(With the main reference clauses and a verification report on their alignment with current laws and regulations)

Zhou Ming

Abstract

As artificial intelligence becomes deeply embedded in the judicial field, the breadth and complexity of technological intervention in judicial decision-making continue to grow. To prevent problems such as algorithmic bias, data abuse and technical black boxes from eroding judicial fairness and citizens' basic rights, a specialized ethical review and governance mechanism is urgently needed. This article centers on the drafting of a Legislative Framework for the Judicial Artificial Intelligence Governance Ethics Committee, explaining the necessity of legislation, the committee's functional positioning, the system of review standards, the design of its operating mechanisms and the paths of responsibility and constraint. The framework aims to strike a dynamic balance between technological innovation and judicial ethics through institutional means, to clarify the legality boundaries, fairness standards and transparency requirements for applying the technology, and to build governance rules covering the entire chain from project initiation through deployment to application. The core of the legislation is to establish an authoritative and independent ethical review body, improve technical compliance review procedures, and prevent and control systemic risks through graded supervision, a hearing system and a suspension-of-operation mechanism, ultimately protecting judicial credibility and citizens' rights and interests from technological alienation.

Keywords: judicial artificial intelligence; ethical governance; ethics committee establishment; legality review; algorithm transparency; responsibility traceability

1. The necessity of legislation

At present, the application of artificial intelligence in evidence identification, legal document generation, sentencing assistance and other fields has become routine, but the ethical risks and legal challenges it raises are increasingly prominent. On the one hand, the opacity of algorithmic decision-making may weaken the judge's ability to determine key facts, creating a risk of "technology overstepping the judiciary"; on the other hand, if bias in data collection and model training is not effectively regulated, it may aggravate discrimination in adjudication outcomes and undermine the parties' right to equal treatment.

Current legal norms focus mainly on technical security and efficiency and lack a systematic ethical review mechanism. Although the "Opinions of the Supreme People's Court on Standardizing and Strengthening the Judicial Application of Artificial Intelligence" put forward the principle of "adhering to ethics first", they do not clarify the reviewing body, the standards or the procedures. Against this background, legislation is needed to establish a Judicial Artificial Intelligence Governance Ethics Committee:

1. Fill the institutional gap: establishing an independent ethical review body resolves the absence of standards for the legality, fairness and transparency of technology applications;

2. Balance technological innovation and judicial authority: prevent technological intervention from breaching the constitutional principle that the judge's independent adjudicative power is irreplaceable, and safeguard the central place of judicial discretion;

3. Respond to social concerns: enhance public trust in the judicial application of artificial intelligence through open and transparent review procedures, and avoid public-opinion risks arising from technological black-boxing.

2. Establishment and functional positioning of the ethics committee

The Judicial Artificial Intelligence Governance Ethics Committee is established by the Supreme People's Court. It is an advisory body independent of the judicial organs and performs its ethical review and governance responsibilities for the judicial application of artificial intelligence across the country. Its institutional design reflects the following characteristics:

1. Both independence and professionalism

• The committee does not participate directly in case trials and only provides professional advice on the ethical compliance of technology applications, avoiding conflicts with the adjudicative power.

• Its membership spans law, computer science, sociology and other disciplines, ensuring that review standards are comprehensive and scientifically grounded.

2. Functions focus on risk prevention and control

• Formulate review standards: clarify the three core dimensions of legality, fairness and transparency, and refine the red lines and restricted areas for technical application.

• Supervise compliance: through regular inspections, special evaluations and other means, ensure that judicial organs at all levels strictly follow ethical norms.

• Early warning and suggestions: issue early-warning reports on emerging technological risks (such as deepfakes and big-data profiling) and propose governance measures.

3. Democracy and openness guarantees

• Public representatives shall account for no less than 20% of the membership and shall be chosen through an open selection mechanism, ensuring that social interests are fully represented in the review process.

• The review process is open to the parties and the public to enhance the transparency and credibility of ethical decision-making.

3. Content and mechanism of ethical review

Ethical review is centered on "preventive compliance" and covers the entire life cycle of technology application, including the following aspects:

1. Legality Review

• Systems are prohibited from creating legal rules or changing statutory evidence standards in disguise; for example, an algorithm may not presume elements of liability that are not explicitly stipulated in legal provisions.

• The basis for review includes the Legislative Law and the Civil Procedure Law, etc., to ensure that technology application does not break through the current legal framework.

2. Fairness review

• Algorithm design must eliminate discriminatory factors such as gender, race and religious belief; for example, sentencing assistance systems may not incorporate physiological characteristic data unrelated to the criminal conduct.

• Introduce third-party bias detection agencies to conduct regular audits of training data and verify the fairness of decision outcomes (a minimal illustrative bias check follows this item).
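The following is a minimal, purely illustrative Python sketch of the kind of check a third-party bias audit of training data might run (a demographic-parity gap across groups). The field names, sample records and the 0.1 threshold are hypothetical assumptions for illustration only, not part of the proposed framework.

# Minimal sketch of a demographic-parity check on training data.
# All field names and the 0.1 disparity threshold are illustrative only.
from collections import defaultdict

def outcome_rate_gap(records, group_key, outcome_key):
    """Return (max difference in outcome rates across groups, per-group rates)."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        hits[g] += 1 if r[outcome_key] else 0
    rates = {g: hits[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical sentencing-assistance training records.
    data = [
        {"gender": "F", "recommended_custody": True},
        {"gender": "F", "recommended_custody": False},
        {"gender": "M", "recommended_custody": True},
        {"gender": "M", "recommended_custody": True},
    ]
    gap, rates = outcome_rate_gap(data, "gender", "recommended_custody")
    print(f"outcome rates by group: {rates}")
    if gap > 0.1:  # illustrative audit threshold
        print(f"WARNING: disparity of {gap:.2f} exceeds threshold; flag for ethics review")

In practice an audit would cover many protected attributes and metrics; the point of the sketch is only that fairness review can be operationalized as repeatable, documented checks.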

3. Transparency Review

• Artificial intelligence systems are required to retain traceable decision paths, and the judge has the right to retrieve the technical logic and data sources at key nodes.

• Through a "technical manual" system, specify the interpretability obligations of algorithm developers, such as marking the specific parameters and confidence levels of image analysis in evidence identification systems (a sketch of one possible trace record follows this item).
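To make the "traceable decision path" requirement more concrete, the sketch below shows one possible structure for an audit record that a system could retain at each key node so that a judge can later retrieve the technical logic and data sources. Every field name and value is a hypothetical assumption, indicating only the kind of information (parameters, confidence levels, provenance) the framework asks developers to expose.

# Sketch of an audit record a judicial AI system might retain at each key
# decision node; all field names and values are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionTraceRecord:
    case_id: str
    node: str              # e.g. "evidence_image_analysis"
    model_version: str
    key_parameters: dict   # parameters flagged in the technical manual
    confidence: float      # confidence level reported at this node
    data_sources: list     # provenance of the inputs used
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize for an append-only audit log."""
        return json.dumps(asdict(self), ensure_ascii=False)

# Example: the record a judge could retrieve for one evidence-analysis step.
record = DecisionTraceRecord(
    case_id="2025-XX-001",
    node="evidence_image_analysis",
    model_version="v2.3.1",
    key_parameters={"similarity_threshold": 0.85},
    confidence=0.92,
    data_sources=["exhibit_07.jpg"],
)
print(record.to_json())

Retaining such records in an append-only log is also what would later allow the presumption-of-fault rule (Article 14 of the draft) to be applied on the basis of whether an explanation of the decision path exists.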

Review mechanism design:

• Graded review system: implement differentiated supervision according to the nature of the case: civil cases are subject to filing for the record, criminal cases require an ethics review report, and systems for national security cases may be deployed only after approval by the committee.

• Objection hearing procedure: when a party objects to the application of artificial intelligence, the court shall organize a hearing within the statutory period; experts express opinions only on technical compliance and do not address fact-finding or the application of law.

• Risk suspension mechanism: for scenarios that may give rise to systemic risks (such as large-scale data leakage), suspend the application immediately and initiate a review to ensure the risks remain controllable (a simplified gating sketch of these rules follows this list).
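As a rough illustration of how the graded review system and the risk suspension mechanism above might translate into deployment gating logic, the following Python sketch maps case types to the required review step and blocks deployment whenever a systemic-risk flag is raised. The enums, rule table and function are simplified assumptions drawn from the bullets above, not an authoritative implementation.

# Sketch of deployment gating under the graded review system and the risk
# suspension mechanism; categories and rules are simplified assumptions.
from enum import Enum

class CaseType(Enum):
    CIVIL = "civil"
    CRIMINAL = "criminal"
    NATIONAL_SECURITY = "national_security"

class ReviewStep(Enum):
    FILING = "filing"                          # civil: file for the record
    ETHICS_REVIEW_REPORT = "ethics_report"     # criminal: ethics review report required
    COMMITTEE_APPROVAL = "committee_approval"  # national security: committee approval required

REVIEW_RULES = {
    CaseType.CIVIL: ReviewStep.FILING,
    CaseType.CRIMINAL: ReviewStep.ETHICS_REVIEW_REPORT,
    CaseType.NATIONAL_SECURITY: ReviewStep.COMMITTEE_APPROVAL,
}

def may_deploy(case_type, completed_steps, systemic_risk_flag):
    """Allow deployment only if the required review step is complete and no
    systemic-risk suspension is in force."""
    if systemic_risk_flag:  # risk suspension mechanism: halt immediately
        return False
    return REVIEW_RULES[case_type] in completed_steps

# Example: a criminal-case system with only a filing on record is not deployable.
print(may_deploy(CaseType.CRIMINAL, {ReviewStep.FILING}, systemic_risk_flag=False))  # False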

4. Selection of members and behavioral constraints

1. Selection criteria and procedures

• Members must have senior professional qualifications in law, information technology, ethics or related fields; public representatives shall be selected through open registration and competitive selection, with the process subject to public supervision.

• Establish a recusal system prohibiting members who have an interest relationship with technology developers or parties to a case from taking part in the relevant review.

2. Constraints and supervision of performance of duties

• Members must sign a confidentiality agreement and a conflict-of-interest declaration, and report regularly to the Supreme People's Court on the performance of their duties.

• Establish a lifelong accountability system: abuse of power, favoritism and fraud shall incur criminal responsibility in accordance with the law, and dereliction of duty shall be severely punished.

3. Dynamic evaluation and exit mechanism

• Each committee has a fixed term and is re-selected after the expiration of the term.

• Members whose professional competence is insufficient or who are negligent in performing their duties shall be removed by the Supreme People's Court.

5. Supervision system and responsibility mechanism

1. Multi-level supervision network

• The Supreme People's Court conducts formal review of the Ethics Committee's review decisions to ensure that they comply with superior law.

• Judicial administrative agencies supervise the compliance implementation of courts at all levels through random inspections, technical audits, etc.

2. Presumption of fault and joint liability

• A technology developer that fails to provide an explanation of the decision-making path is presumed to be at fault and bears tort liability under Article 1165 of the Civil Code, including but not limited to remedying the harm and compensating for losses.

• Data flows in cross-border judicial cooperation shall be reviewed jointly by the national security authorities and the Ethics Committee, with the scope of data use and the transmission paths strictly limited.

3. Accountability and relief approaches

• Parties affected by the improper application of technology may apply for administrative reconsideration or file an administrative lawsuit in accordance with the law, requesting the revocation or correction of unlawful decisions.

• Establish a retrospective investigation mechanism for incidents arising from technology application, and initiate accountability procedures against court leadership that unlawfully deploys a system.

Conclusion

The legislative framework of the Judicial Artificial Intelligence Governance Ethics Committee is a key institutional innovation in responding to the challenges of judicial governance in the technology era. Through a professional and independent ethical review mechanism, it not only defines the boundaries of the rule of law for the rational application of artificial intelligence, but also provides an institutional barrier for the protection of civil rights and judicial justice. In the future, technical standards need to be further refined, supervision procedures need to be improved, and review rules need to be dynamically adjusted through practice to ensure that the ethical governance system and technological development evolve simultaneously. Only by regulating technological innovation within the framework of the rule of law can we achieve the organic unity of judicial modernization and rights protection.

Attached:

Main reference clauses for the draft academic proposals for the legislative framework of the Judicial Artificial Intelligence Governance Ethics Committee

(Formulated in accordance with the "Legislative Law of the People's Republic of China", "Opinions of the Supreme People's Court on Standardizing and Strengthening the Judicial Application of Artificial Intelligence", etc.)

Chapter 1 General Provisions

Article 1 (Purpose of legislation)

In order to standardize the ethical review and governance of judicial artificial intelligence and to safeguard judicial fairness and the basic rights of citizens, this framework is formulated in accordance with the Constitution, the Legislative Law and the relevant provisions of the Supreme People's Court.

Explanation: This clause establishes the legislative goal to balance technological innovation and judicial ethics, and ensures that the application of artificial intelligence does not violate the spirit of the Constitution and judicial laws.

Article 2 (Scope of application)

This framework applies to ethical governance activities in the entire process of research and development, deployment and application of artificial intelligence systems in people's courts, people's procuratorates and judicial administrative agencies at all levels.

Interpretation: This makes clear that the governed subjects are judicial organs and excludes artificial intelligence applications in non-judicial fields, avoiding overlap with the scope of Article 27 of the "Science and Technology Progress Law".

Article 3 (Basic Principles)

Judicial artificial intelligence applications should follow the following principles:

(1) The judge's independent adjudicative power is irreplaceable;

(2) Citizens' rights to equality, privacy and participation in litigation shall not be infringed;

(3) The risks of technology application shall be controllable throughout the process.

Interpretation: Principle design connects Article 33 of the Constitution and Article 6 of the Organization Law of the People's Courts, prohibiting artificial intelligence from interfering in judicial discretion.

Chapter 2 Establishment and Responsibilities of Ethics Committee

Article 4 (Nature of the Committee)

The Supreme People's Court shall establish a Judicial Artificial Intelligence Governance Ethics Committee as an independent advisory body to coordinate ethical governance nationwide.

Interpretation: The organizational positioning refers to Article 28 of the Organization Law of the People's Procuratorate, emphasizing its independence and consulting functions.

Article 5 (Organizational Structure)

The committee is composed of a certain number of members, including legal experts, backbone judicial practitioners, social science researchers and public representatives, of whom public representatives shall account for no less than []%.

Interpretation: The composition meets the democratic-participation requirement of Article 9 of the People's Assessors Law, and public representatives must be chosen through open selection.

Article 6 (Core Responsibilities)

(1) Formulate judicial artificial intelligence ethics review standards;

(2) Supervise the compliance of court systems at all levels;

(3) Issuing major ethical risk warnings and governance suggestions.

Interpretation: The division of responsibilities refers to Article 12 of the "Rules of the Work of the Judicial Committee of the Supreme People's Court", focusing on standard formulation and risk assessment.

Chapter 3 Ethical Review Standards

Article 7 (Legality Review)

The judicial application of artificial intelligence shall not create legal rules or change the statutory evidence identification standards in disguise.

Interpretation: According to Article 8 of the Legislative Law, technical systems are prohibited from interpreting laws beyond their authority.

Article 8 (Fairness Guarantee)

System design should eliminate potential biases that may affect the fairness of adjudication outcomes, and discriminatory algorithm designs based on gender, race or religious belief are prohibited.

Explanation: This refines the requirements of Article 58 of the Personal Information Protection Law on the governance of algorithmic bias and clarifies the prohibited situations.

Article 9 (Transparency Requirements)

The artificial intelligence decision-making process shall retain traceable review paths to ensure the judge's right to know about and control key nodes.

Interpretation: The transformation of technological transparency into procedural rights is in line with Article 64 of the Civil Procedure Law.

Chapter 4 Governance Mechanism

Article 10 (Scenario-specific review)

Implement hierarchical review based on case type:

(1) Civil cases are subject to a filing system;

(2) Criminal cases are subject to a pre-examination system;

(3) A special approval system is implemented for cases involving national security.

Interpretation: The classification standards refer to the "Supreme People's Court Case Quality Evaluation Standards" to distinguish the risk levels of civil, criminal and administrative cases.

Article 11 (Technical Compliance Hearing Procedure)

When a party objects to the application of artificial intelligence, the court shall organize a technical compliance hearing within three days, and the hearing experts shall express opinions only on technical compliance.

Interpretation: The procedural specifications refer to Article 36 of the Administrative Litigation Law, and the expert responsibilities are limited to technical review, which is distinguished from the appraisal opinions of Article 82 of the Civil Procedure Law.

Article 12 (Suspension of operation mechanism)

For application scenarios that may cause systemic risks, the suspension mechanism should be immediately activated and reported to the Ethics Committee for review.

Explanation: This replaces the earlier "circuit breaker procedure" wording and connects to the Level 3 classified protection requirements under Article 21 of the Cybersecurity Law.

Chapter 5 Special Provisions

Article 13 (Cross-border judicial data)

Artificial intelligence applications involving cross-border judicial cooperation shall be reviewed by the national security authorities and the Ethics Committee, with data classified and managed in accordance with Article 47 of the Data Security Law.

Explanation: Make it clear that judicial data belongs to the category of critical information infrastructure data and strengthen sovereign protection.

Article 14 (Rules for Presumption of Fault)

A technology developer that fails to provide an explanation of the decision-making path is presumed to be at fault and shall bear tort liability under Article 1165 of the Civil Code.

Explanation: Treating the absence of technical explainability as grounds for presuming fault, and thus as a bar to exemption from liability, forces algorithms to be designed transparently.

Chapter 6 Attachment

Article 15 (Explanation authority)

This framework is interpreted by the Judicial Committee of the Supreme People's Court, and local courts at all levels shall not formulate implementation rules.

Interpretation: The authority setting complies with the exclusive right provisions of Article 104 of the Legislative Law on Judicial Interpretation.

Article 16 (Date of Implementation)

This framework takes effect on the date of publication, and systems already deployed must complete compliance transformation within [] months.

Interpretation: The transition period is set up in accordance with the implementation clauses of Article 1260 of the Civil Code to ensure the smooth implementation of the system.

Conclusion

This draft proposal offers institutional innovations in constitutional rights protection, data sovereignty and the determination of responsibility, but the following measures are needed to enhance its operability:

1. Technical benchmark construction: release a "Judicial AI Technology White Paper" jointly with the Information Center of the Supreme People's Court to clarify technical standards such as algorithm transparency and bias detection;

2. Consensus-building mechanism: hold a "Judicial AI Ethical Governance Seminar" and invite judges, technical experts, lawyers and public representatives to jointly formulate implementation rules;

3. Pilot simulation evaluation: select courts in Beijing, Shanghai and Guangzhou for post-legislative evaluation pilots, focusing on testing the actual effectiveness of the responsibility traceability and suspension-of-operation mechanisms.

Verification and feasibility report on the alignment of the "Draft Academic Recommendations for the Legislative Framework of the Judicial Artificial Intelligence Governance Ethics Committee" with current laws and regulations

1. Analysis of the connection between the main reference clauses and current laws and regulations

1. Alignment of the legislative purpose with the Constitution

Article 1 of the draft proposal defines the goal of "balancing technological innovation and judicial ethics" on the basis of the Constitution, the Legislative Law and the relevant provisions of the Supreme People's Court. This design accords with the legislative spirit of Article 33 of the Constitution (protection of civil rights) and Article 6 of the Organization Law of the People's Courts (judicial independence), is consistent with the requirement to "prevent the risk of technology abuse" in the Measures for Science and Technology Ethics Review (Trial), and presents no conflicts.

2. Limitation of the scope of application to the judicial field

Article 2 limits the governed subjects to judicial organs and excludes non-judicial fields. This differs from the requirement in the Measures for Science and Technology Ethics Review (Trial) to cover the entire process of scientific and technological activities, but it suits the particularity of the judicial field. Attention must be paid to the connection with Article 27 of the "Science and Technology Progress Law" (the broad applicability of science and technology ethics review), and it is recommended that the special rules for the judicial field be clarified through judicial interpretation.

3. Organizational structure of the Ethics Committee

Article 4 proposes that the Supreme People's Court establish an independent committee, referring to Article 28 of the Organization Law of the People's Procuratorate. However, the Measures for Science and Technology Ethics Review (Trial) require the national-level committee to be affiliated with the Science and Technology Ethics Committee of the State Council, with local branch committees established by provincial governments. The draft's model of a committee internal to the judicial system may weaken cross-departmental coordination, and a coordination mechanism with the State Council Science and Technology Ethics Committee needs to be added in the annex.

4. Graded review mechanism

Article 10 implements graded review according to case type, referring to the "Supreme People's Court Case Quality Evaluation Standards". This design accords with the idea of classified and hierarchical supervision in the Interim Measures for the Management of Generative Artificial Intelligence Services, but the specific review standards for civil, criminal and national security cases need to be refined to avoid conflict with the requirement of full-process review of high-risk projects in the Measures for Science and Technology Ethics Review (Trial).

5. Technical Compliance Hearing Procedure

Article 11 requires that the hearing be organized within three days and that experts express opinions only on technical compliance. The procedure refers to Article 36 of the Administrative Litigation Law, but the Measures for Science and Technology Ethics Review (Trial) emphasize that ethical review must include social impact assessment. It is recommended to broaden the range of hearing participants by adding ethicists and public representatives, so as to balance technical review with ethical concerns.

2. Feasibility Assessment

1. Institutional innovation

• Protection of sovereignty of judicial data: Article 13 requires that cross-border judicial cooperation must be reviewed by the national security department and the ethics committee, and comply with Article 47 of the Data Security Law to strictly control critical information infrastructure data.

• Rules for presumption of fault: Article 14 treats the absence of technical explainability as grounds for presuming fault, which forces algorithms to be designed transparently and echoes the algorithmic-bias governance requirements of Article 58 of the Personal Information Protection Law.

2. Operational Challenges

• Committee independence: a committee internal to the judicial system may face the criticism of acting as "both referee and player". Review standards should be clarified through the "Judicial AI Technology White Paper" and a third-party supervision mechanism established.

• Responsibility traceability mechanism: the suspension-of-operation mechanism in Article 12 must be aligned with the Level 3 classified protection requirements under Article 21 of the Cybersecurity Law, but the complexity of judicial scenarios may make it difficult to operate; it is recommended that the process be optimized through pilot evaluation.

3. Transition period settings

Article 16 stipulates a six-month compliance transformation period modeled on the implementation clause of Article 1260 of the Civil Code, consistent with the "trial period plus revision period" legislative experience of the Measures for Science and Technology Ethics Review (Trial); it should, however, be accompanied by supporting technical transformation guidance to avoid a one-size-fits-all approach that reduces judicial efficiency.

3. Improvement suggestions

1. Strengthen cross-departmental collaboration

Add coordination clauses with the State Council Science and Technology Ethics Committee in the annex to clarify how judicial ethical review connects with science and technology ethics review.

2. Refine the rules of hearing procedures

Expand the subjects participating in the hearing, require the hearing report to be disclosed to the public, and establish a mechanism for reconsideration of objections.

3. Establish technical benchmarks and pilot assessments

Have the Supreme People's Court clarify technical standards such as algorithm transparency and bias detection, and select local courts to carry out pilots testing the actual effectiveness of the responsibility traceability and suspension-of-operation mechanisms.

4. Conclusion

This draft proposal is innovative in constitutional rights protection, data sovereignty and the determination of responsibility, and is generally well aligned with current laws and regulations, but the following measures are needed to enhance its operability:

1. Technical benchmark construction: Release the "Judicial AI Technology White Paper" to clarify technical standards such as algorithm transparency and bias detection;

2. Consensus-building mechanism: hold a "Judicial AI Ethical Governance Seminar" and formulate implementation rules;

3. Pilot simulation assessment: carry out post-legislative assessment and optimize the responsibility traceability and suspension operation mechanism.

[Brief introduction of lawyer Zhou Ming]

2024- Member of the 12th Digital Technology and Artificial Intelligence Professional Committee of Shanghai Lawyers Association

2019-2024 Member of the 11th Internet and Information Technology Business Research Committee of Shanghai Lawyers Association

2015-2019 Member of the 10th Merger and Acquisition Restructuring Business Research Committee of Shanghai Lawyers Association

Member of Shanghai Big Data Social Application Research Association

Member of Shanghai Law Society
