AI Ethics

Song Hualin: Construction Of Artificial Intelligence Ethical Norms From The Perspective Of The Rule Of Law

Author | Song Hualin, dean, professor and doctoral supervisor of Nankai University Law School

Recipient of the 10th "National Outstanding Young Jurists" award

Contracted author of Peking University Legal Information Network

Source | PKULaw (Peking University "Fabao") Law Journal Database

"Digital Rule of Law" Issue 6, 2023

Editor's note: Artificial intelligence is an important driving force leading a new round of technological revolution and industrial transformation. It not only provides new technical support for the development of all walks of life, but also profoundly changes how people live. Since the 18th National Congress of the Communist Party of China, the Party Central Committee has accurately grasped the trend of the times, treated artificial intelligence as a key technological innovation capability, and made a series of important arrangements to promote its development. On the one hand, we should actively cultivate an innovation ecosystem, strengthen basic research, reinforce infrastructure, and improve the supply chain system; on the other hand, we should give equal weight to development and safety, prevent risks in accordance with laws and regulations, and gradually establish and improve the relevant laws, regulations, and supervision systems. How should we understand the new demands and challenges that artificial intelligence technology poses to legal regulation? How can we strike a balance between promoting the innovative development of the industry and preventing risks to ensure safety? This issue's "Roundtable Topic" continues to focus on "The Governance of Artificial Intelligence in China," inviting well-known experts and scholars to engage in dialogue and jointly explore the concepts and methods of artificial intelligence governance, in the hope of providing intellectual support for the development and governance of artificial intelligence in China and promoting its efficient and healthy development.

Catalog:

1. Adhere to the “ethics first” approach to artificial intelligence governance

2. Formulation of ethical norms for artificial intelligence

3. Basic principles that artificial intelligence ethics standards should adhere to

4. Build a diverse and overlapping artificial intelligence ethical norm system

An artificial intelligence system is software developed using one or more artificial intelligence technologies and methods that, for a given set of human-defined objectives, can generate outputs such as content, predictions, recommendations, or decisions that influence the environment it interacts with. Ethical risk refers to uncertain events or conditions that may arise from positive or negative impacts in the ethical relationships between people, between people and society, between people and nature, and between people and themselves, and in particular to the uncertain negative ethical effects these produce. Ethical risks of artificial intelligence may arise at the technical, data, application, and social levels. The technical level involves algorithm and system security, explainability, algorithmic discrimination, and algorithmic decision-making; the data level involves the risks attending data collection, storage, circulation, and use in artificial intelligence governance; at the application level, risks are reflected in the abuse and misuse of algorithms in artificial intelligence activities; and at the social level, artificial intelligence applications may produce inequality and indirectly give rise to ethical issues such as unemployment and wealth redistribution.

On October 30, 2018, while presiding over the ninth collective study session of the Political Bureau of the CPC Central Committee, General Secretary Xi Jinping pointed out the need to "strengthen research on legal, ethical, and social issues related to artificial intelligence, and establish and improve laws, regulations, institutional systems, and ethics to ensure the healthy development of artificial intelligence." The ethical risks of artificial intelligence must be taken seriously and guarded against, and laws, regulations, and ethical frameworks must be established to ensure and promote the healthy development of artificial intelligence.

1. Adhere to the “ethics first” approach to artificial intelligence governance

Ethics is an intricate concept that can be defined as the moral principles governing the behavior of individuals or groups. In other words, ethics is a system of principles, rules, or guidelines that helps determine what is good or right. Ethics can also be understood as the discipline that distinguishes right from wrong and defines the moral obligations and responsibilities of human or artificial entities. Law and ethics differ greatly in normative value level, scope of adjustment, normative approach, and degree of coercion. Legal norms are behavioral norms that regulate conscious conduct, while ethical norms encompass both values and codes of conduct. Legal norms are heteronomous: they are formulated by the state and backed by state coercion to ensure implementation. Ethical norms are more self-disciplined: they are formed by society and rely mainly on voluntary compliance by its members. In the field of artificial intelligence, it is necessary to advocate the design and development of "ethical" artificial intelligence, integrate the concept of "ethics first" throughout the development, design, and use of artificial intelligence, promote coordinated development and positive interaction between artificial intelligence activities and artificial intelligence ethics, ensure that artificial intelligence is safe, reliable, and controllable, and realize responsible artificial intelligence innovation.

"Ethics first" in artificial intelligence governance is also a vivid manifestation of inclusive and prudent supervision. Inclusive and prudent supervision requires regulators to be tolerant of new business forms, encourage innovation, protect innovation, and tolerate innovation. Article 55 of the "Regulations on Optimizing the Business Environment" stipulates: "The government and its relevant departments shall, in accordance with the principle of encouraging innovation, implement inclusive and prudent supervision of new technologies, new industries, new business formats, new models, etc." The artificial intelligence industry is still in the process of rapid development and change. The author believes that it may be difficult to restrict rights and obligations through legislation at this time. We should adhere to "ethics first", based on my country's own artificial intelligence development stage and social and cultural characteristics, follow the laws of scientific and technological innovation, and gradually establish an artificial intelligence ethics system that is in line with my country's national conditions. Ethical constraints should be integrated into all aspects of the artificial intelligence research and development process, and corresponding ethical requirements should be strictly observed in data collection, storage, use, etc., and the use of artificial intelligence technology and applications that violate ethical requirements is prohibited. The purpose of artificial intelligence ethical governance is not to "put on the brakes" on artificial intelligence technology and applications. Its purpose is also to encourage innovation and to set feasibility conditions for exploration and innovation at the forefront of artificial intelligence technology.

The principle of "ethics first" implies an examination of the relationship between ethical norms and legal norms. To a certain extent, ethics is an important supplement to law. Ethical concepts dominate the values ​​contained in law and affect the nature and direction of law. The spiritual essence and value orientation contained in ethical norms are often absorbed by legal norms. Artificial intelligence ethical norms arise before legal norms. Future artificial intelligence legislation can incorporate the core elements of ethical norms into the legal framework. For example, the "Opinions on Strengthening the Governance of Science and Technology Ethics" issued by the General Office of the Central Committee of the Communist Party of China and the General Office of the State Council in 2022 proposed: "Promote clear provisions on science and technology ethics supervision, investigation and punishment of violations and other governance work in the basic legislation of science and technology innovation, and implement science and technology ethics requirements in other relevant legislation." In the "Generative Human" issued by 7 departments including the Cyberspace Administration of China in 2023 The Interim Measures for the Management of Artificial Intelligence Services stipulates that the provision and use of generative artificial intelligence must comply with social morality and ethics, and includes requirements such as adhering to core socialist values, preventing discrimination, prohibiting monopolization and unfair competition, respecting the legitimate rights and interests of others, and improving service transparency. This reflects, to a certain extent, the absorption of ethical rules in artificial intelligence legal norms.

2. Formulation of ethical norms for artificial intelligence

In emerging risk areas such as artificial intelligence, legislators cannot specify in detail the activities that carry artificial intelligence risks or the safety standards and requirements to be followed, and emerging risks themselves develop dynamically. At present, conditions are not yet ripe for unified artificial intelligence legislation in China, and behavioral norms for artificial intelligence activities cannot be exhaustively established by law. There is therefore an urgent need to introduce artificial intelligence ethical norms. The point of introducing ethical norms is both to guide the basic direction of scientific and technological development and to give relevant R&D institutions and enterprises flexible space to make choices in specific technical scenarios. Once sufficient experience has been accumulated through ethics-based AI governance, consideration can be given to gradually replacing AI ethical norms with more detailed and precise laws.

For example, the U.S. Department of Defense adopted its Ethical Principles for Artificial Intelligence in February 2020, and the U.S. Office of the Director of National Intelligence issued the Principles of Artificial Intelligence Ethics for the Intelligence Community and an accompanying Artificial Intelligence Ethics Framework in July 2020. Article 69(1) of the European Commission's 2021 draft Artificial Intelligence Act provides that the Commission and Member States shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application, to artificial intelligence systems other than high-risk systems, of the requirements set out in Chapter 2 of the draft, on the basis of technical specifications and solutions. Such a code of conduct can be tailored to the intended purpose of the artificial intelligence system and constitutes a suitable means of ensuring compliance. On March 29, 2019, the Japanese Cabinet adopted the Social Principles of Human-Centric AI, establishing the highest-level standards for ethical supervision of artificial intelligence in Japan; the document sets out seven principles for the development and utilization of artificial intelligence. Japan's Ministry of Economy, Trade and Industry issued the Contract Guidelines on Utilization of AI and Data in 2019 and the Governance Guidelines for Implementation of AI Principles in 2021: the former guides companies in protecting privacy and rights in data utilization and artificial intelligence software development, and the latter guides companies in establishing artificial intelligence ethics supervision systems.

The formulation of China's artificial intelligence ethical norms reflects, to some extent, "top-down" characteristics. The New Generation Artificial Intelligence Development Plan, issued by the State Council in 2017, proposed that the government should play its important role in formulating ethical regulations and establish an ethical and moral framework to ensure the healthy development of artificial intelligence. The Opinions on Strengthening the Governance of Science and Technology Ethics, issued by the General Office of the CPC Central Committee and the General Office of the State Council in 2022, provided that each member unit of the National Science and Technology Ethics Committee is responsible, according to its division of responsibilities, for formulating science and technology ethics norms, and proposed drawing up ethics norms and guidelines for key areas including artificial intelligence. The Guidelines for the Construction of the National New Generation Artificial Intelligence Standard System, promulgated in 2020 by five departments including the Standardization Administration of China, pointed out that artificial intelligence ethical standards must be established, especially to guard against risks arising from artificial intelligence services impacting traditional morality, ethics, and the legal order.

In June 2019, China's National New Generation Artificial Intelligence Governance Committee issued the Governance Principles for the New Generation Artificial Intelligence - Developing Responsible Artificial Intelligence, which set out eight principles: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, security and controllability, shared responsibility, open collaboration, and agile governance. In September 2021, the same committee issued the Ethical Norms for the New Generation Artificial Intelligence, which proposed six basic ethical requirements: enhancing human well-being, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. It also proposed eighteen specific ethical requirements for activities such as artificial intelligence management, research and development, supply, and use.

Emerging technologies such as artificial intelligence are characterized by rapid iteration, strong uncertainty, complexity, and potential risk. The introduction of artificial intelligence ethical norms is also a manifestation of "agile governance," understood as "an action or approach that is flexible, fluid, or adaptable." Its characteristics can be summarized in the following two points.

First, the formation of artificial intelligence ethical norms requires relatively broad participation, with stakeholders such as governments, enterprises, and consumers taking part in the norm-forming process. Introducing procedural concepts and participatory mechanisms into this process allows different subjects to express their respective opinions, preferences, and positions, thereby creating a mechanism that promotes and encourages negotiation and mutual learning among organizations. Take the Artificial Intelligence Ethical Governance Standardization Guide promulgated in March 2023 as an example: it was led by the China Electronics Standardization Institute, relying on the National Artificial Intelligence Standardization General Working Group and the Artificial Intelligence Sub-Technical Committee of the National Information Technology Standardization Technical Committee, and was jointly compiled by 56 government, industry, academic, research, and user organizations, including Zhejiang University and Shanghai SenseTime Intelligent Technology Co., Ltd.

Second, artificial intelligence ethical norms also embody the typical form of "reflexive law." In a dynamically evolving governance environment, the performance of an artificial intelligence ethics code can be evaluated regularly, and the various components of the ethics system revised as needed, so that the principles and content of the code are amended appropriately. Compared with legal norms, ethical norms are "living documents" that are easier to supplement and amend continuously. By adjusting governance methods and ethical norms dynamically and in a timely manner, we can respond quickly and flexibly to the ethical challenges brought about by artificial intelligence innovation.

3. Basic principles that artificial intelligence ethics standards should adhere to

Constrained by historical conditions and stages of development, human understanding of the moral risks of artificial intelligence products lags behind, and ethical control over such products is often imperfect. At the same time, these products are being given ever more independent decision-making power, which gives rise to more ethical and moral issues. It is therefore all the more necessary to promote the formation of artificial intelligence ethical norms through government leadership combined with the participation of multiple subjects. As a code of ethics for science and technology, the code of ethics for artificial intelligence should reflect the principles of enhancing human well-being, promoting fairness and justice, protecting privacy and security, maintaining openness and transparency, and strengthening accountability.

(1) Improving human welfare

China's Constitution does not explicitly prescribe a national task of "promoting human well-being." However, the preamble of the Constitution states that it is necessary to "promote the coordinated development of material civilization, political civilization, spiritual civilization, social civilization, and ecological civilization," and Article 47 stipulates that citizens have the freedom to conduct scientific research; under that article, the state has the obligation to "encourage and assist" "creative work that is beneficial to the people" in scientific and technological undertakings. Artificial intelligence brings new opportunities for social construction. Its widespread application in scenarios such as education, medical care, elderly care, environmental protection, and urban operation will make public services more precise and comprehensively enhance human well-being.

At the Central Conference on Work Related to Comprehensively Governing the Country According to Law in November 2020, General Secretary Xi Jinping pointed out that law-based governance "must persist in serving the people and relying on the people. Reflecting the people's interests, reflecting the people's wishes, safeguarding the people's rights, and enhancing the people's well-being must be implemented into the entire process of comprehensively governing the country in accordance with the law." The development and application of artificial intelligence should adhere to a people-centered development philosophy, follow the common values of humanity, respect human rights and the fundamental interests of humankind, and abide by ethics and morality. The development and use of artificial intelligence should promote harmony and friendship between humans and machines, build an intelligent society, improve people's livelihood and well-being, and continuously enhance people's sense of gain and happiness.

Along the same lines, the development and use of artificial intelligence must not infringe on "human dignity": people must not be treated as objects or tools. Respecting "human dignity" means that every member of society receives from society the minimum positive guarantees of a dignified life. Article 38 of China's Constitution stipulates: "The personal dignity of citizens of the People's Republic of China shall not be violated." In developing and using artificial intelligence, special attention must be paid to safeguarding the dignity of children and the elderly, avoiding harm to children's personal dignity and avoiding exacerbating the powerlessness and loneliness of the elderly.

(2) Promote fairness and justice

In the application of artificial intelligence, algorithmic discrimination and data discrimination may both arise. Algorithms may embody value judgments, may assign inappropriate weight to particular objects, projects, or risks, and may even be implanted with improper purposes. Moreover, algorithms struggle to take full account of factors beyond rules and numbers, and the results of algorithmic learning may be unpredictable. The data used by artificial intelligence applications may likewise lack balance and representativeness, and may itself be biased, all of which undermines the fairness and impartiality of artificial intelligence activities.

In applying artificial intelligence, the principle of promoting fairness and justice should be upheld: adhere to the conception of equality that "like cases are treated alike," giving the same allocation to people or groups that share the same relevant characteristics. First, when artificial intelligence is used in administrative law enforcement, judicial adjudication, or the allocation of limited social resources, factors such as the activities of those affected, the results produced, and individuals' actual needs should be considered, so as to give each person "what they deserve." Second, the application of artificial intelligence should be universal and inclusive; efforts should be made to reduce, reconcile, or even eliminate states of "de facto inequality," so that everyone in society can equally enjoy the opportunity to share in artificial intelligence on the same footing, promoting social fairness, justice, and equality of opportunity, while also accommodating the specific needs of different ages, cultural systems, and ethnic groups. Third, discrimination and prejudice against different or specific groups must be avoided in data acquisition, algorithm design, technology development, and product development and application.

(3) Protect privacy and security

1. Protect privacy

Privacy refers to a natural person's peace in private life and to the private spaces, private activities, and private information that a person does not want others to know. Natural persons enjoy the right to privacy, and no organization or individual may infringe on it by spying, intrusion, leakage, disclosure, or other means. Much of artificial intelligence rests on deep learning, a data-driven technique that requires the collection of large amounts of data, which may include private information such as users' interests and hobbies as well as other personal information. In addition, when artificial intelligence technology is used to collect, analyze, and use personal data, information, and speech, it may cause harm, threats, and losses to personal privacy.

As for privacy protection, the research, development, and application of artificial intelligence must not yield products or services that infringe on personal privacy or personal information rights and interests. Providers of artificial intelligence services should comply with the relevant provisions of laws and administrative regulations such as the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law, as well as the regulatory requirements of the competent authorities. Where research, development, or application of artificial intelligence involves personal information, that information should be processed in accordance with the principles of legality, legitimacy, necessity, and good faith. The processing of personal information shall be based on the individual's voluntary and explicit consent, given with full knowledge; individuals' legitimate data rights and interests must not be harmed; personal information must not be illegally collected or used through theft, tampering, leakage, or similar means; and personal privacy rights must not be violated.

2. Protect security

According to Article 76 of the Cybersecurity Law, network security "refers to the ability to prevent network attacks, intrusions, interference, destruction, illegal use, and accidents by taking necessary measures to keep the network in a stable and reliable operating state, and to ensure the integrity, confidentiality, and availability of network data." The security of artificial intelligence systems should be ensured: systems and algorithms must not be open to control, attack, or alteration by hackers. At the same time, attention should be paid to personal safety in artificial intelligence activities, that is, to ensuring that artificial intelligence technology does not harm humans. It is therefore necessary to strengthen the network security protection of artificial intelligence products and systems, build monitoring and early-warning mechanisms for artificial intelligence security, and ensure that artificial intelligence develops within a safe and controllable range.

In addition, according to their importance and the degree of harm they may cause, artificial intelligence systems may be divided into three levels: medium-low-risk, high-risk, and ultra-high-risk intelligent systems. The relevant national authorities should improve scientific supervision methods compatible with innovative development and formulate corresponding classified and graded supervision rules or guidelines based on the characteristics of different artificial intelligence technologies and their applications in relevant industries and fields. For example, for high-risk and ultra-high-risk artificial intelligence applications, a regulatory model of ex-ante assessment and risk warning can be adopted; for medium- and low-risk applications, a model of ex-ante disclosure and post-event tracking can be adopted. This helps allocate limited regulatory resources more effectively and ensures the safe use of artificial intelligence.
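The tier-to-measure mapping described above can be sketched in code. The following is a purely illustrative sketch: the tier names and measure labels are hypothetical paraphrases of the article's categories, not terms drawn from any statute or regulation.

```python
# Illustrative sketch of classified, graded supervision: each risk tier
# maps to the regulatory measures suggested in the text. All identifiers
# are hypothetical labels for illustration only.

RISK_TIER_MEASURES = {
    "medium_low": ["ex-ante disclosure", "post-event tracking"],
    "high": ["ex-ante assessment", "risk warning"],
    "ultra_high": ["ex-ante assessment", "risk warning"],
}

def regulatory_measures(tier: str) -> list[str]:
    """Return the supervisory measures attached to a given risk tier."""
    try:
        return RISK_TIER_MEASURES[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}") from None
```

A lookup of this kind makes the point of graded supervision concrete: the regulatory burden is a function of the assessed tier, so scarce resources concentrate on the high-risk end.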

(4) Maintain openness and transparency

In the field of artificial intelligence ethics, openness and transparency mean disclosing the source code and data used in an artificial intelligence system, without harming the interests of the algorithm's owners, so as to avoid "technical black boxes." For example, Article 16 of the Provisions on the Administration of Algorithm Recommendations for Internet Information Services stipulates: "Algorithm recommendation service providers should inform users in a conspicuous manner of the provision of algorithm recommendation services, and disclose the basic principles, purposes, and main operating mechanisms of algorithm recommendation services in an appropriate manner." One may consider disclosing the algorithmic process, the relevant records generated when verifying the algorithm, and an account of how the algorithm was developed and what considerations entered into its development. However, the degree of disclosure should be determined in light of the specific scenario and audience: in some cases the algorithm should be disclosed publicly, in others only to a limited audience, and in still others not at all. Disclosure of algorithms should not be applied mechanically as a blanket principle, for complete disclosure of algorithm code and data may leak sensitive personal data, damage the trade secrets and competitive advantages of those who design artificial intelligence systems, and even endanger national security.

Openness and transparency are also reflected in the explainability of artificial intelligence systems. Article 24, Paragraph 3 of the Personal Information Protection Law stipulates: "When decisions are made through automated decision-making that have a significant impact on personal rights and interests, individuals have the right to request an explanation from the personal information processor, and have the right to refuse the personal information processor to make a decision only through automated decision-making." Giving reasons helps protect the procedural rights of the parties and enhances the acceptability of decisions. Accordingly, when artificial intelligence products and services significantly affect personal rights and interests, users have the right to require the provider to explain the process and method by which the product or service reached its decision, and to complain about unreasonable explanations. When a provider gives an explanation, it should first offer a partial explanation, that is, an explanation of the specific decision rather than of the activities of the entire artificial intelligence system; second, it should explain the causal relationship, identifying which factors were present and why those factors led to the result. There is no need to explain the technical details of the system at length.

(5) Strengthen accountability

Accountability applies to the different subjects in artificial intelligence activities. Accountability is part of good governance, concerned with responsibility, transparency, answerability, and responsiveness. It is explanatory, carrying the responsibility to record or explain actions taken; it is also corrective, carrying the responsibility to remedy errors when they occur. Accountability in the use of artificial intelligence can be decomposed into six elements: who is accountable, in what manner, against what standards, for what matters, through what procedures, and with what consequences. By clarifying the accountable subject, the mode of accountability, the accountability standard, its scope, its procedure, and its consequences, multiple subjects in the network of artificial intelligence activities can be held accountable, and, through procedural mechanisms established in accordance with law, the accountability system can be matched to the artificial intelligence activities in question.
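The six elements above can be pictured as a simple record structure. The sketch below is purely illustrative; the class name, field names, and example values are the author's hypothetical paraphrases, not terms from any legal instrument.

```python
from dataclasses import dataclass

@dataclass
class AccountabilityRecord:
    """One accountability relationship, decomposed into six elements."""
    subject: str      # who is accountable
    manner: str       # in what manner accountability is exercised
    standard: str     # against what standards
    scope: str        # for what matters
    procedure: str    # through what procedures
    consequence: str  # with what consequences

# Hypothetical example: a provider of a recommendation service.
record = AccountabilityRecord(
    subject="service provider",
    manner="explanatory and corrective",
    standard="platform transparency rules",
    scope="automated recommendation decisions",
    procedure="internal review, then regulator complaint channel",
    consequence="correction of the decision and remediation",
)
```

Writing the elements out this way underscores the text's point: each accountability relationship in the activity network must be specified in full, not merely asserted in the abstract.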

Artificial intelligence systems are aggregations of data sets, technology stacks, and complex networks of people, so accountability relationships are often complex. Artificial intelligence technology has the potential to replace human labor and even to shape people's inner lives; yet it fragments, divides, and decentralizes human characteristics and skills. Artificial intelligence might be regarded as a "person for specific purposes," but it can play a role only in specific fields, aspects, and links. It cannot completely replace humans, nor should it weaken humans' status as subjects. Therefore, in using artificial intelligence, we should insist that humans are the ultimate responsible subjects, clarify the responsibilities of stakeholders, comprehensively raise awareness of responsibility, practice self-examination and self-discipline throughout the entire life cycle of artificial intelligence, establish accountability mechanisms, and neither evade review of responsibility nor shirk responsibility.

4. Build a diverse and overlapping artificial intelligence ethical norm system

The discussion above considered the possibility of introducing an "ethics first" principle across the entire life cycle of artificial intelligence use, the general status of China's artificial intelligence ethical norms, the substantive principles and core concerns those norms should contain, and how to internalize these principles and concerns into China's legal rules and policy system for artificial intelligence.

It should be pointed out that artificial intelligence ethical norms cannot be relied upon to address every problem arising from artificial intelligence activities in China. As far as the formation and implementation of these norms are concerned, the following problems may remain. First, artificial intelligence ethical norms are often guiding or advocacy norms; if their core content cannot be embedded in legal norms, much of it will remain merely hortatory, and industry compliance will be inconsistent. Second, the content of such norms may be vague or overly idealistic, leaving the industry unsure of what measures to take to satisfy their requirements. Third, violations of artificial intelligence ethical norms are difficult to establish, and it is correspondingly difficult to impose subsequent sanctions for such violations.

In future artificial intelligence legislation, ethical principles and ethical norms should be given a clearly defined place. The author believes legislation could provide that "those engaging in artificial intelligence research, development, application, and related activities shall comply with ethical principles and ethical norms." The subjects that formulate ethical codes, however, need not be limited to administrative departments. The law can expressly provide that societies, associations, industrial technology alliances, and similar bodies have the right to formulate and implement artificial intelligence ethical codes by promulgating standards or rules. It should be noted that Article 18, Paragraph 1, of the Standardization Law provides: "The state encourages social groups such as societies, associations, chambers of commerce, federations, and industrial technology alliances to coordinate relevant market entities in jointly formulating group standards that meet the needs of the market and of innovation, to be adopted by agreement among the members of the group or voluntarily adopted by society in accordance with the group's regulations." Group standards are thus standards voluntarily adopted by group members or by society. The formulation of self-disciplinary artificial intelligence ethical norms by societies, associations, and industrial technology alliances helps build such norms on a foundation of consensus and adapt them to changes in the artificial intelligence industry. This not only mobilizes the energies of society and draws on professional knowledge in pursuit of self-regulation, but also shortens the distance between rule makers and the public, aids the implementation of ethical norms, and allows artificial intelligence ethical norms and legal norms to substitute for and complement one another, forming an overlapping system of rules.

Individual enterprises should also be encouraged to promulgate artificial intelligence ethics codes governing their own operations, so as to implement trustworthiness requirements for artificial intelligence technologies, products, and services. Since 2018, companies at home and abroad, including Google, Microsoft, IBM, Megvii, and Tencent, have issued corporate artificial intelligence governance guidelines and established internal institutions to carry out artificial intelligence governance responsibilities. The author believes that future artificial intelligence legislation should expressly require units engaged in artificial intelligence scientific and technological activities to fulfill their primary responsibility for artificial intelligence ethics management and to strive to formulate their own artificial intelligence ethical norms or guidelines. Where the research of such a unit touches on ethically sensitive areas of science and technology, the unit should establish an artificial intelligence ethics review committee, clarify that committee's responsibilities for ethics review and supervision, and improve rules and processes for ethics review, risk treatment, and the handling of violations.

When the law obliges market entities in the field of artificial intelligence to formulate artificial intelligence ethical norms and establish artificial intelligence ethics review committees, it embodies the idea of "regulation of self-regulation," or "meta-regulation," which links command-style administrative regulation with the self-regulation of private entities, forming a hinge between the two. Meta-regulation casts a net that is wide-meshed yet inescapable: it leaves ample flexibility for the activities of the self-regulatory system, while ensuring that, when self-regulation fails, the stability of the artificial intelligence regulatory structure can still be maintained.

It must also be pointed out that the requirements of due legal process and public participation should not be abandoned merely because artificial intelligence ethical standards have a strong scientific and technological character. In building a diverse and overlapping system of artificial intelligence ethical norms, an appropriately designed process of democratic deliberation should be constructed so that stakeholders, including the general public, social groups, and the news media, can discuss the corresponding artificial intelligence policy issues with sufficient information, equal opportunities to participate, and open decision-making procedures. Different voices should be able to enter the arena in which artificial intelligence ethical norms are formed, and different interests should be appropriately weighed, the better to consolidate scientific consensus. This ensures the legality, purposefulness, democracy, and transparency of ethical norms; improves their scientific quality, effectiveness, and flexibility; and provides effective assistance and guidance to the artificial intelligence industry.
