Song Hualin: Construction Of The Ethical Norms Of Artificial Intelligence From The Perspective Of The Rule Of Law
Artificial intelligence is a key driving force of the new round of scientific and technological revolution and industrial transformation. It not only provides new technical support for the development of every sector, but also profoundly changes people's ways of life.

Author | Song Hualin, Dean, Professor and Doctoral Supervisor of Nankai University Law School
Recipient of the 10th "National Outstanding Young Legal Scholar" award
Signed author of Peking University Legal Information Network
Source | Peking University Law School Journal Library
"Digital Rule of Law" 2023 Issue 6
Editor's note: Artificial intelligence is a key driving force of the new round of scientific and technological revolution and industrial transformation. It not only provides new technical support for the development of every sector, but also profoundly changes people's ways of life. Since the 18th National Congress of the Communist Party of China, the Party Central Committee has accurately grasped the general trend of the times, treated artificial intelligence as a key capability for technological innovation, and made a series of important arrangements to promote its development. On the one hand, it has actively cultivated an innovation ecosystem, strengthened basic research and infrastructure, and improved the supply chain system; on the other hand, it has insisted on giving equal weight to development and security, guarded against risks in accordance with laws and regulations, and gradually established and improved the relevant laws, regulations and regulatory systems. How should we accurately understand the new demands and challenges that artificial intelligence technology poses for legal regulation? How should the promotion of industry innovation and development be balanced against risk prevention and security? This issue's "Round Table Topic" continues to focus on "China's Governance of Artificial Intelligence" and invites well-known experts and scholars to a dialogue exploring the concepts and methods of artificial intelligence governance, in the hope of providing intellectual support for the development and governance of artificial intelligence in China and helping artificial intelligence develop efficiently and healthily.
Table of Contents:
1. Adhere to the "ethics first" of artificial intelligence governance
2. Formulation of artificial intelligence ethical norms
3. Basic principles that should be adhered to in the ethical norms of artificial intelligence
4. Build a diverse and overlapping artificial intelligence ethical normative system
An artificial intelligence system is software developed with one or more artificial intelligence technologies and methods that, for a given set of human-defined objectives, can generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with. Ethical risk refers to the uncertain events or conditions that may arise, through positive or negative effects, in the ethical relationships between people, between people and society, between people and nature, and between people and themselves, and in particular to the uncertain negative ethical effects that may be produced. Artificial intelligence may give rise to ethical risks at the technical, data, application and social levels: the technical level involves algorithm and system security, explainability, algorithmic discrimination and algorithmic decision-making; the data level involves the risks inherent in the collection, storage, circulation and use of data in the course of artificial intelligence governance; risks at the application level are reflected in the abuse and misuse of algorithms in artificial intelligence activities; and at the social level, artificial intelligence applications may create inequality, indirectly giving rise to ethical problems such as unemployment and wealth redistribution.
On October 30, 2018, while presiding over the ninth collective study session of the Political Bureau of the CPC Central Committee, General Secretary Xi Jinping pointed out that it is necessary to "strengthen research on the legal, ethical and social issues related to artificial intelligence, and establish and improve the laws, regulations, institutional systems and ethics that ensure the healthy development of artificial intelligence." The ethical risks of artificial intelligence must be taken seriously and guarded against: laws, regulations and ethical frameworks should be established to ensure, and ethical norms formulated to promote, the healthy development of artificial intelligence.
1. Adhere to the "ethics first" of artificial intelligence governance
Ethics is a complex concept. It can be defined as the moral principles that regulate the behavior of an individual or a group; in other words, ethics is a system of principles, rules or norms that helps determine what is good or right, and it can also be regarded as a discipline that distinguishes right from wrong and defines the moral obligations and responsibilities of humans, or of artificial intelligence. Law and ethics differ greatly in the level of value they embody, the scope they regulate, the methods by which they regulate, and their degree of coercion. Legal norms are behavioral norms that regulate conscious conduct, while ethical norms encompass both values and codes of conduct. Legal norms are heteronomous: they are formulated by the state and their implementation is backed by state power. Ethical norms are more self-disciplinary: they are formed by society and rely mainly on the conscious compliance of its members. In the field of artificial intelligence, we should advocate the design and development of "ethical" artificial intelligence, integrate the concept of "ethics first" into the entire process of developing, designing and using artificial intelligence, promote coordinated development and benign interaction between artificial intelligence technology activities and artificial intelligence ethics, ensure that artificial intelligence is safe, reliable and controllable, and realize responsible artificial intelligence innovation.
"Ethics first" in artificial intelligence governance is also a vivid expression of inclusive and prudent supervision, which requires regulators to be tolerant of new business forms and to encourage, protect and accommodate innovation. Article 55 of the Regulations on Optimizing the Business Environment stipulates: "The government and its relevant departments shall, in accordance with the principle of encouraging innovation, implement inclusive and prudent supervision of new technologies, new industries, new business forms and new models." The artificial intelligence industry is still developing and changing rapidly, and the author believes that it may be difficult at this stage to fix rights and obligations through legislation. We should instead adhere to "ethics first" and, proceeding from China's own stage of artificial intelligence development and its social and cultural characteristics, follow the laws of scientific and technological innovation and gradually establish an artificial intelligence ethics system suited to national conditions. Ethical constraints should be integrated into every link of the artificial intelligence research and development process, the corresponding ethical requirements should be strictly observed in data collection, storage and use, and artificial intelligence technologies and applications that violate ethical requirements should be prohibited. The purpose of the ethical governance of artificial intelligence is not to "slam the brakes" on artificial intelligence technology and applications, but to encourage innovation and to set feasible conditions for exploration at the technological frontier of artificial intelligence.
The principle of "ethics first" entails an examination of the relationship between ethical norms and legal norms. Ethics is, to a certain extent, an important supplement to law: ethical concepts shape the values embodied in law and influence its character and direction, and the spiritual essence and value orientation of ethical norms are often absorbed into legal norms. Where legal norms have not yet taken shape, future artificial intelligence legislation can incorporate the core elements of ethical norms into the legal framework. For example, in the Opinions on Strengthening the Governance of Science and Technology Ethics issued in 2022, the General Office of the CPC Central Committee and the General Office of the State Council proposed to "promote clear provisions on science and technology ethics supervision and on the investigation and punishment of violations in basic legislation on scientific and technological innovation, and implement science and technology ethics requirements in other relevant legislation." The Interim Measures for the Management of Generative Artificial Intelligence Services, issued in 2023 by seven departments including the Cyberspace Administration of China, stipulate that the provision and use of generative artificial intelligence must abide by social morality and ethics, uphold the core socialist values, prevent discrimination, refrain from monopolistic and unfair competitive conduct, respect the legitimate rights and interests of others, and improve the transparency of services. This reflects, to a certain extent, the absorption of ethical rules by the legal norms governing artificial intelligence.
2. Formulation of artificial intelligence ethical norms
In emerging risk areas such as artificial intelligence, lawmakers cannot specify in detail the activities that involve artificial intelligence risks or the safety standards and requirements those activities must meet; moreover, emerging risks are themselves in dynamic development. At present the time is not yet ripe for unified artificial intelligence legislation in China, and it is not feasible to use law to establish behavioral norms for artificial intelligence activities; it is therefore urgent to introduce artificial intelligence ethical norms. The rationale for introducing ethical norms is to guide the basic direction of scientific and technological development while leaving R&D institutions and enterprises flexible room for choice in specific technical scenarios. Once sufficient experience has been accumulated through governance by ethical norms, consideration can be given to gradually replacing them with more detailed and precise laws.
For example, the US Department of Defense adopted its "Ethical Principles for Artificial Intelligence" in February 2020, and the Office of the US Director of National Intelligence issued its "Principles of Artificial Intelligence Ethics for the Intelligence Community" and an accompanying "Artificial Intelligence Ethics Framework" in July 2020. Article 69, paragraph 1 of the European Commission's 2021 draft Artificial Intelligence Act provides that the Commission and the Member States shall encourage and facilitate the drawing up of codes of conduct, on the basis of technical specifications and solutions, intended to foster the voluntary application of the requirements set out in Chapter 2 of the draft to artificial intelligence systems other than high-risk ones. Such codes of conduct can be tailored to the intended purpose of artificial intelligence systems and constitute an appropriate means of ensuring system compliance. On March 29, 2019, the Japanese Cabinet adopted the "Social Principles of Human-Centric AI", the highest-level guideline for the ethical supervision of artificial intelligence in Japan, which expounds seven principles for the development and utilization of artificial intelligence. Japan's Ministry of Economy, Trade and Industry then issued contract guidelines on the utilization of AI and data in 2019 and governance guidelines for implementing the AI principles in 2021: the former guides enterprises in protecting privacy and related rights in the course of data utilization and artificial intelligence software development, while the latter guides enterprises in establishing internal systems for artificial intelligence ethical supervision.
The formulation of China's artificial intelligence ethical norms has a "top-down" character. The New Generation Artificial Intelligence Development Plan issued by the State Council in 2017 proposed giving better play to the government's important role in formulating ethical laws and regulations and establishing an ethical and moral framework to ensure the healthy development of artificial intelligence. In the Opinions on Strengthening the Governance of Science and Technology Ethics, the General Office of the CPC Central Committee and the General Office of the State Council pointed out that all member units of the National Science and Technology Ethics Committee are responsible, according to their division of duties, for formulating science and technology ethics norms and related work, and proposed formulating science and technology ethics norms and guidelines in key fields such as artificial intelligence. The Guidelines for the Construction of a National New Generation Artificial Intelligence Standard System, issued in 2020 by five departments including the Standardization Administration of China, pointed out the need to establish artificial intelligence ethical standards, in particular to guard against the risk that artificial intelligence services will impact traditional morality, ethics and the legal order.
In June 2019, the National New Generation Artificial Intelligence Governance Expert Committee issued the "New Generation Artificial Intelligence Governance Principles: Developing Responsible Artificial Intelligence", emphasizing eight principles: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, security and controllability, shared responsibility, open collaboration, and agile governance. In September 2021, the same committee issued the "New Generation Artificial Intelligence Ethical Norms", which set out six basic ethical requirements: improving human welfare, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. The Norms also put forward eighteen specific ethical requirements for particular activities such as artificial intelligence management, research and development, supply, and use.
Emerging technologies such as artificial intelligence are marked by rapid iteration, strong uncertainty, complexity and potential risk. The introduction of artificial intelligence ethical norms is also an expression of "agile governance", understood as a set of actions or methods that are nimble, fluid, flexible and adaptive. Its characteristics can be summarized in the following two points.
First, the formation of artificial intelligence ethical norms involves broad participation: stakeholders such as government, enterprises and consumers should all take part in the norm-forming process. By introducing programmatic concepts and participatory procedures, different subjects can express their respective views, preferences and positions, forming a mechanism that promotes and encourages consultation and mutual learning among organizations. Take the "Guidelines for Standardization of Artificial Intelligence Ethical Governance" issued in March 2023 as an example: led by the China Electronics Technology Standardization Research Institute and relying on the National Artificial Intelligence Standardization Group and the artificial intelligence sub-committee of the National Information Technology Standardization Technical Committee, they were jointly compiled by 56 government, industry, academic, research and application units, including Zhejiang University and Shanghai SenseTime Intelligent Technology Co., Ltd.
Second, artificial intelligence ethical norms also exemplify the "reflexive method". In a dynamically evolving governance environment, the performance of artificial intelligence ethical norms can be evaluated regularly, assessing whether the various parts of the ethical system need to change, so that the principles and content of the norms can be revised appropriately. Compared with legal norms, ethical norms are "living documents" that are easier to supplement and correct continuously. By adjusting governance methods and ethical norms in a timely and dynamic way, we can respond quickly and flexibly to the ethical challenges posed by artificial intelligence innovation.
3. Basic principles that should be adhered to in the ethical norms of artificial intelligence
Owing to historical conditions and stages of development, human understanding of the moral hazards of artificial intelligence products lags behind, and perfect ethical control over such products is often lacking. At the same time, these products are being given ever greater independent decision-making power, giving rise to more ethical and moral problems. It is therefore necessary to promote the formation of artificial intelligence ethical norms through a government-led process with the participation of multiple subjects. As norms of scientific and technological ethics, artificial intelligence ethical norms should embody the principles of improving human welfare, promoting fairness and justice, protecting privacy and security, maintaining openness and transparency, and strengthening accountability.
(I) Enhance human welfare
China's Constitution does not explicitly prescribe a national task of "enhancing human welfare", but its preamble states the need to "promote the coordinated development of material civilization, political civilization, spiritual civilization, social civilization and ecological civilization." Article 47 of the Constitution provides that citizens have the freedom to engage in scientific research, and under this article the state has an obligation to "encourage and assist" the "creative work beneficial to the people" in scientific and technological undertakings. Artificial intelligence brings new opportunities for social construction. Its widespread use in scenarios such as education, medical care, elderly care, environmental protection and urban operation will improve the precision of public services and comprehensively enhance human welfare.
At the Central Conference on Work Related to Comprehensively Advancing the Rule of Law in November 2020, General Secretary Xi Jinping pointed out that comprehensively advancing the rule of law "must adhere to being for the people and relying on the people, and must ensure that reflecting the people's interests, expressing the people's wishes, safeguarding the people's rights and interests, and enhancing the people's well-being are implemented in every field and throughout the whole process of comprehensively advancing the rule of law." The development and application of artificial intelligence should adhere to a people-centered development philosophy, follow the common values of humanity, respect human rights and the fundamental interests of humankind, and abide by ethics and morality. The development and utilization of artificial intelligence should promote harmony and friendliness between humans and machines, improve people's livelihood and welfare through the building of an intelligent society, and continuously enhance people's sense of gain and happiness.
Following this line of reasoning, the development and utilization of artificial intelligence must not infringe "human dignity": people may not be treated as objects or tools. Respecting "human dignity" means that every member of society receives from society a minimum, dignified, positive guarantee. Article 38 of China's Constitution provides: "The personal dignity of citizens of the People's Republic of China shall be inviolable." When developing and using artificial intelligence, special attention should be paid to safeguarding the dignity of children, the elderly and other groups, avoiding harm to children's personal dignity and avoiding aggravating the elderly's sense of powerlessness and loneliness.
(II) Promote fairness and justice
In the application of artificial intelligence, algorithmic discrimination and data discrimination may arise. Algorithms may embody value judgments, may assign improper weights to particular objects, items or risks, and may even have improper purposes implanted in them. Moreover, algorithms can hardly take full account of factors beyond rules and numbers, and the results of algorithmic learning may be unpredictable. The data used by artificial intelligence applications may likewise lack balance and representativeness, and the data itself may carry bias, all of which affects the fairness and justice of artificial intelligence activities.
In applying artificial intelligence, we should adhere to the principle of promoting fairness and justice, that is, to the conception of equality that treats like cases alike and gives the same allocation to persons or groups who share the same relevant characteristics. First, when artificial intelligence is used in administrative law enforcement, judicial adjudication or the allocation of limited social resources, factors such as the activities the affected persons have undertaken, the results produced and their actual needs should be considered, so that each person receives, as far as possible, what he or she deserves. Second, the application of artificial intelligence should be universal and inclusive, striving to reduce, reconcile and even eliminate states of "de facto inequality", so that everyone, on an equal footing, can share in the opportunities artificial intelligence brings to society as a whole; it should promote social fairness, justice and equality of opportunity, while also taking into account the particular needs of different ages, different cultural systems and different ethnic groups. Third, discrimination and prejudice against different or particular groups must be avoided in data acquisition, algorithm design, technology development, and product development and application.
(III) Protect privacy and security
1. Protect privacy
Privacy is the tranquility of a natural person's private life and the private spaces, private activities and private information that he or she does not want others to know. Natural persons enjoy the right to privacy, and no organization or individual may infringe it by prying, intrusion, disclosure, publication or other means. The use of artificial intelligence rests on the deep learning of algorithms, but deep learning, as a data-driven technology, requires the collection of large amounts of data, which may involve private information such as users' interests and hobbies as well as other personal information. Furthermore, when artificial intelligence technology is used to collect, analyze and exploit personal data, information and speech, it may bring harm, threats and losses to personal privacy.
As to privacy protection, artificial intelligence research, development and application must not supply products or services that infringe personal privacy or personal information rights. Providers of artificial intelligence services should comply with laws and administrative regulations such as the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law, as well as the regulatory requirements of the relevant competent departments. In artificial intelligence research, development and application, personal information should be processed in accordance with the principles of lawfulness, legitimacy, necessity and good faith, and when personal information is processed, the individual's voluntary and explicit consent should be obtained on the premise of full knowledge. It is forbidden to harm individuals' legitimate data rights and interests, to collect or use personal information illegally by stealing, tampering with or leaking it, or to infringe individuals' privacy rights.
2. Protect security
Under Article 76 of the Cybersecurity Law, network security "refers to the ability, through the adoption of necessary measures, to prevent attacks on, intrusions into, interference with, destruction of and illegal use of networks, as well as accidents, to keep networks in a state of stable and reliable operation, and to ensure the integrity, confidentiality and availability of network data." The security of artificial intelligence systems should be guaranteed: algorithms should not fall under the control of hackers, and systems and algorithms should not be attacked or altered by them. At the same time, attention should be paid to personal safety in artificial intelligence activities, that is, to ensuring that artificial intelligence technology does not harm human beings. It is therefore necessary to strengthen network security protection for artificial intelligence products and systems and to build monitoring and early-warning mechanisms for artificial intelligence security, so that the development of artificial intelligence remains within a safe and controllable range.
In addition, according to the importance of an artificial intelligence system and the degree of harm it may cause, artificial intelligence systems may be divided into three levels: low- and medium-risk intelligent systems, high-risk intelligent systems and ultra-high-risk intelligent systems. The relevant national competent departments should, on the basis of the characteristics of different artificial intelligence technologies and their service applications in particular industries and fields, improve scientific supervision methods compatible with innovative development and formulate corresponding classified and graded supervision rules or guidelines. For example, high-risk and ultra-high-risk artificial intelligence applications could be subject to a regulatory model of prior evaluation and risk warning, while low- and medium-risk applications could be subject to prior disclosure and subsequent tracking. This helps allocate limited regulatory resources more effectively and ensures the safe utilization of artificial intelligence.
(IV) Stay open and transparent
In the field of artificial intelligence ethics, the openness and transparency of artificial intelligence refer to disclosing the source code and data used in an artificial intelligence system, without harming the interests of the owner of the algorithm, so as to avoid a "technical black box". For example, Article 16 of the Provisions on the Administration of Algorithm Recommendation in Internet Information Services stipulates: "Algorithm recommendation service providers shall inform users in a conspicuous manner that they are providing algorithm recommendation services, and shall publicize in an appropriate manner the basic principles, purposes and main operating mechanisms of those services." One may consider publicizing the algorithmic process, publicizing appropriate records generated when the algorithm is verified, and disclosing to the public how the algorithm was developed and what considerations its development required. The degree of disclosure, however, should be determined by the specific scenario and the specific audience: sometimes an algorithm should be disclosed, sometimes only on a small scale, and sometimes not at all. Algorithm disclosure should not be mechanically elevated into a general principle. Fully exposing an algorithm's code and data may leak sensitive personal data, may damage the trade secrets and competitive advantages of the system's designers, and may even endanger national security.
Openness and transparency are also reflected in the explainability of artificial intelligence systems. Article 24, paragraph 3 of the Personal Information Protection Law provides: "Where a decision that has a significant impact on an individual's rights and interests is made by means of automated decision-making, the individual has the right to ask the personal information processor to explain it, and the right to refuse decisions made solely by automated means." Statements of reasons help protect the procedural rights of the parties and enhance the acceptability of decisions. Accordingly, when artificial intelligence products and services have a significant impact on personal rights and interests, users are entitled to ask providers to explain the process and method by which the product or service reached its decision, and to complain about unreasonable explanations. When an artificial intelligence provider gives an explanation, it is, first, a local explanation: an explanation of a particular decision, not an account of the activities of the entire system. It is, second, a causal explanation: an account of what factors were present and why they produced the result in question. There is no need to dwell on the technical details of the system.
(V) Strengthen accountability
Accountability applies to different subjects in artificial intelligence activities. Accountability is part of good governance and is closely related to responsibility, transparency, answerability and liability. Accountability is explanatory: it requires recording or explaining the actions taken. It is also corrective: if errors occur, responsibility for correcting them must be borne. Accountability in the utilization of artificial intelligence can be decomposed into six elements: who holds others accountable, who is held accountable, by what standards, for what matters, through what procedures, and with what consequences. By clarifying the subjects, methods, standards, scope, procedures and consequences of accountability, it becomes possible to hold the multiple subjects in the artificial intelligence activity network accountable, and, through comprehensive accountability methods and legally established procedural mechanisms, to match the accountability system to the activities being held to account.
Artificial intelligence systems are assemblages of data sets, technology stacks and complex interpersonal networks, so accountability relationships are often complicated. Artificial intelligence technology may replace human labor and even influence the human spiritual world, but what artificial intelligence does is fragment, divide and decentralize human characteristics and skills. Even if artificial intelligence is regarded as a "person for a specific purpose", it can only play a role in specific fields, specific aspects and specific links. Artificial intelligence cannot completely replace human beings, nor can it be allowed to weaken humans' status as subjects. Therefore, in the use of artificial intelligence, we should insist that human beings are the ultimate responsible subjects, clarify the responsibilities of stakeholders, comprehensively strengthen the sense of responsibility, practice self-reflection and self-discipline in every link of the artificial intelligence life cycle, establish an artificial intelligence accountability mechanism, and neither avoid review of responsibility nor evade responsibility.
4. Build a diverse and overlapping artificial intelligence ethical normative system
The foregoing has discussed the possibility of introducing the principle of "ethics first" across the entire life cycle of artificial intelligence utilization, the general status of artificial intelligence ethical norms in China, the substantive principles and core concerns such norms should contain, and how to internalize these principles and concerns into China's legal rules and policy system for artificial intelligence.
It should be pointed out that not all problems arising from artificial intelligence activities in China can be solved by relying on artificial intelligence ethical norms. As to the formation of their content and their implementation mechanisms, the following problems may remain. First, artificial intelligence ethical norms are often guiding or advocacy norms; if their core content cannot be embedded in legal norms, much of what they prescribe remains merely advocated, and the industry will not necessarily comply. Second, the content of artificial intelligence ethical norms may be vague or overly idealized, and the industry may not know what measures to take to implement their requirements. Third, it is difficult to establish that a given behavior violates artificial intelligence ethical norms, and equally difficult to impose subsequent sanctions on such violations.
In future artificial intelligence legislation, ethical principles and ethical norms should be given a clear place. The author believes that legislation can stipulate that "engaging in artificial intelligence research, development, application, and related activities shall comply with ethical principles and ethical norms." However, the subjects that formulate ethical norms are not limited to administrative departments. The law can expressly provide that societies, associations, industrial technology alliances, and similar bodies have the right to formulate and implement artificial intelligence ethical norms by issuing standards or rules. It should be noted that Article 18, Paragraph 1 of the Standardization Law provides: "The state encourages social groups such as societies, associations, chambers of commerce, federations, and industrial technology alliances to coordinate relevant market entities in jointly formulating group standards that meet the needs of the market and of innovation, to be adopted by the members of the group by agreement or voluntarily adopted by society in accordance with the provisions of the group." Group standards are standards adopted voluntarily by group members or by society. The formulation of self-disciplinary artificial intelligence ethical norms by societies, associations, and industrial technology alliances helps ground ethical norms in consensus and better adapt them to changes in the artificial intelligence industry. This not only activates the forces of society and draws on professional knowledge in pursuit of self-regulation, shortening the distance between rule makers and the public and facilitating the implementation of ethical norms, but also allows different artificial intelligence ethical norms to substitute for and supplement one another, forming an overlapping system of rules.
Individual enterprises should be encouraged to issue artificial intelligence ethical norms governing their own operations, so as to implement the requirements of trustworthiness for artificial intelligence technologies, products, and services. Since 2018, companies at home and abroad such as Google, Microsoft, IBM, Megvii, and Tencent have issued corporate artificial intelligence governance guidelines and set up internal institutions to carry out artificial intelligence governance responsibilities. The author believes that future artificial intelligence legislation should expressly stipulate that units engaged in artificial intelligence technology activities shall fulfill their primary responsibility for artificial intelligence ethics management and endeavor to formulate artificial intelligence ethical norms or guidelines for their own units. Where such a unit's research involves ethically sensitive areas of science and technology, it should establish an artificial intelligence ethics review committee, clarify that committee's responsibilities for ethics review and supervision, and improve the rules and procedures for artificial intelligence ethics review, risk disposal, and the handling of violations.
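The internal review gate described above can be sketched as a minimal decision rule. This is an editor's illustrative sketch only: the set of "sensitive areas" and the outcome labels are hypothetical, not taken from any law or corporate guideline mentioned in the text.

```python
# Hypothetical list of ethically sensitive areas that would trigger
# review by a unit's AI ethics review committee.
SENSITIVE_AREAS = {"biometrics", "deep synthesis", "automated scoring"}

def requires_committee_review(project_areas: set) -> bool:
    """A project touching any sensitive area triggers committee review."""
    return bool(project_areas & SENSITIVE_AREAS)

def review_outcome(project_areas: set, committee_approved: bool) -> str:
    """Gate a project on ethics review: proceed, or halt and remediate."""
    if not requires_committee_review(project_areas):
        return "proceed"  # no sensitive areas involved, no committee gate
    return "proceed" if committee_approved else "halt and remediate"

print(review_outcome({"biometrics"}, committee_approved=False))
# prints "halt and remediate"
```

The design point mirrors the text: the committee is not consulted on every project, but for sensitive research it holds a binding veto, with risk disposal and violation handling following a refusal.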
When the law imposes on market entities in the field of artificial intelligence the obligation to formulate artificial intelligence ethical norms and to establish artificial intelligence ethics review committees, it embodies the essence of "regulation of self-regulation," or "meta-regulation," which forms a hinge between administrative regulation characterized by command-based instruments and the self-regulation of private subjects. Meta-regulation casts a net that is "loose yet lets nothing slip through": it leaves sufficient elastic space for the activities of the self-regulatory system, while ensuring that, when self-regulation fails, the stability of the artificial intelligence regulatory structure can still be maintained.
It should be pointed out that the requirements of due legal process and public participation should not be abandoned merely because artificial intelligence ethics has a strong technological background. In building a diversified and overlapping system of artificial intelligence ethical norms, an appropriate procedure of democratic deliberation should be constructed so that stakeholders, including the general public, social groups, and the news media, can discuss the corresponding artificial intelligence policy issues under conditions of full information, equal opportunity to participate, and open decision-making procedures. Allowing different voices to enter the arena in which artificial intelligence ethical norms are formed, and allowing different interests to be properly weighed, helps condense scientific consensus; ensures the legitimacy, purposiveness, democracy, and transparency of ethical norms; improves their scientific quality, effectiveness, and flexibility; and provides effective assistance and guidance to the artificial intelligence industry.