Special Topic: Artificial Intelligence Safety | A Preliminary Study on Artificial Intelligence Ethical Review and Supervision
The review and supervision of artificial intelligence ethics is a complex topic that requires a large amount of pragmatic basic research, with a focus on clarifying the boundaries of the relevant issues. This article offers preliminary thoughts on the focus and direction of artificial intelligence ethics research, in the hope of prompting a broader discussion.
In November 2022, an American research institution released a conversational large language model that quickly attracted unprecedented attention worldwide. Shortly after its release, however, the Italian Data Protection Authority banned its use in Italy and restricted the developer from processing the information of Italian users, citing data security concerns. At the same time, proposals for regulating generative artificial intelligence have emerged in many jurisdictions around the world. Part of this discussion about the security risks of artificial intelligence belongs to the category of ethics and cannot be examined solely from the perspective of network security. Yet artificial intelligence ethics differs greatly from traditional human-subject and animal ethics, and the relevant theories have not yet taken shape. For example, how should the scope of artificial intelligence ethics be defined? What should an artificial intelligence ethics review cover? What measures should China take to supervise the ethics of artificial intelligence? These questions deserve in-depth study.
1. Overview of Artificial Intelligence Ethics
(1) Definition of artificial intelligence ethics
Artificial intelligence ethics is a concept that has emerged only recently. The national standard GB/T 41867-2022 "Information Technology - Artificial Intelligence - Terminology" defines "artificial intelligence ethics" as "the moral norms or guidelines to be followed when carrying out basic research and application practice of artificial intelligence technology." The "Guidelines for the Standardization of Ethical Governance of Artificial Intelligence," released in March 2023 by the National Artificial Intelligence Standardization General Group and the Artificial Intelligence Subcommittee of the National Information Technology Standardization Technical Committee, summarizes the ethical connotations of artificial intelligence in three aspects: first, the moral principles and behavioral norms that humans should follow when developing and using artificial intelligence technologies, products, and systems; second, the moral principles or value-embedding methods by which an artificial intelligence agent itself conforms to ethical standards; and third, the ethical norms that an artificial intelligence agent forms through self-learning and reasoning.
These definitions show both continuity and development. Generally speaking, ethics is the system of value standards and behavioral norms that governs relationships among people, and between people and nature, as human beings seek to reconcile individual interests with the overall interests of society. Against the backdrop of technological development, artificial intelligence ethics adds to this the relationship between people and artificial intelligence technologies and products. The subject of artificial intelligence ethics is still the human being: the fundamental purpose of the widespread use of artificial intelligence is to make human life easier. What complicates matters is that not every problem with artificial intelligence technologies and products is an ethical problem; such problems fall within the scope of artificial intelligence ethics only when they affect, or may affect, human interests.
Beyond definitions, the ethical risks of artificial intelligence have also gradually been characterized. In essence, the object of artificial intelligence ethical norms is artificial intelligence activities, and different types of activities carry different ethical risks. To this end, the draft Artificial Intelligence Act proposed by the European Union in 2021 divides artificial intelligence risks into four levels: unacceptable, high, limited, and minimal. It requires management regimes of corresponding stringency for each level. Clearly, a review and supervision system for artificial intelligence ethics should likewise set different norms for different types of artificial intelligence activities in order to guide their healthy development.
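To make the tiered approach concrete, the sketch below expresses a four-tier risk taxonomy as a simple lookup table. The tier names follow the EU draft, but the use-case assignments and obligation summaries are illustrative assumptions for exposition, not the text of the act.

```python
# A minimal sketch of a four-tier AI risk taxonomy, expressed as a lookup
# from use-case labels to tiers and obligations. The use-case assignments
# below are illustrative assumptions, not the EU draft act's annexes.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required before deployment"
    LIMITED = "transparency obligations (e.g., disclose that AI is in use)"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"


# Illustrative examples only; real classification depends on the regulation.
EXAMPLE_USE_CASES = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "resume_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def required_controls(use_case: str) -> str:
    """Return the management regime a use case would fall under."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(required_controls(case))
```

Encoding the tiers as data rather than scattered conditionals makes it easier for a review body to audit and amend the classification as rules evolve.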
(2) The development context of artificial intelligence ethics
Artificial intelligence ethics as a whole developed out of robot ethics. American writer Isaac Asimov first articulated the Three Laws of Robotics in his 1942 short story "Runaround," later collected in "I, Robot" (1950), and these laws became the origin of robot ethics. The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm; the Second Law states that a robot must obey human orders, except where such orders would conflict with the First Law; the Third Law states that a robot must protect its own existence, as long as doing so does not conflict with the First or Second Law.
At the Dartmouth Conference in 1956, scholars first set out the concept of "artificial intelligence." There followed a long, quiet period of development. It was not until 2016, when AlphaGo, the program developed by Google's DeepMind, defeated top Go player Lee Sedol 4:1, that artificial intelligence returned to the center of public attention. Governments, enterprises, and research institutions around the world have since invested heavily in research, hoping to seize the technological high ground and gain competitive advantage as early as possible. In the course of researching, developing, and applying artificial intelligence, people have gradually discovered that the field carries ethical risks comparable to those long recognized in medicine and animal experimentation, and that further regulation is needed.
On July 8, 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan," which called for the preliminary establishment of artificial intelligence laws, regulations, ethical norms, and policy systems, and for the formation of artificial intelligence safety assessment and control capabilities. In October 2018, at the ninth collective study session of the Political Bureau of the 19th CPC Central Committee, General Secretary Xi Jinping stated clearly: "We must strengthen the analysis and prevention of potential risks in the development of artificial intelligence, safeguard the people's interests and national security, and ensure that artificial intelligence is safe, reliable, and controllable."
On September 26, 2021, the "Ethical Norms for New Generation Artificial Intelligence" was released, proposing six basic ethical requirements: enhancing human well-being, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. It also set out eighteen specific ethical requirements for activities such as the management, research and development, supply, and use of artificial intelligence.
On March 20, 2022, the General Offices of the CPC Central Committee and the State Council issued the "Opinions on Strengthening the Governance of Science and Technology Ethics," which requires "further improving the science and technology ethics system, enhancing the capacity for science and technology ethics governance, effectively preventing and controlling science and technology ethics risks, and continuously promoting science and technology for good so that it benefits mankind," and which brings artificial intelligence ethics into the comprehensive governance of science and technology ethics.
(3) Ethical risks faced by the development of artificial intelligence
1. Ethical risks of artificial intelligence related to algorithms
Algorithm flaws: At this stage, artificial intelligence technology mainly uses algorithms such as deep learning to perform statistical learning on massive amounts of data; in essence it remains algorithm-centered and data-driven. But algorithms are not always perfect, and algorithmic flaws in some areas can seriously threaten citizens' rights to life and health. In autonomous driving, for example, vehicles from Tesla, Uber, and others have repeatedly been involved in crashes and fatalities attributed to algorithmic defects.
Algorithm black box: The rules of an algorithm are in the hands of its developer; users do not know how the algorithm works and can only passively accept its results. Moreover, because complex neural networks and similar techniques are used, the interpretability of these algorithms is limited: sometimes even the developers cannot explain an algorithm's output or the mechanism that produced it. The black-box problem undermines citizens' legitimate rights, including the right to know and the right to oversight.
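One family of responses to the black-box problem is post-hoc explanation. As a hedged illustration, the sketch below computes permutation feature importance against a stand-in opaque model: if shuffling a feature barely changes accuracy, the model is not relying on it. The model, data, and names here are hypothetical placeholders, not any specific deployed system.

```python
# A minimal sketch of permutation feature importance, one common post-hoc
# technique for probing an otherwise opaque ("black box") model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the label, feature 1 is mostly noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)


def black_box_predict(X: np.ndarray) -> np.ndarray:
    """Stand-in for an opaque model whose internals the user cannot see."""
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)


def permutation_importance(predict, X, y, n_repeats=10):
    """Mean accuracy drop when each feature is shuffled;
    a larger drop means the model depends more on that feature."""
    baseline = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        repeat_drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            repeat_drops.append(baseline - (predict(X_perm) == y).mean())
        drops.append(float(np.mean(repeat_drops)))
    return drops


print(permutation_importance(black_box_predict, X, y))
# Expected: a large accuracy drop for feature 0, near zero for feature 1.
```

Techniques of this kind do not open the black box, but they give users and reviewers a concrete basis for exercising the rights to know and to oversee.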
Algorithmic discrimination: Artificial intelligence cannot judge whether data is right or wrong; it can only retain or discard data according to fixed procedures. Because of developers' personal preferences, errors in the selection of training data, and other factors, artificial intelligence may produce discriminatory results. Moreover, algorithms that learn from interaction with users can absorb discriminatory ideas fed in by some users, amplify that discriminatory feedback, and reproduce discriminatory results, infringing on people's right to equality.
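Discrimination of this kind can be checked quantitatively. The sketch below computes the demographic parity difference, one common (and admittedly coarse) fairness metric: the gap between groups in the rate of receiving a favorable decision. The decisions and group labels are synthetic illustrations.

```python
# A minimal sketch of one quantitative check for algorithmic discrimination:
# demographic parity difference, the gap in favorable-outcome rates between
# groups. Synthetic audit data for illustration only.
import numpy as np


def demographic_parity_difference(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute gap between groups' rates of receiving the favorable decision."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)


# 1 = favorable decision (e.g., loan approved); "A"/"B" are protected groups.
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.80 - 0.20 = 0.60
```

A single metric cannot prove or disprove discrimination, but routine checks of this kind give an ethics review a measurable starting point.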
2. Data-related artificial intelligence ethical risks
Artificial intelligence must learn from massive amounts of collected data, so it inevitably touches on data protection, especially the protection of personal information. In practice, in order to advance their technology and earn commercial profits, developers are increasingly inclined to exploit their dominant position to mine user data ever more broadly and deeply, which has directly led to the spread of illegal and non-compliant acquisition of personal data. Much of the recent criticism of the conversational model mentioned above has been directed at its collection and processing of personal information, and the Cyberspace Administration of China's "Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment)" likewise contains special provisions on protecting personal information. For these reasons, the reasonable use of personal information and the protection of privacy have become another important component of artificial intelligence ethics.
3. Social and ethical risks brought about by artificial intelligence
Artificial intelligence and employment: The development of artificial intelligence technology will surely drive a new round of industrial revolution and, with it, changes in employment. Jobs in industries with low demands for innovation, such as labor-intensive and low-end manufacturing sectors, will largely be replaced by artificial intelligence, while new demand for jobs will emerge in industries with higher demands for innovation, such as scientific research and design. Substantial changes in the employment rate and in the structure of jobs may trigger an unemployment crisis and increase social uncertainty.
Artificial intelligence and intellectual property: Ordinarily, intellectual property rights belong to the organization or individual that created the intellectual work. Artificial intelligence, however, can produce large numbers of "creative" results in a very short time on the basis of algorithms and data. Whether artificial intelligence can be a subject of intellectual property rights, and which of its outputs can be objects of such rights, remains unsettled.
Artificial intelligence and responsibility: Artificial intelligence is being applied in an ever-wider range of fields, and its capacity for autonomous decision-making continues to grow. But its decisions are not foolproof; a certain error rate remains. Where artificial intelligence replaces humans in reasoning, decision-making, and execution, identifying the responsible party, delimiting the scope of responsibility, and assigning liability pose new challenges to existing laws and regulations. In the long run, if responsibility is improperly attributed, the resulting imbalance will dampen enthusiasm for artificial intelligence research, development, and application, and may ultimately constrain the development of the field.
2. The necessity of studying the ethics of artificial intelligence
(1) Improve the scientific and technological ethics theoretical system
At present, technological risk and technological ethics have gradually become core social concerns, and how to reap the benefits of technology while minimizing its risks has become one of the serious challenges facing the industry. In this context, science and technology ethics has itself become an important subject of research: countries are building theoretical systems for science and technology ethics, and relevant academic results are being published continuously. However, the original entry point of science and technology ethics research was research on human and animal subjects. Artificial intelligence has now become an important source of science and technology ethics risk, but research in this area is still in its infancy; in particular, the connotations and basic requirements of artificial intelligence ethics urgently need to be worked out.
(2) Serving science and technology ethics review work
On July 24, 2019, the ninth meeting of the Central Commission for Comprehensively Deepening Reform reviewed and approved the "Plan for Establishing the National Science and Technology Ethics Committee." Pursuant to that document, in October 2019 the General Office of the CPC Central Committee and the General Office of the State Council issued a notice establishing the National Science and Technology Ethics Committee, under which three subcommittees, on life sciences, medicine, and artificial intelligence, were subsequently set up. The first two subcommittees already have mature operating mechanisms, but the work of the artificial intelligence subcommittee must start from scratch, and a full set of working mechanisms needs to be developed to better serve science and technology ethics review. Generally speaking, the science and technology ethics review of artificial intelligence is still at an exploratory stage in China.
(3) Serving national supervision
Before the ethics of artificial intelligence became a salient issue, China already had relevant regulatory systems in place, most notably the security regulation of the data on which artificial intelligence heavily depends, part of which touches on ethical issues. But the two are not the same thing. The "Plan for Establishing the National Science and Technology Ethics Committee" calls for building a science and technology ethics governance system with complete coverage, clear direction, standardized order, and coordinated operation; for improving institutional norms and governance mechanisms; and for strengthening ethical supervision, refining relevant laws, regulations, and ethics review rules, and standardizing all types of scientific research activities. At present, however, the relationship between the ethical supervision of artificial intelligence and the existing data security supervision system is unclear. It is therefore necessary to study a regulatory framework for artificial intelligence ethical risk and to coordinate it with existing work on data security and cyberspace governance.
3. Key issues that need to be resolved in ethical review and supervision of artificial intelligence
(1) Which artificial intelligence activities should be reviewed?
To prevent artificial intelligence ethical risk incidents, prior ethical review of artificial intelligence activities is especially important. However, different types of artificial intelligence activities use very different algorithms and technologies, and their applications span manufacturing, logistics, medicine, and many other fields. Subjecting all artificial intelligence activities to ethical review is neither realistic nor necessary: if the scope of activities requiring review is drawn too widely, it will raise operating costs for the industries concerned and hinder their development; if it is drawn too narrowly, it will raise the probability of ethical risk events. How to reasonably determine the fields and types of artificial intelligence activities that require review, and how to strike a balance between controlling ethical risks and promoting the healthy development of the industry, has become one of the key issues in artificial intelligence ethical supervision.
This is fundamentally different from ethics review for human and animal research, where the review threshold is simple and clear: whatever involves human or animal subjects must be reviewed, and applications for such research must carry the approval of an ethics committee. Can all research involving artificial intelligence be required to undergo prior ethical review? Clearly it cannot.
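One pragmatic middle ground is rule-based triage before full review. The sketch below illustrates the idea under assumed criteria: a hypothetical list of sensitive fields plus a rights-impact flag decides whether an activity enters pre-review. Both the field list and the rule are invented placeholders, not regulatory text.

```python
# A minimal sketch of scope triage: decide which AI activities enter
# pre-review based on application field and affected interests.
# SENSITIVE_FIELDS and the rule itself are illustrative assumptions.
SENSITIVE_FIELDS = {"medical", "autonomous_driving", "credit_scoring", "law_enforcement"}


def needs_ethics_review(field: str, affects_personal_rights: bool) -> bool:
    """Trigger pre-review only for sensitive fields or rights-affecting uses,
    so that low-risk activities pass through without added cost."""
    return field in SENSITIVE_FIELDS or affects_personal_rights


print(needs_ethics_review("medical", False))              # True: sensitive field
print(needs_ethics_review("warehouse_logistics", False))  # False: out of scope
print(needs_ethics_review("warehouse_logistics", True))   # True: affects rights
```

Under such a scheme, the policy debate reduces to maintaining the field list and the rights-impact criterion, which is exactly the balance point described above.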
(2) How to conduct artificial intelligence ethical risk assessment
Artificial intelligence technology is widely used in autonomous driving, the Internet, medical care, media, and many other fields, and the ethical risks it creates are increasingly complex and changeable. These include direct short-term risks, such as security hazards from algorithm vulnerabilities and discriminatory policies arising from algorithmic bias, as well as more indirect long-term risks, such as possible impacts on property rights, competition, the job market, and social structure. To anticipate the ethical risks that scientific and technological activities involving artificial intelligence may entail, some scholars in China have tried to establish an ethical risk assessment template, forming an ethics assessment checklist around review points such as motivation, process, results, and supervision. Laws, regulations, and policy documents of major countries and regions, such as the European Union's draft Artificial Intelligence Act, the United States' proposed Algorithmic Accountability Act, and the United Kingdom's policy paper "Establishing a Pro-innovation Approach to Regulating AI," advocate risk-level management mechanisms tailored to different scenarios. At present, China has no ethical risk assessment standards for artificial intelligence activities, and this gap needs to be filled.
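The checklist approach described above could be operationalized along the following lines. This is a minimal sketch under assumed questions and thresholds; the mapping from failure ratio to risk level is an invented placeholder, not any published standard.

```python
# A minimal sketch of an ethical risk assessment checklist built around the
# four review points mentioned above: motivation, process, results,
# supervision. Questions and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    review_point: str   # motivation | process | results | supervision
    question: str
    passed: bool


def assess(items: list[ChecklistItem]) -> str:
    """Map the share of failed items to a coarse risk level (thresholds assumed)."""
    ratio = sum(not item.passed for item in items) / len(items)
    if ratio == 0:
        return "minimal risk: proceed"
    if ratio < 0.25:
        return "limited risk: proceed with documented mitigations"
    if ratio < 0.5:
        return "high risk: full committee review required"
    return "unacceptable risk: suspend pending redesign"


checklist = [
    ChecklistItem("motivation", "Does the stated purpose benefit users?", True),
    ChecklistItem("process", "Is training data lawfully sourced and documented?", True),
    ChecklistItem("results", "Have outputs been audited for discriminatory bias?", False),
    ChecklistItem("supervision", "Is there a human override for automated decisions?", True),
]

print(assess(checklist))  # 1 of 4 failed -> "high risk: full committee review required"
```

A standardized template of this kind would let different review committees reach comparable conclusions about comparable activities.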
(3) How to deal with artificial intelligence ethical risk events
In recent years, artificial intelligence ethical safety risk incidents have occurred frequently, AI face-changing has caused privacy disputes, self-driving safety accidents have occurred frequently, and intelligent algorithm recommendations have caused information cocoons... Artificial intelligence technology is usually associated with the Internet field. Once an ethical risk incident occurs, its scope of impact will far exceed traditional ethical risk incidents. However, the handling measures for ethical risk incidents are still in their infancy, and no standardized processes and unified standards have been formed. This has led to chaos in the handling of ethical risk incidents and caused subsequent disputes. The imperfect response mechanism for artificial intelligence ethical risk incidents has seriously hindered the future development of the industry. How to use typical artificial intelligence ethical risk events as a reference, summarize the highly practical and reference disposal measures, and form a standard handling specification for artificial intelligence ethical risk events has become a top priority.
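What might such a standard handling specification look like? The sketch below models one as an ordered sequence of stages that cannot be skipped. The stage names and their ordering are assumptions for illustration, not an existing standard.

```python
# A minimal sketch of a standardized handling flow for AI ethical risk
# incidents, modeled as an ordered sequence of stages. Stage names and
# ordering are illustrative assumptions, not a published specification.
from enum import Enum, auto


class Stage(Enum):
    REPORTED = auto()      # incident logged (e.g., a face-swapping privacy dispute)
    TRIAGED = auto()       # severity and affected scope assessed
    CONTAINED = auto()     # service suspended or feature disabled
    INVESTIGATED = auto()  # root cause analysis: algorithm, data, or misuse
    RESOLVED = auto()      # remediation applied, responsibilities assigned
    DISCLOSED = auto()     # findings reported to regulator and affected users


ORDER = list(Stage)  # enum preserves definition order


def advance(current: Stage) -> Stage:
    """Move an incident to the next stage, enforcing the full sequence."""
    idx = ORDER.index(current)
    if idx + 1 >= len(ORDER):
        raise ValueError("incident already fully handled")
    return ORDER[idx + 1]


stage = Stage.REPORTED
while stage is not Stage.DISCLOSED:
    stage = advance(stage)
    print(stage.name)
```

Enforcing the sequence in process (or in tooling) is one way to prevent the ad hoc, dispute-prone handling the text describes.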
(4) How to handle the relationship between ethical restrictions on artificial intelligence and technological innovation
Although generative artificial intelligence quickly became popular around the world, an open letter titled "Pause Giant AI Experiments" called for a moratorium on the training of artificial intelligence models more powerful than GPT-4, on the grounds that artificial intelligence laboratories cannot yet understand, predict, or reliably control the giant models they develop. On this view, more powerful artificial intelligence systems should be developed only once humans are confident that giant models will have positive effects and that their risks are controllable.
As the concept of artificial intelligence ethics spreads, humans will restrict or postpone the development of more powerful artificial intelligence systems in order to avoid ethical risks. Such restrictions, however, may affect potential major scientific and technological breakthroughs. In vitro fertilization, for instance, was ethically controversial and restricted at its inception, yet it has since become one of the mainstream technologies of assisted reproduction. How to avoid repeating such mistakes in the development of artificial intelligence, and how to correctly handle the relationship between ethical restrictions and technological innovation, is an important question facing the field and requires careful thought.
4. Suggestions on China's artificial intelligence ethical risk review and supervision system
(1) Increase China's influence on the formulation of global artificial intelligence ethics rules
Artificial intelligence is a booming emerging technology. The United States, the European Union, Japan, and other countries and regions have invested heavily in its research and development, and have successively introduced related laws, regulations, policies, and technical standards. In doing so they are driving global technological progress while also competing for international discourse power. In this new international contest over artificial intelligence, control over the formulation of its ethical rules is especially important. While advocating the universal value of "ethics," some countries may try to inject their own ideological concepts and interests into artificial intelligence ethics rules and use them to further contain China's high-tech development. Facing this battleground that must not be ceded, China should act: it should enhance its influence on the formulation of global artificial intelligence ethics rules and contribute Chinese wisdom and Chinese solutions to the global governance of artificial intelligence.
In November 2022, the Ministry of Foreign Affairs issued the "Position Paper of the People's Republic of China on Strengthening Ethical Governance of Artificial Intelligence," which emphasizes that "governments of all countries should encourage cross-country, cross-field, and cross-cultural exchanges and collaboration in the field of artificial intelligence, ensure that all countries share the benefits of artificial intelligence technology, and promote all countries to jointly participate in the discussion and formulation of rules on major international artificial intelligence ethical issues." The release of this position paper is significant, but if China is to push for an international dialogue on artificial intelligence ethical governance and offer a blueprint for global rule-making, further action is needed.
(2) Improve artificial intelligence ethics legislation
China's current official documents on artificial intelligence ethics include the "New Generation Artificial Intelligence Development Plan," the "Ethical Norms for New Generation Artificial Intelligence," and the "Opinions on Strengthening the Governance of Science and Technology Ethics." Most of these are macro-level guidance policies; more detailed, binding legal rules are lacking. To better regulate the development of artificial intelligence, it is recommended that laws and regulations on artificial intelligence ethical supervision be introduced as soon as possible. First, clarify the relevant concepts in the field of artificial intelligence ethics, draw a clear line between ethical and non-ethical issues, and promote the healthy development of artificial intelligence. Second, distill the common rules and general principles of artificial intelligence ethical supervision, and provide standard specifications for the management, research and development, supply, and use of artificial intelligence in different supervision scenarios. Third, improve the artificial intelligence ethical responsibility system and clarify, in legislation, the specific responsibilities of regulatory departments and operating units.
(3) Propose guidelines for the construction and operation of artificial intelligence ethics committees
The "Opinions on Strengthening the Governance of Science and Technology Ethics" requires that units engaged in artificial intelligence science and technology activities whose research content involves sensitive areas of science and technology ethics should establish a science and technology ethics (review) committee. To this end, the role of the artificial intelligence ethics committee should be actively brought into play, the daily management of artificial intelligence ethics should be strengthened, and ethical risks existing in relevant scientific and technological activities should be proactively studied and judged in a timely manner. The Artificial Intelligence Ethics Committee should review the unit's plans before carrying out relevant research, promptly prevent or correct research projects that violate the ethics of Artificial Intelligence, and nip problems in the bud. At the same time, the Artificial Intelligence Ethics Committee can also conduct emergency handling of artificial intelligence ethical risk incidents that have occurred to avoid further spread and deterioration of the situation. In view of the complexity of artificial intelligence ethical issues, it is appropriate to provide construction and operation guidelines for various artificial intelligence ethics committees at the national level, strengthen top-level design, and improve the scientific and normative nature of artificial intelligence ethics review.
(4) Establish a collaborative mechanism for data security supervision and artificial intelligence ethics supervision
Artificial intelligence ethical governance has two aspects: ethics review by the parties carrying out the activities, and ethics supervision by the competent national authorities. Because artificial intelligence applications are built on data, artificial intelligence ethics supervision inevitably overlaps with data security supervision; indeed, data security governance itself includes attention to data ethics. Historically, security supervision and ethics supervision have developed along separate lines, achieving little synergy in either research or practice. China has not yet established an artificial intelligence ethics regulatory agency, and its relationship with data security supervision must be considered in the next phase of work so that the two systems coordinate and reinforce each other.
5. Conclusion
The development and application of artificial intelligence technology brings common benefits to human society but also adds new ethical risks. In the face of the technology's booming development, research on artificial intelligence ethics has clearly lagged behind, and this status quo urgently needs to change. Artificial intelligence ethics research is a typical interdisciplinary subject, involving both artificial intelligence science in the field of information technology and ethics in the humanities and social sciences. The review and supervision of artificial intelligence ethics is a complex topic that requires a large amount of pragmatic basic research, with a focus on clarifying the boundaries of the relevant issues. This article has offered preliminary thoughts on the focus and direction of artificial intelligence ethics research, in the hope of prompting a broader discussion and advancing research on artificial intelligence ethics.
(This article was published in Issue 5, 2023 of "China Information Security" magazine)