Topic·Artificial Intelligence Security | A Preliminary Study On The Ethical Review And Supervision Of Artificial Intelligence
Text | Jiang Qin, Zuo Xiaodong, School of Public Affairs, University of Science and Technology of China
In November 2022, a U.S. research organization released a dialogue-based large language model that attracted unprecedented attention worldwide. Shortly after its release, however, the Italian data protection authority banned its use in Italy on data-security grounds and restricted the developer from processing Italian users' information. At the same time, proposals to regulate generative artificial intelligence have emerged in many places around the world. Some of the security risks of artificial intelligence under discussion fall within the ethical category and cannot be examined from the perspective of network security alone. Yet the ethics of artificial intelligence differ greatly from the traditional ethics of human and animal research, and a mature body of theory has not yet formed. For example: how should the scope of artificial intelligence ethics be defined? What should an artificial intelligence ethics review cover? What measures should China take to regulate the ethics of artificial intelligence? These questions merit in-depth research.
1. Overview of Artificial Intelligence Ethics
(I) Definition of Artificial Intelligence Ethics
Artificial intelligence ethics is a newly emerging concept. National Standard GB/T 41867-2022, "Information Technology - Artificial Intelligence - Terminology", defines "artificial intelligence ethics" as the "ethical norms or guidelines followed in carrying out basic research on and applied practice of artificial intelligence technology." The "Guidelines for the Standardization of Artificial Intelligence Ethical Governance", released in March 2023 by the Artificial Intelligence Subcommittee of the National Information Technology Standardization Technical Committee, summarizes the connotation of artificial intelligence ethics in three aspects: first, the ethical and behavioral norms humans follow when developing and using artificial intelligence technologies, products, and systems; second, the moral norms embodied in artificial intelligence itself, or the methods by which values conforming to ethical norms are embedded in it; third, the ethical norms that artificial intelligence forms through self-learning and reasoning.
The above concepts involve both inheritance and development. Generally speaking, ethics is the system of value standards and behavioral norms that governs the relationships between people, and between humanity and nature, while coordinating individual interests with the overall interests of society. In the context of technological development, artificial intelligence ethics adds the relationship between humans and artificial intelligence technologies and products, but its subject remains the human being. The fundamental purpose of the widespread use of artificial intelligence is to improve human life. The complexity lies in the fact that not all problems in artificial intelligence technologies and products are ethical issues; they fall within the category of artificial intelligence ethics only when they affect, or may affect, human interests.
Beyond definition, the ethical risks of artificial intelligence have gradually been delineated. In essence, the object of artificial intelligence ethical norms is artificial intelligence activities, and different types of activities carry different ethical risks. To this end, the European Union's 2021 draft Artificial Intelligence Act divides artificial intelligence risks into four levels (unacceptable, high, limited, and minimal) and requires management regimes of corresponding stringency for each level. Clearly, a review and regulatory system for artificial intelligence ethics should set different norms for different types of artificial intelligence activities in order to guide their healthy development.
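As a rough illustration (not a legal interpretation), the tiered model described above can be sketched as a mapping from activity types to obligations. The tier names follow the four levels just mentioned; the example activities and their tier assignments are assumptions chosen for illustration only.

```python
# Illustrative sketch of a four-tier risk classification in the spirit of
# the EU draft AI Act. Tier names come from the text above; the example
# activity-to-tier assignments are hypothetical, not legal determinations.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required before deployment"
    LIMITED = "transparency obligations (e.g. disclose AI use)"
    MINIMAL = "no mandatory obligations"


# Hypothetical mapping, for illustration only.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnosis": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def obligation_for(activity: str) -> str:
    """Return a one-line summary of the obligation a given activity attracts."""
    tier = EXAMPLE_TIERS.get(activity, RiskTier.MINIMAL)
    return f"{activity}: {tier.name} -> {tier.value}"


if __name__ == "__main__":
    for activity in EXAMPLE_TIERS:
        print(obligation_for(activity))
```

The point of the tiered design is that regulatory cost scales with risk: a prohibited use is blocked outright, while a minimal-risk use carries no mandatory burden.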
(II) The development of artificial intelligence ethics
Artificial intelligence ethics developed broadly out of robot ethics. The American writer Isaac Asimov formulated the Three Laws of Robotics, collected in his 1950 book "I, Robot", which became the origin of robot ethics. The first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm; the second, that a robot must obey human orders except where such orders would conflict with the first law; the third, that a robot must protect its own existence as long as doing so does not conflict with the first or second law.
At the Dartmouth Conference in 1956, scholars elaborated the concept of "artificial intelligence" for the first time, after which the field entered a long quiet period of development. In 2016, AlphaGo, developed by Google's DeepMind, defeated top Go player Lee Sedol 4:1, and artificial intelligence again became the focus of public attention. Governments, enterprises, and research institutions in many countries have since invested heavily in research to seize the technological high ground and gain competitive advantage. In the course of developing and applying artificial intelligence, people gradually discovered that the field carries ethical risks comparable to those of traditional medical and animal research, and that further regulation is needed.
On July 8, 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan", requiring the preliminary establishment of artificial intelligence laws, regulations, ethical norms, and policy systems, and the formation of artificial intelligence security assessment and control capabilities. In October 2018, during the ninth collective study session of the Political Bureau of the 19th CPC Central Committee, General Secretary Xi Jinping pointed out: "We must strengthen the analysis and prevention of potential risks in the development of artificial intelligence, safeguard the interests of the people and national security, and ensure that artificial intelligence is safe, reliable, and controllable."
On September 26, 2021, the "Ethical Norms for the New Generation Artificial Intelligence" was released. It put forward six basic ethical requirements: improving human welfare, promoting fairness and justice, protecting privacy and security, ensuring controllability and credibility, strengthening accountability, and improving ethical literacy. It also put forward 18 specific ethical requirements for particular activities such as the management, research and development, supply, and use of artificial intelligence.
On March 20, 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the "Opinions on Strengthening the Governance of Science and Technology Ethics", requiring that China "further improve the science and technology ethics system, enhance the capacity for science and technology ethics governance, effectively prevent and control science and technology ethics risks, and continuously promote science and technology for good and for the benefit of mankind," thereby incorporating artificial intelligence ethics into the comprehensive governance of science and technology ethics.
(III) Ethical risks faced by the development of artificial intelligence
1. Algorithm-related AI ethical risks
Algorithm defects: At this stage, artificial intelligence mainly applies deep learning and similar algorithms to perform statistical learning over massive structured data; in essence it remains algorithm-centered and data-driven. Algorithms, however, are not always perfect, and defects in some fields can seriously threaten citizens' rights to life and health. In autonomous driving, for example, self-driving vehicles from companies such as Tesla and Uber have repeatedly been involved in crashes, including fatal ones, attributed to algorithmic flaws.
Algorithm black box: The rules of an algorithm rest in the hands of its developers; users do not know its content and can only passively accept its results. Moreover, because complex neural networks and similar methods are used, the interpretability of algorithms is limited, and sometimes even developers cannot explain a given result or the mechanism that produced it. The black-box problem seriously infringes on citizens' rights to know and to supervise.
Algorithm discrimination: Artificial intelligence cannot judge whether data is right or wrong; it can only retain or discard data according to fixed procedures. Owing to developers' personal preferences, errors in selecting training data, and similar causes, artificial intelligence may produce discriminatory results. Moreover, algorithms that learn from interaction with users can enter a feedback loop: discriminatory input from some users increases the system's discrimination, which in turn generates further discriminatory results, infringing on people's right to equality.
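The feedback-loop mechanism just described can be made concrete with a toy simulation: a system that learns from user clicks gives more exposure to whichever group is already favored, and since clicks track exposure, any initial imbalance compounds. All numbers below are illustrative assumptions, not empirical data.

```python
# Toy simulation of a discriminatory feedback loop: a recommender that
# updates toward the group receiving more clicks amplifies a small
# initial bias, because exposure itself generates the biased feedback.
# The update rule and parameters are illustrative assumptions.

def run_feedback_loop(initial_share: float, rounds: int,
                      gain: float = 0.1) -> list:
    """Track the share of recommendations given to group A over time.

    Each round, the system shifts its policy toward whichever group got
    more clicks; clicks are proportional to exposure, so the update
    pushes the share further away from the balanced point 0.5.
    """
    share = initial_share
    history = [share]
    for _ in range(rounds):
        share = share + gain * (share - 0.5)  # feedback amplifies imbalance
        share = min(max(share, 0.0), 1.0)     # keep within [0, 1]
        history.append(share)
    return history


# A 5-point initial imbalance (55% vs 45%) grows round after round.
hist = run_feedback_loop(initial_share=0.55, rounds=20)
```

Under this simple linear update the imbalance grows monotonically, which is the essence of the concern above: the system never corrects itself, it only entrenches whatever bias it starts with.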
2. Data-related AI ethical risks
Artificial intelligence must learn from massive amounts of data, so it inevitably touches on data protection, especially the protection of personal information. In practice, to advance technology and earn commercial profit, developers increasingly use their dominant position to mine user data more widely and deeply, which has directly led to a proliferation of illegal and irregular collection of personal data. Many recent criticisms are directed precisely at how personal information is collected and processed, and the Cyberspace Administration of China's "Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment)" includes dedicated provisions protecting personal information. The rational use of personal information and the protection of privacy have thus become another important component of artificial intelligence ethics.
3. Social and ethical risks brought by artificial intelligence
Artificial intelligence and employment: The development of artificial intelligence technology will surely drive a new round of industrial revolution and, with it, changes in employment. Jobs in labor-intensive industries, low-end manufacturing, and other sectors requiring little innovation will be substantially replaced by artificial intelligence, while sectors demanding high innovation, such as scientific research and design, will generate new job demand. Sharp changes in employment rates and job structures may trigger an unemployment crisis and increase social uncertainty.
Artificial intelligence and intellectual property: Ordinarily, intellectual property belongs to the organization or individual that creates the fruits of intellectual labor. Artificial intelligence, however, can produce large numbers of "creative" results in a very short time on the basis of algorithms and data. Whether artificial intelligence can serve as a subject of intellectual property rights, and which of the works it creates can serve as objects of such rights, currently remains unsettled.
Artificial intelligence and responsibility: Artificial intelligence is being applied in ever more fields, and its capacity for independent decision-making keeps increasing. Yet its decisions are not foolproof; a certain error rate remains. Where artificial intelligence replaces human reasoning, decision-making, and execution, identifying the responsible party, dividing the scope of responsibility, and assigning liability pose new challenges to existing laws and regulations. In the long run, improperly defined responsibility risks an imbalance of liability, dampens enthusiasm for artificial intelligence research, development, and application, and may ultimately constrain the technology's development.
2. The necessity of studying the ethics of artificial intelligence
(I) Improve the theoretical system of science and technology ethics
At present, technological risk and technological ethics are becoming core topics of social concern, and maximizing the benefits of technology while avoiding its risks has become one of the severe challenges facing the industry. Against this background, science and technology ethics has itself become an important subject of research: countries are building theoretical systems of science and technology ethics, and related academic results keep appearing. However, the initial entry point of this research was scientific experimentation on humans and animals, while artificial intelligence has since become an important new source of ethical risk. Research in this area is still in its early stages, particularly regarding the connotation and basic requirements of artificial intelligence ethics, and the relevant theory urgently needs improvement.
(II) Serving science and technology ethics review work
On July 24, 2019, the ninth meeting of the Central Committee for Comprehensively Deepening Reform reviewed and approved the "Plan for Establishing the National Science and Technology Ethics Committee". Pursuant to that document, in October 2019 the General Office of the CPC Central Committee and the General Office of the State Council issued a notice establishing the National Science and Technology Ethics Committee, which successively set up three subcommittees for life sciences, medicine, and artificial intelligence. The first two already have mature operating mechanisms, but the artificial intelligence subcommittee must start from scratch and develop a complete set of working mechanisms to better serve science and technology ethics review. Overall, the ethics review of artificial intelligence in China is still at the exploratory stage.
(III) Serving National Supervision
Before the ethics of artificial intelligence became an issue, China already had related regulatory systems, most notably the security regulation of the data on which artificial intelligence heavily relies, some of which touches on ethical questions. But the two are not the same thing. The "Plan for Establishing the National Science and Technology Ethics Committee" calls for building a science and technology ethics governance system that is comprehensive in coverage, clear in orientation, standardized and orderly, and well coordinated, and for accelerating the improvement of institutional norms and governance mechanisms, strengthening ethical supervision, and refining relevant laws, regulations, and ethics review rules to regulate all types of scientific research activities. However, the relationship between ethical supervision of artificial intelligence and the existing data security regulatory system is not yet clear. It is therefore necessary to study a framework for supervising artificial intelligence ethical risks that is coordinated with existing data security and cyberspace governance.
3. Key issues that need to be solved in the review and supervision of artificial intelligence ethics
(I) Which artificial intelligence activities should be reviewed
To prevent artificial intelligence ethical risk events, it is especially important to review artificial intelligence activities in advance. However, the algorithms and technologies used in different types of artificial intelligence activities vary enormously, and their applications span fields as diverse as manufacturing, logistics, and medicine; reviewing all artificial intelligence activities is neither realistic nor necessary. If the designated scope of activities requiring ethics review is too broad, it will raise operating costs and hinder the industry's development; if too narrow, it will raise the probability of ethical risk events. How to reasonably determine the fields and types of artificial intelligence activities requiring review, and to strike a balance between controlling ethical risks and promoting the healthy development of the industry, has become one of the key issues in artificial intelligence ethics supervision.
This differs fundamentally from ethics review of human and animal research, where the threshold is simple and clear: whenever humans or animals are involved, review is required, and applications to conduct such research must be approved by an ethics committee. But can all research involving artificial intelligence be required to undergo prior ethics review? Obviously not.
(II) How to conduct an ethical risk assessment of artificial intelligence
Artificial intelligence is widely used in fields such as autonomous driving, the Internet, medical care, and media, and the ethical risks it brings are increasingly complex and changeable. These include both immediate short-term risks, such as the security hazards of algorithm vulnerabilities and the discriminatory outcomes of algorithmic bias, and more indirect long-term risks, such as possible impacts on property rights, competition, the labor market, and social structure. To pre-assess the ethical risks that scientific and technological activities may involve, some Chinese scholars have tried to establish ethical risk assessment templates, forming assessment checklists around review points such as motivation, process, results, and supervision. Relevant laws, regulations, and policy documents in major countries, including the EU's Artificial Intelligence Act, the US Algorithmic Accountability Act, and the UK's "Establishing a pro-innovation approach to regulating AI", advocate risk-tiered management mechanisms tailored to different scenarios. At present, China's ethical risk assessment standards for artificial intelligence activities remain a blank and need further elaboration.
(III) How to deal with the ethical risk events of artificial intelligence
In recent years, artificial intelligence ethical and security risk incidents have occurred frequently: AI face-swapping has sparked privacy controversies, autonomous driving accidents have recurred, and algorithmic recommendation has produced information cocoons. Because artificial intelligence is usually tied to the Internet, the impact of an ethical risk incident can far exceed that of a traditional one. Yet measures for handling such incidents are still in their infancy, without standardized processes or unified standards, leading to confusion in incident handling and prolonged subsequent controversy. This incomplete response mechanism has seriously hindered the industry's future development. Drawing on typical incidents to distill practical, reference-worthy handling measures and to form standard procedures for artificial intelligence ethical risk events has become an urgent task.
(IV) How to deal with the relationship between the ethical limitations of artificial intelligence and technological innovation
Despite the technology's rapid worldwide popularity, an open letter titled "Pause Giant AI Experiments" has called for suspending the development of AI models more powerful than GPT-4, on the grounds that AI laboratories cannot yet understand, predict, or reliably control such giant models. More powerful AI systems, it argues, should be developed only once humanity is confident that their effects will be positive and their risks controllable.
As the concept of artificial intelligence ethics spreads, humans may restrict or suspend the development of more powerful artificial intelligence systems in order to avoid ethical risks, but such restrictions may affect potentially major technological breakthroughs. When in-vitro fertilization was first developed it was controversial on ethical grounds, yet it has since become one of the mainstream technologies of assisted reproduction. How to avoid repeating such mistakes in artificial intelligence, and how to correctly handle the relationship between ethical limits and scientific and technological innovation, is an important question facing the development of artificial intelligence that requires careful consideration.
4. Suggestions on China's artificial intelligence ethical risk review and supervision system
(I) Enhance the influence on the formulation of global artificial intelligence ethical rules
Artificial intelligence is a booming emerging technology, and countries and regions such as the United States, the European Union, and Japan have invested heavily in its research and development. They have successively issued related laws, regulations, policies, and technical standards, which both lead global technological progress and compete for international voice. In this new international contest, influence over the formulation of artificial intelligence ethical rules is particularly important. While advocating the universal value of "ethics", some countries may try to inject their own ideological concepts and interests into those rules, and on that basis further contain China's high-tech development. Facing this contested ground, China should act: enhance its influence over the formulation of global artificial intelligence ethical rules and contribute Chinese wisdom and Chinese solutions to the global governance of artificial intelligence.
In November 2022, the Ministry of Foreign Affairs issued the "Position Paper of the People's Republic of China on Strengthening Ethical Governance of Artificial Intelligence", which emphasized that governments should encourage cross-national, cross-field, and cross-cultural exchange and collaboration in artificial intelligence, ensure that all countries share the benefits of the technology, and encourage all countries to participate in discussing major issues of international artificial intelligence ethics and in formulating rules. The release of this position paper is highly significant, but if China wants to drive an international dialogue on artificial intelligence ethical governance and provide a blueprint for global rules, further action is needed.
(II) Improve the ethical legislation of artificial intelligence
Currently, China's official documents on artificial intelligence ethics include the "New Generation Artificial Intelligence Development Plan", the "Ethical Norms for the New Generation Artificial Intelligence", and the "Opinions on Strengthening the Governance of Science and Technology Ethics". Most of these, however, are macro-level guiding policies lacking detailed, mandatory legal norms. To better regulate the development of artificial intelligence, it is recommended that laws and regulations on artificial intelligence ethical supervision be issued as soon as possible. First, relevant concepts in artificial intelligence ethics should be clarified, drawing the boundary between ethical and non-ethical issues to promote the healthy development of artificial intelligence. Second, the common rules and general principles of artificial intelligence ethical supervision should be distilled, providing standards for the management, research and development, supply, and use of artificial intelligence across different supervisory scenarios. Third, the artificial intelligence ethics responsibility system should be improved, with the specific responsibilities of regulatory departments and operating units clarified in legislation.
(III) Propose guidelines for the construction and operation of the Artificial Intelligence Ethics Committee
The "Opinions on Strengthening the Governance of Science and Technology Ethics" requires that units engaged in artificial intelligence science and technology activities should establish a science and technology ethics (review) committee if the research content involves sensitive areas of science and technology ethics. To this end, we should actively play the role of the Artificial Intelligence Ethics Committee, strengthen the daily management of artificial intelligence ethics, and actively analyze and promptly resolve the ethical risks in relevant scientific and technological activities. The Artificial Intelligence Ethics Committee should review its plan before the unit conducts relevant research, and promptly prevent or correct research projects that violate artificial intelligence ethics, and prevent problems before they happen. At the same time, the Artificial Intelligence Ethics Committee can also deal with emergency responses to the situation that has occurred to avoid further spread and deterioration. Given the complexity of the ethical issues of artificial intelligence, it is advisable to provide construction and operation guidance for various types of artificial intelligence ethics committees at the national level, strengthen top-level design, and improve the scientificity and normativeness of artificial intelligence ethical review.
(IV) Establish a coordinated mechanism for data security supervision and artificial intelligence ethical supervision
The ethical governance of artificial intelligence has two aspects: ethics review within the units concerned, and ethical supervision by national competent departments. Since data is the foundation of artificial intelligence applications, artificial intelligence ethical supervision will inevitably intersect with data security supervision; indeed, data security governance itself includes concern for data ethics. Historically, security supervision and ethical supervision have developed along separate paths, with no coordination in either academic research or practice. China has not yet established an artificial intelligence ethics regulatory agency, and its relationship to data security supervision must be considered in the next stage of work to ensure the two systems are coordinated and consistent.
5. Conclusion
The development and application of artificial intelligence not only brings shared benefits to human society but also adds new ethical risks. Facing the technology's vigorous development, research on artificial intelligence ethics has lagged significantly, and this status quo urgently needs to change. As the name suggests, artificial intelligence ethics is a typical interdisciplinary field, involving both artificial intelligence science within information technology and ethics within the humanities and social sciences. The review and supervision of artificial intelligence ethics is a complex topic requiring substantial pragmatic basic research, with a focus on clarifying the boundaries of the relevant issues. This article offers preliminary thoughts on the focus and direction of such research, in the hope of triggering broader discussion and advancing the study of artificial intelligence ethics.
(This article was published in the 5th issue of "China Information Security" magazine, 2023)