Ethics First: Continuously Promote The Ethical Governance Of Artificial Intelligence
On October 18, the Cyberspace Administration of China issued the "Global Artificial Intelligence Governance Initiative" (hereinafter the "Initiative"), which systematically lays out China's approach to artificial intelligence governance in three areas: development, security, and governance. The Initiative offers constructive responses to the development and governance issues of common concern to all parties. On the ethical governance of artificial intelligence, it proposes: "Adhere to the principle of ethics first; establish and improve ethical guidelines, norms, and accountability mechanisms for artificial intelligence; form ethical guidance for artificial intelligence; establish a science and technology ethics review and oversight system; clarify the responsibilities and boundaries of power of relevant artificial intelligence actors; fully respect and protect the legitimate rights and interests of all groups; and respond in a timely manner to relevant ethical concerns at home and abroad."
Artificial intelligence is a new frontier of human development. Regarded as the core driving force of the fourth scientific and technological revolution, artificial intelligence is setting off a new wave of industrial transformation worldwide and has had a profound impact on economic and social development and the progress of human civilization. According to estimates by the China Academy of Information and Communications Technology, the scale of China's core artificial intelligence industry reached 508 billion yuan in 2022, a year-on-year increase of 18%, with nearly 4,000 enterprises in the sector. Domestic artificial intelligence has formed a complete industrial system and become a new growth engine.
Artificial intelligence technology brings enormous benefits to economic and social development, but it also brings unpredictable risks and complex challenges. The uncertainty of technological development, the data dependence of applications, and the poor interpretability of algorithms may lead to problems such as loss of control over the technology, loss of privacy, and compromised fairness, all of which strain ethical norms. To promote the healthy development of artificial intelligence, we must pay attention to its impact on social ethics and prevent and govern its potential ethical risks. On this point, the Initiative states: "Research and development entities should continuously improve the interpretability and predictability of artificial intelligence, improve the authenticity and accuracy of data, ensure that artificial intelligence always remains under human control, and build artificial intelligence technologies that are auditable, supervisable, traceable, and trustworthy."
General Secretary Xi Jinping has emphasized: "Technology is a powerful tool for development, but it may also become a source of risk. We must look ahead to study and assess the rule conflicts, social risks, and ethical challenges brought about by scientific and technological development, and improve relevant laws and regulations, ethical review rules, and regulatory frameworks." In 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan", stating that by 2025 a system of artificial intelligence laws and regulations, ethical norms, and policies would be initially established. In 2019, the "Principles of the Governance of New Generation Artificial Intelligence — Developing Responsible Artificial Intelligence" was released, proposing eight principles including harmony and friendliness, fairness and justice, and agile governance. In 2020, the National Science and Technology Ethics Committee was formally established, a major strategic decision for national science and technology ethics governance in the new era. In 2021, the "Ethical Norms for the New Generation of Artificial Intelligence" was released, setting out management, R&D, supply, and use norms for artificial intelligence. In 2022, the "Opinions on Strengthening the Governance of Science and Technology Ethics" was released, putting forward five basic requirements and five science and technology ethics principles and deploying four key tasks, providing guidance for further improving the ethics governance system. In 2023, the "Interim Measures for the Management of Generative Artificial Intelligence Services" was released, China's first regulatory document dedicated to generative artificial intelligence. These measures show that China is making sustained efforts in the ethical governance of artificial intelligence.
Building on the content of the Initiative, it remains necessary to adhere to the principle of ethics first, guard against risks, and form an artificial intelligence governance framework and standards based on broad consensus.
1. Identify the ethical risks of artificial intelligence
Controllability risks of artificial intelligence. This risk stems from concern that the technology could escape human control, that is, risks arising when the development of artificial intelligence exceeds what humans can govern. Artificial intelligence is advancing rapidly: algorithms have begun to incorporate personality-like elements such as autonomy, emotion, and intentionality, and in theory there is a path from weak artificial intelligence to strong and even super artificial intelligence. "Artificial life" represented by intelligent robots is increasingly common, making people more worried than before about technology slipping out of control. However, given the technical conditions of current artificial intelligence, the technology can only be applied in closed scenarios that satisfy the closedness criteria of its underlying methods, whether rule-based or training-based, and the emergence of truly uncontrollable technology remains a distant concern.
Usage risks of artificial intelligence. This refers to risks arising from the application of artificial intelligence, which mainly stem from improper use of the technology, such as abuse and misuse. Take facial recognition as an example: facial recognition algorithms can help the police search for fugitives, but whether the technology is fully reliable and its results 100% accurate cannot be guaranteed at this stage. Investigators in the U.S. states of New York and New Jersey once used facial recognition software to find a person who closely matched photos of a suspect, but arrested the wrong man and detained the "criminal" for ten days. A research team also used facial features such as lip curvature, the distance between the corners of the eyes, and the nose-mouth angle to predict criminal tendencies, concluding that Black people were far more likely to commit crimes than others, which caused great controversy. Machine learning on faces can easily produce "probabilistic illusions" and objective-seeming fallacies. In addition, deepfake problems such as face-swapping keep emerging, and facial information bears directly on personal privacy and property security. When, where, and to what extent facial recognition technology should be used still needs further regulation.
The social-effect risks of artificial intelligence. This risk concerns the major social issues raised by artificial intelligence and the technology's negative effects on society. The widespread commercial application of artificial intelligence has triggered a series of industrial changes; when the negative effects outweigh the positive ones, society is hit hard. For example, in economic employment, artificial intelligence replaces large amounts of labor, triggering waves of unemployment and creating a "useless class"; in national governance, the technology's advantages can be exploited to manipulate public opinion, while theft of network secrets and cyberattacks threaten political security; in culture and the arts, works produced by artificial intelligence writing and painting are often pieced together from existing material, seemingly lowering the threshold for "creation" but potentially stifling artistic innovation; in the ecological environment, data as a production factor can reduce dependence on traditional factors such as natural resources and capital, but the resource consumption of the infrastructure on which artificial intelligence relies has not decreased.
2. Optimize the ethical governance of artificial intelligence
Improve the ethical risk assessment mechanism for artificial intelligence and enhance risk prevention capabilities. Current artificial intelligence technology has generated, or is about to generate, a variety of ethical risks that must be taken seriously. On the one hand, face up to the objectivity of these risks: understand them correctly, be neither blindly pessimistic nor one-sidedly optimistic, guide rational public understanding, strengthen education on the ethical risks of artificial intelligence, and assess those risks objectively and comprehensively.
On the other hand, establish an ethical risk assessment mechanism: sort out the sources, types, and intensity of risks; analyze their causes and possible solutions; guard against risks before they occur; conduct foresighted assessments of the negative impacts of artificial intelligence applications; and prepare contingency plans in advance. Given the uncertainty of artificial intelligence technology, optimizing this mechanism requires pooling the strength of many parties and using multiple methods to carry out assessment work scientifically. Governments, universities, research institutes, and other institutions can jointly establish risk assessment indicators, assign risk levels to applications of artificial intelligence technology, and apply different security strategies and techniques to different levels. To conduct forward-looking research on preventing the ethical risks of artificial intelligence, we can adopt the approach of "using technology to restrain technology": carry out research on adversarial learning algorithms, build a risk prevention and control system, and guard against the potential ethical risks of artificial intelligence in application scenarios.
Create a community for governing the ethical risks of artificial intelligence and promote multi-party collaborative governance. The ethical risks of artificial intelligence are wide-ranging and complex, and their governance spans fields such as intelligent technology, ethics and philosophy, and public administration. Multiple actors, including governments, enterprises, universities, industry organizations, and the public, need to build a collaborative organizational network, clarify the rights, responsibilities, and obligations of each actor, forge close cooperative governance relationships, and create a clustered governance effect.
As the administrative authority, the government should play its role in policy guidance, supervision, and review. For example, when formulating policies on the ethical governance of artificial intelligence, policymakers can hold hearings and consultations with the public, enterprises, research groups, and other actors to increase their participation in policy-making. Artificial intelligence enterprises, on the front line of the technology, need to strengthen self-discipline and autonomy, heighten their awareness of risk and responsibility, set up internal artificial intelligence ethics committees, train ethics researchers, conduct self-examinations of ethical risks, and implement the requirement that artificial intelligence technology be credible and reliable. Industry organizations should establish ethical norms suited to the industry's development, formulate relevant technical standards under government guidance, and push enterprises to fulfill their primary responsibilities. Research communities such as universities and institutes can actively participate in drafting ethical norms, study algorithmic tools for governing the ethical risks of artificial intelligence, and play an educational role by conducting public outreach on artificial intelligence ethics and technology.
Improve the governance system for the ethical risks of artificial intelligence and promote agile governance. Besides the Initiative, "agile governance" appears in the governance requirements of the "Principles of the Governance of New Generation Artificial Intelligence", the "Ethical Norms for the New Generation of Artificial Intelligence", and the "Opinions on Strengthening the Governance of Science and Technology Ethics". To promote agile governance, we must respect the laws governing the development of artificial intelligence, continuously optimize management mechanisms, and improve the governance system. On the one hand, improve the supervision mechanism for ethical risks: integrate supervision into the entire life cycle of artificial intelligence, realize whole-process, whole-chain oversight, establish accountability mechanisms, and identify those responsible when problems arise. On the other hand, standardize and institutionalize the governance of ethical risks, form a multi-level supporting system of rules, and make broad governance principles operational through concrete standards.
In addition, methods for handling ethical risks should be adjusted dynamically as artificial intelligence technology develops, and governance measures should be formulated promptly for emerging issues in the commercial application of artificial intelligence. The EU put forward its Artificial Intelligence Act proposal in 2021, pushing artificial intelligence governance from soft constraints toward substantive hard requirements. China has enacted special legislation in digital technology fields, such as the Personal Information Protection Law of the People's Republic of China and the Data Security Law of the People's Republic of China, filling legislative gaps in governing risks to personal privacy and security, but the relevant legislation remains scattered and lags behind technological development. Therefore, it is also necessary to advance the rule of law in governing the ethical risks of artificial intelligence, enhance the forward-looking nature of legislation, and establish a standardized legal system.
Source | Guangming.com