The Ethical Challenges Of Artificial Intelligence Have Changed From Theoretical Discussion To Real Risks - What Are The Current Practices? How To Deal With It In The Future?
China Metallurgical News | China Steel News Network
Reported by Fan Sancai
In March 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the "Opinions on Strengthening the Governance of Science and Technology Ethics", which made systematic arrangements for science and technology ethics work and listed artificial intelligence among the key areas of science and technology ethics governance.
What risks and challenges does artificial intelligence ethics face at present? What practices have been carried out at home and abroad? How should the ethical governance of artificial intelligence be achieved in the future? The recently released "Research Report on the Ethical Governance of Artificial Intelligence (2023)" from the China Academy of Information and Communications Technology (CAICT) examines and analyzes these questions.
The report finds that breakthroughs in artificial intelligence technology have turned the application of the technology into a source of ethical controversy. In particular, the development and deployment of generative artificial intelligence has raised ethical challenges such as bias and discrimination, privacy violations, unclear responsibility, and the spread of false content. Looking ahead, the report expects that AI ethics governance will advance in coordination with industrial innovation; that measures such as multidisciplinary and multi-stakeholder participation, classified and tiered governance, and the development of technical tools will strengthen AI ethics governance mechanisms; that improving the public's science and technology ethics literacy will help prevent AI ethical risks; and that global cooperation on AI ethics governance will help ensure that AI technology benefits humanity.
Challenges: Spanning every stage of the artificial intelligence life cycle
Artificial intelligence ethics refers to the values and behavioral norms that must be followed in scientific and technological activities such as AI research, design, development, service, and use; it has three major characteristics: philosophical, technical, and global. The report finds that the ethical challenges posed by artificial intelligence have shifted from theoretical discussion to real risks. According to its statistics, in November 2023 alone there were more than 280 AI-related incidents. Concerns about AI ethics cover every stage of the AI life cycle, so ethical risks must be analyzed and evaluated from the perspectives of both technology research and development and application deployment.
In the technology research and development stage, shortcomings in technical capability and management methods in data acquisition and use, algorithm design, model tuning, and similar activities may give rise to ethical risks such as bias and discrimination, privacy leakage, misinformation, and lack of explainability. In the product development and application stage, the specific fields an AI product targets and the scope of deployment of the AI system affect the degree of ethical risk, and may cause risks such as misuse and abuse, excessive dependence, and impacts on education and employment.
"It is necessary to note that ethical risks such as privacy leakage, prejudice and discrimination, false misinformation, and unclear attribution may occur both in the R&D stage, as well as in the application stage, and even the negative consequences of the superposition of the two. Identifying ethical risks of artificial intelligence requires a full life cycle assessment." The report particularly emphasized.
With the development of large models, generative artificial intelligence became the hot topic in AI applications in 2023, but using large models to generate content carries three prominent ethical risks.
First, the risk of misuse and abuse. Generative AI technology is widely available and has a low barrier to use, and may become a tool for producing deepfake content, malicious code, and the like, leading to the spread of large amounts of false information and to network security problems.
Second, the risk of data leakage and privacy violations. The training data sets used by generative AI may contain personal information, which the model can then be induced to output. At the same time, personal information, corporate trade secrets, important code, and other content uploaded during use may become training material for generative AI, creating a risk of leakage.
Third, challenges to the intellectual property system. Generative AI technology has put pressure on the intellectual property regime. On the use of training data, there is still debate over which data may be used for model training; whether "fair use" applies to large-model training is under discussion, and artists have begun using technical tools to block unauthorized model training. On rights in generated content, whether AI technology plays only the role of a tool remains an open question.
Practice: Countries are gradually building governance mechanisms for the orderly development of artificial intelligence
Artificial intelligence has become an important part of national science and technology strategies. Countries have set out basic ethical requirements for AI research, development, and application through ethical norms, strengthened global cooperation on AI ethics governance, and gradually built governance mechanisms for the orderly development of artificial intelligence.
International organizations' AI ethics governance programs. The international community is stepping up cooperation on AI ethics governance. At the level of consensus principles, the 193 member states of UNESCO adopted the "Recommendation on the Ethics of Artificial Intelligence" in November 2021, which sets out four AI values: respecting, protecting, and promoting human rights, fundamental freedoms, and human dignity; flourishing of the environment and ecosystems; ensuring diversity and inclusiveness; and living in peaceful, just, and interconnected societies. At the same time, international standardization bodies such as ISO (International Organization for Standardization), IEC (International Electrotechnical Commission), IEEE (Institute of Electrical and Electronics Engineers), and ITU (International Telecommunication Union) are actively advancing science and technology ethics standards for technologies such as artificial intelligence. On May 25, 2023, the United Nations issued a policy brief under "Our Common Agenda" proposing a "Global Digital Compact" to create an open, free, and secure digital future for everyone; on October 26, 2023, the United Nations established its High-level Advisory Body on Artificial Intelligence.
AI ethics governance mechanisms in other countries and regions. Two broad approaches have emerged: a market-led, innovation-oriented regulatory model represented by the United States, and an active-intervention model represented by the European Union.
The United States develops trustworthy artificial intelligence on the basis of encouraging innovation. At the executive level, in October 2023 US President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directs the development of responsible AI technology. Earlier, in May 2022, the Biden administration established the National Artificial Intelligence Advisory Committee; in October 2022 it released the "Blueprint for an AI Bill of Rights". The US government also actively encourages industry self-regulation: in July and September 2023 it secured voluntary commitments from 15 companies, including Google, Microsoft, and Nvidia, to develop AI that avoids bias and discrimination and protects privacy. In specific domains, individual US federal agencies issue their own ethical principles and governance frameworks based on their responsibilities and core business. However, the United States still lacks binding legal norms that take artificial intelligence itself as the object of regulation.
The EU implements AI ethics requirements through binding regulation. On the ethical framework, in April 2019 the EU High-Level Expert Group on Artificial Intelligence released the "Ethics Guidelines for Trustworthy AI", proposing the concept of trustworthy AI and seven key requirements, including human oversight, technical robustness and safety, privacy and data governance, transparency, diversity, societal and environmental well-being, and accountability. On implementation, in December 2023 the European Commission, the Council of the EU, and the European Parliament reached a provisional agreement on the Artificial Intelligence Act, which adopts a risk-based regulatory approach: AI systems are divided into four tiers of risk (unacceptable, high, limited, and minimal), each with corresponding obligations and regulatory mechanisms, while ethical values such as human-centeredness and transparency are emphasized throughout.
In addition, Germany focuses on regulating the ethical risks of specific AI application areas (algorithms, data, autonomous driving, and the like), emphasizing human-centered, responsible, public-interest-oriented AI. In the field of algorithms, for example, the German Data Ethics Commission divides AI systems involved in human decision-making into three types: algorithm-based, algorithm-driven, and algorithm-determined decision-making. It proposes a risk-oriented "regulatory pyramid" that rates the risk of AI algorithms from level 1 to level 5, adding requirements step by step, from regulatory review and additional approval conditions up to dynamic supervision and complete prohibition. Singapore actively explores technical tools for AI ethics governance and, through national policies such as "Smart Nation" and the "Digital Government Blueprint", has raised the level of AI development and application across multiple dimensions, striving to steer digital transformation in a direction that benefits people. In December 2023, for example, Singapore released its National AI Strategy 2.0, setting out a vision of AI serving the public interest and again emphasizing the building of a trustworthy and responsible AI ecosystem.
China is also actively participating in international cooperation on AI ethics governance. In August 2023, at the 15th BRICS Summit, the BRICS countries agreed to launch the work of an AI study group as soon as possible, expand AI cooperation, and form a broadly agreed AI governance framework and standards. In October 2023, China issued the "Global AI Governance Initiative", which systematically sets out China's approach to AI governance around the three aspects of development, security, and governance, proposes adhering to the principle of ethics first, and calls for establishing and improving ethical norms, standards, and accountability mechanisms for AI. Chinese experts are also actively participating in the construction of AI ethics rules at international institutions such as the United Nations and the World Health Organization. In October 2023, two experts from China were selected for the United Nations High-level Advisory Body on Artificial Intelligence, where they take part in UN-level discussions on AI governance and put forward suggestions on global AI governance.
Outlook: AI ethics governance will advance in step with industrial innovation
The report puts forward four suggestions on the ethical governance of artificial intelligence.
First, coordinate the innovative development of the AI industry with ethical governance. On the one hand, attach importance to AI technological innovation: encourage breakthroughs in foundational AI technologies, develop core technologies such as AI chips, data, and algorithms, strengthen the development of AI governance products, and use AI technology itself to guard against AI risks. On the other hand, explore an agile AI ethics governance mechanism that improves the effectiveness and scientific rigor of science and technology ethics governance and coordinates industrial innovation with the prevention of ethical risks.
Second, improve AI ethics governance measures. First, build a governance system of multidisciplinary, multi-stakeholder cooperation to form a joint force for AI governance. Second, establish a classified and tiered ethics governance mechanism, with responsibilities and regulatory rules determined by the scale and scope of an AI system's ethical risks. For example: for AI applications with essentially no ethical impact, simplify regulatory procedures and encourage innovation; for AI technologies with some ethical impact, set ethical requirements, establish a risk assessment mechanism, and clarify responsibilities; for high-risk AI application areas with long-term and wide-ranging impact, strengthen supervision and prevent ethical risks through multiple regulatory means; for AI applications with unacceptable ethical impact, consider prohibiting deployment. Third, make AI ethics governance more technical, engineering-based, and standardized, forming demonstrable experience in AI ethics governance.
Third, improve the ability of all parties to respond to AI ethical risks. Development and application practices that conform to AI ethics depend on R&D, design, and application personnel taking personal responsibility, and also require the public to strengthen their awareness of AI ethical risks and to use AI responsibly. Universities should be supported in offering science and technology ethics courses that guide students and science and technology workers to improve their AI ethics literacy; ethics training for employees should be a required part of how enterprises fulfill their primary responsibility for science and technology ethics management; industry should be guided to strengthen self-regulation and cooperation on science and technology ethics; and ethics education for the public should be strengthened.
Fourth, strengthen international exchange and cooperation on AI ethics governance. Artificial intelligence technology is global and defines our era; China should actively participate in bilateral and multilateral cooperation on global science and technology ethics governance, and domestic enterprises, experts, and scholars should be encouraged to take part in international exchanges and cooperation on AI ethics governance, forming a global cooperative ecosystem for AI ethics governance.
Editor | Lu Lin