AI's Ethical Challenges Have Shifted From Theoretical Debate to Real Risk: What Are the Current Practices, and How Should We Respond Going Forward?
Challenge: Ethical concerns span every stage of the AI life cycle
Artificial intelligence ethics refers to the values and norms of conduct that must be observed in AI-related scientific and technological activities such as research, design, development, deployment, and use. It has three defining characteristics: it is philosophical, technical, and global in nature. The report argues that the ethical challenges posed by AI have shifted from theoretical discussion to real-world risk: by one count, in November 2023 alone there were more than 280 AI-related incidents. Because ethical concerns span every stage of the AI life cycle, AI's ethical risks must be analyzed and assessed at both the technology R&D level and the application-deployment level.
In the technology R&D stage, shortcomings in developers' technical capabilities and management practices around data acquisition and use, algorithm design, and model tuning can give rise to ethical risks such as bias and discrimination, privacy leakage, misinformation, and lack of explainability. In the product development and application stages, the specific domains an AI product targets and the scope of a system's deployment shape the severity of its ethical risks, which may include misuse, abuse, over-reliance, and disruptive effects on education and employment.
"It should be noted that ethical risks such as privacy leakage, bias and discrimination, generation of false and incorrect information, and unclear attribution may occur in the research and development stage, or in the application stage, or even the negative consequences of the superposition of the two. Identifying artificial intelligence ethical risks requires a full life cycle assessment." The report particularly emphasized.
With the rise of large models, generative AI became the most prominent AI application of 2023. Using large models to generate content, however, carries three prominent ethical risks.
The first is the risk of misuse and abuse. Generative AI spreads quickly and has a low barrier to use, so it can become a tool for producing deepfakes, malicious code, and the like, fueling the spread of disinformation at scale and creating network security problems.
The second is the risk of data leakage and privacy violation. The training datasets behind generative AI may contain personal information, which the model can later be induced to reproduce in its outputs. At the same time, personal information, trade secrets, and proprietary code that users upload during use may be folded back into training data and subsequently leaked.
The third is the challenge to the intellectual property system. On the training-data side, it remains contested which data may be used for model training and whether "fair use" applies to training large models; some artists have begun deploying technical tools to block unauthorized training on their work. On the ownership side, whether AI can play only the role of a tool in producing generated content is still open to debate.
Practice: Countries around the world are gradually building governance mechanisms for the orderly development of artificial intelligence
Artificial intelligence has become a strategic priority in many countries' scientific and technological development. Countries have set out basic ethical requirements for AI R&D and application through ethical norms, strengthened global cooperation on AI ethics governance, and are gradually establishing governance mechanisms for the orderly development of AI.
International organizations' programs for AI ethics governance. The international community is stepping up cooperation in this field. At the level of consensus principles, UNESCO's 193 member states adopted the Recommendation on the Ethics of Artificial Intelligence in November 2021, which articulates four AI values: respecting, protecting, and promoting human rights, fundamental freedoms, and human dignity; the flourishing of the environment and ecosystems; ensuring diversity and inclusiveness; and living in peaceful, just, and interconnected societies. At the same time, international standards bodies such as ISO (International Organization for Standardization), IEC (International Electrotechnical Commission), IEEE (Institute of Electrical and Electronics Engineers), and ITU (International Telecommunication Union) are actively advancing technology-ethics standards, with AI as a leading focus. On May 25, 2023, the United Nations' "Our Common Agenda" policy brief proposed a Global Digital Compact for "an open, free and secure digital future for all"; on October 26, 2023, the UN established its High-level Advisory Body on Artificial Intelligence; and in December 2023 that body released its interim report, "Governing AI for Humanity".
AI ethics governance mechanisms in foreign countries and regions. These span both the market-oriented, innovation-first regulatory model represented by the United States and the active-intervention model represented by the European Union.
The United States promotes trustworthy AI on a foundation of encouraging innovation. At the level of administrative planning, in October 2023 President Biden issued the executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, setting expectations for the responsible development of AI technology. Earlier, in May 2022, the Biden administration established the National Artificial Intelligence Advisory Committee, and in October 2022 it released the "Blueprint for an AI Bill of Rights". The U.S. government also actively encourages industry self-regulation: in July and September 2023 it secured voluntary commitments from a total of 15 companies, including Google, Microsoft, and Nvidia, to develop AI that avoids bias and discrimination and protects privacy. In specific sectors, federal departments have issued ethical principles and governance frameworks within their own remits. The United States still lacks binding legal norms regulating AI, however.
The EU implements ethical requirements for AI through strong supervision and regulation. On the ethical-framework side, in April 2019 the EU's High-Level Expert Group on AI released the "Ethics Guidelines for Trustworthy AI", proposing the concept of trustworthy AI and seven key requirements, including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, societal and environmental well-being, and accountability. On the implementation side, in December 2023 the European Commission, the Council of the EU, and the European Parliament reached a provisional agreement on the Artificial Intelligence Act, which adopts a risk-based regulatory approach: AI systems are classified into four tiers (unacceptable risk, high risk, limited risk, and minimal risk), each with corresponding responsibilities and oversight mechanisms, while ethical values such as human-centeredness and transparency are emphasized throughout.
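To make the tiered logic concrete, here is a minimal illustrative sketch in Python. It is not drawn from the Act's text: only the four tier names come from the provisional agreement described above, and the obligation lists attached to each tier are simplified assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Simplified obligation lists -- illustrative assumptions, not the Act's text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["deployment prohibited"],
    RiskTier.HIGH: ["risk management", "conformity assessment",
                    "human oversight", "logging"],
    RiskTier.LIMITED: ["transparency duties (e.g. label AI-generated content)"],
    RiskTier.MINIMAL: ["no obligations beyond existing law"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the regulatory obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

for tier in RiskTier:
    print(f"{tier.value:>17}: {'; '.join(obligations_for(tier))}")
```

The design point is simply that obligations attach to the tier rather than to the individual system, so classification decides the compliance burden.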
In addition, Germany focuses on regulating ethical risks in specific AI application areas (algorithms, data, autonomous driving, etc.), emphasizing people-oriented development of responsible, public-interest-oriented AI. In the field of algorithms, for example, the German Data Ethics Commission divides AI systems involved in human decision-making into three types: algorithm-based, algorithm-driven, and algorithm-determined decision-making. It has also proposed a risk-oriented "regulatory pyramid" for AI algorithms that rates their risk on a scale from 1 to 5, with regulatory requirements escalating step by step from regulatory review through additional approval conditions and dynamic supervision up to a complete ban (a sketch of this pyramid follows below). Singapore, meanwhile, actively explores technical tools for AI ethics governance; through national policies such as "Smart Nation" and the "Digital Government Blueprint", it is raising the level of AI development and application across multiple dimensions and steering society's digital transformation in a direction that benefits people. In December 2023, for example, Singapore released its National AI Strategy 2.0, which sets out a vision of AI serving the public good and reiterates the goal of building a trusted and responsible AI ecosystem.
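Picking up the German "regulatory pyramid" mentioned above, here is a minimal sketch assuming a simplified mapping in which requirements accumulate as criticality rises; which concrete measure attaches to which level is an illustrative assumption, not the Commission's exact scheme.

```python
def regulatory_measures(criticality: int) -> list[str]:
    """Map an algorithmic system's criticality (1-5) to oversight measures,
    loosely following the German Data Ethics Commission's "regulatory
    pyramid"; the level-to-measure assignment is an illustrative assumption.
    """
    if not 1 <= criticality <= 5:
        raise ValueError("criticality must be between 1 and 5")
    if criticality == 5:
        return ["complete ban"]  # top of the pyramid: prohibition
    ladder = {
        2: "regulatory review",
        3: "additional approval conditions",
        4: "dynamic supervision",
    }
    # Requirements accumulate as criticality rises through levels 2-4.
    measures = [ladder[level] for level in range(2, criticality + 1)]
    return measures or ["no special measures"]  # level 1

for level in range(1, 6):
    print(level, regulatory_measures(level))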
China is also actively engaged in international cooperation on AI ethics governance. In August 2023, at the 15th BRICS Summit, the BRICS countries agreed to launch an AI study group as soon as possible, expand AI cooperation, and work toward an AI governance framework and standards with broad consensus. In October 2023, China released the Global AI Governance Initiative, which sets out China's approach across the three dimensions of AI development, security, and governance, proposes the principle of putting ethics first, and calls for establishing and improving AI ethical guidelines, norms, and accountability mechanisms. Chinese experts have also taken an active part in building AI ethics rules at international organizations such as the United Nations and the World Health Organization: in October 2023, two Chinese experts were appointed to the UN's High-level Advisory Body on Artificial Intelligence, where they participate in UN-level discussions and contribute proposals for global AI governance.
Outlook: AI ethics governance will become better coordinated with industrial innovation
The report puts forward four recommendations for the ethical governance of artificial intelligence.
The first is to coordinate innovation in the AI industry with ethical governance. On the one hand, attach great importance to AI technology innovation: encourage breakthroughs in foundational AI technologies, develop core technologies such as AI chips, data, and algorithms, strengthen AI governance products, and use AI itself to guard against AI risks. On the other hand, explore agile mechanisms for AI ethics governance that make technology-ethics governance more effective and more scientifically grounded, so that industrial innovation and ethical risk prevention advance in step.
The second is to improve AI ethics governance measures. First, build a governance system of multi-disciplinary, multi-stakeholder cooperation to create synergy in AI governance. Second, establish a classified and tiered ethical governance mechanism, assigning responsibilities, obligations, and regulatory rules according to the scale and scope of an application's ethical risk (see the sketch below): for AI applications with essentially no ethical impact, simplify regulatory procedures and encourage innovation; for those with some ethical impact, set ethical requirements, establish risk-assessment mechanisms, and clarify responsibility; for high-risk application areas with long-lasting and widespread impact, step up supervision through a variety of regulatory means; and consider prohibiting the deployment of applications whose ethical effects are unacceptable. Third, promote the translation of AI ethics governance into technical tools, engineering practice, and standards, building up demonstrated, evidence-based governance experience.
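As a rough illustration of the tiered mechanism recommended here, the following minimal Python sketch classifies an application into one of the four tiers above; the input attributes (impact scope and duration, plus a red-line flag) and the score thresholds are assumptions made for illustration, not criteria from the report.

```python
from dataclasses import dataclass

@dataclass
class EthicsAssessment:
    """Simplified risk profile of an AI application. The attributes and
    scales are illustrative assumptions, not the report's criteria."""
    impact_scope: int       # 0 (negligible) .. 3 (society-wide)
    impact_duration: int    # 0 (momentary) .. 3 (long-term)
    crosses_red_line: bool  # an ethically unacceptable effect

def governance_tier(a: EthicsAssessment) -> str:
    """Map an assessment to one of the four tiers the report sketches."""
    if a.crosses_red_line:
        return "unacceptable: prohibit deployment"
    score = a.impact_scope + a.impact_duration
    if score >= 5:
        return "high risk: strengthened, multi-instrument supervision"
    if score >= 2:
        return "some impact: ethical requirements, risk assessment, clear responsibility"
    return "basically no impact: simplified procedures, encourage innovation"

# Example: a widely deployed, long-lived system lands in the high-risk tier.
print(governance_tier(EthicsAssessment(impact_scope=3, impact_duration=3,
                                       crosses_red_line=False)))
```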
The third is to strengthen every actor's ability to respond to AI's ethical risks. Development and application practices that conform to AI ethics depend on R&D, design, and deployment personnel taking personal responsibility, and also require the public to sharpen its awareness of AI's ethical risks and to use the technology responsibly. Support universities in offering technology-ethics courses; guide students and technology practitioners to improve their AI ethics literacy; make ethics training for employees a required part of how enterprises fulfill their primary responsibility for technology-ethics management; guide industry to strengthen ethical self-discipline and cooperation; and step up public education on technology ethics.
The fourth is to strengthen international exchange and cooperation on AI ethics governance. AI is global in reach and defining of its era, so it is necessary to take an active part in bilateral and multilateral cooperation on global technology-ethics governance, while encouraging domestic enterprises, experts, and scholars to join international exchanges and cooperation on AI ethics governance, building a global cooperative ecosystem for the field.