
Current Situation Analysis And Countermeasures Of Artificial Intelligence Ethical Issues


Abstract:

Artificial intelligence (AI) is the core of the fourth industrial revolution, but it also brings challenges to ethics and social governance. Building on an account of the current ethical risks of AI, this article analyzes the emerging consensus on AI ethical codes, governance principles, and governance approaches, and proposes countermeasures and suggestions for gradually building a multi-dimensional ethical governance system encompassing education reform, ethical norms, technical support, legal regulation, and international cooperation, under the guiding theory of "co-construction, co-governance and sharing".

Artificial intelligence (AI) is the core technology of the fourth industrial revolution and has attracted great attention worldwide. China has also formulated a series of development plans and strategies around AI technology, vigorously promoting the development of the field. However, while AI brings significant opportunities for economic development and social progress, it also poses profound challenges to ethical norms and the rule of law.

In 2017, the "New Generation Artificial Intelligence Development Plan" issued by the State Council proposed a "three-step" strategic goal, sparking a new wave of enthusiasm for AI, and clearly stated the need to "strengthen research on legal, ethical and social issues related to artificial intelligence, and establish a legal, regulatory and ethical framework to ensure the healthy development of artificial intelligence."

In 2018, when presiding over a collective study session of the Political Bureau of the CPC Central Committee on the current status and trends of AI development, General Secretary Xi Jinping emphasized the need to strengthen the analysis and prevention of potential risks in AI development, safeguard the people's interests and national security, and ensure that AI is safe, reliable, and controllable. He called for integrating multidisciplinary forces, strengthening research on legal, ethical, and social issues related to AI, and establishing and improving laws, regulations, institutional systems, and ethical norms to ensure the healthy development of AI.

In 2019, China's New Generation Artificial Intelligence Development Planning and Promotion Office established the New Generation AI Governance Professional Committee, fully responsible for researching and advancing policy systems, laws, regulations, and ethical norms in AI governance. The "14th Five-Year Plan for National Economic and Social Development of the People's Republic of China and the Outline of Long-term Goals for 2035" specifically emphasizes the need to "explore the establishment of regulatory frameworks for autonomous driving, online medical care, financial technology, intelligent distribution, etc., and improve relevant laws, regulations and ethical review rules." All of this reflects China's close attention to, and determination to actively promote, AI ethics and governance, and highlights the importance of the issue.

1 Current ethical issues in artificial intelligence

Ethics comprises the principles and norms that govern relationships among people and between people and society. Throughout human history, major scientific and technological developments have often brought significant changes in productivity, relations of production, and the superstructure, becoming important criteria for dividing eras and prompting profound reflection on social ethics.

After human society entered the information age in the mid-to-late 20th century, information technology ethics gradually attracted widespread attention and research, including personal information leakage, information gaps, information cocoons, and insufficient regulation of new power structures.

The rapid transformation and development of information technology has moved human society quickly toward the intelligent era. This is highlighted by several trends: AI algorithms with cognitive, predictive, and decision-making functions are increasingly widely used in various social scenarios; the comprehensive application of cutting-edge information technology is gradually developing into a new hardware and data-resource network in which everything can be interconnected and computed, supplying massive multi-source heterogeneous data for analysis and processing by AI algorithms; AI algorithms can directly control physical equipment and can also provide auxiliary support for individual, group, and even national decision-making; and AI can be applied in many scenarios such as smart homes, smart transportation, smart medical care, smart factories, smart agriculture, and smart finance, and may also be used in weapons and the military.

However, the process of moving into the intelligent era is so rapid that, while the traditional ethical order of information technology has not yet been established, we urgently need to deal with more challenging artificial intelligence ethical issues and actively build an order for an intelligent society.

James Moor, a founder of computer ethics, divided ethical agents into four categories:

1. Ethical-impact agents (agents whose actions have ethical consequences for society and the environment);

2. Implicit ethical agents (ethics built implicitly into the design, such as safety features in specific software and hardware);

3. Explicit ethical agents (able to take reasonable actions based on changing situations and their understanding of ethical norms);

4. Full ethical agents (possessing human-like free will and able to make ethical judgments in various situations).

The current development of artificial intelligence remains at the stage of weak AI, yet it already has certain ethical impacts on society and the environment. Researchers are exploring built-in ethical rules for AI, and the realization of AI technology also involves understanding ethical rules through ethical reasoning.

In recent years, more and more people have called for granting AI machines a certain status as moral subjects, but whether machines can become full ethical agents remains highly controversial. Although the functions or behaviors of current AI approach those of humans in some scenarios, it does not actually possess "free will." From the perspective of classical social-norm theory, whether an entity can become a "subject" in the normative sense, capable of bearing responsibility, depends not on its function but on a construction centered on "free will"; Hegel's Philosophy of Right likewise takes free will as its starting point. Therefore, at the current stage, the analysis of AI ethical issues and the construction of solutions should focus mainly on the first three types of ethical agents, that is, treating AI as a tool rather than a subject.

At the current stage, artificial intelligence has not only inherited the ethical issues of previous information technology, but also has new characteristics due to the opacity, difficulty in explanation, adaptability, and wide application of some artificial intelligence algorithms such as deep learning. It may bring about a series of ethical risks in many aspects such as basic human rights, social order, and national security.

For example:

1. Defects in AI systems and problematic value settings may threaten citizens' rights to life and health. The fatal 2018 accident involving an Uber self-driving car in Arizona, USA, was caused not by sensor failure but by a design decision: out of consideration for ride comfort, Uber configured the system to ignore obstacles that the AI algorithm classified as leaves, plastic bags, and the like.

2. Deviations in target demonstration, algorithmic discrimination, and training data of artificial intelligence algorithms may bring about or expand discrimination in society and infringe on citizens’ right to equality.

3. The abuse of artificial intelligence may threaten citizens’ rights to privacy and personal information.

4. Complex AI algorithms such as deep learning can lead to the algorithmic black-box problem, making decisions opaque or difficult to explain and thus undermining citizens' rights to know, to due process, and to supervision.

5. The abuse and misuse of AI technologies such as precision information push, automated fake-news generation with intelligent targeted dissemination, and deepfakes may lead to information cocoons and the proliferation of false information, affecting people's access to important news and their democratic participation in public issues. The precise pushing of false news may also distort people's understanding of facts and their opinions, inciting public opinion, manipulating commercial markets, and influencing politics and national policy. Cambridge Analytica's use of internet data to analyze users' political preferences and push targeted information accordingly in order to influence the U.S. election is a typical example.

6. AI algorithms may be used for algorithmic discrimination, or for algorithmic collusion that forms horizontal monopoly agreements or hub-and-spoke agreements, under circumstances that are difficult to detect and prove, undermining the competitive market environment.

7. The application of algorithmic decision-making across society may change the power structure. With their technical advantage in processing massive data and their embedded position in ubiquitous information systems, algorithms significantly affect people's rights and freedoms. Credit evaluation by algorithms in bank lending affects whether citizens can obtain loans, and algorithmic social-harm assessment in criminal justice affects whether pretrial detention is imposed; these are prominent manifestations.

8. The abuse of artificial intelligence in work scenarios may affect the rights and interests of workers, and the replacement of workers by artificial intelligence may trigger a crisis of large-scale structural unemployment, bringing risks to labor rights or employment opportunities.

9. As artificial intelligence is increasingly widely used in all aspects of social production and life, security risks such as loopholes and design flaws in artificial intelligence systems may lead to social problems such as personal information leakage, industrial production line shutdowns, traffic paralysis, etc., threatening financial security, social security, and national security.

10. The misuse of artificial intelligence weapons may exacerbate inequality around the world and threaten human life and world peace; and so on.

Artificial intelligence ethical risk governance is complex, and a complete theoretical framework and governance system have not yet been formed.

1. The causes of AI ethical risks are diverse, including goal anomie in AI algorithms, algorithm and system flaws, crises of trust in AI among affected parties, a lack of regulatory mechanisms and tools, imperfect liability mechanisms, and weak defensive measures on the part of affected parties.

2. With the rapid development of artificial intelligence technology and industrial applications, it is difficult to fully characterize and analyze its ethical risks and provide solutions. This requires us to overcome the lag of the traditional normative system, and adopt a "future-oriented" vision and methodology, actively think about and build a normative framework for the design, research and development, application and use of artificial intelligence, and start from the establishment of soft laws such as ethical codes to lead and standardize the research and development and application of artificial intelligence.

Regarding the development of AI, we can neither be blindly optimistic nor abandon the technology for fear of its risks; we must deeply understand its capacity to increase social well-being. Therefore, as human society enters the intelligent era, it is necessary to guide AI along a scientific path from a macro perspective as early as possible, reflect on it ethically, identify its ethical risks and their causes, and gradually build a scientific and effective governance system so that its positive value can be better realized.

2 Artificial Intelligence Ethical Principles, Governance Principles and Approaches

Currently, global AI governance is still in an early exploratory stage. Starting from the basic consensus formed around AI ethical standards, it is gradually deepening into trustworthiness assessment, operational guidelines, industry standards, and policies and regulations, accelerating the construction of an international AI governance framework.

Ethical principles

In recent years, many countries, regions, international and domestic organizations, and companies have issued AI ethical guidelines or research reports; by incomplete statistics, there are more than 40 such guidelines. Beyond the differences attributable to culture, region, and field, it is evident that current AI ethical guidelines have formed a certain social consensus.

Relevant Chinese institutions and industry organizations have also been very active. For example: in January 2018, the China Electronics Technology Standardization Institute released the "Artificial Intelligence Standardization White Paper (2018 Edition)", which proposed the principle of human interests and the principle of responsibility as the two basic principles of AI ethics; in May 2019, the "Beijing Consensus on Artificial Intelligence" was released, proposing, across the three aspects of AI research and development, use, and governance, 15 principles that all participants should follow for the benefit of a community with a shared future for mankind and social development; in June 2019, the National New Generation AI Governance Professional Committee issued the "New Generation Artificial Intelligence Governance Principles - Developing Responsible Artificial Intelligence", which proposed 8 principles for AI development and outlined a framework and action guidelines for AI governance; in July 2019, the Shanghai AI Industry Safety Expert Advisory Committee released the "Shanghai Initiative for the Safe Development of Artificial Intelligence"; and in September 2021, the Zhongguancun Forum released the "New Generation Artificial Intelligence Ethics Code" formulated by the National New Generation AI Governance Professional Committee. Judging from the published texts, these guidelines have achieved a high degree of consensus on values such as putting people first, promoting innovation, ensuring security, protecting privacy, and clarifying responsibilities, though continued theoretical research and demonstration are needed to further consolidate that consensus.

Governance principles

While vigorously promoting the development of artificial intelligence technology and industry, the United States, Europe, Japan and other countries and regions attach great importance to the safe and healthy development of artificial intelligence and incorporate ethical governance into their artificial intelligence strategies, reflecting the basic principle of equal emphasis on development and ethical safety.

General Secretary Xi Jinping attaches great importance to the construction of the rule of law in the field of scientific and technological innovation, emphasizing that "we must actively advance legislation in important fields such as national security, scientific and technological innovation, public health, biosecurity, ecological civilization, risk prevention, and foreign-related rule of law, to ensure the healthy development of new business formats and new models with good laws and good governance."

In recent years, China has adopted an overall policy of "inclusive and prudent" regulation toward new technologies and new business formats, formally proposed in 2017. Article 55 of the "Regulations on Optimizing the Business Environment", effective January 1, 2020, specifically stipulates this principle: "The government and its relevant departments shall, in accordance with the principle of encouraging innovation, exercise inclusive and prudent supervision over new technologies, new industries, new business formats, and new models, formulating and implementing corresponding regulatory rules and standards based on their nature and characteristics, leaving sufficient room for development while ensuring quality and safety, and shall not simply prohibit them or decline to regulate them." This provides the basic principle and methodology for the current ethical governance of AI.

On the one hand, we must observe attentively, recognizing that new technologies and new things often have positive social significance and follow objective laws of development and improvement; they should be given space to develop, with regulatory methods and measures formed where necessary as they evolve.

On the other hand, we must hold fast to the bottom line, including the protection of civil rights and of safety. Important rights and values that command a high degree of social consensus and are codified in law must be protected in law enforcement and judicial processes. This is both a clear legal requirement for the developers and users of relevant technologies and the law's solemn commitment to protecting citizens' rights and interests and promoting technology for good in the intelligent era.

Governance approach

In terms of overall path selection for artificial intelligence governance, there are two main theories: "opposition theory" and "system theory."

The "opposition theory" focuses mainly on conflicts between AI technology and human rights and well-being, and accordingly establishes review and regulatory systems. From this perspective, some countries and institutions have formulated ethical principles for the AI system itself and its development and application. For example, the 2020 "Rome Call for AI Ethics" proposed six main principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy; the European Commission's 2019 "Ethics Guidelines for Trustworthy AI" proposed that AI systems should be lawful, ethical, and robust throughout their entire life cycle. Both reflect this approach.

"Systems theory" emphasizes the coordinated, interactive relationship between AI technology and humans, other artificial agents, law, non-intelligent infrastructure, and social norms. AI ethics concerns a socio-technical system: an AI system must be designed with the awareness that it is not an isolated technical object but operates within a social organization. We can adjust not only the AI system itself but also the other elements that interact with it; on the basis of understanding how AI operates, we can consider how best to deploy and manage each element within the whole system. A "systems theory" approach is already reflected in some policies and regulations. For example, one of the eight general principles in "Ethically Aligned Design" issued by the IEEE (Institute of Electrical and Electronics Engineers) is "Competence": creators of a system should clearly specify, and operators should abide by, the knowledge and skills required for its safe and effective operation. This reflects the systems-theory perspective of compensating for AI's shortcomings through requirements on users, and places new demands on education and training in the intelligent era.

The "New Generation Artificial Intelligence Governance Principles - Developing Responsible Artificial Intelligence", released by China's National New Generation AI Governance Professional Committee in 2019, not only emphasized the ethical principles the AI system itself should satisfy, but also proposed "governance principles" from a more systematic perspective: 8 principles that all parties involved in AI development should follow. In addition to principles focused on AI deployment and application, such as harmony and friendliness, respect for privacy, and security and controllability, it specifically emphasized important principles such as "improving management methods", "strengthening AI education and popular science, improving the adaptability of disadvantaged groups, and striving to eliminate the digital divide", and "promoting coordinated interaction among international organizations, government departments, scientific research institutions, educational institutions, enterprises, social organizations, and the public in the development and governance of AI". It embodies "systems theory" thinking and the idea of multi-dimensional co-governance spanning educational reform, ethical norms, technical support, legal regulation, and international cooperation, providing a more comprehensive AI governance framework and action guide.
Given the particularity and complexity of AI governance, China should think systematically about the dimensions of AI governance and build a comprehensive, multi-party governance system under the guidance of the "social governance pattern of co-construction, co-governance and sharing" proposed by General Secretary Xi Jinping.

3 Countermeasures for the ethical governance of artificial intelligence in China

AI ethical governance is an important part of social governance. Guided by the governance theory of "co-construction, co-governance and sharing", with "inclusive and prudent" regulation as the regulatory principle and "systems theory" as the governance approach, China should gradually build a multi-subject, multi-dimensional, comprehensive governance system.

Education reform

Education is an important vehicle for the intergenerational transmission of human knowledge and the cultivation of abilities. From the measures issued by the State Council and the Ministry of Education, as well as reports such as UNESCO's "Artificial Intelligence in Education: Opportunities and Challenges for Sustainable Development" and the "Beijing Consensus on Artificial Intelligence and Education", it is clear that both China and other countries have begun to attach importance to educational development and reform, which plays an indispensable role in the development and application of AI technology.

In order to better support the development and governance of artificial intelligence, it should be improved from four aspects:

1. Popularize knowledge on cutting-edge technologies such as artificial intelligence, improve public awareness, and enable the public to treat artificial intelligence rationally;

2. Strengthen artificial intelligence ethics education and professional ethics training among scientific and technological workers;

3. Provide workers with a continuous lifelong education system to deal with unemployment problems that may be caused by artificial intelligence;

4. Study the changes in youth education, break the limitations of knowledge-based education inherited from the industrialization era, and respond to the demand for talents in the artificial intelligence era.

Ethical norms

China's "New Generation Artificial Intelligence Development Plan" calls for "carrying out research on AI behavioral science, ethics and related issues, and establishing a multi-level ethical and moral judgment structure and an ethical framework for human-machine collaboration." At the same time, ethics and codes of conduct should be formulated for the R&D designers and future users of AI products, to constrain and guide them from source to downstream.

There are currently 5 key tasks that can be carried out:

1. Research detailed ethical guidelines for key areas of artificial intelligence and form operational norms and suggestions.

2. Provide appropriate guidance at the publicity and education level to further promote the formation of an ethical consensus on artificial intelligence.

3. Encourage scientific research institutions and enterprises to understand and manage the ethical risks of artificial intelligence.

4. Give full play to the role of national-level ethics committees: disseminate advanced experience in ethical risk assessment and control by formulating national AI ethics guidelines and implementation plans, regularly assessing the ethical risks of new business formats and applications, and regularly selecting best practices in the AI industry.

5. Promote the establishment of ethics committees in artificial intelligence research institutes and enterprises to lead the assessment, monitoring and real-time response to artificial intelligence ethical risks, so that artificial intelligence ethical considerations run through the entire process of artificial intelligence design, research and development and application.

Technical support

Reducing ethical risks through better technology is an important dimension of AI ethical governance. Driven by scientific research, the market, and the law, many research institutions and enterprises have carried out R&D on technologies that better protect personal privacy, such as federated learning and privacy-preserving computation; meanwhile, research on AI algorithms with enhanced security, explainability, and fairness, together with technical work on dataset anomaly detection and training-sample evaluation, has produced many model structures for ethical agents in different fields. The patent system should also be improved to clarify the patentability of algorithm-related inventions and further encourage technological innovation in support of AI systems designed to meet ethical requirements.
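As a minimal illustration of the kind of fairness research mentioned above, the sketch below computes a simple group-fairness statistic (the demographic-parity gap) for a set of automated decisions. It is a hypothetical example for exposition, not any specific institution's auditing tool; the function name and sample data are invented.

```python
# Minimal sketch: auditing automated decisions for group fairness.
# All names and data are hypothetical illustrations.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups A and B.

    decisions: list of 0/1 outcomes (e.g., loan approved = 1)
    groups:    list of group labels ("A" or "B"), same length as decisions
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Example: group A is approved 3 of 4 times, group B only 1 of 4 times.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A gap near zero means both groups receive positive decisions at similar rates; a large gap is one signal (though not proof) of the kind of algorithmic discrimination discussed earlier, and real audits combine several such metrics.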

In addition, the formulation of recommended standards in key areas cannot be ignored. In formulating AI standards, we should strengthen the implementation and support of AI ethical principles; focus on standards for privacy protection, security, usability, explainability, traceability, accountability, and technologies supporting evaluation and regulation; encourage enterprises to propose and publish their own corporate standards; and actively participate in establishing relevant international standards. Promoting the incorporation of China's relevant patented technologies into international standards will enhance China's voice in the formulation of international AI ethical principles and related standards and lay a better competitive foundation for Chinese enterprises in international competition.

Legal regulations

At the legal regulatory level, it is necessary to gradually develop digital human rights, clarify the distribution of responsibilities, establish a regulatory system, and achieve an organic combination of the rule of law and technical governance. At the current stage, we should actively promote the effective implementation of the "Personal Information Protection Law" and "Data Security Law" and carry out legislative work in the field of autonomous driving; we should also strengthen research on algorithm supervision systems in key areas, distinguish different scenarios, and explore the necessity and prerequisites for the application of artificial intelligence ethical risk assessment, algorithm audits, data set defect detection, algorithm certification and other measures, so as to prepare theoretical and institutional suggestions for the next step of legislation.

International cooperation

At present, human society is entering the intelligent era, and the rules and order of the AI field worldwide are still taking shape. The EU has conducted many studies centered on AI values, hoping through legislation and other means to turn Europe's human-rights tradition into a new advantage in AI development. The United States also attaches great importance to AI standards: in February 2019, President Trump signed the executive order launching the "American AI Initiative", directing government agencies such as the White House Office of Science and Technology Policy (OSTP) and the National Institute of Standards and Technology (NIST) to develop standards guiding the development of reliable, robust, trustworthy, secure, portable, and interoperable AI systems, and calling for U.S. leadership in setting international AI standards.

China is at the world's forefront in AI technology. It needs to respond more proactively to the challenges posed by AI ethical issues and assume corresponding ethical responsibilities in AI development; actively engage in international exchanges, participate in formulating relevant international management policies and standards, and secure a voice in scientific and technological development; occupy the commanding heights among the most representative and breakthrough scientific and technological forces; and contribute positively to realizing the global governance of AI.
