Ethical Issues And Governance Principles Of Artificial Intelligence
In recent years, artificial intelligence has been applied in an ever-growing range of fields and scenarios, including autonomous vehicles, medical care, media, finance, industrial robots, and Internet services, and its influence continues to widen. Industrial giants in many countries have invested substantial effort and funding in research and product development around key artificial intelligence technologies and have launched a variety of artificial intelligence platforms and products. These developments have improved efficiency and reduced costs, but they have also raised new ethical issues for society.
Discussion of the ethical issues of artificial intelligence includes both metaphysical inquiry and efforts to solve practical problems and build social consensus. How to reach ethical consensus under the new technological conditions created by artificial intelligence has far-reaching economic, social, and political significance. At present, governments, industry organizations, social groups, and commercial companies working in artificial intelligence have all put forward ethical standards to regulate artificial intelligence technology and its applications. The Chinese government regards artificial intelligence as a main driving force for industrial upgrading and economic transformation, and encourages, supports, and promotes its development. At this critical stage in China's promotion of artificial intelligence, discussing its ethical issues is of great significance. Against this background, this article focuses on the short-term and long-term ethical issues arising from artificial intelligence and the corresponding governance measures, so that the technology can truly benefit humanity.
Artificial Intelligence Ethical Issues and Sources
In works of art, many people are familiar with the monster from Frankenstein, a creature brought to life by lightning and machinery. People often fear such powerful but amoral forces. Will artificial intelligence become a Frankenstein's monster? Will we create a technology that ultimately destroys us? A group of technology leaders, including Elon Musk, have publicly raised this question, which quickly attracted public attention. With artificial intelligence, the autonomy of machines has exceeded what people previously expected, which naturally requires the construction of a new system of responsibility.
The most famous prototype of artificial intelligence ethics in public discussion is the Three Laws of Robotics proposed by science fiction author Isaac Asimov. Today we know that Asimov's three laws cannot constrain artificial intelligence once and for all; their real value is to raise a possibility: that the technology we create, an "autonomous" decision-making agent faster than us at certain problems and stronger than us in mechanical and physical power, might not only avoid harming humans but actively benefit human society. The core issue the three laws address is human subjectivity, which is also the core issue in exploring the ethics and governance of artificial intelligence. Whether the topic is algorithmic decision-making, data and privacy, or social impact, it always involves human subjectivity.
Considering the existing capabilities and technical potential of artificial intelligence and the negative consequences it may bring to human society, two major problems arise: (1) artificial intelligence is entrusted with decision-making power over human affairs, yet its ability to judge the ethical significance of its decisions is insufficient; (2) humans lack ultimate ethical norms to guide the role artificial intelligence plays.
The first type of problem stems from our concern that artificial intelligence systems lack judgment about the ethical significance of their decision outcomes. Artificial intelligence is typically used to solve a specific problem and can only make decisions based on limited existing data; it usually cannot understand the broader social and ethical context as humans do. It is therefore entirely understandable that we fear its lack of awareness of the ethical significance of its decisions' consequences. When the consequences of an AI decision involve trading one outcome against another, unpredictable results often follow. For example, a human might instruct an artificial intelligence system to obtain food, and the system might kill the human's pet: because the system cannot fully understand the ethical meaning of an outcome, it executes the instruction incorrectly. Our concern about artificial intelligence's lack of ethical judgment becomes even more serious when the technology itself lacks transparency (the "black box" problem). Because of limitations in algorithms (such as machine learning) and computing power, it is often impossible to trace the specific mechanism behind a machine's decision. This inability to backtrack limits our capacity to predict consequences in advance and make corrections afterwards, leaving us hesitant about whether to apply artificial intelligence technology at all.
The second type of problem stems from our concern about the potential of artificial intelligence. Artificial intelligence may become a participant in and influencer of all human decisions, yet no known ethical norms can guide such behavior. It unsettles us that this "god" created by humans may be unable to care for the world. We worry that as artificial intelligence develops, existing social problems will worsen and new ones may arise.
Starting from the above premise, the author proposes two basic directions for thinking about artificial intelligence ethics (ethics embedded in machines), from the perspectives of purpose and means: technology must promote human good (embodied in the principle of the fundamental interests of human beings), and the autonomy of the machines we develop must confirm human subjectivity (embodied in the principle of responsibility). In other words, recognizing the characteristics of the new technology and its potential social impact, artificial intelligence ethics should emphasize: (1) people can use artificial intelligence to gain greater capability to do good or harm, and should therefore bear greater responsibility; (2) artificial intelligence must obey ethical rules set by humans. This is also the basis for the two basic principles that the design and application of artificial intelligence should follow, as stated in the "White Paper on Standardization of Artificial Intelligence (2018)". Artificial intelligence that violates the principle of human fundamental interests, whether a marketing algorithm used to defraud customers, a judicial decision-making system that discriminates against some citizens, or the excessive collection and abuse of personal information, violates the ethical principles of artificial intelligence.
According to the specific nature and characteristics of the ethical risks of artificial intelligence, these risks can be organized along three dimensions: algorithms, data, and applications. Governing these risks requires legislation and policy that clarify the responsibilities of each relevant entity, including information providers, information processors, and system coordinators. In addition, artificial intelligence may pose long-term risks to society, such as challenges to existing legal systems governing employment, market competition, and property rights, and even fundamental changes in modes of production; we classify these as long-term and indirect ethical risks.
Algorithm
Algorithmic risks mainly include algorithm security, algorithm interpretability, algorithmic discrimination, and the dilemma of algorithmic decision-making. Algorithm security issues arise from the risk that algorithm vulnerabilities may be hacked and maliciously exploited. At the same time, algorithms face trustworthiness problems throughout design, training, and use, which challenge their reliability.
The interpretability of algorithms concerns human beings' right to know and their status as subjects, and is of great significance for the long-term development of artificial intelligence. After the State Council promulgated the "Plan for the Development of New Generation Artificial Intelligence", Academician Pan Yunhe noted that one issue demanding attention in the application of artificial intelligence is the inexplainability of algorithms. The problem of algorithm interpretability has also attracted the attention of the media and the public abroad. For example, the IEEE's "Ethically Aligned Design" documents released in 2016 and 2017 put forward requirements for the explainability of artificial intelligence and automated systems in multiple sections. The US Public Policy Council of the Association for Computing Machinery issued its "Statement on Algorithmic Transparency and Accountability" in early 2017, proposing seven basic principles, one of which is "explanation", encouraging systems and institutions that use algorithmic decision-making to provide explanations of both their algorithmic processes and specific decisions. In 2017, the University of California, Berkeley released "A Berkeley View of Systems Challenges for AI", which summarized nine challenges and research directions based on the development trends of artificial intelligence. The third of these is to develop explainable decisions, so that people can identify which characteristics of an AI algorithm's input caused a particular output.
The problem of algorithmic discrimination often occurs alongside the interpretability problem: in a seemingly neutral algorithm, because the algorithm designer holds certain biases, or because training used a problematic data set, the decisions of the artificial intelligence system produce discriminatory results. The media has reported such examples from time to time, such as "reducing credit scores for vulnerable groups" in the financial sector, "refusing loans to people of color", and "advertising systems more inclined to show high-interest loan information to low-income groups".
Algorithmic discrimination falls mainly into three categories: artificial discrimination, data-driven discrimination, and discrimination caused by machine self-learning. Artificial discrimination refers to discrimination or prejudice introduced into the decision-making process by human causes. Data-driven discrimination refers to bias in the original training data, which leads the algorithm to carry that discrimination into the decision-making process: the algorithm does not question the data it receives, but simply searches for and mines the patterns or structures implicit in the data, so if the data carries selection bias or preference from the start, the algorithm's output resembles human bias. Discrimination caused by machine self-learning refers to the machine learning different multi-dimensional features of the data on its own during training: even if certain features are not deliberately labeled in the data set, or programmers deliberately avoid inputting some sensitive data, the machine will still learn other characteristics of the input data in the process of self-learning, thereby introducing biases into the decision-making process.
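The data-driven case can be illustrated with a minimal, hypothetical sketch: a scorer that merely learns historical approval rates per group, with no discriminatory rule written anywhere in its code, will still reproduce whatever bias the historical decisions contain.

```python
from collections import defaultdict

def train_approval_rates(history):
    """Learn per-group approval rates from historical decisions.

    The model contains no explicit discriminatory rule; it only
    mirrors whatever pattern the training data already carries.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in history:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: a / t for g, (a, t) in counts.items()}

# Hypothetical biased history: group B was approved far less often.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)
rates = train_approval_rates(history)  # bias in data -> bias in model
```

Here the learned rates (0.8 for A, 0.3 for B) simply restate the selection bias of the input data, which is exactly the mechanism the paragraph above describes.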
The dilemma of algorithmic decision-making stems from the unpredictability of algorithmic results caused by artificial intelligence's capacity for self-learning. To reduce or eliminate this dilemma, in addition to improving the interpretability of algorithms, corresponding algorithm termination mechanisms can be introduced.
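One simple form such a termination mechanism could take (a sketch under assumed names, not a prescribed design) is a wrapper that halts the automated path and escalates to a human whenever the system's confidence in its own decision falls below a threshold:

```python
def decide_with_termination(model_confidence, decision, threshold=0.9):
    """Return the automated decision only when confidence is high;
    otherwise terminate the automated path and escalate to a human.

    `threshold` is a hypothetical policy parameter set by the operator.
    """
    if model_confidence < threshold:
        return ("escalate_to_human", None)
    return ("automated", decision)

# High confidence: the system acts; low confidence: a person decides.
route_high = decide_with_termination(0.95, "approve")
route_low = decide_with_termination(0.50, "approve")
```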
Data
Data risks mainly include the risk of privacy violations and the risk of identifying and protecting personal sensitive information. In modern society, privacy protection is the foundation of trust and personal freedom, and the basic way to preserve civilization and dignity in the era of artificial intelligence. In this era, the risk of privacy invasion is greater and the number of potential victims larger.
Traditional legal norms protecting privacy focus on protecting individuals' activities in private spheres and private spaces, as well as personal private, non-public information. Within personal information, laws and regulations distinguish ordinary personal information from sensitive personal information, and usually grant the latter higher protection. For example, processing sensitive personal information requires the express consent of the personal-information subject, or the needs of major legitimate interests or the public interest; automated processing of sensitive personal information is strictly limited; and such information must be stored encrypted or placed under stricter access control and other security measures. When sensitive personal information spreads beyond the scope of authorized consent, when the spread of personal information exceeds the control of the organization that collected it, or when users exceed their authorization (for example, by changing the purpose of processing or expanding its scope), significant risks to the rights and interests of information subjects may follow.
The application of artificial intelligence technology has greatly expanded the scenarios, scope, and quantity of personal-information collection. Technologies such as image recognition, speech recognition, and semantic understanding enable the acquisition of massive amounts of unstructured data, and the combination of artificial intelligence with Internet of Things devices enriches the scenarios of offline data collection. For example, smart home devices such as home robots, smart refrigerators, and smart speakers enter people's living rooms and bedrooms and collect information about living habits, consumption preferences, voice interactions, and video in real time. Smart assistants, while providing users with more convenient services, also gain comprehensive access to and analysis of users' browsing, search, location, itinerary, email, voice interaction, and other information. A surveillance camera with facial recognition can identify individuals in public places without their knowledge and track them continuously. All of these require further regulation by law.
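One concrete safeguard of the kind the stricter rules above envisage is pseudonymizing sensitive fields before storage. The sketch below (assumed field names, illustrative only) replaces identifiers with keyed hashes so raw identities never leave the collection boundary, while non-sensitive fields remain usable:

```python
import hashlib
import hmac
import os

def pseudonymize(record, sensitive_fields, key):
    """Replace sensitive fields with keyed SHA-256 hashes before
    storage, so raw identifiers are never persisted downstream."""
    out = dict(record)
    for field in sensitive_fields:
        if field in out:
            digest = hmac.new(key, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()
    return out

key = os.urandom(32)  # in practice, held by a key-management service
raw = {"name": "Alice", "face_id": "F-1024", "visit_count": 3}
safe = pseudonymize(raw, ["name", "face_id"], key)
```

Using a keyed hash (HMAC) rather than a plain hash means the mapping cannot be reversed by anyone who lacks the key, which matches the encrypted-storage and access-control requirements described above.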
Social aspects
Ethical issues related to society mainly include the abuse and misuse of algorithms. Algorithm abuse and misuse refer to situations in which people use algorithms for analysis, decision-making, coordination, organization, and similar activities, but their purpose, methods, or scope of use deviate, causing adverse effects or consequences. For example, facial recognition algorithms can improve public security and speed up the identification of criminal suspects; but applying facial recognition to "discover" potential criminals, or to judge whether someone has criminal potential based on face shape, is a typical algorithm abuse. Because artificial intelligence systems are automated, algorithm abuse amplifies the errors an algorithm produces and continuously reinforces them until they become a feature of the system.
Algorithm abuse arises mainly from algorithm designers' manipulation for economic or other motives, from platforms' and users' excessive reliance on algorithms, and from blindly extending algorithms to areas not considered in their design. When an e-commerce platform's algorithm designers recommend products contrary to users' interests, or an entertainment platform induces users' entertainment or information consumption for its own commercial interests, causing addiction, these are manifestations of designer manipulation. In the medical field, excessive reliance on artificial intelligence platforms for reading diagnostic images can lead to misdiagnosis; misjudgments in the security and criminal-justice fields bear directly on citizens' personal safety and freedom.
It should be noted that society-related ethical issues have the following characteristics. First, they are closely tied to personal interests, as when algorithms are applied to crime assessment, credit loans, and employment evaluations, with wide effects on individuals. Second, the problems they cause are usually difficult to remedy quickly: deep learning, for example, is a typical "black box" algorithm, and if a model built on it discriminates, the cause is hard to identify. Third, when such problems arise in commercial applications, the public's rights and interests are easily infringed because of capital's profit-seeking nature.
Principles and Practice of Artificial Intelligence Governance
The characteristics of artificial intelligence technology and its ethical challenges pose problems for social governance. Traditionally, governance presupposes a subject (agent) who can follow rules, namely the person. Today we recognize that artificial intelligence is characterized by a high degree of autonomy: its decisions no longer require step-by-step instructions from an operator, and such decisions may produce unexpected results. Designers and users of artificial intelligence technology must therefore implement ethical principles throughout the research, development, and application of the technology in order to govern it effectively.
In traditional fields of technology, a common way to prevent damage is to intervene after an injury occurs. With artificial intelligence systems, however, intervening only after damage is caused is often too late. A better approach is to make immediate and continuous ethical risk assessment and compliance an integral part of a system's operation: to assess continuously whether an AI system poses ethical risks, and to handle them through the compliance system before damage occurs or while it is still small. Immediate and continuous risk assessment is far more effective at keeping artificial intelligence systems safe than pressing an "emergency button".
Therefore, when discussing the ideas and logic that artificial intelligence governance should follow, we must be wary of the limitations of industry self-discipline and the lag of legislation. Asimov and other thinkers on the ethics of science and technology realized that ethics must be made explicit at the technical level if governance is to be effective. Building ethical standards for artificial intelligence is an indispensable aspect of governance. In addition, formulating laws, improving policies, and establishing regulatory agencies suited to the characteristics of law and policy are also necessary means of governance.
Explorations in artificial intelligence governance at home and abroad deserve attention and reference. The EU, for example, has pursued cutting-edge exploration of governance systems grounded in artificial intelligence ethics through its regulation of robots. Strategic documents issued by the United States in 2016 propose understanding and addressing the ethical, legal, and social impacts of artificial intelligence. The British government has likewise proposed dealing with these impacts in several artificial intelligence reports, the most typical being the 180-page report "AI in the UK: Ready, Willing and Able?" issued by the UK Parliament in April 2018.
The United Nations released a report on robot ethics in September 2017, proposing the formulation of ethical norms at the national and international levels. The Institute of Electrical and Electronics Engineers (IEEE) launched its Global Initiative on Ethics of Autonomous and Intelligent Systems in 2016 and began organizing ethical guidelines for the design of artificial intelligence. Under the auspices of the Future of Life Institute (FLI), nearly 4,000 experts from many fields signed and endorsed 23 basic principles of artificial intelligence.
China has also explored and practiced in this area. The "New Generation Artificial Intelligence Development Plan" released in 2017 set out China's artificial intelligence strategy and proposed the formulation of laws, regulations, and ethical norms to promote the development of artificial intelligence as an important safeguard. On January 18, 2018, the "Artificial Intelligence Standardization White Paper (2018)" was released at the founding conference of the National Artificial Intelligence Standardization General Group and Expert Advisory Group. The white paper discusses the security, ethics, and privacy issues of artificial intelligence and holds that setting ethical requirements for artificial intelligence technology should rely on society's and the public's deep reflection on, and broad consensus about, artificial intelligence ethics, and should follow certain consensus principles.
The development and application of artificial intelligence technology is profoundly changing human life, inevitably affecting the existing ethical and social order and causing a series of problems. Some of these appear as immediate short-term risks, such as security risks from algorithm vulnerabilities or discriminatory outcomes from algorithmic bias; others are more indirect and long-term, such as impacts on property rights, competition, employment, and even social structure. Although short-term risks are more concrete and tangible, the social impacts of long-term risks are broader and more far-reaching, and should be taken just as seriously.
There is an inevitable tension between the rapid development of artificial intelligence technology and the relative stability of governance systems, which requires us to clarify the basic principles for dealing with artificial intelligence. Internationally, the most closely watched statements of basic principles for artificial intelligence ethics are the Asilomar AI Principles, proposed at the conference held in Asilomar in January 2017, and the work on ethical standards for artificial intelligence organized by the Institute of Electrical and Electronics Engineers (IEEE). Earlier, several countries also proposed robot principles and ethical standards. The author believes that the research and application of artificial intelligence in China should follow two basic principles of artificial intelligence ethics: the principle of the fundamental interests of human beings and the principle of responsibility.
The principle of human fundamental interests
The principle of human fundamental interests holds that artificial intelligence should take the realization of the fundamental interests of human beings as its ultimate goal. This principle embodies respect for human rights, maximizing benefit to humans and the natural environment, and reducing technological risks and negative social impacts. It requires:
(1) In terms of social impact, the research, development, and application of artificial intelligence should aim to promote human good, which includes the peaceful use of artificial intelligence and related technologies and the avoidance of an arms race in lethal artificial intelligence weapons.
(2) In terms of algorithms, the research, development, and application of artificial intelligence should accord with human dignity and protect basic human rights and freedoms; ensure the transparency of algorithmic decision-making and ensure that algorithm design avoids discrimination; and promote a fair worldwide distribution of the benefits of artificial intelligence, narrowing the digital divide.
(3) In terms of data use, the research, development, and application of artificial intelligence should attend to privacy protection, strengthen individuals' control over their personal data, and prevent data abuse.
The principle of responsibility
The principle of responsibility requires establishing a clear system of responsibility in both the development and the application of artificial intelligence technologies, so that when the results of an artificial intelligence application conflict with human ethics or law, people can hold the technology's developers or design departments accountable, and a reasonable responsibility system can be established at the level of application. Under the principle of responsibility, the development of artificial intelligence technology should follow the principle of transparency, and its application should follow the principle of consistency between rights and responsibilities.
Transparency principle
The principle of transparency requires that, in the design of artificial intelligence, humans ensure they understand how an autonomous decision-making system works and can thus predict its outputs; that is, humans should know how and why an artificial intelligence makes a specific decision. Implementing the transparency principle depends on the interpretability, verifiability, and predictability of artificial intelligence algorithms.
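What an interpretable decision can look like in the simplest case is sketched below: for a linear scoring model (hypothetical weights and feature names), each feature's contribution to the final score can be reported alongside the score itself, so a reviewer can see how and why the decision came out as it did.

```python
def explain_linear_decision(weights, features, bias=0.0):
    """Score with a linear model and report each feature's
    contribution, making the decision inspectable by a reviewer."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-style example: income helps, debt hurts.
weights = {"income": 0.5, "debt": -0.8}
applicant = {"income": 4.0, "debt": 2.0}
score, why = explain_linear_decision(weights, applicant)
# `why` shows income contributed +2.0 and debt contributed -1.6.
```

Real systems built on deep learning do not decompose this cleanly, which is precisely why the interpretability of complex models remains a research challenge rather than a solved problem.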
The principle of consistency of rights and responsibilities
The principle of consistency between rights and responsibilities means that accountability must be ensured in the design and application of artificial intelligence. This includes retaining accurate records of the relevant algorithms, data, and decisions during design and use, so that when harmful results occur they can be reviewed and responsibility assigned. Realizing this principle requires establishing a public review system for artificial intelligence algorithms. Public review increases the likelihood that algorithms adopted by governments, research institutions, and commercial organizations will be corrected. A reasonable public review system can ensure, on the one hand, that necessary commercial data is properly recorded, the corresponding algorithms supervised, and commercial applications reasonably reviewed; on the other hand, commercial entities can still use intellectual property rights or trade secrets to protect their legitimate interests.
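The record-keeping this principle calls for can be made concrete with a minimal sketch (hypothetical field names): an append-only audit log that records, for every algorithmic decision, the inputs, the model version, and the output, so that a harmful result can later be traced to the exact decision that produced it.

```python
import json
import time

class DecisionAuditLog:
    """Append-only record of algorithmic decisions, capturing the
    inputs, model version, and output needed for later review."""

    def __init__(self):
        self._entries = []

    def record(self, model_version, inputs, output):
        self._entries.append({
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        })

    def export(self):
        # Serialized form suitable for handing to a reviewer.
        return json.dumps(self._entries)

log = DecisionAuditLog()
log.record("credit-v1.2", {"income": 4.0, "debt": 2.0}, "approve")
```

Exporting the log as structured data is what allows a reviewer, rather than only the operating company, to reconstruct who decided what, when, and with which model.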
It should be made clear that the ethical principles of artificial intelligence discussed here should not only be followed by the human subjects who research, develop, and apply artificial intelligence systems (including technology companies and the scientific and technical workers in research institutions and industry); these principles should also be embedded in the artificial intelligence systems themselves. Some still question whether machines can follow ethical rules, the typical view being that ethical rules are only for people and cannot be set for artificial intelligence systems (including robots). It is true that ethical principles have traditionally been addressed to the agent, that is, the person who can follow them. But considering that artificial intelligence is characterized by a machine's "simulation, extension, and expansion" of human intelligence, so that its decisions do not require step-by-step instructions from an operator and may produce results humans do not expect, intelligent systems should also be regulated by ethical rules.
Conclusion
Society must trust that the benefits artificial intelligence brings outweigh its harms if it is to support the technology's continued development. That trust requires us to understand and explore the ethical and governance issues of artificial intelligence, and to apply that understanding consciously in the early stages of the technology's development. Today, scholars, scientific and technical workers, and society share a basic consensus: the human subjects responsible for researching, developing, and applying artificial intelligence systems, including technology companies and the scientific and technical workers in research institutions and industry, should obey certain basic ethical principles. The two basic ethical principles proposed in this article summarize and refine domestic thinking on this question. Beyond basic ethical principles, another lesson our predecessors offer is that artificial intelligence ethics should be embedded in the systems themselves. As we increasingly rely on machines to make decisions on our behalf, we should embed ethical thinking in that decision-making process, rather than waiting for the results to harm us before correcting them.
This article hopes to examine the ethics and governance of artificial intelligence from a clearer perspective. Scholars and the public need to discuss together: can we prevent the harm artificial intelligence may cause to individuals and society? Only when this question has been considered and properly answered will the development of artificial intelligence rest on a real foundation.