AI Ethics

The Path and Lessons of the European Union's Ethics and Governance of Artificial Intelligence



Author | Cao Jianfeng, Senior Researcher, Tencent Research Institute

Fang Lingman, Assistant Researcher, Law Research Center, Tencent Research Institute


The EU actively promotes people-oriented artificial intelligence ethics and governance

The development and spread of digital technologies have accelerated the transformation of society and the economy, and artificial intelligence (AI), as the core driving force of that transformation, has created new possibilities for social development. Generally speaking, AI refers to algorithms or machines that learn, decide and act autonomously on the basis of data inputs; its development rests on growing computer processing power, improved algorithms and the exponential growth of data. From machine translation to image recognition to the synthesis and creation of works of art, applications of AI have entered our daily lives. Today, AI is widely used across industries (such as education, finance, construction and transportation) and services (such as autonomous driving and AI-assisted medical diagnosis), and it is profoundly changing human society. At the same time, the development of AI raises legal, ethical and social challenges, bringing problems such as fake news, algorithmic bias, privacy violations, data protection and network security. Against this background, AI ethics and governance are increasingly valued, and governments, industry and academia have set off a global wave of work on AI ethical standards. Since 2015, the EU has actively explored measures for AI ethics and governance. Although it has not led the development of AI technology itself, it stands at the forefront of the world in AI governance.

As early as January 2015, the European Parliament's Committee on Legal Affairs (JURI) decided to set up a working group on legal questions related to the development of robotics and artificial intelligence. In May 2016, JURI issued the Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, calling on the European Commission to assess the impact of artificial intelligence; the report, formally adopted in January 2017, made extensive recommendations on civil legislation for robotics and proposed a "Charter on Robotics". [1] In May 2017, the European Economic and Social Committee (EESC) issued an opinion on AI, pointing out the opportunities and challenges AI brings in eleven fields including ethics, security and privacy, and advocating the formulation of AI ethical norms and the establishment of a standard system for AI monitoring and certification. [2] In October of the same year, the European Council stated that the EU should act with a sense of urgency on new trends in artificial intelligence, ensure a high level of data protection, digital rights and related ethical standards, and invited the European Commission to put forward a European approach to artificial intelligence in early 2018. [3] To address the ethical problems raised by the development and application of artificial intelligence, the EU has made AI ethics and governance a key focus of future legislative work.

On April 25, 2018, the European Commission issued the policy document "Artificial Intelligence for Europe", unveiling the EU's artificial intelligence strategy. The strategy proposes a people-oriented path for AI development, aiming to raise the EU's scientific research level and industrial capability, respond to the technical, ethical and legal challenges brought by artificial intelligence and robotics, and let artificial intelligence better serve European society and the economy. The strategy rests on three pillars: first, improve technological and industrial capability and promote the spread of AI across all sectors of the economy; second, actively respond to social and economic change, so that education and training systems keep pace with the times, closely monitor changes in the labor market, support workers in transition, and cultivate diverse, interdisciplinary talent; third, establish an appropriate ethical and legal framework, clarify how product rules apply, and draft ethical guidelines for artificial intelligence. [4] In June of the same year, the European Commission appointed 52 representatives from academia, industry and civil society to form the High-Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of the European artificial intelligence strategy.

In January 2019, the Committee on Industry, Research and Energy of the European Parliament issued a report calling on the Parliament to formulate a comprehensive EU industrial policy on artificial intelligence and robotics, covering cybersecurity, legal frameworks, and the ethics and governance of AI and robotics. [5] In April 2019, the EU published two important documents: the Ethics Guidelines for Trustworthy AI (the "Ethics Guidelines") [6] and A Governance Framework for Algorithmic Accountability and Transparency (the "Governance Framework") [7]. They put into practice the requirement to "establish an appropriate ethical and legal framework" set out in the EU's artificial intelligence strategy, provide a reference for the formulation of future rules, and represent the EU's latest efforts to advance AI governance.



Building an ethical framework for artificial intelligence: the Ethics Guidelines for Trustworthy AI

To balance technological innovation with the protection of human rights, an ethical framework for artificial intelligence is essential. Such a framework provides principled guidance and basic requirements for the design, research and development, production and use of artificial intelligence, ensuring that AI operates in accordance with legal, safety and ethical standards. The Ethics Guidelines were drafted and published by the AI HLEG and are not legally binding; the EU encourages all stakeholders to implement them voluntarily and to promote an international consensus on AI ethical standards. Beyond formulating pan-EU ethical norms, the EU hopes to see the ethical governance of AI secured at multiple levels. For example, member states can establish monitoring and supervision bodies for AI ethics, and enterprises can be encouraged to set up ethics committees and formulate ethical guidelines that guide and constrain their AI developers and R&D activities. In other words, the ethical governance of AI cannot stop at abstract principles; it must be woven into the practice of different actors at different levels to become a living mechanism.

According to the Ethics Guidelines, trustworthy AI must have at least three characteristics: (1) lawful, respecting people's fundamental rights and complying with existing law; (2) ethical, adhering to ethical principles and values and serving an "ethical purpose"; (3) robust, meaning that from both a technical and a social perspective the AI system should be stable and reliable, because even an AI system with an ethical purpose can inadvertently cause harm if it lacks reliable technical underpinnings. Specifically, the ethical framework for trustworthy AI comprises the following three levels:

(I) The foundation of trustworthy AI

Among the fundamental rights enshrined in international human rights law, the EU Charter and related treaties, those most relevant to AI development include: human dignity, individual freedom, democracy, justice and the rule of law, equality, non-discrimination and solidarity, and citizens' rights. Many public and private organizations draw on fundamental rights to develop ethical frameworks for AI systems; for example, the European Group on Ethics in Science and New Technologies (EGE) proposed nine basic principles based on the values in the EU Charter and related provisions. Building on the majority of existing principles, the Ethics Guidelines distill four ethical principles suited to the requirements of social development and take them as the foundation of trustworthy AI, to guide its development, deployment and use.

These principles are: (1) respect for human autonomy: humans interacting with AI must retain full and effective self-determination, and AI systems should follow a people-oriented approach, serving humans and augmenting human cognition and skills; (2) prevention of harm: AI systems must not negatively affect humans; the systems and their operating environments must be safe, and AI technology must be robust and protected against malicious use; (3) fairness: the development, deployment and use of AI systems must uphold both substantive and procedural fairness, ensuring an equal distribution of benefits and costs and protecting individuals and groups from discrimination and prejudice; in addition, individuals affected by AI decisions have the right to object and to seek redress from the operators; (4) explicability: the functions and purposes of AI systems must be open and transparent, and AI decision-making processes must be explainable to those directly or indirectly affected by their outcomes.


(II) Implementation of trustworthy AI

Under the guidance of these ethical principles, the Ethics Guidelines set out seven key requirements that the development, deployment and use of AI systems should meet. In the Ethics Guidelines, the four ethical principles serve as top-level values that play the most fundamental guiding role in the research, development and application of trustworthy AI, while the seven key requirements are ethical requirements that can actually be implemented. AI ethics is thus a governance process running from macro-level top values through meso-level ethical requirements down to micro-level technical implementation.

1. Human agency and oversight

First, AI should help humans exercise their fundamental rights. Where technical limitations create a risk that AI will harm fundamental rights, a fundamental rights impact assessment should be completed before the AI system is developed, and an external feedback mechanism should be established to track the system's possible effects on those rights. Second, AI should support individuals in making better, goal-directed decisions, and individual autonomy should not be undermined by automated AI decision-making. Finally, appropriate oversight mechanisms should be established, such as "human-in-the-loop" (human intervention is possible in every decision cycle of the AI system), "human-on-the-loop" (humans intervene in the design cycle and monitor the system's operation) and "human-in-command" (humans oversee the overall activity and impact of the AI and decide whether and how to use it).
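To make the oversight models above more concrete, here is a minimal sketch in Python of a "human-in-the-loop" gate: automated decisions whose confidence falls below a threshold are routed to a human for the final call. Everything here (ai_model, human_review, the 0.90 threshold) is a hypothetical illustration, not a mechanism prescribed by the Ethics Guidelines.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # illustrative cutoff, set per risk assessment


@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"


def ai_model(case: dict) -> tuple[str, float]:
    """Stand-in for a real classifier; returns (label, confidence)."""
    return "approve", 0.72


def human_review(case: dict, suggestion: str) -> str:
    """Stand-in for a human reviewer who sees the model's suggestion."""
    return input(f"Model suggests '{suggestion}' for {case}. Your decision: ")


def decide(case: dict) -> Decision:
    label, confidence = ai_model(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Low confidence: a human makes the final call (human-in-the-loop).
    return Decision(human_review(case, label), confidence, decided_by="human")
```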

2. Technical robustness and safety

On the one hand, AI systems must be accurate, reliable and reproducible; the accuracy of AI decision-making should be improved, evaluation mechanisms refined, and the unexpected risks arising from erroneous predictions reduced promptly. On the other hand, AI systems must be strictly protected against vulnerabilities and malicious attack; security measures should be developed and tested to minimize unintended consequences and errors, and an executable backup plan should be in place in case the system fails.
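The "executable backup plan" mentioned above can be pictured as a simple wrapper: if the primary model raises an error or returns an implausible score, the system falls back to a conservative rule-based default and logs the incident. A minimal sketch, with all names (primary_model, rule_based_fallback) assumed for illustration:

```python
import logging

logging.basicConfig(level=logging.WARNING)


def primary_model(features: list[float]) -> float:
    """Stand-in for a learned model; may raise or misbehave."""
    return sum(features) / len(features)


def rule_based_fallback(features: list[float]) -> float:
    """Conservative default used when the model cannot be trusted."""
    return 0.0


def predict_with_fallback(features: list[float]) -> float:
    try:
        score = primary_model(features)
        if not 0.0 <= score <= 1.0:  # plausibility check on the output
            raise ValueError(f"score {score} outside expected range")
        return score
    except Exception as exc:
        logging.warning("Primary model failed (%s); using fallback.", exc)
        return rule_based_fallback(features)
```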

3. Privacy and data governance

User privacy and data must be strictly protected throughout the life cycle of the AI system, ensuring that collected information is not used unlawfully. While errors, inaccuracies and biases are removed from the data, the integrity of the data must be preserved and the entire course of AI data processing must be recorded. Management of data access protocols should be strengthened, with strict control over the conditions under which data can be accessed and transferred.
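As one way of picturing the requirement to record the entire course of data processing, the sketch below keeps a hash-chained access log, so that later tampering with any entry would break the chain. This is an illustrative assumption about how such recording might be done, not a mechanism mandated by the Ethics Guidelines:

```python
import hashlib
import json
import time


class AccessLog:
    """Append-only, hash-chained record of accesses to personal data."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64

    def record(self, user: str, dataset: str, purpose: str) -> None:
        entry = {
            "ts": time.time(),
            "user": user,
            "dataset": dataset,
            "purpose": purpose,
            "prev": self._prev_hash,  # links this entry to the one before
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)


log = AccessLog()
log.record(user="analyst_7", dataset="patients_2024", purpose="model training")
```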


4. Transparency

The traceability of the datasets, processes and results of AI decision-making should be ensured, so that AI decisions can be understood and tracked by humans. When the outcome of an AI system's decision significantly affects an individual, the system's decision-making process should be explained appropriately and in good time. Users' overall understanding of the AI system should be improved: they should know when they are interacting with an AI system, and they should be truthfully informed of the system's accuracy and limitations.
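Traceability of this kind is often implemented by storing, alongside each automated decision, the model and data versions and a snapshot of the inputs, so that the decision can later be explained and audited. A minimal sketch with hypothetical field names and identifiers:

```python
from dataclasses import asdict, dataclass, field
import json
import time


@dataclass
class DecisionTrace:
    """Audit record linking a decision to the artifacts that produced it."""
    model_version: str
    dataset_version: str
    inputs: dict
    output: str
    timestamp: float = field(default_factory=time.time)


trace = DecisionTrace(
    model_version="credit-scorer-1.4.2",      # hypothetical identifiers
    dataset_version="training-data-2024-10",
    inputs={"income": 42000, "tenure_months": 18},
    output="declined",
)
print(json.dumps(asdict(trace), indent=2))  # audit-ready, human-readable
```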

5. Diversity, non-discrimination and fairness

To avoid prejudice and discrimination against vulnerable and marginalized groups, AI systems should be user-centered and allow anyone to use AI products or receive services. They should follow universal design principles and relevant accessibility standards to meet the widest possible range of user needs. At the same time, diversity should be promoted by allowing relevant stakeholders to participate throughout the AI life cycle.

6. Societal and environmental wellbeing

AI systems are encouraged to take on responsibility for promoting sustainable development and protecting the ecological environment, and to be used to study and address problems of global concern. Ideally, AI systems should benefit present and future generations. The development, deployment and use of AI systems should therefore fully consider their impact on the environment, society and even democratic politics.

7. Accountability

First, an accountability mechanism should be established that assigns responsibility across the entire process of AI system development, deployment and use. Second, an audit mechanism for AI systems should be established to enable evaluation of algorithms, data and design processes. Third, the potential negative impacts of AI systems on individuals should be identified, documented and minimized, with appropriate remedies applied promptly when an AI system produces unfair results.

It is worth noting that there may be inherent tensions among these principles and requirements, since they involve different interests and values. Decision makers therefore need to make trade-offs based on actual circumstances, and to keep a continuous record of the choices made, evaluate them and communicate about them. In addition, the Ethics Guidelines propose technical and non-technical means of ensuring that the development, deployment and use of AI meet the above requirements, such as researching and developing explainable AI (XAI) technologies, training monitoring models, building a legal framework for AI oversight, establishing and improving relevant industry, technical and certification standards, and raising public ethical awareness through education.
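One widely used XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, taking a larger drop to mean a more influential feature. Below is a self-contained, pure-Python sketch of the idea; real systems would typically use a library implementation such as scikit-learn's permutation_importance, and the toy model and data here are assumptions for illustration:

```python
import random

random.seed(0)  # reproducible shuffles for this demonstration


def accuracy(model, X: list[list[float]], y: list[int]) -> float:
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)


def permutation_importance(model, X, y, n_features: int) -> list[float]:
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        column = [row[j] for row in X]
        random.shuffle(column)  # break the feature's link to the labels
        X_perm = [row[:j] + [column[i]] + row[j + 1:]
                  for i, row in enumerate(X)]
        importances.append(base - accuracy(model, X_perm, y))
    return importances  # larger accuracy drop = more important feature


model = lambda row: int(row[0] > 0.5)  # toy model that uses only feature 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, n_features=2))
```

Because the toy model ignores feature 1, shuffling it leaves accuracy unchanged, while shuffling feature 0 degrades it; the importance scores make that dependence visible to a human reviewer.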


(III) Assessment of trustworthy AI

Based on the seven key requirements above, the Ethics Guidelines also provide a Trustworthy AI assessment list, mainly applicable to AI systems that interact with humans. It aims to guide the concrete implementation of the seven key requirements and to help different functions within a company or organization, such as management, the legal department, R&D, quality control, HR, procurement and day-to-day operations, jointly ensure that trustworthy AI is achieved. The Ethics Guidelines note that the assessment list is not always exhaustive: building trustworthy AI requires continuously refining the requirements and finding new solutions to problems, and stakeholders should participate actively to ensure that AI systems operate safely, robustly, lawfully and ethically throughout their life cycle and ultimately benefit humankind.
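Inside an organization, such an assessment list might be encoded as a simple data structure that each department answers against. The check questions below are paraphrased illustrations of the seven requirements, not the official items from the Ethics Guidelines:

```python
# One illustrative check question per requirement; real lists are longer.
ASSESSMENT_CHECKLIST: dict[str, list[str]] = {
    "human agency and oversight": [
        "Can a human intervene in or override the system's decisions?",
    ],
    "technical robustness and safety": [
        "Is there a tested fallback plan for system failure?",
    ],
    "privacy and data governance": [
        "Is every access to personal data logged and access-controlled?",
    ],
    "transparency": [
        "Can each decision be traced to model and data versions?",
    ],
    "diversity, non-discrimination and fairness": [
        "Has the system been tested for bias against protected groups?",
    ],
    "societal and environmental wellbeing": [
        "Has the system's environmental footprint been assessed?",
    ],
    "accountability": [
        "Is there a named owner for audits and redress?",
    ],
}


def unanswered(answers: dict[str, bool]) -> list[str]:
    """Return the check questions not yet confirmed by any department."""
    return [q for questions in ASSESSMENT_CHECKLIST.values()
            for q in questions if not answers.get(q, False)]
```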

It can be seen that, in the EU's view, AI ethics is a systematic project requiring the coupling of ethical norms with technical solutions. While the construction of AI ethics in most other countries and in the international community may still be at the stage of extracting abstract values and building consensus, the EU has gone a step further and begun to explore a top-down framework for AI ethics governance.



Policy recommendations for the governance of artificial intelligence algorithms: a governance framework for algorithmic accountability and transparency

The Governance Framework is a systematic study of the governance of algorithmic transparency and accountability published by the European Parliament's Panel for the Future of Science and Technology (STOA). Drawing on a series of real cases, the report clarifies the causes of algorithmic unfairness and its possible consequences, as well as the obstacles to achieving algorithmic fairness in specific contexts. On this basis, the report proposes algorithmic transparency and accountability as tools for addressing algorithmic fairness, treats the realization of algorithmic fairness as the purpose of algorithmic governance, and emphasizes the role and significance of "responsible research and innovation" (RRI) in promoting it. The core of RRI is to achieve inclusive, responsible innovation with the participation of stakeholders.

The report offers systematic policy recommendations for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. It starts from a high-level perspective on technology governance, discusses the available governance options in detail, and then reviews the specific suggestions for governing algorithmic systems in the existing literature. Building on this extensive review and analysis, the report puts forward four policy recommendations at different levels.

(I) Improve the public's algorithmic literacy

The prerequisite for algorithmic accountability is algorithmic transparency, but transparency does not mean that the public must understand every technical feature of an algorithm. The report points out that a broad technical understanding of how algorithms function does little to advance accountability; what works better is the disclosure of concise, standardized information that may affect public decision-making or enhance public understanding of the algorithmic system as a whole. Investigative journalism and whistleblowing also play an important role in exposing the improper use of algorithms and achieving transparency and accountability. For example, The New York Times reported that Uber had used an algorithmic tool (known as "Greyball") to identify and evade city regulators, a story revealed by former Uber employees; the report immediately attracted wide attention from the media and society, and regulators took investigative action against the company. Beyond their supervisory role, news reports improve the public's understanding of algorithms in plain language, and news investigations can stimulate broad social dialogue and debate and trigger new academic research. For example, the nonprofit ProPublica's "Machine Bias" report on a crime risk assessment algorithm used by some U.S. courts triggered a series of studies on algorithmic fairness.

On this basis, the Governance Framework makes several policy suggestions for improving public algorithmic literacy: (1) educate the public on the core concepts of algorithmic selection and decision-making; (2) mandate standardized disclosure about algorithms; (3) support "algorithmic accountability journalism"; (4) exempt those who disclose algorithmic wrongdoing in the public interest from liability for breach of terms of service or infringement of intellectual property rights.


(II) The public sector establishes algorithmic accountability mechanisms

Today, more and more public-sector bodies are using algorithmic systems to improve efficiency, support complex processes and assist policy-making. If an algorithm is flawed, it may have an immeasurable impact on society's vulnerable groups, so the public sector needs a complete mechanism for algorithmic transparency and accountability. One governance mechanism worth considering is to draw on the Data Protection Impact Assessment (DPIA) under data protection law and establish an algorithmic impact assessment (AIA) mechanism. An AIA lets policy makers understand the scenarios in which algorithmic systems are used, evaluate their expected uses and make relevant recommendations, helping to establish algorithmic accountability. According to the Governance Framework, the AIA process mainly includes: publishing the public sector's definition of "algorithmic system"; publicly disclosing the purpose, scope, expected uses and relevant policies or practices of the algorithm; carrying out and publishing a self-assessment of the algorithmic system; enabling public participation; publishing the results of the algorithm evaluation; and updating the AIA regularly.
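The AIA stages listed above could, for instance, be tracked as an ordered workflow. The sketch below paraphrases the Governance Framework's stages in code; the class and the system name are hypothetical illustrations, not part of the report:

```python
from enum import Enum


class AIAStage(Enum):
    DEFINE_SCOPE = "publish the definition of 'algorithmic system' in scope"
    DISCLOSE = "disclose purpose, scope, expected uses and policies"
    SELF_ASSESS = "carry out and publish a self-assessment"
    CONSULT_PUBLIC = "enable public participation"
    PUBLISH_RESULTS = "publish the results of the algorithm evaluation"
    UPDATE = "review and update the AIA regularly"


class AlgorithmicImpactAssessment:
    def __init__(self, system_name: str) -> None:
        self.system_name = system_name
        self.completed: set[AIAStage] = set()

    def complete(self, stage: AIAStage) -> None:
        self.completed.add(stage)

    def remaining(self) -> list[AIAStage]:
        """Stages still open, in the order the framework lists them."""
        return [s for s in AIAStage if s not in self.completed]


aia = AlgorithmicImpactAssessment("benefits-eligibility-scorer")  # hypothetical
aia.complete(AIAStage.DEFINE_SCOPE)
print([stage.name for stage in aia.remaining()])
```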
