Technology Ethics | Ethical Anchor Points under the AI Wave: An Interpretation of the "Ethics Guidelines for Trustworthy AI"
In the wave of artificial intelligence reshaping the world, the absence of ethics and governance is becoming a "Sword of Damocles" hanging over humanity. From the discrimination caused by algorithmic bias to the erosion of trust caused by deepfakes, the "double-edged sword" effect of AI has prompted reflection worldwide: how can a balance be struck between technological innovation and human values? The EU's answer is to take the lead in putting "ethical reins" on AI. Since the release of the "Ethics Guidelines for Trustworthy AI" in 2019, the EU has, as a "rule maker", embarked on a journey from ethical principles to legal constraints. This official account will trace the evolution of the EU's trustworthy AI ethics framework and introduce the main content of the guidelines in installments, beginning here with Chapter 1.
Version evolution
In April 2019, the High-Level Expert Group on Artificial Intelligence (AI HLEG) under the European Commission released the initial version of the guidelines, aiming to provide ethical guidance for trustworthy AI in response to the social challenges brought about by rapid technological development. Its core content comprises three basic conditions (AI should be lawful, ethical and robust); four ethical principles (respect for human autonomy, prevention of harm, fairness and explicability); and seven key requirements covering human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and fairness, societal and environmental well-being, and accountability. This version emphasizes "full life cycle" governance, covering all stages of AI development, deployment and use, but it is a voluntary guide: it lacks legal binding force and relies on corporate self-regulation.
Two years later, building on the initial version, the EU began transforming these ethical principles into legal rules, proposing the draft Artificial Intelligence Act (AI Act) in 2021 and bringing the governance requirements for "high-risk AI systems" from the ethical standards into the legislative framework. This stage introduced risk-tiered management, explicitly prohibiting "unacceptable risk" applications and making compliance mandatory for high-risk systems. It also proposed a fundamental rights impact assessment, requiring enterprises to systematically evaluate the fundamental rights that AI systems may affect. Some scholars, however, criticized this approach for reducing ethical principles to a "compliance checklist", which risks "de-ethicization".
In 2024, the Artificial Intelligence Act was formally adopted, marking the legalization of ethical norms. Its core change is to turn the ethical principles of the 2019 guidelines into mandatory legal obligations. The implementation mechanism was also refined: an "algorithmic impact assessment" system was introduced, requiring enterprises to comprehensively assess the social, environmental and democratic impacts of AI systems, and a unified EU-level AI regulatory body was established to coordinate compliance reviews among member states. In addition, a new transparency obligation was added for generative AI (labelling AI-generated content). The Act has its limitations, however: detailed implementing rules (such as specific criteria for determining "fairness") still need to be supplemented.
As for the direction of evolution after 2024, EU-funded projects such as "popAI" proposed in 2024 to incorporate contested social issues into policy design, for example balancing efficiency against privacy rights in police AI applications, and to promote the practice of "embedded ethics". At the same time, the EU is actively pursuing mutual recognition of governance standards with other countries (such as the United States and Canada) to reduce the compliance costs of multinational enterprises. In addition, in response to the rapid iteration of AI technologies (such as quantum machine learning and brain-computer interfaces), the EU plans to update the evaluation framework every two years to keep the guidelines current.
Overall, the development path of the EU's "Ethics Guidelines for Trustworthy AI" runs as follows: establish a core framework through voluntary ethical guidelines in 2019; achieve the legalization of ethical principles through the Artificial Intelligence Act between 2021 and 2024; and, after 2024, focus on dynamic governance and global coordination in response to emerging technology challenges.
Trustworthy AI framework
[Figure: Trustworthy AI framework diagram]
The guidelines are divided into three chapters:
Chapter 1, Foundations of Trustworthy AI: sets out the foundations of trustworthy AI, grounded in fundamental rights. It identifies and describes the ethical principles that must be followed to ensure that AI is ethical and robust.
Chapter 2, Realising Trustworthy AI: translates these ethical principles into seven key requirements that AI systems should implement and meet throughout their life cycle, and provides both technical and non-technical methods for their implementation.
Chapter 3, Assessing Trustworthy AI: sets out a concrete but non-exhaustive assessment list for trustworthy AI, operationalising the requirements of Chapter 2 and offering practical guidance to AI practitioners. The assessment should be tailored to each system's specific application.
The final part of the guidelines gives examples of beneficial opportunities and critical concerns raised by AI systems.
Chapter 1: Foundations of trustworthy artificial intelligence
This chapter explains the foundations of trustworthy artificial intelligence: it is grounded in fundamental rights and reflected in four ethical principles that must be followed to ensure that AI is ethical and robust. The chapter draws heavily on the field of ethics. AI ethics is a branch of applied ethics that focuses on the ethical issues raised by the development, deployment and use of AI. Its central concern is to clarify how AI can promote a good life for individuals, or in what respects it challenges one, whether in terms of quality of life or in terms of the human autonomy and freedom required for a democratic society.
Ethical reflection on AI technology serves multiple purposes. First, it can prompt people to consider the need to protect individuals and groups at the most basic level. Second, it can inspire new kinds of innovation that seek to foster ethical values, such as those contributing to the achievement of the United Nations Sustainable Development Goals, which are firmly embedded in the EU's Agenda 2030. Although this article mainly concerns the first purpose, the importance of the second should not be underestimated. Trustworthy AI can enhance individual flourishing and collective well-being by generating prosperity, creating value and maximizing wealth. It can also contribute to a more equitable society by promoting the equal distribution of economic, social and political opportunity and improving citizens' health and well-being.
We must therefore clarify how best to support the development, deployment and use of AI so that everyone can thrive in an AI-based world, building a better future while enhancing global competitiveness. Like any powerful technology, the use of AI systems in society raises ethical challenges, for example concerning their impact on people and society, their decision-making capabilities, and their safety. If we increasingly use AI systems to assist with or delegate decisions, we must ensure that the impact of these systems on people's lives is fair, that they align with values that should not be compromised and can act accordingly, and that appropriate accountability mechanisms are in place to safeguard this.
Europe needs to define what normative vision of an AI-immersed future it wants, and to understand which notion of AI should be researched, developed, deployed and used in Europe to realise that vision. With this article we hope to contribute to that effort by proposing the concept of trustworthy artificial intelligence, which we believe is the right direction for building the future of AI. In such a future, democracy, the rule of law and fundamental rights are the cornerstones of AI systems, and these systems continually improve and defend democratic culture while creating an environment in which innovative and responsible competition can flourish. However coherent, mature and meticulous a future ethical code in a particular field may be, it can never replace ethical reasoning itself, because such reasoning must always remain sensitive to situational details that no general code can cover. Beyond establishing a set of rules, ensuring trustworthy AI requires us to build and sustain an ethical culture and mindset through public debate, education and practical learning.
1. Fundamental rights as moral and legal rights
We believe that the ethics of artificial intelligence should be grounded in the fundamental rights enshrined in the EU Treaties, the EU Charter of Fundamental Rights and international human rights law. Respect for fundamental rights, within a framework of democracy and the rule of law, provides the most reliable basis for identifying abstract ethical principles and values that can be operationalised in the practical application of AI.
The EU Treaties and the EU Charter of Fundamental Rights set out a series of fundamental rights that EU member states and EU institutions are legally obliged to respect when implementing EU law. The Charter articulates these rights under the headings of dignity, freedoms, equality, solidarity, citizens' rights and justice. The common foundation uniting these rights can be understood as rooted in respect for human dignity, reflecting what we call a "human-centric approach", in which human beings enjoy a unique and inalienable moral status of primacy in the civil, political, economic and social spheres.
Although the rights set out in the EU Charter of Fundamental Rights are legally binding, it is important to recognize that fundamental rights do not provide comprehensive legal protection in all circumstances. In the case of the Charter, for example, it must be stressed that its scope of application is limited to matters covered by EU law. International human rights law, in particular the European Convention on Human Rights, is legally binding on EU member states, including in areas falling outside the scope of EU law. At the same time, fundamental rights are conferred on individuals (and, to some extent, on groups) by virtue of their moral status as human beings, independently of the legal force of those rights.
Fundamental rights, understood as legally enforceable rights, therefore belong to the first component of trustworthy AI (lawful AI), which ensures compliance with the law. Understood as rights of everyone, derived from the inherent moral status of human beings, they also underpin the second component of trustworthy AI (ethical AI), which concerns ethical norms that are not necessarily legally binding but are crucial to ensuring trustworthiness. Since this article does not aim to provide guidance on the former, in these non-binding guidelines references to fundamental rights are references to the latter.
2. From fundamental rights to ethical principles
2.1 Fundamental rights as the basis for trustworthy artificial intelligence
Among the set of indivisible rights laid down in international human rights law, the EU Treaties and the EU Charter of Fundamental Rights, the following families of fundamental rights are particularly relevant to AI systems. Many of these rights are, in specified circumstances, legally enforceable in the EU, so compliance with their terms is legally mandatory. But even once compliance with legally enforceable fundamental rights has been achieved, ethical reflection can help us understand how the development, deployment and use of AI systems may implicate fundamental rights and their underlying values, and can provide more fine-grained guidance when we seek to determine what we should do rather than what we (currently) can do with technology.
Respect for human dignity. Human dignity encompasses the idea that every human being possesses an "intrinsic worth" that should never be diminished, compromised or repressed by others, nor by new technologies such as AI systems. In this context, respecting human dignity means that all people are treated with respect as moral subjects, not merely as objects to be sifted, sorted, scored, herded, conditioned or manipulated. AI systems should therefore be developed in a manner that respects, serves and protects humans' physical and mental integrity, personal and cultural identity, and the satisfaction of their essential needs.
Freedom of the individual. Human beings should be free to make life decisions for themselves. This entails not only freedom from undue interference by the state, but also intervention by governments and non-governmental organizations to ensure that individuals and groups at risk of exclusion enjoy equal access to AI's benefits and opportunities. In the context of AI, freedom of the individual requires, for instance, the mitigation of (direct or indirect) illegitimate coercion, threats to mental autonomy and mental health, unjustified surveillance, deception and unfair manipulation. Indeed, freedom of the individual means a commitment to giving individuals greater control over their lives, including (among other rights) protection of the freedom to conduct a business, freedom of the arts and sciences, freedom of expression, the right to private life and privacy, and freedom of assembly and association.
Respect for democracy, justice and the rule of law. In constitutional democracies, all governmental power must be legally authorized and limited by law. AI systems should serve to maintain and foster democratic processes and respect the plurality of values and life choices of individuals. They must not undermine democratic processes, human deliberation or democratic voting systems. AI systems must also embody a commitment to operate in ways that do not undermine the foundational norms of the rule of law and mandatory laws and regulations, and that ensure due process and equality before the law.
Equality, non-discrimination and solidarity, including the rights of persons at risk of exclusion. Equal respect for the moral worth and dignity of all human beings must be ensured. This goes beyond non-discrimination, which tolerates drawing distinctions between dissimilar situations on objective and reasonable grounds. In an AI context, equality means that a system's operation must not generate unfairly biased outputs (for example, the data used to train AI systems should be as inclusive as possible and represent different population groups). It also requires adequate respect for potentially vulnerable persons and groups, such as workers, women, persons with disabilities, ethnic minorities, children, consumers and others at risk of exclusion.
Citizens' rights. Citizens enjoy a wide range of rights, including the right to vote, the right to good administration and of access to public documents, and the right to petition the administration. AI systems offer substantial potential to improve the scale and efficiency of government in delivering public goods and services to society. At the same time, citizens' rights could also be negatively affected by AI systems and should be protected. The use of the term "citizens' rights" here is not meant to deny or neglect the rights of third-country nationals and irregular (or illegal) persons in the EU, who also enjoy rights under international law and should therefore likewise receive attention in the field of AI systems.
2.2 Ethical principles in the context of artificial intelligence systems
Many public, private and civil organizations have drawn inspiration from fundamental rights to develop ethical frameworks for AI systems. In the EU, the European Group on Ethics in Science and New Technologies (EGE) has proposed a set of nine basic principles grounded in the fundamental values laid down in the EU Treaties and the EU Charter of Fundamental Rights. We build further on this work, endorsing most of the principles that such groups have proposed so far, while clarifying the ends that all of these principles seek to nurture and support. These ethical principles can inspire new and specific regulatory instruments, help interpret fundamental rights as our socio-technical environment evolves, and guide the rationale for the development, deployment and use of AI systems, adapting dynamically as society itself develops.
AI systems should enhance individual and collective well-being. This section lists four ethical principles, rooted in fundamental rights, that must be followed to ensure that AI systems are developed, deployed and used in a trustworthy manner. They are framed as ethical imperatives that AI practitioners should always strive to adhere to. Without ranking them by priority, we list them in the order in which the fundamental rights on which they are based appear in the EU Charter of Fundamental Rights:
(i) Respect for human autonomy
(ii) Prevention of harm
(iii) Fairness
(iv) Explicability
Many of these principles are already largely reflected in existing legal requirements that must be complied with, and thus also fall within the scope of the first component of trustworthy AI, namely lawful AI. Yet, as noted above, while many legal obligations reflect ethical principles, adherence to ethical principles goes beyond formal compliance with existing laws.
The principle of respect for human autonomy. The fundamental rights on which the EU is founded are directed towards ensuring respect for human freedom and autonomy. Humans interacting with AI systems must be able to retain full and effective self-determination over themselves and to take part in democratic processes. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills. The allocation of functions between humans and AI systems should follow human-centric design principles and leave meaningful room for human choice. This means securing human oversight over the working processes of AI systems. AI systems may also fundamentally change the world of work; they should support humans in the working environment and aim at the creation of meaningful work.
The principle of prevention of harm. AI systems should neither cause nor exacerbate harm, nor otherwise adversely affect human beings. This entails the protection of human dignity as well as physical and mental integrity. AI systems and the environments in which they operate must be safe and secure. They must be technically robust, and it should be ensured that they are not open to malicious use. Vulnerable persons deserve greater attention and should be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers, or governments and citizens. Preventing harm also entails consideration of the natural environment and all living beings.
The principle of fairness. The development, deployment and use of AI systems must be fair. While we acknowledge that there are many different interpretations of fairness, we believe that fairness has both a substantive and a procedural dimension. The substantive dimension implies a commitment to ensuring an equal and just distribution of benefits and costs, and to protecting individuals and groups from unfair bias, discrimination and stigmatization. If unfair biases can be avoided, AI systems could even increase societal fairness. Equal opportunity in access to education, goods, services and technology should also be fostered. Moreover, the use of AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice. Fairness additionally implies that AI practitioners should respect the principle of proportionality between means and ends, and carefully consider how to balance competing interests and objectives. The procedural dimension of fairness entails the ability to contest, and seek effective redress against, decisions made by AI systems and by the humans operating them. To this end, the entity accountable for a decision must be identifiable, and the decision-making process should be explicable.
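To make the substantive dimension slightly more concrete, here is a minimal illustrative sketch in Python. It is not prescribed by the guidelines: it measures one narrow, commonly used proxy for unfairly biased outputs, namely the gap in favourable-outcome rates between groups (often called demographic parity), and the function name and sample data are hypothetical.

from collections import defaultdict

def demographic_parity_gap(decisions):
    # decisions: (group_label, outcome) pairs, where outcome 1 = favourable.
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    # Favourable-outcome rate per group; report the largest gap between groups.
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical decisions tagged with an applicant group label.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(round(demographic_parity_gap(sample), 2))  # 0.33: group A is favoured

A single number of this kind cannot establish fairness, which, as noted, admits many interpretations and has a procedural dimension as well, but monitoring such gaps is one way practitioners can operationalise the commitment to avoiding unfairly biased outputs.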
The principle of explicability. Explicability is crucial for building and maintaining users' trust in AI systems. It means that processes must be transparent, the capabilities and purpose of AI systems openly communicated, and decisions, to the extent possible, explainable to those directly and indirectly affected. Without such information, a decision cannot be duly contested. An explanation of why a model has generated a particular output or decision (and of which combination of input factors contributed to it) is not always possible. Such cases are referred to as "black box" algorithms and require special attention. In those circumstances, other explicability measures (such as traceability, auditability and transparent communication about system capabilities) may be required, provided the system as a whole respects fundamental rights. The degree of explanation needed depends heavily on the context and on the severity of the consequences of erroneous or otherwise inaccurate output.
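As an illustration of what such a measure can look like in practice, the following is a minimal sketch, again not drawn from the guidelines themselves, of permutation importance: a model-agnostic way of estimating which input factors contribute to a black-box model's outputs by shuffling one feature at a time and observing how much predictive accuracy drops. The toy model and data are hypothetical.

import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    # Importance of feature j = average drop in accuracy after shuffling
    # column j, which severs the link between that feature and the target.
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # in-place shuffle of one feature column
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Toy black box: predicts 1 exactly when feature 0 exceeds 0.5; feature 1 is noise.
rng = np.random.default_rng(1)
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)
predict = lambda data: (data[:, 0] > 0.5).astype(int)
print(permutation_importance(predict, X, y))  # feature 0 scores high, feature 1 near zero

Techniques of this kind support traceability and auditability rather than user-facing explanation, but they give reviewers a handle on otherwise opaque systems.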
2.3 Conflicts between principles
Tensions may arise between the above principles, and there is no fixed solution to them. In line with the EU's fundamental commitment to democratic engagement, due process and open political participation, methods of accountable deliberation should be established to deal with such conflicts. For instance, in various application domains the principle of prevention of harm and the principle of human autonomy may conflict. Consider AI systems used for "predictive policing": they may help reduce crime, but in ways that entail surveillance activities impinging on individual liberty and privacy. Moreover, the overall benefits of AI systems should substantially exceed the foreseeable individual risks. While the above principles certainly offer guidance towards solutions, they remain abstract ethical prescriptions. AI practitioners can therefore not be expected to derive the right answer from the principles alone; rather, they should approach ethical dilemmas and trade-offs through reasoned, evidence-based reflection rather than intuition or arbitrary judgment.
Nevertheless, situations may arise in which no ethically acceptable trade-off can be identified. Certain fundamental rights and correlated principles are absolute and cannot be subject to balancing (such as human dignity).