Artificial Intelligence Ethics | Ethical Artificial Intelligence Cannot Be Guaranteed By Principles Alone
Recently, comparative research on artificial intelligence ethics initiatives and medical ethics has begun to emerge. Recent work shows that many AI ethics initiatives have converged on a set of core guidelines that closely resemble the four classical principles of medical ethics. This convergence was subsequently echoed by the Organisation for Economic Co-operation and Development (OECD) and the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG), which proposed four guiding principles for the development of "trustworthy artificial intelligence": respect for human autonomy, prevention of harm, fairness and explicability.
The convergence between AI ethics and the principles of medical ethics is timely, because medicine is historically the most influential and most thoroughly developed branch of applied ethics. "Principlism" originated in medicine: a theoretical moral framework that combines traditional ethical standards with the practical decision-making needs of practitioners, research ethics committees and medical institutions. Principlism proposes four core principles (respect for autonomy, beneficence, non-maleficence and justice), which must be specified and balanced anew in each decision-making context. In medical ethics, principlism provides a common language for identifying and conceptualizing ethical challenges and informs health policy and clinical decision-making; principled approaches to AI ethics likewise aim to embed normative considerations into technology design and governance. Both are attempts to solve the same problem: how to incorporate ethical principles into professional practice. Principlism therefore offers a useful theoretical reference for assessing whether AI ethics can bring about substantive change in how AI is developed and deployed.
Although the analogy with medical ethics initially lends it credibility, there is reason to be concerned about the future impact of AI ethics. There are important differences between medicine (and other traditional professions) and AI development which suggest that a principle-based approach in AI may not achieve results comparable to those in medicine.
This paper critically evaluates the strategies and recommendations proposed by existing AI ethics initiatives. By systematically reviewing their outputs, it aims to clarify how they intend to embed ethical principles into AI development and governance. Drawing on evidence about the practical effects of principlism in medicine, the paper takes a critical perspective on the likely real-world impact of a principle-oriented framework for AI ethics.
Challenges for a Principles-based Approach to Artificial Intelligence Ethics
Four characteristics of AI development suggest that a principle-based approach may have limited impact on design and governance. Compared with medicine, AI development lacks: (1) common goals and fiduciary duties, (2) a professional history and well-established norms of practice, (3) proven methods for translating principles into practice, and (4) robust legal and professional accountability mechanisms.
Common goals and fiduciary responsibilities
Medicine broadly pursues a common goal: promoting the health and well-being of patients. A defining feature of a profession is that its practitioners form a "moral community" with shared goals, values and training. The pursuit of common goals supports principled approaches to ethical decision-making. Despite many disagreements over the definition of "health" and how best to promote it, the interests of patients and clinicians are fundamentally aligned, which fosters solidarity and trust between the two parties. Some argue that practitioners even have a moral obligation to defend patients' interests against institutional interests.
In AI development, comparable solidarity and collaboration cannot be taken for granted. AI is developed mainly by the private sector and deployed in both public settings (such as criminal sentencing) and private ones (such as insurance). The fundamental goals of developers, users and affected groups may not align. Developers often work in environments that "continuously face the pressure of reducing costs, increasing profits and delivering better systems" and are pushed by management to prioritize company interests. Medical practitioners undoubtedly face similar institutional pressures, but not to the same degree. Unlike medicine, AI development has no equivalent of the "patient" whose interests are given primacy in ethical decision-making. This absence of common goals turns ethical decision-making from a cooperative process into a competitive one, making it harder to balance public and private interests in practice.
The implicit solidarity in medicine is formally recognized through professional codes of conduct and regulatory frameworks that establish fiduciary duties to patients. What distinguishes formal professions from other occupations are the fiduciary duties arising from the client-practitioner relationship, which are governed by the profession's shared goals and values and enforced through sanctioning and self-governance mechanisms (see Table 1). These features require practitioners to put the best interests of those they serve first, thereby providing a principled framework for ethical decision-making.
AI development is not a formal profession. AI developers in the private sector owe no reciprocal fiduciary duties to users, nor are there complementary governance mechanisms to enforce them. AI developers make no commitment to "public service", a commitment that in other professions requires practitioners to protect the public interest when it conflicts with business or managerial interests. For AI or software deployed by the public sector, such commitments may be implicit in institutional or political structures. The private sector is very different: companies and their employees owe fiduciary duties primarily to shareholders, and public interests do not take precedence over commercial ones.
Some will object that many medical institutions and practitioners face similar pressures. Hospitals, for example, must keep healthcare delivery models sustainable and balance the interests of individual patients against public health goals. Likewise, many new therapies are developed by the private sector. These similarities are, however, superficial. Both public and private healthcare institutions operate under strict regulatory frameworks that ensure the health and well-being of patients and research participants are not sacrificed in the pursuit of sustainability or profit. AI companies are of course regulated in certain respects, for example by data protection and privacy laws that constrain the processing of personal data needed to train models. But the impact of such frameworks is uneven and limited in scope. There is currently no unified regulatory framework in AI that establishes clear fiduciary obligations to data subjects and users. If such frameworks eventually grow out of AI ethics, they could be counted as a success of the principled approach. But in the absence of strong regulation establishing fiduciary duties or the priority of data subjects' and users' core interests, we cannot assert that a comparable alignment of values exists in AI.
The lack of fiduciary relationships in AI means that users cannot be confident that developers, when applying ethical principles, will act in users' best interests. Reputational risk may prompt companies to attend to ethical issues, but its effect lasts only as long as public attention does. Developers' personal moral convictions may also drive them to "do good", as a recent protest within Google gives some hope for. However, existing incentives often discourage whistleblowing or putting public interests above corporate ones, meaning that moral behaviour can carry high personal costs. This situation is clearly unacceptable when core rights of users and affected parties, such as privacy, autonomy and identity, are protected only by developers' personal ethics, companies' fear of reputational damage, or public pressure.
Professional history and established norms of practice
The second weakness of a principled approach to AI ethics is the relative lack of professional history and of well-defined norms of "good" practice. Thanks to a long historical legacy and a professional culture shared across cultures and specialties, debate over the moral obligations and virtues of medicine has been developing for centuries. In Western biomedicine, long-standing norms derived from historical documents such as the Hippocratic Oath, the Declaration of Geneva and the Declaration of Helsinki not only provide a foundation for clinical and research ethics but have also informed the development of other professional ethics standards.
These norms, and the accompanying debates about what makes a "good" doctor, have not eliminated ethical transgressions in clinical practice and medical research. They do, however, provide a historically informed framework of professional duty against which negligence and malpractice can be identified. Across the history of medicine, ethical standards have been revised whenever such failures recurred, and whenever new technologies, new theories and shifting social values challenged existing codes of conduct.
The evidence of these historical lessons is visible in modern codes of conduct and ethical norms. The American Medical Association's Code of Medical Ethics, for example, is an extremely detailed document setting out professional opinions, norms of behaviour and standards covering many areas of medical practice and technology. The first edition, published in 1847, focused narrowly on professional conduct, emphasizing that physicians should uphold paternalistic obligations and strive to maximize patient benefit and minimize harm. Over time, the profession's standards have shifted from this narrow emphasis on beneficence and professional etiquette toward other obligations owed to patients, most notably a growing respect for autonomy. Major ethical failures have often been catalysts of change: the human experimentation, forced sterilization and euthanasia carried out by Nazi Germany during the Second World War directly gave rise to the systematic statement of ethical principles in the 1947 Nuremberg Code. Similarly, the United States' National Research Act of 1974 and the Belmont Report of 1978 were institutional responses to the ethical failures exposed in human research such as the Tuskegee syphilis study.
Principlism subsequently emerged to revise the "minimal and unsatisfactory professional ethics" inherited from the Hippocratic tradition. Moving beyond that tradition required "clearly recognizing fundamental ethical principles that help identify morally questionable or unacceptable behaviours in various clinical practices and human experiments". By regulating behaviour and practice, principlism established a common moral language for identifying and responding to problematic conduct. The framework was then incorporated into curricula and training for medical students, practitioners and policymakers, and continues to shape ethical norms and decision-making in medicine. Principlism therefore exerts a normative influence on the ethical behaviour of medical practitioners and institutions.
AI development lacks a comparable historical evolution, a homogeneous professional culture and identity, and a similarly mature framework of professional ethics. The field has not yet passed through the kind of transformative episodes that would clearly identify its ethical responsibilities and translate them into concrete practice guidelines and best practices. Whereas medicine's relatively clear goal orientation supports the establishment of standardized practices and norms, AI can in principle be applied to any context involving human expertise. Accordingly, AI developers come from diverse disciplines and professional backgrounds with incongruous histories, cultures, incentive structures and moral obligations. Reducing the field to a single profession or type of expertise is an oversimplification.
Software engineering is arguably the closest analogue, but because it lacks a licensing system and a well-defined professional "standard of care", it has never been legally recognized as a profession owing fiduciary duties to the public. This is directly reflected in how thin the current discussion remains of what makes an "excellent" AI developer or software engineer; although the two major professional associations in the field have issued and revised ethical guidelines, these documents remain brief, highly abstract, and short on specific behavioural norms and practical guidance.
Of course, stronger professional standards for AI development (and organizations to support them) could be formulated in the future. Unfortunately, the job is not simple. AI systems are usually developed by large, interdisciplinary, multinational teams. The effects of clinical decisions are often (though not always) immediately visible and observable, whereas the effects of decisions made when designing, training and configuring AI systems for different purposes may never be noticed by developers. This is worrying, because research shows that a sense of distance from potential victims can encourage unethical professional behaviour. The risks addressed by medical ethics stem mainly from interventions performed on (or inappropriately withheld from) the human body. Ethical risks in AI, by contrast, are persistent and diffuse, and data subjects may never directly perceive them. The "opacity" of these systems also means that no individual can fully understand a system's design or functioning, or predict its behaviour. Even when problems are found, they can rarely be traced back to individual team members or actions; responsibility must be shared across the whole network of actors involved in a system's design, training and configuration. This inability to reliably predict the effects of development choices undermines the very basis for defining standards of an "excellent" AI developer or requirements for "excellent" AI.
AI ethics initiatives try to bridge this gap by defining widely accepted principles intended to guide the people and processes responsible for developing, deploying and governing AI across different application contexts. At this level of abstraction, however, it is difficult to derive substantive guidance for action. The great diversity of stakeholders and their divergent interests inevitably push the search for common values and norms toward high abstraction. The result is typically statements of principle or value commitments built on abstract and vague concepts, such as ensuring that AI is "fair" or respects "human dignity", which are too unspecific to guide practical action.
Statements that rely on vague normative concepts obscure underlying political and ethical conflict. Abstract notions such as "fairness" and "dignity" are classic examples of "essentially contested concepts": they admit many conflicting interpretations that must be made concrete through the political and philosophical commitments held in particular contexts. These interpretations can each be held rationally and sincerely, yet they lead to substantively different requirements in practice, differences that surface only when principles or concepts are translated into practice and put to the test. Viewed positively, this conceptual ambiguity leaves room for context-sensitive interpretation of what ethical AI requires; viewed negatively, it masks fundamental disagreement and pushes AI ethics toward moral relativism. At least for now, whatever compromise has been reached around the core principles of AI ethics does not reflect substantive consensus on a shared direction for "good" AI development and governance. We must be wary of mistaking a high-level compromise for prior consensus.
The truly hard part of ethics, translating normative theories, concepts and values into "good" practices that AI practitioners can adopt, has been kicked down the road like the proverbial can. Developers are left to translate principles and specify essentially contested concepts according to their own understanding, with no clear roadmap for consistent implementation. Along the way they are likely to encounter incommensurable moral norms and frameworks that raise genuine moral dilemmas which principlism cannot resolve. A mature profession would be best placed to conceptualize and debate (even if not fully resolve) such challenges by drawing on a rich historical legacy, an ethical culture and norms of "good" practice. AI development remains conspicuously deficient in all of these.
Methods to transform principles into practice
The third weakness of a principled approach to AI ethics is the lack of proven methods for translating principles into practice. The prevalence of essentially contested concepts in AI ethics raises the question: how are normative disagreements over the "correct" specification of these concepts to be resolved?
Principles do not translate into practice automatically. Over its history, medicine has developed effective methods for turning high-level commitments and principles into practical requirements and norms of "good" practice. Professional societies and boards, ethics review committees, accreditation and licensing schemes, peer self-governance, codes of conduct and other institutionally backed mechanisms help determine the ethical acceptability of day-to-day practice by assessing difficult cases, identifying negligence and sanctioning bad actors. The formal and informal norms governing medical practice have been extensively tested, studied and revised, and their recommendations and standards (and the principles underlying them) continue to evolve so as to remain relevant. High-level principles rarely feature explicitly in clinical decision-making; they are instead mediated by institutional policies that incorporate contextual considerations. In practice, case-based precedents and norms, expressed in the moral language that principlism provides, are far more common. In sum, institutional and clinical decision-making is essentially a "coherentist" exercise that weighs high-level principles and case-level considerations together.
AI development lacks empirically proven methods of comparable standing for translating principles into real-world development practice. This is a methodological challenge operating at several levels. Translation requires high-level principles to be specified as mid-level norms and then as low-level requirements. Specific norms and operational requirements cannot be derived directly from high-level principles without taking into account factors such as technical characteristics, application scenario, context of use and relevant local regulations. Normative decisions must be made at each level of translation, while maintaining coherence between the system of principles, the specific norms or rules, and the facts of the case. It follows that the legitimacy of a broad consensus on common principles does not automatically extend to the mid-level norms and low-level requirements derived from them; each level of translation and specification requires independent justification. Nor is there any globally accepted ordering of principles with which to resolve normative conflicts.
This observation reveals how much work remains to be done in AI ethics. Encouraging as the high-level consensus is, it says little about the normative justifications and practical requirements that must be argued for in specific application contexts. Given the need for localized rules, the prominence of essentially contested concepts in AI ethics, and the field's comparative lack of binding professional traditions (see "Professional history and established norms of practice" above), conflicting practical requirements are all but inevitable across the industries and application contexts that adopt a principled approach to AI ethics.
A further methodological challenge remains: normative practice requirements must be embedded in the development process and translated into design requirements at the functional level. Existing research shows how difficult it is to integrate ethical values and principles into technology design and development cycles. Several approaches exist, including participatory design, reflective design, Values at Play and value-sensitive design, but to date they have been applied and studied mainly in academic contexts, where normative questions are likely to meet greater acceptance than in commercial environments. These value-conscious methods also focus mainly on the design process rather than on the functionality of the resulting system. Overall, they bring values, normative questions and relevant stakeholders into the development process, but they cannot "inject" specific values into a system's design, and it is difficult to measure the extent to which a final product embodies a given value or specification.
Value-conscious design frameworks face additional challenges in commercial development. Ethical deliberation costs money. AI development often takes place "behind closed doors", without participation by public representatives. Gathering stakeholder input, embedding ethicists in development teams and reconciling conflicting specifications of essentially contested concepts all add workload and expense. When ethical considerations clash with commercial interests, it is unsurprising if the former are abandoned. In a commercial process that prizes efficiency, speed and profit, substantive implementation of value-conscious frameworks cannot be taken for granted.
Legal and professional accountability
The fourth weakness of a principled approach to AI ethics is the relative lack of legal and professional accountability mechanisms. Medicine is bound by legal and professional frameworks that uphold professional standards through malpractice law, licensing and certification schemes, ethics committees and professional medical boards, and that give patients redress for negligence. Regulated healthcare institutions help ensure that these standards are enforced. Legally backed accountability creates an external incentive for practitioners to honour their fiduciary duties: it links "bad behaviour" to professional sanctions (such as revocation of a licence), reinforces complementary forms of self-governance, defines professional standards of care, and allows patients to bring claims against negligent practitioners.
Apart from certain categories of risk (such as privacy violations regulated by data protection law), AI development has no comparable professional or legally recognized accountability mechanisms. This is a serious problem. Sustained, serious commitment to a self-regulatory framework cannot be assumed. Empirical research on the effect of ethics codes on professional behaviour is inconclusive: practitioners tend to follow rules mechanically rather than in spirit, or treat guidelines as a checklist rather than as part of critically reflective practice. A recent study of the ACM code of ethics found that it has little effect on the day-to-day decision-making of software engineering professionals and students. Studies of corporate and professional ethics codes outside computer science report similar findings. A recent meta-analysis of the effect of codes on professional behaviour found that the mere existence of a code has no significant deterrent effect on unethical behaviour; codes only make a practical difference when they (and their core concepts) are embedded in organizational culture and actively enforced. To influence practitioner behaviour and stimulate peer self-regulation, norms must be clearly defined and highly visible. The current governance structures of AI companies fall well short in this respect.
External sanctions for breaches of codes of conduct are equally critical to compliance and to effective self-governance. Compared with medicine, the information industries lack sanctions that can affect practitioners' livelihoods. Although software engineering degrees can be accredited, a licence is not required to practise except in a few jurisdictions. Professional bodies such as IEEE and ACM wield only the informal sanction of expulsion, which, in the absence of a licensing system, has no effect on a member's ability to work.
Stricter legal and professional accountability mechanisms could be adopted, but this is unlikely in the short term. AI development is not a well-defined profession with a long history and unified goals. AI developers do not formally provide a public service, and so are not required to put the public interest first. AI applications span many industries, so any new legal or professional mechanism would have to weigh countless potential benefits and harms and mesh with existing sector-specific regulation. Moreover, proposals to impose professional sanctions and licensing on computing practitioners are not new, yet have so far achieved little. These shortcomings of existing legal and professional accountability for AI raise an awkward question: is it enough to define "good intentions" and hope for the best? Without complementary sanctioning mechanisms and governance bodies that can step in when self-governance fails, a principled approach risks offering merely a false assurance of ethics rather than genuinely trustworthy AI.
Where should artificial intelligence ethics go?
Although principlism has undoubtedly had a profound influence on medical ethics through the four features analysed above, its effectiveness there is not beyond criticism. The relative deficits that AI development shows along each of these dimensions are therefore all the more worrying. First, AI development is not a formal profession oriented toward the public interest. Second, developers lack a historically grounded code of conduct for the "excellent AI developer" to serve as a normative foundation. Third, outside academic settings the field still lacks mature mechanisms for translating ethical principles into practice. Finally, when developers violate these vaguely defined requirements, there are no effective sanctions and few channels for correction or redress. Signing up to self-regulatory guidelines whose obligations are unclear and barely binding lets developers reap immediate gains in credibility and reputation. Taken together, these deficits reveal the scale of the challenge facing the implementation of AI ethics.
We should therefore look carefully at consensus reached around high-level principles that obscures deep political and normative disagreement. Shared principles alone are not enough to ensure that future AI is trustworthy and ethical. Unless the regulatory landscape changes fundamentally, the process of translating principles into practice will remain competitive rather than cooperative. This is a serious problem: principles only escape vacuity when they are tested in practice, and only then will the real costs and value of a principled approach to AI ethics become apparent. Conflicting interpretations of essentially contested concepts are inevitable, and resolving those conflicts is where the real work of AI ethics begins. The core question remains: how should government, industry and civil society work together to support this foundational work?
Defining a sustainable path to impact
A principle-based approach requires cooperative oversight to ensure that translated norms and requirements remain fit for purpose and continue to have an effect. Going forward, principled initiatives must define their long-term goals and their paths to impact more clearly (see Table 2 for key questions). At the industry and organizational level, binding and highly visible accountability structures are needed, along with clear implementation and review processes. Professional and institutional norms can be established through concrete requirements such as inclusive design, transparent ethical review, documentation of models and data sets, and independent ethical audits, as sketched below.
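As one illustration of what documenting models and data sets might involve in practice, the following Python sketch records a model's intended use, data provenance and known limitations as a structured object. It is a minimal, hypothetical example loosely inspired by model-card-style documentation; the field names and the example model are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModelDocumentation:
    """A minimal, illustrative record for documenting a model and its data."""
    name: str
    intended_use: str                      # scenarios the model was designed for
    out_of_scope_uses: List[str]           # deployments explicitly not supported
    training_data: str                     # provenance and known gaps of the data
    evaluated_groups: List[str]            # subpopulations covered by evaluation
    known_limitations: List[str]           # documented failure modes and caveats
    ethical_review: str = "none recorded"  # reference to any internal or external audit

# Hypothetical example of a completed record.
doc = ModelDocumentation(
    name="loan-default-scorer-v2",
    intended_use="Ranking applications for manual review by credit officers.",
    out_of_scope_uses=["fully automated rejection of applicants"],
    training_data="2015-2020 applications from a single regional lender.",
    evaluated_groups=["age bands", "self-reported gender"],
    known_limitations=["not validated outside the original lending market"],
)
print(f"{doc.name}: {doc.intended_use}")
```

Even a record this simple makes implicit design decisions reviewable by people outside the development team, which is the point of documentation-based accountability requirements.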
Support the "bottom-up" artificial intelligence ethics construction of the private sector
Because of the great diversity of technologies labelled "artificial intelligence", "top-down" approaches to AI ethics governance face particular challenges. In such a heterogeneous field, general top-level frameworks must be complemented by bottom-up case studies of AI systems in production. Through cooperative evaluation of local practices, ethical principles can be refined and a body of precedent built up that feeds into industry standards. Emerging cases continually reveal new challenges for AI ethics. Such studies are essential for moving beyond well-worn example cases, for developing sector- and context-specific guidelines and technical solutions, and for building an empirical knowledge base documenting the impacts and harms of AI technologies in production. Most current "bottom-up" work is confined to academic settings and focuses on technical methods and evaluation metrics for quantifiable ethical concepts (such as fairness), as illustrated below. More support, and access to development environments, should be provided for multidisciplinary bottom-up research on AI ethics, especially in commercial development settings that are currently closed to external scrutiny.
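To make the idea of evaluation metrics for quantifiable ethical concepts concrete, the sketch below computes one simple group-fairness indicator, the demographic parity difference: the gap in positive-prediction rates between two groups. It is a hedged illustration using hypothetical predictions and group labels; real audits rely on many complementary metrics and far richer context.

```python
from typing import Sequence

def demographic_parity_difference(predictions: Sequence[int],
                                  groups: Sequence[str],
                                  group_a: str,
                                  group_b: str) -> float:
    """Difference in positive-prediction rates between group_a and group_b."""
    def positive_rate(group: str) -> float:
        preds = [p for p, g in zip(predictions, groups) if g == group]
        if not preds:
            raise ValueError(f"no records for group {group!r}")
        return sum(preds) / len(preds)

    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical binary predictions (1 = favourable outcome) and group labels.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" receives favourable outcomes at a rate of 0.75, group "b" at 0.25,
# so the difference of 0.5 signals a disparity worth investigating further.
print(demographic_parity_difference(predictions, groups, "a", "b"))
```

A metric like this quantifies only one narrow reading of "fairness"; which reading is appropriate remains exactly the kind of contested, context-dependent judgement discussed above.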
Licensing Developers of High-Risk Artificial Intelligence
To foster lasting commitment to ethical standards, AI development should be formally established as a profession on a par with other high-risk occupations. Current regulation contains an anomaly: we license professions that provide public services, yet impose no equivalent requirements on those who develop the technical systems that augment or replace human expertise and decision-making. The risks addressed by licensed professions have not disappeared; they have shifted into AI. To address the considerable challenge of certifying such a diverse group of practitioners, licensing could begin with pilot schemes for developers of high-risk systems or of systems commissioned by the public sector (such as facial recognition designed for police use).
Turning from professional ethics to organizational ethics
The outputs of many AI ethics initiatives resemble professional codes of ethics: they address design norms and the conduct and values of individual professionals. The legitimacy of particular applications, and of the commercial and organizational interests behind them, is rarely questioned in substance. This framing quietly steers discussion toward unethical behaviour by individuals rather than toward collective ethical failures embedded in organizational structures and business models. Yet developers are always constrained by the institutions they work within. For AI ethics to have real effect, its challenges cannot simply be reduced to personal failings. Going forward, AI ethics must also develop into a body of ethical norms at the level of companies and organizations.
Treating ethics as a process, not technical solutionism
Several initiatives suggest that ethical challenges are best addressed through "technical and design expertise", focusing on concepts that appear amenable to technical fixes (such as privacy protection and fairness) while rarely offering technical definitions or explanations. Work such as the IEEE's "Ethically Aligned Design" does exist, but its impact and practical uptake in commercial settings remain to be seen. The underlying assumption can be summarized as: inadequate ethical consideration leads to poor design decisions, which in turn produce systems that harm users' interests.
This attitude is misguided. The appeal of AI lies largely in its apparent ability to replace or augment human expertise. This plasticity means that AI inevitably becomes entangled with the ethical and political dimensions of the professions and practices in which it is embedded. AI ethics is, in effect, a microcosm of the political and ethical challenges facing society. Framing ethical challenges as design flaws effectively ensures that they "stabilize their technicality and thus avoid democratic interventions". It is naive to think that technical patches or "good" design can resolve ancient and complex normative problems. The risk is that difficult ethical debates will be oversimplified so that the relevant concepts become computable and implementable in a straightforward but shallow way.
Ethics is not easy, nor is it a matter of formulaic dogma. We should anticipate and welcome intractable, principled disagreements, for they reflect both serious ethical reflection and diversity of thought. Such disagreements are not a sign of failure, and they need not be "resolved". Ethics is a continuous process, not a final destination. The real work of AI ethics is only now beginning: translating our lofty principles into practice, and in doing so coming to understand the genuine ethical challenges that AI poses.