AI Ethics

Artificial Intelligence Ethics, Laws And Policy | Scholarship@WashULaw

Since each approach has its own strengths and limitations, effectively preventing the social harms of artificial intelligence applications requires combining ethical, legal, and policy tools in complementary ways.

Developing ethical norms for artificial intelligence is an important first step toward rejecting technological determinism. It requires recognizing that AI systems are not free-standing technological products but parts of larger sociotechnical systems. Neither the trajectory of AI development nor the directions of its application are predetermined: the processes of creation and deployment reflect, and are shaped by, social processes, and these applications in turn have far-reaching social impacts. AI ethics principles serve as an important acknowledgment of, and reminder about, this fact.

At the same time, the actual influence of ethical principles may be limited. Such principles are inherently vague because they are meant to guide behavior across many different situations. Their generality is a strength: it gives them broad applicability and makes wide endorsement more likely. But the same generality is also a weakness, because such principles can easily become empty platitudes that do little to resolve genuinely difficult ethical problems. No one would deny that “justice and fairness” is a worthy goal, but what exactly does it require when designing a system that will influence or decide who is released from jail or who gets a job? Precisely because people disagree deeply about what justice means in specific contexts, universal principles rarely help us make the hard choices involved in designing or deploying AI tools.

General moral statements may also obscure deep disagreements about the meaning of the values they set out. Almost every set of core AI principles includes fairness and non-discrimination, but the meaning of these terms is perpetually contested. These debates mean that a principle of non-discrimination is indeterminate when applied to AI systems. Fairness might be assessed by looking at a system's inputs: a model that excludes racial or gender attributes (and proxies for those attributes) might be considered fair. Alternatively, fairness might require examining outputs, specifically whether the system's predictions treat different groups fairly. Evaluating outputs raises further questions about how fairness is measured: does it require equal accuracy across groups, equal false positive or false negative rates, equal selection rates, or some other criterion? The computer science literature has proposed many formal measures of group fairness, but in general they cannot all be satisfied at once. Because ethical statements cannot resolve these disputes, they cannot effectively constrain how the technology is developed and applied.
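To make the point concrete, the following minimal sketch (not from the original article; the groups, base rates, and classifier are purely illustrative assumptions) computes three commonly discussed group-fairness measures. In the simulated data the two hypothetical groups have different underlying base rates, so a classifier with roughly equal error rates across groups still produces unequal selection rates, which illustrates why the competing measures generally cannot all be satisfied at the same time.

```python
# Illustrative sketch only: compare selection rate, false positive rate,
# and false negative rate across two hypothetical groups "A" and "B".
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Report per-group selection rate, FPR, and FNR for a binary classifier."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        selection_rate = yp.mean()  # P(prediction = 1 | group)
        fpr = yp[yt == 0].mean() if (yt == 0).any() else float("nan")
        fnr = (1 - yp[yt == 1]).mean() if (yt == 1).any() else float("nan")
        report[str(g)] = {
            "selection_rate": round(float(selection_rate), 3),
            "fpr": round(float(fpr), 3),
            "fnr": round(float(fnr), 3),
        }
    return report

# Hypothetical data: base rates differ by group (0.3 vs. 0.5), while the
# classifier has the same error profile for both (FPR ~0.2, FNR ~0.2).
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=2000)
y_true = (rng.random(2000) < np.where(group == "A", 0.3, 0.5)).astype(int)
y_pred = (rng.random(2000) < 0.6 * y_true + 0.2).astype(int)
print(group_fairness_report(y_true, y_pred, group))
# Error rates come out roughly equal across groups, yet selection rates differ,
# so "fairness" depends on which of these quantities one chooses to equalize.
```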

Moral codes may also paper over conflicts between high-level principles rather than resolve them. Such guidelines typically list both privacy protection and the prevention of discrimination as core governance principles, yet rarely acknowledge that the two can be in tension. One reason predictive AI produces bias is that some subgroups are underrepresented in the underlying data. Efforts to reduce that bias therefore often call for collecting more data, which can in turn intensify surveillance of vulnerable and marginalized groups. Moreover, depending on the context in which the technology is applied, an AI system that is more accurate and less biased may still impose disproportionate harm on those same groups.

Another criticism of AI ethics principles is that they tend to reduce ethical problems to technical solutions while ignoring the broader social context. Statements from private institutions are especially prone to emphasizing technical means of addressing ethical challenges. Green et al. argue that such statements “borrow the language of critics and incorporate it into a narrow, technologically deterministic, expert-led framework for defining ethical AI/machine learning.” For example, the global AI policy recommendations issued by the International Telecommunication Union (ITU) emphasize the adoption of privacy-enhancing technologies such as anonymization, pseudonymization, and de-identification to “ensure that data can be used to train algorithms and perform AI tasks without infringing on privacy.” This recommendation emphasizes expert-led responses to what are in fact contested socio-political questions. It assumes that the privacy interest at stake is only in keeping information from being disclosed, confining ethical inquiry to technically feasible solutions rather than the broader question of whether, and under what limits, businesses and governments should be permitted to collect personal data of various kinds in the first place.
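For concreteness, the sketch below shows one simple form of pseudonymization of the kind such recommendations contemplate: replacing a direct identifier with a keyed hash. This is an illustrative assumption, not drawn from the ITU text; the field names and key handling are hypothetical. Note what it leaves untouched: it addresses only disclosure of the identifier and says nothing about whether the data should have been collected at all, which is precisely the criticism above.

```python
# Hypothetical pseudonymization sketch, not from the ITU recommendation.
# Real deployments also need key management, quasi-identifier handling,
# and an assessment of re-identification risk.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative placeholder key

def pseudonymize(record: dict, id_field: str = "email") -> dict:
    """Return a copy of the record with the direct identifier replaced by a keyed hash."""
    out = dict(record)
    token = hmac.new(SECRET_KEY, record[id_field].encode(), hashlib.sha256).hexdigest()
    out[id_field] = token  # stable pseudonym: same input always maps to the same token
    return out

# Example record with hypothetical fields; only the identifier is transformed.
print(pseudonymize({"email": "alice@example.com", "age": 34, "zip": "63130"}))
```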

Therefore, ethical principles standing alone are likely to have limited impact. They are not only vague and hard to operationalize; the guidance they provide is also non-binding. They are aspirational goals, not enforceable rules.

III. Law

Although law has more force than ethical principles, it too has significant limitations in steering AI development toward socially beneficial ends. First, there is a substantial gap between legality on the one hand and morality and social responsibility on the other. Activities that cause significant harm or unwelcome social effects may nonetheless be legally permitted, so a company's full compliance with existing law neither answers the question of whether its activities are ethical nor guarantees that the technology it develops benefits society.

The gap between law and ethics can arise for many reasons, but regulatory lag is often the main one. New technologies and practices emerge continuously, and their risks and harms are often not immediately apparent. Once a problem is identified, it takes time to sort out how existing laws apply and whether they are adequate to address the harm. If current law falls short, new legislation may be needed to fill the gap. But making new law is not easy. Given how complex and fast-changing these technologies are, it is difficult to specify clearly which behaviors are permissible and which are not. Moreover, lawmakers often lack technical expertise and are susceptible to industry lobbying aimed at heading off regulation. As a result, they may give too little weight to the interests of a dispersed, unorganized public, or of marginalized groups such as criminal defendants or low-income workers who stand to be harmed by AI applications.

Concerns about discrimination and privacy violations illustrate these challenges vividly. Existing anti-discrimination laws can be applied to automated decision-making systems and provide a legal framework for challenging discriminatory AI, but because those laws were written with human decision-makers in mind, they leave many gaps and gray areas. On privacy, technology has far outpaced the existing legal framework, leaving significant holes in protection, especially in the United States. Although some states have begun to regulate artificial intelligence, these rules are not nationally uniform and impose only limited constraints against discrimination and privacy violations.

Although no federal law in the United States specifically targets artificial intelligence, many states have enacted relevant statutes. These laws address not only data privacy but also AI technologies directly, especially generative AI, and many more proposals are in the pipeline. On the whole, they focus on disclosure requirements and risk assessments for AI-generated content rather than substantive oversight of AI applications. The EU's Artificial Intelligence Act goes further, explicitly prohibiting certain AI applications that pose unacceptable risks. But the prohibited category appears quite narrow, and the Act takes a more moderate regulatory approach to most AI uses, including those classified as “high risk.”

Figure 2: Members of the European Parliament at the vote on the Artificial Intelligence Act, June 14, 2023 (local time). Source: Visual China.

In short, traditional legal regulation has both strengths and weaknesses in shaping AI development. Unlike ethical principles, legal rules are more likely to be concrete, specific, and enforceable. At the same time, however, the law often lags far behind social and technological change.

The difficulty of relying on legal regulation, however, is not only a matter of lag. Legislation must specify clearly which activities are permitted and which are prohibited, a daunting task given the complexity and rapid evolution of AI. The challenge is compounded by the fact that AI applications span every sector of society and take many forms. How much risk is tolerable, and how potential benefits should be weighed against potential harms, will vary with the specific context and the severity of the possible injury. Because of these inherent limits of traditional forms of direct regulation, researchers and policymakers are increasingly turning to broader policy tools to govern AI development.

IV. Policy

Compared with traditional legislation, policy takes a broader view: its core aim is to promote social welfare rather than to compensate individual losses. Legal liability regimes typically operate after the fact, intervening only once harm has occurred and focusing on questions of causation, responsibility, and compensation. Policymaking, by contrast, is forward-looking, using strategies designed to achieve social goals such as preventing environmental degradation or improving population health. Policies are also more flexible than laws, drawing on a variety of tools to steer behavior toward desired outcomes.

Laws and policies are not mutually exclusive. Laws that hold actors accountable for harmful conduct can form part of an overall policy strategy to deter socially damaging behavior. Conversely, many policy levers require legal action, such as funding public research, creating incentives for certain activities, or establishing government agencies with oversight authority. Despite this overlap, it is useful to distinguish policy approaches from law's traditional focus on retrospective liability.

Some of the reasons for using policy tools to govern AI development have already been mentioned. AI tools are technically complex, and their internal workings are often opaque. Because of that complexity, and because assessing risks and trade-offs requires deep technical expertise, it is hard for legislators to specify in advance which uses should be permitted and which prohibited. Moreover, the black-box character of many AI systems makes it extremely difficult to establish causation and assign responsibility, the core elements of retrospective liability regimes. Individuals harmed by AI may also lack the resources or expertise needed to pursue such legal claims effectively.

Another reason for turning to policy is that individual rights-based remedies are insufficient to recognize and address many social harms. Legal liability typically focuses on how a particular act affects an identifiable individual, but the most worrying side effects of artificial intelligence tend to be systemic. When predictive algorithms systematically and unjustifiably disadvantage certain groups in the allocation of benefits or opportunities, those effects are hard to capture through individual litigation. Similarly, privacy rights framed around an individual's choice to share information fail to reach the larger reality that the actions of countless other people and entities collectively determine what can be known or inferred about any one person.

Given the limits of law as a solution, other forms of governance have drawn growing attention. In the United States, the National Institute of Standards and Technology (NIST) has published a risk management framework offering voluntary guidelines for organizations developing AI. President Biden signed an executive order directing policies to ensure the “safe, secure, and trustworthy” development of artificial intelligence, and his administration also released a “blueprint” intended to guide the design and deployment of AI in ways that protect civil rights and privacy. President Trump revoked these initiatives and issued his own executive order on AI that prioritizes rapid technological development, U.S. dominance in the global AI field, and the removal of regulations and guidance seen as obstacles to that development. Policy levers, in other words, can either reduce the risk of social harm or downplay concerns about such risks, depending on who wields them.

A comprehensive discussion and evaluation of these various policy tools for governing AI development is beyond the scope of this chapter. Broadly speaking, however, policy approaches have distinctive strengths and weaknesses relative to traditional legal liability regimes. Their strength is that policy tools are more open-ended and experimental and can respond flexibly to the challenges technological change presents. Government agencies can study AI's diverse applications, likely trajectories, and possible impacts. Through such processes they can draw on technical experts and solicit public input widely, reaching beyond the narrow set of stakeholders typically involved in legislation and broadening participation in policymaking.

Policy tools have their own flaws, and relying on them alone will not ensure that AI develops in socially beneficial ways. The effectiveness of a policy initiative depends largely on the strategy chosen and how it is implemented. If policy levers rely too heavily on procedural requirements such as impact assessments, firms may adopt a posture of formal, box-checking compliance while neglecting substantive measures to prevent risk. Similarly, a regime built on corporate self-regulation can become an empty shell if it lacks clear standards for evaluating compliance and effective enforcement mechanisms. Empowering government agencies to promulgate and enforce appropriate regulations may be more effective, but it carries the risk of regulatory capture, in which the agencies responsible for oversight come to be influenced or even dominated by the industries they regulate.

V. Conclusion

Although artificial intelligence brings many benefits, it can also significantly harm human health, safety, well-being, and fundamental rights. Ethical, legal, and policy tools each offer ways of addressing these problems, but each has limitations and none is a complete solution on its own. Ethical statements can articulate high-level values and goals to guide behavior, but they are often vaguely worded, hard to operationalize, and lack any enforcement mechanism. Legal rules, by contrast, can be enforced through judicial processes and provide redress to those actually injured. Yet the complexity and opacity of AI challenge liability regimes by making causation and responsibility hard to establish, and legal rules may struggle to keep pace with the technology's rapid development. Policy tools are more flexible and forward-looking and can help anticipate and guard against potential harms, but they may accomplish little if they lack sound standards or fail to hold the entities that develop and deploy AI tools meaningfully accountable. Since each approach has its own strengths and limitations, effectively preventing the social harms of AI applications requires combining all three in complementary ways.

Compilation | Fan Wei

Review | Han Zhijie

Final review | One pound

©Theoretical Journal
