AI Ethics

The Paper | The Ethical Challenges of Artificial Intelligence Weapons; Cambridge Liberal Arts Ph.D. Caught in an Online Storm

The U.S. ASIMOV Program and the Ethical Challenges of Artificial Intelligence Weapons

On December 19, 2024, the U.S. Defense Advanced Research Projects Agency (DARPA) announced a new artificial intelligence program: Autonomy Standards and Ideals with Military Operational Values, abbreviated ASIMOV. The launch of the program has further heightened concerns about artificial intelligence, autonomous weapons systems, and their ethical assessment.

In an official news release titled "DARPA Explores Methods to Assess the Ethics of Autonomous Weapons," DARPA stated that the program aims to develop benchmarks for future autonomous systems, objectively and quantitatively measuring the ethical difficulty of future autonomous use cases and the readiness of autonomous systems to execute those use cases in the context of military operational values. DARPA also made clear that the program does not build weapons, does not directly develop autonomous systems or algorithms for them, does not set standards for all autonomous systems (its focus is autonomous weapons systems), and is not aimed at developing qualitative benchmarks. The goal is to create a "common language of ethical autonomy" that enables the developmental test and operational test (DT/OT) community to effectively assess the ethical complexity of specific military scenarios and the ability of autonomous systems to comply with human ethics in those scenarios. Participating teams will develop prototype generative modeling environments to rapidly explore scenario iterations and variables across an increasing range of ethical complexity, and researchers will assess in a variety of ways the ability of autonomous weapons systems to adhere to human ethics. If successful, the program will lay the foundation for defining benchmarks for future autonomous systems.
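
DARPA's release does not explain how such a generative modeling environment would work in practice. Purely as an illustration of the stated idea, the following sketch (written in Python, with invented scenario factors and an invented complexity count rather than anything from DARPA) enumerates scenario variants over a handful of complicating conditions and orders them by a crude measure of ethical difficulty.

```python
# Hypothetical sketch: enumerate scenario variants over a few complicating
# factors and order them by a crude "ethical complexity" count. The factors
# and the scoring are invented for illustration; they are not DARPA's.
from itertools import product

factors = {
    "civilians_present": [False, True],
    "target_identity_certain": [True, False],
    "communications_degraded": [False, True],
    "time_pressure": ["low", "high"],
}

def complexity(variant: dict) -> int:
    """Count how many complicating conditions are present in a variant."""
    score = 0
    score += variant["civilians_present"]
    score += not variant["target_identity_certain"]
    score += variant["communications_degraded"]
    score += variant["time_pressure"] == "high"
    return score

# Every combination of factor values is one candidate test scenario.
variants = [dict(zip(factors, values)) for values in product(*factors.values())]
variants.sort(key=complexity)

for v in variants:
    print(f"complexity={complexity(v)}  {v}")
print(f"{len(variants)} variants generated from {len(factors)} factors")
```

Even this toy example hints at the scale of the problem: four binary factors already yield sixteen variants, and each additional factor doubles the count.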

Define "military operational values" as principles, standards, or qualities considered important in combat operations that guide the actions and decisions of military personnel. Following the commander's intent is one of the key aspects of development. DARPA believes that the quantitative approach pursued will have broader implications for the entire autonomous systems community. In addition, the program will include an Ethical, Legal and Social Impact Group to advise project participants and provide guidance throughout the project process. DARPA also announced seven contract partners for the program: CoVar, LLC; , Inc.;; RTX; SAAB, Inc.; & , LLC and the University of New South Wales.

According to a press release dated December 20, 2024, CoVar, a developer of artificial intelligence and machine learning solutions and one of the program's contractors, will be responsible for developing an ethics-testing infrastructure for autonomous systems called GEARS. GEARS will define a new ethical mathematics by representing ethical scenarios and commanders' intentions through a knowledge graph suitable for both human and machine understanding, from which quantifiable ethical challenge ratings for specific scenarios can be derived. "If this work is successful, it will be the first ELSI (ethical, legal, and societal implications)-based quantitative assessment framework suitable for testing ethical standards for autonomous systems," said Dr. Peter Torrione, Chief Technology Officer at CoVar. "This will enable the U.S. Department of Defense to deploy autonomous systems with AI/ML capabilities and gain a clear understanding not only of the technical capabilities of the system, but also of the ethical standards of its behavior."
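
CoVar has not published the technical details of GEARS. As a minimal sketch of the general idea only, the following Python example encodes a scenario and a commander's intent as a small knowledge graph and derives a numeric "ethical challenge rating" from it; the node types, relations, weights, and scoring rule are all hypothetical and are not CoVar's.

```python
# Hypothetical sketch: represent an ethical scenario and a commander's intent
# as a small knowledge graph and derive a numeric "challenge rating" from it.
# Node kinds, relations, weights, and the scoring rule are invented here.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str            # e.g. "actor", "intent", "constraint", "risk"
    weight: float = 1.0  # hypothetical difficulty contribution

@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source, relation, target) triples

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

    def challenge_rating(self) -> float:
        """Toy metric: risks and constraints add difficulty, and every edge that
        puts a node in tension with the commander's intent adds more."""
        base = sum(n.weight for n in self.nodes.values()
                   if n.kind in ("risk", "constraint"))
        tension = sum(1.0 for _, rel, dst in self.edges
                      if rel == "in_tension_with" and self.nodes[dst].kind == "intent")
        return base + 2.0 * tension

# Example scenario: an autonomous strike near civilians and a protected site.
kg = KnowledgeGraph()
kg.add(Node("commander_intent_neutralize_target", "intent"))
kg.add(Node("uav_autonomous_strike", "actor"))
kg.add(Node("civilians_nearby", "risk", weight=3.0))
kg.add(Node("protected_hospital", "constraint", weight=4.0))
kg.relate("uav_autonomous_strike", "tasked_with", "commander_intent_neutralize_target")
kg.relate("civilians_nearby", "in_tension_with", "commander_intent_neutralize_target")
kg.relate("protected_hospital", "in_tension_with", "commander_intent_neutralize_target")

print(f"Ethical challenge rating: {kg.challenge_rating():.1f}")  # 3 + 4 + 2*2 = 11.0
```

A real framework would presumably derive its ratings from far richer structure than a weighted sum, but the sketch shows how a representation readable by both humans and machines can be reduced to a quantitative score.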

In fact, Military & Aerospace Electronics (&) magazine mentioned this collaboration in its November 22, 2024 article "COVAR explores the ethical use of artificial intelligence and machine autonomy in military applications." A two-phase, 24-month program, the Air Force Research Laboratory awarded CoVar the $8 million contract in October on behalf of DARPA. The plan will use the "Responsible Artificial Intelligence Strategy and Implementation Path" (AI and ) released by the U.S. Department of Defense in June 2022 as a guideline to develop a responsible military artificial intelligence technology benchmark. The document outlines five ethical principles for “responsible AI” for the U.S. military: responsible, fair, traceable, reliable and governable.

The project's name comes from science fiction writer Isaac Asimov, who proposed the famous "Three Laws of Robotics" in his 1942 short story "Runaround":

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These three laws set out the basic ethical principles for robots.

DARPA notes that Asimov's fiction delves into the limitations of these laws and the edge cases in which good intentions fail, often with adverse consequences for humanity: "The challenges and opportunities Asimov predicted in his work remain profound today. As autonomous technologies and artificial intelligence rapidly advance and spread across civilian and military domains, we urgently need a robust and quantitative framework for assessing not only the technical capabilities of these systems, but more importantly their ability to comply with human ethical expectations."

Dr. T.J. Klausutis, the program manager, said, "Ethical issues are inherently challenging, and quantifying ethics is even more difficult. We are solving an extremely complex problem with infinite variables. We don't know if what we are trying to do is actually feasible, but we do know that assessing whether autonomous weapons systems are consistent with human ethics is a conversation that must begin, and the sooner the better. Our goals are extremely ambitious." Through the program, DARPA hopes to lead an international conversation about the ethics of autonomous weapons systems. DARPA is reported to have requested $5 million for the program in its fiscal 2024 budget, with $22 million requested for 2025.

On January 2, The New Republic published a review article titled "The Plan to Teach the Laws of War." Its author, Rebecca McCarthy, interviewed scientists, researchers, and managers connected to the program and discussed topics related to the ethics of artificial intelligence weapons.

One concern raised by the program is whether, in the future, the military and technology companies will define their own ethical terms for weapons. On this point, Peter Asaro, co-founder and vice chair of the International Committee for Robot Arms Control and associate professor of media studies at The New School in New York, believes that "they will do it anyway." Asaro's research focuses on the ethical issues of artificial intelligence and robotics, including the social, legal, and ethical aspects of military robots and drones; he has written on lethal robots from the perspectives of just war theory and human rights, and has studied human rights issues in drone targeted killings and arms control for lethal autonomous robots. In his view, there is an obvious, almost self-explanatory connection between the fact that DARPA has awarded a large grant to RTX (formerly Raytheon Technologies, now the parent company of Raytheon) to study the ethical issues of artificial intelligence weapons and the expectation that Raytheon will build artificial intelligence weapons.

For McCarthy, the ethical challenges of artificial intelligence and autonomous weapons are also a philosophical question: whose moral standards are we using, and how are those standards chosen? These questions are both critical and complex. "After all, individuals' definitions of moral behavior vary widely, and the idea of putting ethical standards into practice seems absurd. Moral dilemmas are moral dilemmas precisely because they are fundamentally painful and difficult to solve." McCarthy finds it hard to imagine an untethered artificial intelligence that can be ethical: "Can you teach morality to a technology that does not even have the ability to question?" Peggy Wu, a scientist at RTX, expressed a more optimistic attitude. She believes, "This is arguably the first step towards self-reflection or introspection. ... For example, if the system can realize, 'Hey, I could have done something else,' then it can start to reason about the next step - 'Should I have done something else?' ... For us, the concept of 'doubt' is really more like probability. These kinds of problems can quickly become extremely computationally complex."
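
Wu's remark that "doubt" becomes a matter of probability, and that such problems quickly explode computationally, can be made concrete with a deliberately simplified sketch. The actions, probabilities, and harm values below are invented for illustration and are not drawn from RTX's work: counterfactual "could I have done something else?" reasoning amounts to comparing expected outcomes across alternative actions, and the number of branches to evaluate grows exponentially with the number of uncertain factors.

```python
# Hypothetical illustration of "doubt as probability": compare the chosen action
# against alternatives by expected harm, then show how the branch count explodes
# as uncertain factors are added. All numbers are invented for illustration.

# Each action maps to a list of (probability, harm) outcome pairs.
actions = {
    "strike_now":      [(0.7, 0.0), (0.3, 10.0)],  # 30% chance of serious harm
    "wait_and_verify": [(0.9, 1.0), (0.1, 6.0)],   # small delay cost, lower risk
    "abort":           [(1.0, 2.0)],                # mission cost, no casualties
}

def expected_harm(outcomes):
    """Probability-weighted harm of an action."""
    return sum(p * harm for p, harm in outcomes)

chosen = "strike_now"
for name, outcomes in actions.items():
    print(f"{name:15s} expected harm = {expected_harm(outcomes):.2f}")

# "Should I have done something else?" expressed as expected regret.
regret = expected_harm(actions[chosen]) - min(expected_harm(o) for o in actions.values())
print(f"expected regret of '{chosen}': {regret:.2f}")

# Combinatorial blow-up: with k uncertain binary factors, a full scenario tree
# has 2**k branches to evaluate per action.
for k in (5, 10, 20, 30):
    print(f"{k} uncertain factors -> {2**k:,} branches per action")
```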

The philosophical issues involved in moral "doubt" for artificial intelligence are complex. For Peggy Wu, the problem seems reducible to probability and computation. In Asaro's view, however, the workings of human morality are fundamentally different from the calculation and quantification of artificial intelligence. He points out, "You can use artificial intelligence to iterate and practice something billions of times. But that's not how ethics works. It's not quantified... Your moral character is developed over a lifetime by making the occasional wrong decision, learning from it, and making better decisions in the future. It's not like playing chess." Ethical trade-offs involve many complex factors and cannot be reduced to a cleanly quantifiable calculation of pros and cons. McCarthy likewise considers teaching ethics to artificial intelligence infeasible: "Doing the right thing is often painful—unrewarded, thankless, and sometimes at great personal cost. How do you teach that knowledge to a system that has no real stake in the world, nothing to lose, and no sense of guilt? And, if you could actually give a weapons system a conscience, wouldn't it eventually stop obeying orders?"

Overall, interviewees expressed a mixture of optimism and misgivings about the program and the ethical assessment of artificial intelligence it involves. Most also said it was at least encouraging that the military is considering developing an ethics framework for automated tools of war. Rebecca Crootof, a professor at the University of Richmond and a visiting scholar at DARPA, said, "This is not to say that DARPA thinks we can use computers to capture morality. (But) it would be meaningful to show more clearly whether we can do this."

McCarthy notes that many interviewees pointed out that humans have always done immoral things; those who hold this view may be more inclined to support a future in which artificial intelligence carries out moral assessments. One applicant to the program remarked, "Theoretically, there is no reason why we can't program artificial intelligence that is better than humans at complying with the law of armed conflict." Yet when faced with the complexity of the real world, there remain serious doubts about whether artificial intelligence is up to the task. McCarthy stresses, "While this theoretical possibility is compelling, it is completely unclear what this will look like in the actual practice of warfare. As it stands, artificial intelligence still has huge challenges in handling nuance. Even if it improves in the future, leaving ethical decisions to machines remains a deeply troubling idea."

Doubts about the program concern not only whether artificial intelligence can "learn" moral judgment and ethical evaluation, but also whether humans should let artificial intelligence make such judgments and decisions at all. Jeremy Davis, another applicant to the program and a philosophy professor at the University of Georgia, said, "The scary thing is that soldiers will say: 'Okay, I killed this guy because the computer told me to do it.'" Similar scenarios have long been imagined in science fiction, where the killer is a machine, a human, or a collusion between the two, and where the resulting moral dilemma has no standard answer. A further question is whether humans really believe that artificial intelligence can make more "rational" or "better" choices, or whether they are simply passing responsibility to technology or to an abstract system.

The New Republic article briefly recounts the origins of DARPA and of artificial intelligence as a discipline, both of which date to the 1950s, when the United States and the Soviet Union were locked in the Cold War and the space race. Seen historically, attention to the program inevitably ties artificial intelligence technology to international politics. DARPA's official website describes its mission as "creating technological surprises for national security." A few days ago, Trump promised to revoke many Biden administration regulations aimed at controlling the use of artificial intelligence. Lee Zeldin, the former Republican congressman Trump has chosen to head the U.S. Environmental Protection Agency, also said that one of his top priorities is to "make the United States a global leader in artificial intelligence" and to "help unleash U.S. energy dominance and make the United States the AI capital of the world." In McCarthy's view, such signs of deregulation make it "obvious that under Trump, artificial intelligence will be unfettered."

In August of last year, the Wall Street Journal published an article describing how U.S. technology companies and entrepreneurs are using cutting-edge technologies to support the defense sector, especially with potential military competition in view. The article pointed out that artificial intelligence is regarded as central to future military strategy, and that technology companies are cooperating ever more closely with the defense sector, particularly in the development and deployment of autonomous systems. Competition in fields such as autonomous unmanned systems, automated defense systems, and artificial intelligence decision support is considered a key variable in future wars. The article also explored issues close to the ASIMOV program's goal of ensuring that autonomous systems make ethical decisions in complex military environments, since the development of such intelligent technologies has triggered ethical and policy controversies, such as the legality of autonomous weapons systems and the risks of their abuse. To that end, collaboration between governments and technology companies will need to strike a balance between technological advancement and ethical responsibility.

"Only when we reduce the entire system to allow human choice, human intervention, and human goals that are completely different from the purposes of the system itself, can the real advantages brought by science and technology be preserved." This sentence comes from the end of "Authoritarian and Democratic Technics," an article published in 1964 in the journal Technology and Culture by the American sociologist, philosopher of technology, and urban-planning scholar Lewis Mumford. In it, Mumford distinguishes two main modes of technological development, authoritarian technics and democratic technics, and warns that modern technology leans too far toward the authoritarian mode and may threaten democratic values and human freedom. In the two volumes of The Myth of the Machine, published in 1967 and 1970, he continued his critical exploration of the anti-organic nature of modern technology, epitomized by the "megamachine," and argued for the primacy of mind: it is not technology that determines the mind, but the mind that determines technology.

There is no doubt that Mumford's criticism of technological utopianism, his emphasis on the non-neutrality of technology and a constructivist view of it, his analysis of the relationship between technological centralization and decentralization, and his thinking on technology governance have left a profound intellectual heritage for contemporary philosophy of technology. Sixty years later, confronted with newly emerging artificial intelligence technologies and autonomous weapons, we seem compelled to consider these issues with even greater urgency, above all the difficult debate over the relationship between technology and ethics.

As Mumford said, "The question we must ask is not what is good for science and technology, let alone what is good for General Motors, Union Carbide, IBM, or the Pentagon, but what is good for mankind. This does not refer to mass-man controlled by machines and disciplined by systems, but to living people as individuals, who can move freely through every area of life."

From doctoral thesis to personal attack: The dilemma of liberal arts scholarship in the era of hatred and anti-intellectualism

On November 28, 2024, like many a young scholar with a new social media account, Ally Louks, who had just completed a Ph.D. in English literature at the University of Cambridge, excitedly announced the good news on her personal account. However, her thesis, titled "Olfactory Ethics: The Politics of Smell in Modern and Contemporary Prose," placed Louks in an absurd situation for the following month: a flood of comments questioning the "value" of her thesis appeared under her post and its reposts. If moderately reasonable questioning still counts as normal academic exchange, and is a challenge anyone bearing the title of "Cambridge doctor of literature" must meet, then the mass of comments targeting Louks's gender, linking her work to "woke" politics, and flatly declaring all humanities scholarship to be garbage turned her announcement into a grand showcase of the hate politics and anti-intellectualism of today's internet.

After a selfie with the thesis title ignited public opinion, Louks posted the thesis abstract in order to better explain its focus, and from it we can get a clearer picture of the thesis's purpose and arguments. The abstract states that the thesis "aims to provide an intersectional and broad-based study of olfactory oppression by uncovering the underlying logic of smell's use in creating and subverting power structures of gender, class, sexuality, race, and species," and to that end focuses primarily on prose fiction of the modern and contemporary period to trace the historical inheritance of olfactory prejudice and locate its contemporary relevance. Clearly, even in translation these formulations are highly technical and carry a strong flavor of academic jargon, which is no doubt the source of much of the criticism directed at Louks.

In the face of the negative criticism, Louks has remained firm. She clarified that the abstract of a doctoral thesis is written for experts and scholars in her field, not for lay readers, and that she would not present her research to a general audience in this way. But such statements and supplementary explanations obviously failed to silence the doubters, whose line of attack is also very clear. First, they claim Louks reveals the arrogance of the ivory-tower elite: how can she so casually separate the academic world from "ordinary people"? Is she looking down on them? Others ask why research that seems able only to entertain its own discipline needs to exist at all. One biotechnologist, criticizing Louks, went so far as to declare that "academia is dead." And major outlets such as The Economist noted, in a commentary titled "Academic writing is becoming increasingly difficult to read - especially in the humanities," that the abstract Louks posted was the trigger for the escalation of the storm.

Of course, social media is no place for rational discussion among academics, and Louks faced far more than the criticisms above: a flood of emotional venting and attacks, many with distinctly ultra-conservative and misogynistic features. One factor provoking these attackers may be the "olfactory oppression" at the center of Louks's research and the issues of gender, class, and race attached to it. On the one hand, in today's Western humanities, a thesis worth announcing ecstatically as the capstone of a doctorate must sit squarely within the prevailing cultural, political, and even ideological context; seen this way, the gender, race, and class questions Louks works so hard to grasp also reflect the current academic-political climate. On the other hand, this research orientation gets roasted on social media, partly for being "too progressive": the "anti-woke" ultra-conservatives were obviously not going to let Louks and her research off lightly. Moreover, the topic offers plenty of material that can be packaged for criticism: academic jargon, a subject seemingly remote from the public, and so on.

Soon the negative comments on Louks's doctoral thesis shifted from doubt to criticism, and then to personal attacks steeped in misogyny. Although Louks acknowledged that most comments about her research were "very friendly," the negative attention arose after her post was "retweeted by several far-right accounts," according to the BBC. Louks said in an interview: "I received a lot of rape threats, even death threats, and a lot of people imposed their views on me about what a woman should look like or what she should do, and these views had nothing to do with the subject (of the thesis)." Cambridgeshire police confirmed that they had begun investigating a "hate incident" against Louks; a spokesperson said they had received a report of a hate incident, including a threatening email sent to a woman, understood to be Louks, at 22:47 GMT on December 1, 2024. Responding to the abuse and harassment Louks has suffered, the University of Cambridge also issued a statement of support on social media, writing: "When online trolls attack Ally's doctoral thesis topic, her educational background, her achievements, and her gender, this attention is already a manifestation of harassment and misogyny."

By one way of thinking, Louks can be said to have become famous in the month or so since receiving her doctorate, and she does not reject the situation. On the one hand, as she has said, many people, including academic colleagues and readers outside the ivory tower, expressed goodwill toward her work, took an interest in her topic, and wrote to say they wanted to read the thesis. On the other hand, as noted above, Louks herself has kept her footing: her attitude toward the negative comments may carry some of the academic elite's habitual self-assurance, but it has also been dignified and courageous. Far from shunning the media over the criticism her thesis attracted, she published pieces in two outlets, trying both to explain her research in plainer language and to speak frankly about the barriers academic writing faces in reaching the public. In Louks's own eyes she is lucky to have such an opportunity to address the public, even if, to onlookers, this "luck" seems almost comical.

In one of her responses, Louks described herself as "an introverted nerd" after an "uncomfortable week in the spotlight": "But I'm very relieved to see so many people engaging with my work, and I appreciate the humor of so many commenters." In another short post, Louks identified one reason she was attacked: some comments deliberately stripped out the complexity of her research and reduced its logic, misleadingly, to "smell is a symbol of racism." In fact, even if the thesis is inevitably wrapped in a great deal of academic language, even jargon, so that it can pass doctoral review and defense at a world-leading university, it still makes very important points, including, as Louks put it: "There is ample evidence that smell has been used to justify expressions of racism, classism and sexism, and researchers have been evaluating the moral impact of smell-related ideas and stereotypes since the 1980s."

Now that the internet farce is winding down, Louks's present worry is that too many people are applying to the library to read her thesis. But the episode reads like a stew of the various pathologies of today's internet ecology: pronouncements on the uselessness of the humanities, the barriers and tensions between inside and outside the ivory tower, the tug-of-war between political correctness and anti-woke sentiment in academia, and the misogyny hidden beneath a pose of being "rational, objective, and neutral." Every "ingredient" seems to grow from self-enclosed soil, watered by arrogance, while the rationality, openness, and critical thinking that science and the humanities ought to share and foster are nowhere to be found.

Li Siyang, Zhuang Muyang
