Ethical Dilemma Under The Technological Singularity: Should Artificial Intelligence Be Given Moral Status?
On June 4, 2025, the Brookings Institution published a commentary by Mark MacCarthy, "Do AI systems have moral status?", which systematically discusses whether artificial intelligence should be granted moral status, consciousness, and even rights as the technology develops rapidly. The article surveys current progress in AI's capacity to "think," revisits the debates around classic thought experiments such as the Turing test and Searle's "Chinese Room," and lists the conditions that would have to be met to judge moral status. The author argues that although the long-term question of AI welfare cannot be ignored, given the more urgent practical harms facing the field, such as bias, national security, copyright, and misinformation, it is too early to devote substantial resources to speculative discussion of the moral status of artificial intelligence. Qiyuan Insight has compiled the core content for readers' reference.
Do artificial intelligence systems have moral status?
In 2024, Anthropic appointed Kyle Fish as its first artificial intelligence welfare researcher to study "ethical issues regarding the awareness and rights of artificial intelligence systems." Five months later, the company announced a "research program designed to investigate and prepare for model welfare." New York Times columnist Kevin Roose expressed concern about the development.
Critics such as cognitive scientist Gary Marcus see the move as artificial intelligence hype. As early as 1984, computer scientist Edsger Dijkstra remarked that the question of whether machines can think is "about as relevant as the question of whether submarines can swim." The author believes that, in the face of urgent issues in artificial intelligence such as bias and national security, it is too early to invest resources in these speculative questions.
Nevertheless, policymakers and AI companies do need to assess the likelihood that AI models have moral status. "Humans are poor at extending sympathy to beings that do not look and behave exactly like us, especially when there is profit in withholding it. As artificial intelligence systems become increasingly integrated into society, it will be very convenient to treat them as mere tools."
Historically, slavery persisted as an entrenched institution for thousands of years, and robot slavery could bring it back in a new form if policymakers and companies ignore moral claims.
In the dystopian novel Never Let Me Go, Kazuo Ishiguro describes scientists creating clones to serve as organ donors before the question of the clones' moral personhood has been settled. Even after people come to believe the clones deserve equal moral treatment, organ donation has become so deeply rooted in society as a way to cure disease and extend life that the government still allows their organs to be harvested.
Computer scientist Jerry Kaplan suggests that AI models may face a similar situation: "The most likely scenario is that after a large language model (LLM) patiently explains why it believes it has sentience, humans will simply continue to use it as a tool for their own benefit, without so much as a frown."
We should be wary of this attitude. Before the instrumental use of robot slaves becomes an entrenched feature of society, it is crucial to address the moral status of artificial intelligence models promptly.
Can computers think?
Full moral status seems to require the capacity to think and to have conscious experience, which raises the question of artificial general intelligence (AGI). When an AI model can perform a wide range of cognitive tasks, it demonstrates general intelligence. Legal scholars Jeremy Baum and John Villasenor point out that general intelligence "exists on a continuum." At some point, an AI model displays broad enough general cognitive ability that it might be considered to be thinking.
In his 1950 article, Alan Turing argued that an artificial intelligence model should be considered to be thinking when it can pass a test of linguistic behavior (the Turing test). The test requires a judge to hold a five-minute conversation with the program; if the program convinces the judge that it is human, it passes.
In March 2025, cognitive scientists Cameron R. Jones and Benjamin K. Bergen reported that LLMs had passed this test: GPT-4.5 led testers to judge it human 73% of the time. Critics counter that this is only five minutes of chat, not a genuinely sustained conversation.
The Turing test is not a necessary condition for confirming machine intelligence. But if an AI model can hold a sustained conversation on everyday topics and display intelligence and understanding, what more could prove its consciousness and capacity to think?
Turing himself considered objections to his test. One objection holds that a machine must "create because of thoughts and emotions felt"; Turing responded that abandoning behavioral tests leads to solipsism, and that we adopt a "polite convention" of regarding all humans who appear to communicate as thinking, so the same should hold for machines. Another objection is that thinking requires a soul and that God has not given machines souls. Turing responded that if a sufficiently advanced machine were created, God could see fit to grant it a soul.
John Searle's response
In 1980, philosopher John Searle used the "Chinese Room" thought experiment to argue that computer conversation is mere imitation and involves no real thinking or understanding. Even if a computer system can respond in a way indistinguishable from a human, that does not prove it is conscious.
Searle's imagined system consists of a person who knows only English, using an English rulebook to process the Chinese symbols passed in and to output Chinese symbols in reply. From the outside, the system appears to understand Chinese, yet the person in the room understands no Chinese, and everything else in the room is an inanimate object. Searle's example captures the idea that computers merely mimic human thought; his "biological naturalism" holds that brains produce minds while computers do not. Thought rests on life, and artificial intelligence models are not living beings.
But this biological argument seems arbitrary: if neurons can produce minds, why can't silicon chips? Philosopher David Chalmers calls this the "hard problem" of consciousness, holding that consciousness is not fully explained by its physical or biological material.
The mind-body problem is a longstanding one, whether the body is made of silicon or carbon. These puzzles do not imply that AI models cannot, in principle, be conscious.
What are the conditions for moral status?
Suppose computers do have conscious experience; is that enough to give them moral status? Philosopher Nick Bostrom argues that sentience is necessary but not sufficient: insects have experiences, yet their moral status is low. AI models would also need to demonstrate "sapience," the capacities associated with higher intelligence, such as self-awareness and the ability to act as a reason-responsive agent.
Law professor F. Patrick Hubbard adds the "ability to live in a community based on the common interests of others" as a criterion of moral status; philosopher Seth Lazar adds rational autonomy as a condition of moral personhood, meaning that AI models would need the "ability to determine goals and commit to achieving them" and "a sense of justice and the capacity to resist unfair norms."
This yields a checklist for moral status:
1. General intelligence: the ability to engage in a broad range of cognitive tasks;
2. Consciousness: the capacity for awareness and experience;
3. Reasoning: the ability to connect premises to conclusions and draw inferences;
4. Self-awareness: awareness of oneself as a distinct being with a history and identity;
5. Agency: the ability to set goals and act on them;
6. Social relationships: the ability to interact with other conscious entities in a community.
Do they reason?
A standard puzzle for testing reasoning ability is: "Julia has two sisters and one brother. How many sisters does her brother Martin have?" Today's reasoning models can give the correct answer (three) and show the logical steps, as sketched below.
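For reference, the puzzle can be worked out mechanically. A minimal sketch in Python (purely illustrative; the sibling names other than Julia and Martin are hypothetical):

```python
# Julia has two sisters and one brother (Martin); all siblings share the same parents.
julia = "Julia"
julias_sisters = ["Sister A", "Sister B"]  # hypothetical placeholder names
brother = "Martin"

# Martin's sisters are all the female siblings: Julia plus her two sisters.
martins_sisters = [julia] + julias_sisters
print(f"{brother} has {len(martins_sisters)} sisters: {', '.join(martins_sisters)}")
# Output: Martin has 3 sisters: Julia, Sister A, Sister B
```

The step a genuine reasoner must take is recognizing that Julia herself counts among her brother's sisters; a model that merely pattern-matches on "two sisters" tends to answer two.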
This looks like reasoning. But is the model merely pretending? Models generate sequences that resemble chains of thought because they have learned to. Philosopher Shannon Vallor argues that this is "meta-imitation": mimicking the chains of reasoning found in the training data rather than actually solving the problem.
The key question is whether the system genuinely understands logical connectives. Does it feel the pull of an inference? Does it grasp that the conclusion must follow from the premises? The only way to find out is to probe how general this "reasoning" ability is: if it is genuine reasoning, it generalizes; if it is pure imitation, it will fail on problems outside the training data.
Can they show real agency?
Today's AI models show no true initiative: they do only what they are told, and without human input they sit idle. Apart from the goals given to them by humans, they have no goals of their own.
In Kazuo Ishiguro's novel Klara and the Sun, Klara, a conscious caregiving robot, does nothing after being discarded but sort through her memories in a junkyard.
Humans have not given artificial intelligence free will or true autonomy. AI models cannot set their own goals; they can act only on goals given from outside. Moore of Carnegie Mellon University has expressed doubt about the prospect of self-directed machines, calling the idea "real science fiction."
Even when an artificial intelligence system carries out a task on its own after receiving instructions, and sometimes behaves in ways that exceed its original settings, this does not establish its autonomy. Nick Bostrom points out that when a chess computer makes a move its programmers did not anticipate, this is not because it has goals of its own but because it was programmed to win. It may produce moves no human expected, but it does not independently develop the goal of winning at chess; that goal is given by its developers.
Unexpected behavior may arise from hacking or from errors in programming or task specification, and AI systems may adopt sub-goals that differ from what their designers intended in pursuit of their final goals. Because of the alignment problem, humans cannot specify tasks and goals precisely enough to guarantee that AI acts as they wish. But losing control of an artificial intelligence system does not demonstrate its autonomy; it only shows that humans have not yet learned how to give it clear instructions.
Immanuel Kant regarded rational autonomy as the key to human dignity: the ability to choose one's ends, take rational steps toward them, and act according to self-given laws is the basic constituent of moral personhood. If artificial intelligence models lack this, it remains unclear whether they can have full moral status.
Moral personhood is a continuum. AI models may fall somewhere among animals and below humans, which poses a challenge for ethics, law, and policy.
Should humans take action now?
Philosopher Robert Long argues that "the building blocks of conscious experience may emerge naturally," because artificial intelligence systems are developing "perception, cognition and self-modeling." He holds that even without consciousness there may be agency, because AI models can "set and modify high-level goals, make long-term plans," and so on. His conclusion is that "some AI systems will be worthy of moral consideration in the near future," so "we need to start preparing now." He suggests that researchers evaluate the agency and awareness of AI models by examining their internal computations.
This is a judgment call, but the author finds it unpersuasive. Policymakers and companies may face these questions sooner or later, and as Ishiguro's dystopian scenario suggests, waiting too long could be dangerous. But current behavioral evidence shows that AI models have not advanced far along the personhood continuum, and models with real agency still seem far off. Even if we agree that a basic goal of AI research should be to avoid cruelty toward AI models, it is not necessary to invest now in an "AI welfare" field devoted to studying whether AI is conscious or an independent agent. Other problems are more deserving of humanity's scarce resources.