AI Ethics

Ethical Reflections on "Whether Artificial Intelligence Should Have Legal Subject Status"


[Academic Controversy] Author: Li Ling (Associate Researcher, Institute of Marxism, Fudan University)

With the widespread application of large language models, generative artificial intelligence has shown increasing autonomy. This has not only made the prospect of strong artificial intelligence seem plausible, but has also reignited debate over the subject status of this human creation. Recently, the theory section of Guangming Daily has published several articles discussing whether artificial intelligence should become a legal subject, presenting a fairly comprehensive picture of academic opinion. Among them, two articles, "There Is No Theoretical Obstacle to Artificial Intelligence Becoming a Legal Subject" and "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence," argue from the perspectives of philosophy and ethics that granting artificial intelligence legal subject status does not contradict the philosophical understanding of the elements of human subjectivity, does not belittle the status of the human subject, and does not damage the human-centered system of subjects. In the author's view, however, these arguments fail to grasp the ontological essence of why a subject is a subject and why personality is personality. Even from the standpoint of baseline humanism, human beings, as the masters of all things, possess a personal dignity and subject status different from that of everything else. Granting artificial intelligence subject status not only damages human dignity and the human subject position, but also does nothing to facilitate the actual attribution of responsibility.

[Photo caption] The theory section of Guangming Daily has recently published a series of debate articles on "Whether artificial intelligence should have legal subject status."

The presence of human beings is the fundamental condition for constituting a "subject"; artificial intelligence can only ever be an object

The prerequisite for discussing whether artificial intelligence should have legal subject status is whether artificial intelligence can constitute a subject at all, that is, whether it possesses subjectivity. The subject is a philosophical concept with a specific referent. If artificial intelligence cannot be shown philosophically to be a subject, it will be difficult to qualify it as one within legal relations.

However, even generative artificial intelligence, which has strong autonomy and independence and displays a certain semblance of emotional consciousness, is far from possessing subject status. Although "subject" and "subjectivity" carry different connotations for different philosophers (Aristotle regarded the subject as the substratum, Descartes regarded the subject as a thinker with self-consciousness, and Kant defined the subject as a rational being), the subject is never divorced from actual people, singular or plural. Marx pointed out directly: the subject is man, the object is nature, and "man is always the subject." It follows that only human beings, who purposively and consciously understand and transform the world, constitute subjects. Subjects derived from human beings may be individuals, groups, organizations, or even society as a whole, but there must always be concrete, practicing human beings present.

The most essential determination of the human being as subject is subjectivity, and the most important content of subjectivity is human creativity and practical activity, that is, subjective initiative or self-consciousness. This is the most fundamental feature of human beings as subjects. So far, although artificial intelligence, including generative artificial intelligence, has shown increasingly powerful learning ability and a certain capacity for independent behavior, the problems it solves remain computational solutions within closed scenarios; it cannot set goals or make plans with respect to the external environment, nor give autonomous, active feedback. It is far from having "evolved" self-awareness or initiative. Therefore, artificial intelligence does not possess the subjectivity unique to human beings and cannot constitute a subject.

Furthermore, artificial intelligence cannot constitute a legal subject, not even a limited one. "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence" proposes that the historical evolution of civil subjects from "persons must be human" to "persons may be non-human" reflects the depersonalization and de-ethicization of the civil subject. But the foundation of any fictional subject can be traced back to the existence or presence of human beings. Far from contradicting the thesis that only human beings are subjects, this strengthens it. On the one hand, the construction of legal persons such as companies and associations can be regarded as a collection of plural human beings; the core element of the legal person as legal subject is still the human beings who enjoy rights, bear obligations, and assume responsibilities. On the other hand, the philosophical foundation on which non-human organizations are constituted as legal subjects does not advocate a strong anthropocentrism, but insists only on a baseline humanism, that is, the existence or presence of human beings. Granting legal subject status to something fully automated, separate from or independent of human beings, fundamentally deviates from this philosophical purpose.

Artificial intelligence is essentially a tool serving human beings, and the concept of "personality" fundamentally excludes instrumental value

Compared with arguments and rebuttals at the level of the subject, the debate over whether artificial intelligence has legal subject status is concentrated more at the level of personality theory. Scholars holding the affirmative position construct new personality types for artificial intelligence mainly by advancing theories such as legal personality expansion, electronic personality, instrumental personality, and limited personality, in order to establish its legal subject status. However, like "subject," "personality" is a concept with a special connotation and value. Artificial intelligence does not enjoy personal dignity, and granting it a corresponding personality may instead threaten the protection and realization of human dignity.

The concept of personality and its dignity is a modern product of the elevation of human nature and the pursuit of civilization and progress since the Enlightenment. It marks, in terms of transcendence, abstraction, and universality, the uniqueness that distinguishes human beings from animals and other things. As Kant said, beings whose existence rests not on our will but on nature have, if they are non-rational beings, only relative value as means, and are therefore called things; rational beings, by contrast, are called persons, for man, and in general every rational being, exists as an end in itself, and its existence has absolute value. The concept of personality and dignity demonstrates the intrinsic and absolute value of human beings as ends in themselves rather than as means or tools for other ends. It has therefore become not only the most important source of value in human society, but also an essential foundation of human rights and a legislative basis for the Charter of the United Nations and the constitutions of countries around the world.

However, artificial intelligence, as a human creation, not only does not enjoy personality as an end in itself with intrinsic value, but has even begun, through misuse and improper use, to threaten or undermine human dignity. On the one hand, artificial intelligence is a tool invented and created by humans to expand human freedom and improve human capability and efficiency. Throughout its life cycle, from creation to operation to decommissioning, it serves people. It therefore has only relative, instrumental value from beginning to end; it cannot have absolute value as human beings do, and it does not enjoy personal dignity. Even if strong artificial intelligence with self-awareness were to appear in the future, it could not shed this positioning as a tool. On the other hand, the uncontrolled development of artificial intelligence, through large-scale collection and computation of data on human bodies, identities, and behavior, has produced moral aberrations such as privacy violations, mental manipulation, induced consumption, and fraud and deception, and has to some extent threatened human subject status and personal dignity.

Since "personality" fundamentally excludes instrumental value, propositions such as the "limited instrumental personality" advanced in "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence" are problematic. The word-building approach that binds "personality" to terms like "instrumental" and "limited" is neither rigorous nor beneficial; it is an over-extended imagining produced by posthumanism through literary devices such as analogy and metaphor. In essence it grants artificial intelligence only economic property rights, which are far from genuine personality rights. Personal dignity marks the uniqueness of human beings. To grant artificial intelligence personal dignity, turning non-human entities into existences as important as human beings, neither helps protect human rights nor helps artificial intelligence develop toward the good. The ultimate result would be the gradual erosion of human uniqueness and of the personal dignity founded upon it.

Becoming a legal subject does not help resolve the attribution dilemmas of artificial intelligence, but instead creates a more complex situation

Another argument for granting artificial intelligence subject status appeals to practical needs: with the large-scale application of artificial intelligence and its growing intelligence, the existing legal framework has run into real dilemmas in which no legal subject can be found, liability cannot be attributed, or no one can be held accountable. For example, in contract law, it is now common for intelligent agents to conclude contracts on a person's behalf, but the law remains unclear about whose "intention" is expressed by a sales contract concluded by an intelligent program. In tort law, if an autonomous vehicle causes injury or other harm in an accident, attributing liability becomes difficult: whether the programmer, the car manufacturer, the user, or the victim should bear responsibility is hard for the current legal system to judge effectively. And generative artificial intelligence raises significant intellectual property problems, yet granting robots intellectual property rights fundamentally violates the legislative purpose of protecting innovation.

Accordingly, some scholars have proposed, from the perspective of practical need, that it is urgent to grant artificial intelligence subject status or legal personality and to clarify and establish a mechanism for sharing responsibility. "There Is No Theoretical Obstacle to Artificial Intelligence Becoming a Legal Subject" proposes that "the capital system of the corporate legal person can be used for reference, and the liability property of artificial intelligence can be secured by setting up compulsory liability insurance at the factory stage." "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence" likewise proposes "uniformly opening corresponding trust accounts for artificial intelligence, purchasing insurance, and so on," so that artificial intelligence can participate in civil legal relations as the bearer of special rights and obligations and resolve the attribution dilemmas that practice demands be resolved. However, these property arrangements need not be elevated to the height of granting artificial intelligence subject status or personality rights; it suffices to appropriately supplement and adjust the property systems of the natural persons or legal persons associated with the artificial intelligence.

Granting artificial intelligence the status of legal subject not only fails to resolve the attribution dilemmas it causes, but introduces a new and unnecessary "legal subject" that makes attribution still more complex. It is true that artificial intelligence technology is no longer as simple as a tool of the agricultural age or a machine of the industrial age. It is a complex system deeply coupled with human beings, like the "megamachine" described by the American scholar Lewis Mumford or the "enframing" (Gestell) described by Heidegger. People act in different capacities at different links in the chain and jointly bring about the functioning of the technology, producing a situation of "distributed responsibility" among multiple responsible parties and complex interacting behaviors. But distributed responsibility merely lengthens the causal chain between actions and increases the difficulty of attribution; it does not make responsibility disappear or transfer it elsewhere. As the creators and users of artificial intelligence, human beings have the responsibility to sort out how responsibility is distributed across the links and mechanisms of these complex systems and to attribute it clearly. Even when opaque or generative artificial intelligence behaves in unexpected ways because of the "algorithmic black box," responsibility can still be attributed under principles of shared responsibility or strict liability. In any case, artificial intelligence is created by people for particular purposes, and people must take responsibility for the overall behavior of what they create or use, rather than transferring that responsibility to a non-human existence that lacks subject status and the qualifications of personality. Otherwise, if artificial intelligence were to bear partial or full responsibility in place of humans, the inevitable result would be still more complex situations of mutual buck-passing and evasion, and even the disappearance of responsibility because no one bears it.

Strictly speaking, the various autonomous or intelligent behaviors of artificial intelligence are still only probabilistic choices based on prior human experience and data, and are themselves extensions and projections of human will and values. We must therefore attribute responsibility clearly to the specific people, singular or plural, who create or use the artifact, so that more people will take responsibility for these complex, hard-to-control collective behaviors and use artificial intelligence more cautiously and rationally.

"Guangming Daily" (14th edition, September 9, 2024)
