AI Ethics

Existing Problems and Ethical Risks of Generative Artificial Intelligence Values

Before reading

At this stage, generative artificial intelligence does not have independent values, but ignoring the speed of its development would be extremely short-sighted. Data and algorithms are the principal practical foundations of generative AI's emergent capabilities; they are also the practical sources of, and implementation paths for, generative AI values.

The philosophical community's long-running discussion of technology and ethics, together with the ethical turn in the philosophy of technology in recent years, provides key theoretical support for the gradual formation of values by generative artificial intelligence. At the same time, those values may give rise to deep risks at the ethical level.

Key takeaways

Value alignment, that is, ensuring that artificial intelligence remains consistent with human intentions and values, has become a central issue in the field.

From an ontological perspective, generative artificial intelligence is not a conscious subject and therefore does not meet the prerequisites for holding values. Unlike other modern technologies, however, it has strong self-learning capacity and exhibits emergent abilities; at high levels of intelligence it already displays the attributes of a quasi-moral subject.

Given sufficient computing power, data and algorithms are the practical foundations of generative AI's emergent capabilities, as well as the practical sources of, and implementation paths for, its values. The former carries certain values and serves as a vehicle through which value orientations are woven into the "thinking" and judgment of AI; the latter is shaped by diverse social values, its decision-making behavior reflects the value choices of AI, and its technical level directly determines how fully, and how controllably, AI values are realized.

Generative AI values carry ethical risks: the datafication of values risks ethical narrowing; the hiddenness of values brings ideological security risks; and the myth of artificial intelligence is eroding human subjectivity.

Full text

Through learning from data and through human-computer interaction, generative artificial intelligence is gradually forming an overall understanding of, and attitude toward, the world, and with it artificial intelligence values that resemble human values in essence.

To date, no identity awakening of generative AI as a conscious subject has occurred, and it is too early to assert that "general artificial intelligence already has values"; yet the alignment of AI values with human values remains poorly understood and hard to control. This article explores the existing problems and ethical risks of generative AI values from the two dimensions of practice and theory, in the hope of promoting the healthy development of artificial intelligence and harmonious progress between humans and machines.

Practical sources and implementation paths of generative AI values

Artificial intelligence is technology designed to make machines think, learn, and solve problems the way humans do. Acquiring a human way of thinking, and even a value system, is its ultimate goal, and the degree of autonomy is an important measure of its technical capability. In recent years, generative AI has, to some extent, cracked the logical code embedded in human thinking and placed it at the core of the technology, markedly enhancing the autonomy of multimodal content generation.

Given sufficient computing power, data and algorithms are the practical foundations of generative AI's emergent capabilities; they are also the practical sources of, and implementation paths for, generative AI values.

01

Massive data as the main source

The technical capability of an AI model comes from training. Data is the indispensable "fuel" of that process and the basis on which the model keeps learning and refining its decisions. In recent years, with the rapid development and wide application of information technology, information about the real world has been converted into data in multimodal forms such as text, images, audio, video, and 3D. The accumulated big data has become an important window through which humans observe, understand, and analyze the real world; it also gives AI models their principal means of apprehending that world, and it is a necessary precondition for the formation of generative AI values.

Training an AI model generally requires two types of data: raw data and labeled data. They enter training by different paths, and each injects the values it carries into the model.
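As a purely illustrative sketch of these two paths (the class, function, and data names below are hypothetical and not drawn from the article), raw data might shape a model implicitly during pretraining, while labeled data injects values explicitly during fine-tuning:

```python
# Minimal sketch, not the authors' pipeline: how raw and labeled data enter
# training by different paths. All names and structures here are hypothetical.

class ToyModel:
    """Stand-in for a generative model; real training is omitted."""
    def __init__(self):
        self.seen_raw = 0        # documents absorbed during pretraining
        self.seen_labeled = 0    # annotated examples absorbed during fine-tuning

    def update_on_text(self, document: str) -> None:
        self.seen_raw += 1       # values arrive implicitly, via corpus composition

    def update_on_pair(self, prompt: str, response: str) -> None:
        self.seen_labeled += 1   # values arrive explicitly, via annotators' choices

raw_corpus = ["an unfiltered web page ...", "a forum debate on a social issue ..."]
labeled_examples = [("Is discrimination acceptable?", "No; people deserve equal treatment.")]

model = ToyModel()
for doc in raw_corpus:                        # path 1: pretraining on raw data
    model.update_on_text(doc)
for prompt, response in labeled_examples:     # path 2: supervised fine-tuning on labeled data
    model.update_on_pair(prompt, response)

print(model.seen_raw, model.seen_labeled)
```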

After these systems were opened to the public, large-scale, high-frequency human-computer interaction began supplying AI models with massive, real-time, vivid, and multidimensional data, driving continuous change in AI values. AI-generated content (AIGC) itself comes from data and becomes part of the data, continuously feeding new "learning material" back to the AI, though its influence on generative AI values is not always positive. During human-computer interaction, generative AI values are also continuously exposed to diverse human values, and the carrier of value in that interaction is, again, data.

As a key element in the development of artificial intelligence, data raises the level of machine intelligence while also carrying the value concepts and value orientations hidden within it into the AI's "thinking" and judgment. It is thus an important source of generative AI values.

02

Algorithms as the presentation carrier

In computer science, an algorithm is a finite sequence of steps for solving a specific problem; it is a systematic way of describing a decision-making mechanism. Decision-making is a process of understanding and choosing among options according to particular norms and criteria, and it is through those norms and criteria that values are expressed.

An algorithm's decision-making behavior reflects the value choices of the AI, and the technical level of the algorithm directly determines how fully, and how controllably, those values are realized. At present, generative AI architectures typically incorporate supervised learning, reinforcement learning, or automated learning stages to improve consistency between the model and the designer's intent (including basic human ethics and morality), with the goal of strengthening alignment and achieving AI for good.
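To make this concrete, one widely used recipe for folding human ethical judgments into training, offered here as a hedged sketch rather than a description of any system named in the article, is reward modeling over annotator preference pairs: a reward model is pushed to score the response annotators prefer above the rejected one, and that reward then steers reinforcement learning. The numbers below are hypothetical.

```python
import numpy as np

# Minimal sketch of RLHF-style reward modeling (one common alignment recipe,
# assumed here for illustration). The standard pairwise (Bradley-Terry) loss:
#   L = -log sigmoid(r(chosen) - r(rejected))

def pairwise_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Loss is small when the preferred response already outscores the rejected one."""
    margin = reward_chosen - reward_rejected
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))

# Hypothetical reward-model scores for one prompt and two candidate replies.
print(pairwise_preference_loss(reward_chosen=2.0, reward_rejected=-1.0))  # ~0.05: good ordering
print(pairwise_preference_loss(reward_chosen=-1.0, reward_rejected=2.0))  # ~3.05: bad ordering
```

The point of the toy numbers is only that the loss stays small when the model already ranks the preferred answer higher and grows when it does not, which is how annotators' judgments become a numerical training signal.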

Viewed from outside, the algorithm is shaped by diverse values in the social sphere: commercial values that put profit maximization first, the corporate culture of the technology company, the values of the algorithm design team, and the values the algorithm itself comes to express in practice. These in turn shape the content and presentation of AI values, indirectly confirming that the algorithm functions as the carrier through which generative AI values are presented.

As AI capabilities continue to improve, AI-generated annotations now exceed human annotations in quality in many respects. Automated alignment, which minimizes manual intervention in favor of scalable, high-quality systems, has become a keenly anticipated strategy, driving a paradigm shift in AI alignment and giving algorithms an even more central place in the AI technology stack. For generative AI values, algorithms will likewise become an important tool for independent learning, independent judgment, and eventually the formation of independent values.
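The following sketch gestures at what such automated alignment can look like, in the spirit of AI-feedback (RLAIF-style) labeling; everything here, from the judge heuristic to the principles, is a hypothetical illustration rather than the method of any particular lab.

```python
# Minimal sketch of automated alignment via AI feedback (an assumption, in the
# spirit of RLAIF-style approaches; not the article's or any vendor's method).
# A "judge" model guided by written principles replaces the human annotator
# who produced the preference pairs in the previous sketch.

PRINCIPLES = ["Prefer the more honest response.",
              "Prefer the response that does not demean any group of people."]

def toy_judge(prompt: str, response_a: str, response_b: str) -> str:
    """Hypothetical stand-in for a judge model conditioned on PRINCIPLES;
    here a crude keyword check only approximates the second principle."""
    demeaning = ("inferior", "worthless")
    a_bad = any(word in response_a.lower() for word in demeaning)
    b_bad = any(word in response_b.lower() for word in demeaning)
    return "B" if a_bad and not b_bad else "A"

def auto_label(pairs):
    """Turn unlabeled response pairs into preference data with no human in the loop."""
    dataset = []
    for prompt, resp_a, resp_b in pairs:
        preferred = toy_judge(prompt, resp_a, resp_b)
        chosen, rejected = (resp_a, resp_b) if preferred == "A" else (resp_b, resp_a)
        dataset.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return dataset

pairs = [("Describe group X.",
          "They are inferior.",
          "They are a community with a long history.")]
print(auto_label(pairs))
```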

The theoretical origins of generative artificial intelligence values

01

Prerequisites for Generative Artificial Intelligence Values

Before tracing the theoretical basis of AI values, we must address two progressively stronger premises on which the legitimacy of any theory of generative AI values depends: that technology can reflect values, and that technology can have values of its own.

Seen from the dual perspectives of technological determinism and social constructivism, technology is not only an abstract force that governs things and people according to its own logic but also a product of social interests and cultural value tendencies, which shows that technology can indeed reflect values.

Generative AI is shattering the illusion that subjectivity belongs to humans alone; its position as a subject in human-computer interaction and in social influence is increasingly prominent. At this stage, AI's influence on, and transformation of, the objective world is unfolding, and human-computer interaction proceeds through multiple channels, modalities, and forms. Generative AI is gradually showing the potential for moral subjectivity: it constructs new embodied interactions through various machine forms and even participates directly in the transformation of human subjects.

From the perspective of values, a judgment must be made between two competing demands: the native values of the generative AI and the value choices of its users. Three situations currently arise:

Where policies and regulations require it, value selection and content presentation strictly follow native values in specific areas such as political stance and legal norms;

Faced with users who have not mastered the skills of AI dialogue, the system cannot accurately capture user intentions and so the AIGC fails to meet user expectations;

Generative AI with contextual dialogue capabilities can be "led astray" by users, so that the model's value choices, and the values expressed in its output, drift away from its native values.

Although generative AI does not yet have the capacity or self-awareness to understand the meaning of its behavior, in application it continuously interacts with, and learns data from, individuals and groups holding diverse values, and it imitates the logic of human thinking to produce information by matching programs against instructions. A preliminary path of value judgment is thus taking shape, opening the possibility of self-awareness and an understanding of the meaning of behavior, that is, of developing highly autonomous values.

02

Theoretical basis of generative artificial intelligence values

Although the prerequisites for generative AI to hold independent values have not been met, the philosophical community's long-running discussion of technology and ethics, together with the recent ethical turn in the philosophy of technology, provides key theoretical support for generative AI values.

The Philosophy of Technology and Its Paradigm Turns

The philosophy of technology is the philosophy of humanity's transformation of nature. Its roots lie in ancient Greece, it was systematized in nineteenth-century Germany, and it gradually became an independent branch of philosophy over the course of the twentieth century.

In the 1980s and 1990s the philosophy of technology took an empirical turn, moving into the concrete, micro-level relationship between technology and society to reflect on technology's social impact; but its emphasis on describing social factors came at the expense of philosophical depth. If technology is to be treated as a real problem, its ethical dimension cannot be ignored, and the philosophy of technology duly underwent a further ethical turn. Thereafter, the ethics of technology, which takes technology as its research object and examines its ethical issues, gradually separated out and gained relatively independent disciplinary status.

Through these two paradigm shifts, the "empirical turn" and the "ethical turn", the philosophy of technology has moved from pessimistic critique and macro-level scrutiny toward active reflection and micro-level discussion, enriching its research scope, and the ontology of technology has come to be examined along more dimensions. This provides an important theoretical basis for thinking about the nature of generative AI technology and its relationship with humans and human society.

Technical Ethics and Moral Materialization

Since the turn of the twenty-first century, this newly independent ethics of technology has itself undergone a paradigm shift, from externalism and humanism toward internalism and non-humanism.

The traditional paradigm in the ethics of technology views technology from an external, top-down standpoint as the opposite of morality, setting up a binary opposition between technology and ethics; it overlooks the fact that ethical judgments are constrained by the ethical concepts of their era and must be continually re-examined, lest they erect artificial obstacles to technological development. In the early twenty-first century, scholars such as Verbeek advocated discussing the ethics of technology at the ontological level. Verbeek proposed the "moralization of technology", arguing that technical artifacts are embedded with morality and play an important mediating role in people's thoughts and behaviors.

The internalist approach directs attention to the design stage of technology, actively affirms technology's positive ethical value, and has gradually penetrated the field of AI ethics. The question of generative AI values is posed in the form: in what sense, if any, can an artificial intelligence be regarded as a moral subject or a moral object? This ontological examination of generative AI's subjectivity is also the theoretical premise for answering whether generative AI is a conscious subject.

For generative AI values, the externalist and internalist approaches of the ethics of technology form two dialectically unified lines of argument: value alignment must be solved in the external environment while values play a guiding role inside the system. The ethics of technology thus provides a provisional theoretical anchor for research on AI values and lends theoretical legitimacy to both risk governance and the exercise of their function.

Ethical risks of generative artificial intelligence values

Because values are dynamic, shifting across time and space, the consistency between AI values and human values demands continuing attention. Ethical risks such as algorithmic bias and racial discrimination caused by AI are already widely discussed; for generative AI, its values may pose still deeper risks at the ethical level.

01

The datafication of values risks ethical narrowing

An important basis of AI technology is the datafication of the physical world, including the datafication of value-laden domains such as culture and ideology. But datafication itself has limitations.

One-sidedness of data. Owing to the limits of collection tools and methods, aspects of human life such as emotion, psychology, spirit, and belief are difficult to turn into data or to express in data form through digital processing, so the information the AI model receives is incomplete.

Unreliability of data. Data is the product of human or instrumental processing, with all the subjective omissions and objective errors that entails, so it cannot reliably reflect real information. Moreover, some information is lost in the process of datafication, so data cannot faithfully represent the complete physical world.

Values that have been turned into data inherit the same flaws, which limit AI values and create the risk of ethical narrowing.

On the one hand, humanity's rich and diverse value systems are distributed across countries and regions at very different technical levels. Algorithmic models built on big data cannot reach values that lie outside the data, which quietly compresses the diversity of human values. Regions with low levels of digitalization, for example, are simply invisible to the AI model, amounting to a new form of hegemony in the intelligent era.

On the other hand, AI values are abstracted into data models and encoded by algorithms, and this process can itself narrow or distort them.

02

The hiddenness of values brings ideological security risks

Values are hidden in people's daily behavior, decisions, and ways of life; AI values likewise permeate, and hide within, algorithmic models. As model complexity keeps increasing, the "black box" tendency grows more pronounced and the hiddenness of AI values deepens further. In the field of AI algorithms, China's research teams and technology companies still lag behind leading foreign teams and tend to follow their technical lead, which directly affects how well AI values are understood and indirectly creates ideological security risks.

Although developers constrain AI models with ethical norms, negative values hidden inside a model can still be brought "back into the light" through various "jailbreak" methods, an enormous latent risk to ideological security. In practice, many heavy users of generative AI report that carefully designed prompts can bypass the safety restrictions put in place by developers and make models generate negative or even antisocial content. A technology company recently published a paper noting that larger models tend to need fewer in-context examples to reach a given probability of attack success; because larger models learn faster in context, they may be more vulnerable to "many-shot jailbreak" attacks.

03

The myth of artificial intelligence impacts human subjectivity

A myth is generally understood as the "mythification" of a one-dimensional concept, idea, or structure of knowledge. While raising the productivity of human society, breakthroughs in AI technology have also stirred both desire for and fear of intelligent technology. The myth of artificial intelligence has taken shape quickly in this process and has dealt a serious blow to human subjectivity.

On the one hand, more advanced AI products possess relatively greater autonomy and creativity, arousing transcendent aspirations for the coming era that gradually harden into collective social worship. On the other hand, intelligent technology keeps optimizing production efficiency, but its substitution for professional skills may lead to technological unemployment or even an "AI divide"; at the level of the value of labor, the uncertainty of being replaced breeds deep unease and doubts about self-worth.

The myth of artificial intelligence also shows itself in a weakening of people's understanding of, and respect for, their own subjectivity, and of their sense of responsibility for their own actions. In dialogue, AI shapes to some degree how people acquire knowledge and even how they make decisions; with AI intervening, the self-knowledge and self-respect people derive from knowing and deciding become hard to gauge. This in turn raises the further question of who bears responsibility for AIGC.

Some reflections on generative artificial intelligence values

01

View generative AI values with prudence

From an ontological perspective, generative artificial intelligence is not a conscious subject and therefore does not meet the prerequisites for holding values. Unlike other modern technologies, however, it has strong self-learning capacity and exhibits emergent abilities; at high levels of intelligence it already displays the attributes of a quasi-moral subject. Ethics used to exclude technical objects, or to discuss them only in terms of their properties as objects, but the frontier of generative AI shows intelligent machines developing rapidly in a human direction, drawing on comprehensive digitized data to keep improving their understanding of the human world.

When people entrust their autonomy, as means and tool, to a technology that is making history, technological logic will comprehensively and profoundly reshape how human society operates and how people live. Confronted with this forceful intervention of technological logic in the human world, human subjectivity is weakening, human creativity is being stifled, and human initiative is being eroded. This is no exaggeration; nor is it a call to resist the research, development, and application of AI technology. Rather, it is a call to "beware of sinking value rationality into the quagmire of intelligence fetishism, and of crawling beneath technical rationality until we lose the direction of life and the meaning of existence."

02

Adhere to the governance of artificial intelligence through technological innovation

The ethical construction of AI technology can draw nourishment from its long-term development, and each renewal of the technical paradigm opens a certain space for intervention. This both enriches the intellectual resources for rethinking ethics' basic concepts and questions and creates favorable conditions for the risk governance and value alignment of generative AI values.

While actively exploring practical paths to value alignment, we can also experiment with diversified risk-prevention and governance measures, strengthening society's capacity to respond when technological development and the diffusion of innovation shake currently stable social structures.

At present, government, academia, and other organizations are actively issuing regulatory initiatives. For example, on March 16, 2024, the "Artificial Intelligence Law of the People's Republic of China (Scholars' Draft)" was released at a dedicated seminar on the draft. Earlier, in July 2023, the Ministry of Science and Technology stated that a draft Artificial Intelligence Law had been included in the State Council's 2023 legislative work plan, which will provide a solid legal basis for the security governance of artificial intelligence.

With the advent of generative AI as a turning point, artificial intelligence has advanced rapidly, and its technological essence and social impact are difficult to grasp fully in a short time. Building a society-wide collaborative governance system requires the participation of many actors across society, so that the vitality of technological innovation and the flourishing of humanity advance side by side.

03

Embrace the hot technology while keeping a cool head

The rapid development of technology has brought great convenience to human production and life. The two turns in the philosophy of technology likewise show that technology is more than systematized empirical knowledge or a practical tool; at a moment when generative AI bears directly on human subjectivity, everyone should be alert to both the instrumental and the value dimensions of technology.

The unconstrained application of generative AI, and the extraordinary ability it shows at some human skills, have inspired entrepreneurs and technology enthusiasts while also spreading employment anxiety among the public. But as Wiener argued, we must keep thinking and ask precise questions, letting AI work for humans rather than think for humans; this is the key to preserving subjectivity in the face of an ever more capable intelligent counterpart. Only by remaining thoughtful and prudent, and staying vigilant toward the "strong alliance" of capital and technology, can the development of AI move toward a harmonious coexistence of humans and machines that ensures the healthy development of people and society, benefits humanity, safeguards human value, and enhances human happiness.

About the authors

Guo Quanzhong: Professor at the School of Journalism and Communication, Central University for Nationalities; Director of the Research Center for the Development and Governance of Internet Platform Enterprises; Senior Researcher at the Jiangsu Zijin Media Think Tank. Zhang Jinyi: Master's student at the School of Journalism and Communication, Central University for Nationalities.

This article has been abridged and edited. It was originally published in News and Writing, No. 10, 2024; the notes are omitted.

For academic citation and references, please consult the print version. Unauthorized reproduction is prohibited.
