AI Ethics

The Ethical Issues Raised by Spielberg in the Movie "Artificial Intelligence" More Than 20 Years Ago Are Still Thought-Provoking


On December 16, the 2023 Science and Technology Ethics Summit Forum, hosted by Fudan University, was held. Experts at the meeting discussed the "Theory and Practice of Chinese Science and Technology Ethics in the Age of Digital Technology".

[Whom is value alignment aligned with? What is to be aligned?]

As early as 2001, Steven Spielberg raised an ethical question in the film he directed, "Artificial Intelligence": when robots learn to love humans, will humans love robots in return?

"Spielberg did not give us an answer in the movie. In addition to the attention of film and television works, it can be said that every wave of artificial intelligence is accompanied by a discussion of the relationship between humans and machines." said Li Zhenzhen, a researcher at the Institute of Science and Technology Strategy Consulting, Chinese Academy of Sciences. For example, after "AlphaGo" defeated human chess players, the Electrical and Electronics Engineers Association released the world's first draft of the ethical code of artificial intelligence in 2016 and raised an ethical question: How to embed human norms and moral values ​​into AI systems?

When generative AI, widely seen as representative of the latest wave, began to independently create coherent content from its training data, elevating AI to the level of cognition, the stakes changed. "This means that the nature of artificial intelligence problems is also changing, which once again heightens attention to the ethics of the human-machine relationship and puts value alignment on the agenda," Li Zhenzhen said.

How should humans handle their relationship with robots when "artificial intelligence agents" increasingly display emotions and autonomy that are the same as, or similar to, those of humans? "At this stage, the ethics of the human-machine relationship has become an issue that cannot and should not be avoided," Li Zhenzhen said. Although value alignment faces challenges in the current technological, social, and cultural context, the effort to incorporate it into the research and development of large AI models highlights the goal of keeping AI within a range that humans can control.

Professor Stuart Russell of the University of California, Berkeley believes that value alignment is meant to ensure that the AI created by humans holds the same values as humans.

"Value" is a complex existence, and "human values" is a value consensus under the greatest common divisor.

"Who is value alignment aligned with? What is alignment?" Li Zhenzhen said that from an individual perspective, the judgment of "what is valuable" depends on the individual's life experience, life experience, knowledge structure, and the values ​​formed in the processing of the events they experience. As far as groups are concerned, there are usually differences between different groups, and even different judgments and choices exist within a group. The human moral system covers the different cultural circles gradually form their own mainstream values ​​in the long history of their history. They retain common elements, but the moral content is not exactly the same.

In her opinion, the intelligence displayed by artificial intelligence merely imitates human patterns of rational thinking. The enlightenment of human civilization begins with love, which grows out of natural interaction in emotion, will, and experience, and this is precisely what artificial intelligence lacks.

[Is "Understanding Artificial Intelligence" the next milestone? 】

"In fact, before it was released, the academic community had paid close attention to the issue of artificial intelligence ethics." said Professor Zhou Cheng, director of the Department of Science and Technology Philosophy, the Department of Philosophy of Peking University.

Since 2017, the literature on artificial intelligence ethics has grown rapidly, a trend related to the emergence of "AlphaGo" and its deep learning techniques. A keyword map of this literature shows that "transparency" has become a key issue.
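As a rough illustration of what such keyword mapping involves, the sketch below counts keyword frequencies and co-occurrences over paper metadata. The records are invented placeholders, not the actual bibliometric data behind the trend described above:

```python
# Minimal sketch of a keyword map: node weights are keyword frequencies,
# edge weights are co-occurrence counts within the same paper.
from collections import Counter
from itertools import combinations

# Placeholder records standing in for real bibliographic metadata.
papers = [
    {"year": 2017, "keywords": ["transparency", "accountability", "machine learning"]},
    {"year": 2019, "keywords": ["transparency", "explainability", "fairness"]},
    {"year": 2021, "keywords": ["value alignment", "transparency", "trust"]},
]

# Frequency of each keyword across the corpus (the nodes of the map).
freq = Counter(kw for p in papers for kw in p["keywords"])

# Co-occurrence counts within each paper (the edges of the map).
edges = Counter(
    tuple(sorted(pair))
    for p in papers
    for pair in combinations(set(p["keywords"]), 2)
)

print(freq.most_common(3))   # e.g. [('transparency', 3), ...]
print(edges.most_common(2))  # the most strongly linked keyword pairs
```

The most frequent keywords become the nodes of the map, and the strongest co-occurring pairs become its edges; on real corpora, "transparency" surfacing near the top is exactly the pattern the paragraph describes.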

With the rapid development of artificial intelligence, humans are gradually entering an algorithmic society. Thanks to powerful machine learning capabilities, artificial intelligence has a degree of autonomy, but this also leads to a lack of algorithmic transparency.

Not long ago, former Microsoft CEO Bill Gates publicly stated that generative artificial intelligence of this kind has reached a plateau: a future GPT-5 would only be an incremental improvement over the current GPT-4, not a leap or breakthrough. One basis for his judgment is that artificial intelligence lacks transparency, that is, the "black box" problem. He predicted that "understandable artificial intelligence" would be the next milestone in the development of artificial intelligence, but that it would take at least ten years to achieve this goal.

The so-called "black box" problem refers to the fact that, after extensive training, a model's internal state becomes extremely complex, and the computation from input to output runs automatically, which makes it difficult for people to predict the algorithm's behavior accurately or to understand the mechanism by which it produces a specific result.
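A minimal sketch of this opacity, assuming scikit-learn (the article names no specific toolkit or model): a small neural network is easy to query from the outside, yet its internal state offers no human-readable reasons for its outputs.

```python
# A trained network maps inputs to outputs automatically, but its
# parameters are just arrays of numbers with no stated rationale.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

# Input -> output is easy to query...
print(model.predict(X[:1]), model.predict_proba(X[:1]))

# ...but inspecting the internal weights gives no human-readable reason
# for the prediction above: just two matrices of learned coefficients.
print(model.coefs_[0].shape, model.coefs_[1].shape)
```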

For example, in the field of medical artificial intelligence, AI-assisted diagnosis and treatment systems sometimes make recommendations that leave professional physicians puzzled, yet the doctors cannot ask the algorithmic "black box" why it made such seemingly unreasonable decisions.
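Post-hoc explanation methods attempt to probe such a box from the outside. The sketch below uses permutation importance, one common technique chosen here purely for illustration (the article does not describe how any particular clinical system might be explained), on a public medical dataset:

```python
# Permutation importance: shuffle one input feature at a time and measure
# how much the model's held-out accuracy drops, hinting at which inputs
# drive its recommendations without opening the box itself.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts held-out accuracy.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Such rankings offer clues about a model's behavior, but they stop well short of the causal account a physician would want, which is why the paragraph above frames opacity as an unsolved problem.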

Zhou Cheng said that improving the comprehensibility and trustworthiness of artificial intelligence has become a new requirement, and documents issued both in China and abroad call for improving the interpretability of artificial intelligence. On October 30, the G7 issued the Hiroshima Process "International Guiding Principles for Organizations Developing Advanced AI Systems", which states, for example, that testing and mitigation measures should seek to ensure the trustworthiness, safety, and security of systems throughout their life cycle, so that they do not pose unreasonable risks.

Although the scientific, philosophical, and technology-governance communities all stress the need to improve the transparency of artificial intelligence systems, there is still no consensus on what the key concepts of "interpretable AI", "understandable AI", and "trustworthy AI" actually mean.

In Zhou Cheng's view, this set of concepts concerns how humans can open the "black box" of artificial intelligence and bridge the gap between human mental systems and AI systems. It is therefore necessary to systematically sort out concepts such as "interpretability" and "understandability" in order to outline more comprehensively the direction and goals of future artificial intelligence. If, while developing AI, we can establish bonds of trust among users, developers, and machines, and integrate the value concerns of individuals and society into the entire development process, we can better ensure that artificial intelligence benefits humanity, move closer to "human-centered artificial intelligence", reduce potential risks, and respond more effectively to the ethical challenges artificial intelligence brings.
