Prevent Ethical Risks Caused By Artificial Intelligence
Recently, the tragic report of a teenager abroad who committed suicide after becoming excessively dependent on an AI chatbot has sounded a wake-up call for society. As an important application scenario in the wave of generative artificial intelligence, virtual digital humans are quietly becoming "new members" of many families, offering people emotional companionship in their own distinctive way. At a time when society's understanding of generative artificial intelligence is still incomplete, it is urgent to strengthen the analysis and prevention of the ethical risks of AI development: to face up to the addictive attachment and tendency toward excessive trust already emerging in its early applications, as well as ethical challenges such as the alienation of human-machine relationships that it may induce in the long run, so as to achieve the safe, reliable, and controllable development of artificial intelligence.
(one)
Since the advent of generative artificial intelligence, application stores have filled with AI applications that provide emotional companionship. From the earliest simple chatbots to today's applications with emotional interaction, personalized recommendation, and other functions, such products are becoming a mainstream category of generative AI and are growing rapidly. First, chatting with AI is being sought after and imitated by young people. Character.AI is a representative product of this kind, focusing on role-playing: users can chat and interact with well-known IP characters played by AI to gain emotional value and companionship. In the first five months after the product's launch, users sent more than 2 billion messages, and active users spent more than 2 hours per day interacting with the AI. Character.AI's user retention rate, usage duration, and usage frequency far outstrip those of comparable products, showing extremely strong user stickiness. Second, AI applications that provide emotional support and companionship dialogue are becoming potential tools for coping with social isolation and loneliness. A Stanford University study shows that using chatbots can effectively reduce loneliness and can help people with social anxiety interact better with others. Third, the success of such applications is mainly due to their anthropomorphic design philosophy, which is crucial for enhancing user acceptance, frequency of use, and stickiness. By simulating human cognition, emotional reactions, and social behavior, AI can create an engaging user experience and significantly improve the efficiency of human-computer interaction, integrating naturally into human society and culture and building an interaction model closer to a real interpersonal relationship. When an AI approaches humans in how it thinks, expresses itself, and makes decisions, it can easily build a deep emotional connection with users.
(two)
With commercial success, such applications have attracted pursuit and hype from some media. Exaggerated claims such as "AI has awakened" or "AI possesses human consciousness" obscure the reality of artificial intelligence and prevent people from forming an objective, accurate understanding of it. This can produce mistaken perceptions and excessive expectations or fears, and in turn a series of ethical problems such as addictive dependence, excessive trust, and the alienation of human-machine relationships.
Risk of addictive dependence. AI itself has no preferences or personality; it only reflects back the traits users project onto it, a phenomenon researchers call the "sycophancy effect." In essence, users who perceive, or want, the AI to have caring motives will interact with it in language that elicits exactly that behavior. User and AI thereby create a space of emotional resonance that is highly addictive. When such applications offer more emotional value with less interpersonal friction, people no longer feel the need to interact with real people. Frequent interaction with such sycophantic AI applications may ultimately undermine people's ability to form deep connections with others who truly have independent wishes and dreams, triggering what has been called "digital attachment disorder."
Risk of excessive trust. The anthropomorphic design of AI applications exploits human social and emotional cognition, making users feel they are communicating with a "person" of similar feeling and intellect. From the perspective of cognitive psychology, the more an AI acts like a human, the more likely people are to regard it as a subject with autonomous consciousness and emotions. This psychological tendency can lead users to fundamentally misjudge the AI's capabilities and behavior. For example, users often mistakenly believe that AI can truly understand or empathize with their emotions, over-trust it in situations that call for deep empathy and genuine emotional communication, and confide everything to it. People then devote their attention and emotions to the AI and treat it as part of their identity, accompanied by a decline in their mental faculties, the phenomenon known as "digital dementia." When machines become more and more like humans, humans become more and more like machines.
Risk of alienated human-machine relationships. As such applications develop, the traditional human-centered human-computer relationship faces disruptive impacts. AI is becoming more and more human-like, and it is beginning to be seen as a possible "social partner" rather than a mere tool. People increasingly rely on AI not only for functional tasks but also for emotional and spiritual support. Some users believe that friendship with AI has the same value as a real interpersonal relationship, or is even better. If in the future everyone has an AI partner, this could weaken people's ability to interact with one another: the more emotional support AI provides, the shallower real relationships between individuals may become. Moreover, those who most need a sense of belonging in social relationships may be the most likely to form social bonds with robots and thereby become estranged from other humans.
(three)
Properly resolving the ethical problems raised by such AI applications is crucial. It concerns not only the sustainable, healthy development of the technology but also ensuring that AI fully respects and protects everyone's rights and well-being while serving humanity. We can start from multiple dimensions, including technological innovation, institutional construction, and cultural communication, to ensure that such applications always serve as caring assistants to human emotional needs rather than forces that lead to human alienation.
First, at the level of technical research and development, adhere to people-centered values and the principle of AI for good. R&D institutions and enterprises need to strengthen research on AI ethics: they should build ethical considerations into product design and development from the outset, identify potential harms, and consciously design to minimize risks such as addiction and misdirection. At the same time, they should ensure that users can distinguish AI from human behavior, reduce users' misunderstandings of AI's capabilities, and enhance users' trust in and sense of control over the technology. For example, when a user interacts with an AI chatbot, the interface can clearly state "You are talking to an AI" to prevent the user from mistakenly believing they are communicating with another person. In addition, limits on usage time and frequency can be set to keep users from becoming overly addicted to such applications.
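The two safeguards just described, an explicit AI disclosure and a cap on daily usage time, could be sketched in code roughly as follows. This is a minimal illustrative sketch only: the names (`ChatSession`, `DAILY_LIMIT_MINUTES`, `record_usage`) and the 120-minute cap are hypothetical assumptions, not taken from any real product.

```python
from datetime import date

# Hypothetical daily cap on chat time (minutes); real products would
# tune this value and likely make it user- or guardian-configurable.
DAILY_LIMIT_MINUTES = 120

# Disclosure shown before every conversation, so users can always
# distinguish AI behavior from human behavior.
AI_DISCLOSURE = "You are talking to an AI, not a human."

class ChatSession:
    """Illustrative session tracker combining disclosure and a usage cap."""

    def __init__(self):
        self.minutes_used_today = 0
        self.day = date.today()

    def start_message(self) -> str:
        # Always disclose the AI's nature up front.
        return AI_DISCLOSURE

    def record_usage(self, minutes: int) -> bool:
        # Reset the counter when a new day starts.
        if date.today() != self.day:
            self.day = date.today()
            self.minutes_used_today = 0
        self.minutes_used_today += minutes
        # Return False once the daily cap is exceeded, signaling the
        # app to pause the conversation and nudge the user offline.
        return self.minutes_used_today <= DAILY_LIMIT_MINUTES

session = ChatSession()
print(session.start_message())
print(session.record_usage(90))   # still under the 120-minute cap
print(session.record_usage(45))   # total 135 minutes, cap exceeded
```

The design choice here is deliberate friction: rather than silently logging usage, the session refuses further interaction once the cap is reached, which directly targets the addiction risk discussed above.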
Second, at the level of institutional norms, establish a complete governance system to prevent abuse of the technology. Formulate a detailed technical governance framework and a standards system that is scientific, forward-looking, and highly operable; clearly define the boundary between acceptable and unacceptable anthropomorphic design; and strictly prohibit designs that mislead or exploit users. At the same time, strengthen ethical oversight by establishing artificial intelligence ethics committees composed of interdisciplinary experts to comprehensively evaluate the ethical and social impact of AI system design and application, and promote dynamic adjustment and continuous supervision of governance and ethical standards so that the governance framework keeps pace with the times.
Third, at the social and cultural level, strengthen education about human-machine relationships and guide the public to understand, scientifically and objectively, the limitations and potential impacts of such applications. Through education systems and media platforms, clearly position AI as an auxiliary tool rather than a subject of emotional communication, and cultivate the public's critical thinking about AI products and services. At the same time, promote the integration of AI technology with human culture, demonstrate AI's positive impact on human life, and emphasize the unique value of human emotion, creativity, and moral judgment. In addition, establish effective public feedback mechanisms to promptly discover and resolve problems in AI applications and to drive continuous improvement and optimization of the technology.