Installing an Ethical "Navigator" on Artificial Intelligence
Source: Guangming Daily. Original title: Installing an Ethical "Navigator" on Artificial Intelligence
Artificial intelligence is profoundly changing the world. But it is a "double-edged sword" that brings enormous opportunities while raising many challenges. In fields such as medical care, education and transportation, artificial intelligence improves efficiency and quality of life; yet it may also disrupt employment, invade privacy and produce algorithmic bias. How can we grasp this "double-edged sword" so that artificial intelligence truly becomes a force for human progress?
The current choice determines the future direction. In this issue, we invite two experts to discuss how to avoid risks and guide technology toward goodness.
Artificial intelligence plays an increasingly important role in improving production efficiency, reducing costs, optimizing resource allocation and raising quality of life. But there is no doubt that technological dividends and ethical risks coexist in its development. It is therefore urgent to put ethical "reins" on artificial intelligence's unlimited potential and to equip it with a dynamically adjusting ethical "navigator", so that its development always moves along the correct path guided by human ethical civilization.
Artificial intelligence brings enormous convenience but also raises ethical risks
More and more people are experiencing the positive impact of artificial intelligence on the economy, society and personal lives.
While enjoying the enormous convenience it provides, we must also face up to the many ethical risks that artificial intelligence raises:
Shopping apps infer consumers' private health information from heart-rate data, and e-commerce platforms accurately predict users' purchasing behavior from browsing records, voice interactions and the like. An individual's data, scattered across different applications, can be reassembled by artificial intelligence into a "digital clone": by stitching together massive amounts of data, an AI system can compile a life profile that knows you better than you know yourself.
Artificial intelligence has upended the traditional "behavior-responsibility" link, that is, the ethic of responsibility under which whoever errs bears the blame. For example, when autonomous driving fails at different levels of automation, it is difficult to determine which specific technical link went wrong, and therefore difficult to identify the responsible party.
Algorithmic bias in artificial intelligence undermines fairness and justice. For example, because the operating logic, decision-making basis and influence mechanisms of food-delivery platform algorithms are opaque, workers in new forms of employment that depend on Internet platforms, such as delivery riders and ride-hailing drivers, are to some extent trapped in a "data maze" and an "invisible cage", and the protection of their rights may face systemic risks. Differentiated, dynamic pricing tailored to "user portraits" produces the phenomenon of charging loyal customers higher prices, the so-called "big data ripping off regulars", an implicit bias in algorithmic decision-making.
In addition, in the life sciences, artificial intelligence-driven gene editing is breaking through the ethical boundaries of natural evolution. In the realm of human emotion, the "virtual companions" created by generative artificial intelligence are reshaping interpersonal ethical relationships: some young people would rather talk to artificial intelligence and grow distant from the real relatives and friends around them. This emotional substitution not only changes human emotional patterns but also erodes the biological basis of social ethical bonds.
Using science and technology ethics to guard against the risk of technology running out of control
The ultimate goal of developing artificial intelligence is to enhance human capabilities, promote social equity and improve the quality of life, not simply to pursue technological breakthroughs or commercial interests. Science and technology ethics is the "navigator" of artificial intelligence development: it provides clear directional guidance and value constraints for research, application and governance; ensures that technological development always serves the well-being of human society; prevents artificial intelligence from running out of control or deviating from the moral track; and enables the coordinated progress of artificial intelligence technology and human welfare.
Science and technology ethics puts forward the "people-oriented" value, requiring artificial intelligence technology to respect human dignity, protect human freedom and rights, and prevent technology from being alienated into a tool of oppression. If an artificial intelligence algorithm harms the rights of certain groups because of data bias, its design must be revised according to the principles of science and technology ethics to ensure fairness and justice for all.
At the same time, science and technology ethics guards against the risk of technology running out of control. Artificial intelligence is characterized by autonomy, opacity and broad influence; without ethical constraints, it may lead to privacy violations, algorithmic discrimination and the like. Governing artificial intelligence with science and technology ethics means formulating "preventive ethical principles" that anticipate risks at the design stage, such as ethical decision-making guidelines for autonomous driving and limits on the use of deepfake technologies.
Laying the foundation for social trust. Public acceptance of artificial intelligence directly affects whether the technology can be deployed. If artificial intelligence systems lack transparency and interpretability, or suffer from problems such as data abuse, a crisis of public trust will inevitably follow. Science and technology ethics emphasizes the principles of transparency and accountability, requiring artificial intelligence systems to disclose their decision-making logic and clarify the ethical and legal responsibilities of developers, so as to win the public's trust.
Balancing the interests and power of multiple parties. Artificial intelligence technology may intensify the monopoly of data resources. By advocating values such as equity, inclusiveness and democratic participation, science and technology ethics prevents technology from becoming a tool serving the interests of a few individuals or organizations. When deploying technologies such as facial recognition, the relationship between public safety and personal privacy must be balanced, and the public's privacy protected as far as possible.
Coping with future uncertainty. Frontier technologies such as strong artificial intelligence (AGI) and brain-computer interfaces may completely change how human society exists and how people live. Forward-looking discussions of questions such as whether artificial intelligence should be granted moral agency provide an effective basis for future legislation and policy. If artificial intelligence develops autonomous awareness, humanity must study in advance how to define its rights and responsibilities, put forward proposals for its ethical governance, and delineate forbidden zones for the development of artificial intelligence technology.
Building a new form of civilization: "human-machine symbiosis"
In 1942, science fiction writer Isaac Asimov proposed the Three Laws of Robotics: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; third, a robot must protect its own existence, as long as such protection does not conflict with the first two laws. This provides a basic framework for artificial intelligence ethics, but its individual-centrism, static rules and vague wording such as "harm to humans" and "obeying orders" cannot cope with the complexity of contemporary artificial intelligence. It cannot address macro-level ethical risks, such as employment fairness and privacy rights, in which artificial intelligence systems harm group interests through algorithmic discrimination or data abuse; it cannot effectively constrain dynamically evolving artificial intelligence behavior; and it provides no clear mechanism for dividing rights and responsibilities. It is now urgent to chart the course of artificial intelligence development through multiple paths and to build an ethical safety net that adapts to the iterative upgrading of artificial intelligence.
The first is to clarify the ethical bottom lines for developing and applying artificial intelligence. First, the safety of life comes first: any artificial intelligence system must treat the protection of human life as its highest criterion, especially in high-risk scenarios such as autonomous driving and medical decision-making, using preset algorithmic priorities to avoid direct or indirect harm to humans. Second, responsibility must be traceable: developers and operators must bear legal liability for system behavior, including defective algorithm design, data abuse and loss of decision-making control; a full-chain responsibility-tracing mechanism from R&D to application should be established, clarifying the boundaries of obligation between technology providers and users. Third, data privacy and fairness: artificial intelligence systems must follow the principle of data minimization, implement strict data protection, adopt advanced encryption and anonymization, and prohibit the collection and use of unauthorized personal information.
The second is to promote preventive ethical design of artificial intelligence systems. At the technical level, a new generation of "morally embedded" artificial intelligence has entered the experimental stage. Such systems have built-in "ethical conflict resolution protocols": when a medical artificial intelligence finds unfair resource allocation in a treatment plan, the system automatically triggers an ethical warning. This preventive design embeds the moral requirements of "ethics first" and "intelligence for good" into the technical architecture itself.
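To make the idea of an automatically triggered ethical warning concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from any real "morally embedded AI" system; the names (EthicalWarning, check_allocation_fairness) and the disparity threshold are assumptions invented for this example, and real fairness auditing is far more involved.

```python
# Illustrative sketch of a "preventive ethical design": before a decision
# is released, a fairness check compares outcome rates across groups and
# raises an ethical warning when the gap exceeds a preset threshold.
# All names and thresholds here are hypothetical, not from a real system.

from dataclasses import dataclass

@dataclass
class EthicalWarning:
    message: str
    disparity: float  # gap between highest and lowest group rates

def check_allocation_fairness(allocation_rates: dict[str, float],
                              max_disparity: float = 0.2):
    """Return an EthicalWarning if approval rates across patient groups
    differ by more than max_disparity; return None if within tolerance."""
    rates = list(allocation_rates.values())
    disparity = max(rates) - min(rates)
    if disparity > max_disparity:
        return EthicalWarning(
            message="Resource-allocation disparity exceeds threshold; "
                    "human ethics review required before deployment.",
            disparity=disparity,
        )
    return None  # no warning: disparity within the allowed range

# Example: hypothetical treatment-approval rates for two patient groups
warning = check_allocation_fairness({"group_a": 0.82, "group_b": 0.55})
if warning is not None:
    print(warning.message)
```

The design choice to model here is that the ethical check runs automatically and blocks on a human review rather than silently adjusting the decision, matching the article's point that ethics is embedded into the architecture rather than bolted on afterward.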
The third is to formulate relevant ethical norms and regulations, work that has already achieved significant results. The EU has adopted ethical guidelines and regulations such as the General Data Protection Regulation, the Ethics Guidelines for Trustworthy AI and the Artificial Intelligence Act to guide enterprises and government departments in the development and application of artificial intelligence, and has placed high-risk applications, such as social credit scoring, directly under a ban. China's "Ethical Norms for New Generation Artificial Intelligence" integrates ethics and morality into the entire life cycle of artificial intelligence, providing ethical guidance for natural persons, legal persons and other institutions engaged in artificial intelligence-related activities. Of course, to address the problems arising in the application of artificial intelligence technology, it is imperative to build a globally coordinated system of artificial intelligence ethical governance. In recent years, China has issued the Global AI Governance Initiative and has worked with other countries to promote the global ethical governance of artificial intelligence.
It must be stressed that installing an ethical "navigator" on artificial intelligence is not a stumbling block to technological innovation and progress. We should remain tolerant of innovation in artificial intelligence while holding the bottom line of human values. Only when humans adhere to ethical values amid the torrent of digital technology, and when artificial intelligence can understand and observe, from a human ethical standpoint, what should be done and what should not, can the artificial intelligence revolution truly serve humanity. When ever-smarter artificial intelligence gets along with humans in friendship and with the warmth of human nature, a new form of civilization, "human-machine symbiosis", will have arrived.
(Author: Chen Weihong, chief expert of the Marxism Research Institute of Shanghai University of International Business and Economics, professor at the School of Marxism, and researcher at the International Economic and Trade Governance Research Center of the Shanghai University Think Tank)