How To Get Out Of The Dilemma Of Artificial Intelligence Ethical Norms?丨Technology Cloud·Personal
What should we do if artificial intelligence is not "moral"?
Artificial intelligence is arguably one of the most promising emerging technologies of the 21st century. It has made major breakthroughs in autonomous driving, speech recognition, and other fields, but its commercial application has also produced many unexpected negative consequences. The ethical risks that artificial intelligence has exposed pose new challenges to the healthy, sustainable development of the technology and to its commercialization.
Among the AI failure cases catalogued by Professor Roman Yampolskiy of the University of Louisville in "Artificial Intelligence Safety and Cybersecurity: A Timeline of AI Failures", beyond the familiar Tesla autonomous-driving accidents, there is a distinct class of "ethical" accidents: Google searches that associated Black people with pornographic and violent content, for example, or insurance companies using Facebook data to predict traffic-accident rates. The moral risks created by artificial intelligence have begun to attract more and more attention.
The ethical risks of artificial intelligence
are attributable to human cognitive limitations
Adherents of the "technical instrument theory" will say that artificial intelligence is merely a product, so how could it have ethical attributes? Meanwhile, proponents of the so-called "technical entity theory" insist that artificial intelligence can, and has the right to, develop independent consciousness and emotions. Whichever side is right, the debate itself shows that ethical and moral risks exist.
The moral hazards exhibited by artificial intelligence can be traced to two sources: first, errors in the algorithm-design and learning stages of AI systems; second, the fact that the outcomes of AI technology are hard to predict and hard for humans to evaluate. Both, however, ultimately stem from the bounded rationality of human beings.
While humans' own cognitive capabilities remain limited, artificial intelligence keeps drawing closer to humans and has even far surpassed us in some respects; human rationality has begun to lag behind the pace of AI's development. This, in the end, is what sparked the anxieties and moral criticisms directed at artificial intelligence.
In the world of algorithms, artificial intelligence has no capacity for worry, and perhaps it does not worry at all; perhaps, in its "eyes", human morality even looks absurd. This unknown further deepens human concern about the moral ethics of artificial intelligence.
Artificial intelligence ethics awaits regulation:
the responsible subject is hard to identify
Artificial intelligence can be said to be the first technological application to challenge human ethics, and it may well be reshaping human social order and ethical norms.
Regarding the ethical issues of artificial intelligence, academic attention has focused mostly on the ethics of responsibility in technology application. As the autonomy and initiative of artificial intelligence improve, the most prominent ethical difficulty, namely how to determine who bears responsibility for the consequences of AI-driven activities, is a question that the development of artificial intelligence must confront.
Because this question involves philosophy and ethics, it is difficult for the general public to engage with. Yet the ethical problems exhibited by artificial intelligence are also closely bound up with the daily lives of ordinary people.
For example, although artificial intelligence far surpasses humans in speed and accuracy, algorithmic bias and discrimination frequently arise when it operates on big data. In Google search, for instance, Black-sounding names were more likely than white-sounding names to be shown advertisements and content suggesting crime and violence.
In terms of the responsible subject, although "machines are smarter than humans" is not a sufficient condition for machines to take control from humans, this advantage does throw into sharp relief the latent worry about human agency within intelligent technology. Human responsibility derives from the right of self-determination, which also implies the capacity to bear one's own risks and the consequences of one's actions.
Take autonomous vehicles as an example. Once a car driven by artificial intelligence has decision-making authority, it can effectively avoid human errors such as drunk driving and fatigued driving.
However, once the corresponding responsibility is transferred to the artificial intelligence algorithm, a smart car facing the driverless version of the "trolley problem", whether to sacrifice the few to save the many, or to protect only the safety of those inside the car, can hardly escape the question of who the responsible subject is. Is the cause a "technical vulnerability", "improper use", or an arbitrary "algorithm"? That question has yet to be answered.
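To make the dilemma concrete, here is a minimal Python sketch. All names in it (Policy, Outcome, choose_maneuver) are invented for illustration and are not taken from any real vehicle system; its only point is that whichever maneuver the car picks, the moral trade-off was fixed by people in advance:

```python
from dataclasses import dataclass
from enum import Enum

class Policy(Enum):
    PROTECT_OCCUPANTS = "protect_occupants"  # always shield the people in the car
    MINIMIZE_HARM = "minimize_harm"          # sacrifice the few to save the many

@dataclass
class Outcome:
    occupant_casualties: int
    pedestrian_casualties: int

def choose_maneuver(options: list[Outcome], policy: Policy) -> Outcome:
    """Pick a maneuver according to a hard-coded moral policy.

    Whichever branch runs, a human (designer, regulator, or owner) fixed
    the trade-off in advance, so responsibility cannot simply be handed
    to "the algorithm".
    """
    if policy is Policy.PROTECT_OCCUPANTS:
        return min(options, key=lambda o: o.occupant_casualties)
    return min(options, key=lambda o: o.occupant_casualties + o.pedestrian_casualties)

# A driverless "trolley problem": swerve (harms an occupant) or plough on (harms pedestrians).
options = [Outcome(occupant_casualties=1, pedestrian_casualties=0),
           Outcome(occupant_casualties=0, pedestrian_casualties=3)]
print(choose_maneuver(options, Policy.MINIMIZE_HARM))      # chooses the swerve
print(choose_maneuver(options, Policy.PROTECT_OCCUPANTS))  # protects the occupant
```

The choice between the two branches is made by people long before any accident occurs, which is precisely why responsibility is so hard to hand off to the algorithm.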
Developing ethics-embedding technology
to cope with the ethical risks of artificial intelligence
We should pay attention to the machine-ethics issues raised by artificial intelligence and improve the safety of the technology. First, at the technical level, we can forestall the potential moral risks of artificial intelligence by embedding ethical norms into systems and by formulating and refining design principles.
The "Three Laws of Robots" by American science fiction writer Asimov classically interprets the basic moral norms that robots should follow. In "I, Robots", robots are collectively embedded in moral laws to regulate the behavior of artificial intelligence. In reality, the morality of artificial intelligence can also be preset.
Although moral-embedding technology has yet to be put into practice, it has long been regarded as the main means of averting ethical risk in artificial intelligence. The sociologist Bruno Latour calls this "moralizing the device": transforming the constraints that moral norms place on people into the "moral embedding" of things.
In addition to stepping up the development of moral-embedding technology, we must also strengthen awareness of moral hazards during the design process: predict and evaluate the moral risks an AI product may cause at the very start of design, and regulate them at the source. Many of the moral failures of artificial intelligence exposed today are, in fact, oversights in algorithm design.
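One concrete form such source-level regulation could take is a pre-release bias audit. The sketch below measures a simple "demographic parity" gap in a model's positive predictions; the group labels, sample data, and the 0.2 threshold are all illustrative assumptions:

```python
from collections import defaultdict

def parity_gap(predictions: list[int], groups: list[str]) -> float:
    """Return the largest difference in positive-prediction rates between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy audit data: binary model outputs and the demographic group of each case.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = parity_gap(preds, groups)
print(f"parity gap = {gap:.2f}")
if gap > 0.2:  # the threshold is an arbitrary illustrative choice
    print("flag for review before deployment")
```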
Although many unknown territories of artificial intelligence still await our exploration, what matters most is to face the technological revolution it brings with a rational and positive attitude. As Langdon Winner put it in "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought", the fault lies not with technology itself, but with people who, overwhelmed by doubt, lack the courage to imagine.