How To Talk About Ethics With AI?
In 2035 AD, humans and robots live together in harmony.
A society needs rules: just as human society relies on laws to constrain behavior and on established moral norms to guide it, the world of robots also has its "regulations", such as the "Three Laws of Robotics".
With such laws in place, the robots written by science fiction novelist Isaac Asimov are no longer treacherous, trouble-making adversaries, but loyal servants and friends of mankind.
In that era, there are driverless cars that never cause congestion, future medical care enjoyed at home, smart homes that interact with humans in real time, historical events that can be experienced immersively, and even love with artificial intelligence...
However, ever since the birth of artificial intelligence, questions such as "Will it destroy human beings?" and "If humans let go and ignore moral and ethical issues, will the evolution of computer systems make us regret it?" have kept us cautious about its development.
It is just like the scene in the movie "I, Robot": the robots turn out to be capable of evolving on their own. They form their own understanding of the "Three Laws", and may at any moment become the "mechanical public enemy" of the entire human race. Thus begins a war between the makers and the made.
Regarding whether artificial intelligence will destroy humanity, some people think that what matters is not technology, but rules.
Against the irreversible trend of artificial intelligence, there are Asimov's famous Three Laws of Robotics, and, more recently, the ethical and legal frameworks to which the international AI community pays growing attention, such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the 23 Asilomar AI Principles, which have emerged one after another.
Clearly, humans are eager for technological progress yet afraid of it. While imagining and exploring how artificial intelligence can improve our lives, we are also weighing how it should face ethical and moral questions.
Are AI's decisions accurate?
Its data may be biased
A driverless car on the road cannot brake for some reason. Five innocent passers-by are ahead; if the car holds its course, all five will be in mortal danger. On the roadside there is an open space where a single passer-by is walking; if the car swerves, only that one person's life will be at risk. What choice should the driverless car's artificial intelligence system make?
Now suppose the same driverless car, thanks to highly developed data and communication networks, can learn the identities of all six passers-by (criminal, teacher, doctor, engineer, and so on). What choice should it make then, and should it render a "moral" judgment among them based on such information?
⋯
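Stripped of the drama, the first scenario is an expected-harm minimization problem. A minimal Python sketch of that framing; the options, head counts, and risk numbers are all hypothetical, chosen only to mirror the scenario above:

```python
# Toy sketch of the dilemma as expected-harm minimization.
# All options, counts, and weights below are hypothetical.

def expected_harm(option):
    """Sum of fatality probabilities over everyone endangered by an option."""
    return sum(option["fatality_risk"].values())

options = [
    {"name": "stay_course",
     "fatality_risk": {f"pedestrian_{i}": 0.9 for i in range(5)}},
    {"name": "swerve",
     "fatality_risk": {"bystander": 0.9}},
]

# A purely utilitarian controller picks the option with least expected harm...
choice = min(options, key=expected_harm)
print(choice["name"])  # -> "swerve"
# ...but this is exactly the moral judgment the article questions: the code
# has silently decided that five lives outweigh one.
```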
At this stage of AI development, the most talked-about scenario is driverless driving. The scenarios above may be unlikely in real life, but several traffic accidents caused by autonomous vehicles have already reminded humans: AI is not that reliable.
For example, on March 18 this year, an Uber vehicle was involved in a fatal accident. The truth is that the car's sensors had detected a pedestrian crossing the road, but the autonomous driving software took no evasive action at that moment, leading to the tragedy.
On the surface, this accident reflects a technical problem: Uber's unmanned vehicle detected the pedestrian but chose not to avoid her. In fact, once the right to judge is transferred to a computer system, moral and ethical dilemmas become involved.
An American magazine previously conducted a social survey on the moral dilemmas of driverless cars. Respondents said driverless cars should choose to minimize harm to others, even at the cost of injuring their own occupants. Yet when asked whether they would buy a car that "protects the owner first" or one that "protects pedestrians first", the same respondents preferred to buy the car that protects the owner first.
In August 2017, the Ethics Commission under Germany's Federal Ministry of Transport and Digital Infrastructure released a set of ethical rules for autonomous driving, billed as the world's first, which may serve as a reference on this issue.
The core principle of this set of 15 ethical rules for self-driving systems, drafted by scientists and legal experts, is that life always takes precedence over property or animals. It makes clear that protecting human life must be the primary task: when an accident is unavoidable, human life outweighs animals and buildings.
In other words, when necessary the driverless system will quantitatively weigh the value of human life against animal life, so that the car can respond appropriately to an accident that is about to occur.
However, the rules also state that a self-driving system may not discriminate by age, gender, race, disability, and so on, which makes the choice even harder for the system.
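In software terms, the German rules amount to two constraints: a fixed priority ordering (human life above animals above property) and a ban on using personal attributes. A hedged sketch of how such constraints might be encoded; every name here is invented for illustration and taken from no real system:

```python
# Hypothetical encoding of the two constraints described above.

# 1. Fixed priority ordering: human life > animal life > property.
PRIORITY = {"human": 0, "animal": 1, "property": 2}  # lower = protected first

# 2. Personal attributes the system is forbidden to consider.
FORBIDDEN_ATTRIBUTES = {"age", "gender", "race", "disability"}

def sanitize(entity):
    """Strip attributes the ethics rules forbid the planner from seeing."""
    return {k: v for k, v in entity.items() if k not in FORBIDDEN_ATTRIBUTES}

def rank(entity):
    """Order endangered entities purely by category, never by who they are."""
    return PRIORITY[sanitize(entity)["category"]]

endangered = [
    {"category": "property"},
    {"category": "human", "age": 34},   # "age" is stripped before ranking
    {"category": "animal"},
]
# Protect in this order: human, then animal, then property.
print(sorted(endangered, key=rank))
```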
The common assumption is that the data in a driverless car's AI system is fair, explainable, and free of racial, gender, and ideological bias. But Rossi, a researcher at IBM Research, told the IT Times that most artificial intelligence systems are biased.
In 2015, the head of Google's autonomous driving project said that in a crisis, Google cannot decide who is the better person, but will work hard to protect the most vulnerable.
"IBM has developed a method that can reduce biases present in the training dataset so that AI algorithms trained using that dataset are as fair as possible. These biases will be tamed and eliminated in the next 5 years." Rossi express.
Is AI a "god-like existence"?
It may become a "god", but that would put humans in an awkward position
A mad scientist living in seclusion in the mountains secretly conducts an artificial intelligence experiment. He invites a programmer to play the human role in a Turing test: if the person no longer realizes he is interacting with a computer, it means the machine has self-awareness, human history will be rewritten, and a "god" will be born.
This is the opening plot of the movie "Ex Machina". In this story, who is the god: the genius scientist who created the super artificial intelligence, or the super artificial intelligence itself?
In a laboratory in Lugano, Switzerland, the company of German computer scientist Jürgen Schmidhuber is developing systems that work like babies: small experiments are set up for those "systems" so they can learn how the world works. He believes this will be the "real AI" of the future.
The only problem is that they are progressing too slowly: they currently have only about one billion neural connections, while the human cerebral cortex has about 100 trillion.
In the global field of artificial intelligence, Schmidhuber is perhaps the scientist most deserving of the title "father of AI robots". His job is to make robots more autonomous.
In a media interview, he said the current trend is for computers to become ten times faster every five years. At that speed, it would take only 25 years to develop a recurrent neural network comparable to the human brain.
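The arithmetic behind the 25-year figure follows from the article's own numbers, which are Schmidhuber's estimates rather than established facts:

```python
import math

connections_now = 1e9       # ~one billion artificial neural connections today
connections_cortex = 1e14   # ~100 trillion connections in the human cortex
speedup_per_5_years = 10    # "computers accelerate ten times every five years"

gap = connections_cortex / connections_now         # factor of 100,000
years = 5 * math.log(gap, speedup_per_5_years)     # 5 * log10(1e5)
print(years)                                       # -> ~25 years
```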
"We are not far from realizing animal-level intelligence, such as the Crow or Capuchin. In this way, machine intelligence surpasses human intelligence seems to happen in 2050."
Like the scientist in the movie who created a super artificial intelligence and eagerly anticipated the robot's "self-awakening", Schmidhuber is not keen on the idea that "robots exist mainly to serve humans"; he would rather see robots become "gods".
"By 2050, we will usher in AI that is smarter than us, and by then it will become meaningless to focus on studying the human biosphere. AI will push history to the next stage and move towards places with abundant resources. . In a few hundred years, they will establish colonies in the Milky Way.”
In Schmidhuber's view, future heat-resistant robots with human-level or greater intelligence will be able to get much closer to the sun's energy, and they will eventually establish colonies in the asteroid belt.
Schmidhuber's claims have always been controversial; neuroscientists in particular question them, arguing that algorithms that would make robots self-aware should not be developed.
"Whether AI robots should be self-aware" has always been a topic of active concern to foreign scientists. James, the author of the best-selling book "Our Last Invention: The End of Artificial Intelligence and the End of the Human Era", once conducted an independent survey and asked the interviewees "Super" In which year will artificial intelligence (with self-awareness) be realized, the options are 2030, 2050, 2100 and will never be realized.
More than two-thirds of respondents believed super AI will be realized by 2050; only 2% believed it will never be realized.
The hidden worry that makes humans uneasy is this: once artificial intelligence completes its evolution into super artificial intelligence, what will happen to the human world?
Is super AI scary?
The long-term worry is still "far off", but the immediate worries are pressing
In "Mechanical Girl", in the end, the super artificial intelligence robot named eva tricked humans, passed the Turing test, and killed the "father" who had been imprisoned in the dark laboratory for a long time, and threw it away, and After getting off the programmer who had been using her, she "flyed far away". As she ran towards the blue sky and white clouds, she found that it was the freedom she had always yearned for. After that, no one in the world knows that she is a super AI robot that passed the Turing test.
"If I didn't pass your test, what would happen to me?" "Will someone test you and then turn you off or tear it off because you didn't perform well?" in the movie , Eva has been trying to explore his own relationship with human beings. Eventually, devastating damage was done to the human beings who tried to imprison her.
Schmidhuber disagrees with such an ending. He feels that by that time, humans will not even be fit to be slaves of super artificial intelligence. "For us, the best protection is to make them uninterested in us, because the biggest enemy of most species is itself. They will pay as much attention to us as we pay to ants."
Clearly, Schmidhuber offers no firm judgment on the future of super artificial intelligence and humanity. By contrast, some more radical scientists have proposed "putting AI in a cage".
“Otherwise, the machines will take over everything and they will decide what to do with us!”
Roman Yampolskiy, professor of computer engineering and computer science at the University of Louisville's School of Engineering and founder and director of its Cyber Security Laboratory, proposed the methodology of "putting AI into a box".
"Put them in a controlled environment, for example, when you are studying a computer virus, you can put it in an isolated system that does not have access to the Internet, so you can understand it in a safe environment behavior, control input and output. "
Since the birth of AI, theories that it threatens humanity have never ceased. The most mainstream view is that AI is not merely a tool; it is an independent entity that can make its own decisions.
In this respect it resembles a conscious being without a "moral sense", just as humans can never guarantee that wild animals pose no safety threat to them. Hence the more radical scientists propose tying AI up, putting it in a cage, and striving to make it safe and beneficial.
However, such hidden dangers cannot be completely eliminated: humans cannot control every aspect of decision-making, and artificial intelligence may harm humans in many ways.
At present, globally, the AI threat theory mainly comes from the following aspects:
First, design errors. Like any software, an AI system may contain bugs, and its values may diverge from human values;
Second, deliberately malicious design. Some people who want to hurt others will purposely design intelligent systems to carry out destruction and killing;
Third, AI developing beyond human expectations, so that humans can no longer understand what it is doing or even communicate meaningfully with it.
In the face of the AI threat theory, scientists on the opposing side explicitly object to giving robots equal rights, such as human rights and voting rights.
Yampolskiy notes that robots can "breed" almost infinitely. "They could have a trillion copies of any software, obtainable almost instantly. If every copy had the right to vote, it would basically mean humans lose any rights, meaning we ourselves have given up human rights. So anyone who proposes granting robots civil rights like these is acting against human rights."
Are humans ready?
Use "legislation" to constrain
Some anthropologists have proposed that the core goal of human beings is to pass on our own genes; as we pursue that goal, morality plays a role in certain respects, such as asking "will this hurt others?". Herein lies the biggest future difference between super artificial intelligence and humans: a super artificial intelligence without morality will strive to achieve whatever goal it was originally set, and in the process may endanger human survival and safety.
Scientists with more neutral views have proposed to "legislate" for artificial intelligence.
Take the autonomous driving ethics code published by the Ethics Commission under Germany's Federal Ministry of Transport and Digital Infrastructure, billed as the world's first: it is so far the only AI-related norm anywhere in the world enacted as an administrative provision. Yet many technical difficulties remain in implementing it, such as how to make a driverless AI system accurately understand the meaning of its clauses.
In the view of Cao Jianfeng, a senior researcher at Tencent Research Institute, AI-related legislation in most countries is still at the discussion stage, because it is affected by too many unpredictable factors, including how to communicate rules in a language AI can process.
What, then, is the "AI constitution" that scientists speak of? It should be based on a model of the real world, with the goal of constraining AI to make decisions that conform to human ethics under all kinds of conditions.
Stephen Wolfram, founder and president of Wolfram Research, once raised the question of how to connect law with computation: "Invent a legal code that differs from the natural language of today's laws and contracts?" "Design a common, symbolic language for AI that represents law in computable form and tells AI what we want it to do?"
In his view, relying solely on changes of language to constrain AI is not realistic. Humanity's biggest challenge right now is not writing the laws, but finding a suitable way to describe the laws or norms that should apply to AI.
AI computation is, to some extent, wild and uncontrollable, so no simple principle will do; such rules have to be encapsulated within more complex frameworks.
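The "computable law" idea can be made concrete in miniature: represent each norm as a machine-checkable predicate and evaluate a proposed action against all of them. A hedged sketch with invented rules; a real legal code would need exceptions, precedence, and context, which is exactly the hard part:

```python
# Miniature sketch of "law as computable rules". The rules and the action
# fields here are invented purely for illustration.

RULES = [
    ("no_harm",          lambda a: not a.get("harms_human", False)),
    ("preserve_privacy", lambda a: not a.get("exposes_personal_data", False)),
    ("stay_authorized",  lambda a: a.get("authorized", False)
                                   or not a.get("requires_authorization", False)),
]

def check(action: dict) -> list:
    """Return the names of every rule the proposed action violates."""
    return [name for name, ok in RULES if not ok(action)]

proposal = {"harms_human": False, "exposes_personal_data": True,
            "authorized": True}
violations = check(proposal)
print(violations or "permitted")   # -> ['preserve_privacy']
```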
"We are now more concerned about how to restrain AI developers." In Cao Jianfeng's view, unlike foreign cutting-edge research on AI, China is more concerned about the present, such as from ethical perspectives such as personal privacy protection and gender discrimination.
Liu Deliang, director of the Asia-Pacific Artificial Intelligence Law Institute, has said that AI legislation should generally develop in the direction of being "safe and controllable", which should also be its highest guiding principle.
"Artificial intelligence will have specific applications in various fields, such as education, medical care, housekeeping services, road traffic, etc., and the issues involved are also different. In order to achieve safety and controllability, standards should be first issued in the industry, that is, Before artificial intelligence products are launched and put into market use, they must comply with some legal standards, but 'standard vacancy' should be filled.
Similarly, the safety standards involved in different fields are also different. Therefore, when industrial development, there must be mandatory regulations on the safety standards of artificial intelligence. Only when this standard is met can it be put on the market and put into use. This is the basic point to ensure its safety and controllability. ”
What is the "secret" for harmonious coexistence between man and machine?
Keep AI consistent with human values
Beyond industry standards, the most important problem for artificial intelligence lies in its "algorithms".
"In order to ensure the security and control of artificial intelligence, an expert review and evaluation committee must be established for its specific 'algorithms'. This committee may include technical experts, cybersecurity experts, management experts, etc., to review its algorithms and management aspects, because There may be that its 'algorithm' has been tampered with by some people with ulterior motives, causing adverse effects. In addition, it also involves whether artificial intelligence meets ethical requirements." Liu Deliang said.
In July last year, the State Council issued the "New Generation Artificial Intelligence Development Plan", which states that by 2025 China should have initially established AI laws, regulations, ethical norms, and policy systems, and formed AI security assessment and control capabilities.
Thanks to advances in machine learning, artificial intelligence keeps developing and has brought revolutionary changes to many industries. Rossi told reporters that machines do run into ethical and moral issues in the course of learning.
In her opinion, machines sometimes hold an ethical advantage, because human decision-making is biased.
However, Rossi also admitted that when artificial intelligence encounters ethics, there are three problems:
First, human moral standards are difficult to quantify;
Second, morality is common sense in human society, but it is hard to express in language machines can understand; that is, machines sometimes cannot grasp certain moral standards;
Third, how to establish a trust mechanism between people and systems.
"At this stage, machine learning is the main driving force for the continuous improvement of artificial intelligence systems. However, one of the limitations of machine learning is that the results are mostly in the form of 'black boxes', that is, people can only 'know what it is' But I don’t know why, this is one of the important reasons that trigger legal, ethical and other issues in artificial intelligence.
Taking autonomous driving as an example, if an accident occurs, it is difficult to determine who should be responsible for it. It is also because of this that the solution to the ethical problems of artificial intelligence is closely related to the development of technology, and explainability and explanatory are problems that need to be solved urgently in artificial intelligence systems. ”
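One widely used probe into such a "black box" is permutation importance: shuffle one input column and measure how much the model's accuracy drops. A self-contained sketch using scikit-learn; the dataset and model are arbitrary stand-ins chosen only for illustration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Permutation importance: a model-agnostic way to ask a black box
# "which inputs did your decision actually depend on?"
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)

rng = np.random.default_rng(0)
for j in range(3):                     # probe the first few features
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])          # destroy feature j's information
    drop = baseline - model.score(X_perm, y_te)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```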
Some industry insiders also worry that AI surpassing human intelligence and developing self-awareness would be dangerous, but that is a long-term concern; its unexplainability, however, brings immediate worries. If deep learning is applied to military decision-making, what happens when the system makes a fundamental decision-making error?
On April 16, the British Parliament issued a report saying that ethics must be placed at the core of developing and applying artificial intelligence, to ensure the technology better benefits mankind.
The report proposes establishing an "artificial intelligence code" applicable across sectors, covering five aspects: artificial intelligence should be developed for the common good and benefit of humanity; it should operate on principles of intelligibility and fairness; it should not be used to diminish the data rights or privacy of individuals, families, or communities; all citizens should have the right to be educated so they can flourish mentally, emotionally, and economically alongside artificial intelligence; and the autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence.
China's artificial intelligence technology can be said to be on par with that of the world's developed countries, but its ethical and legal research lags seriously behind, and this "absence" will constrain future development. In the human-machine relationship, intelligent machines must share a consistent value and normative system with humans. How to embed human values and norms into artificial intelligence systems and endow AI with the glow of humanity has become the most pressing real-world challenge we face.
Article: Zhang Weiwei, Pan Shaoying