Jiang Xiaoyuan: Artificial Intelligence Is The Most Dangerous Fire That Humans Are Playing
Stephen Hawking, Bill Gates, Elon Musk and others have long called on the world to be vigilant about the reckless research and development of artificial intelligence, especially for military use, warning that it may "summon the devil." Musk put it plainly: "We need to be very careful about artificial intelligence; it may be more dangerous than nuclear weapons." In my view, humanity is now playing with two supremely dangerous fires: genetic engineering and artificial intelligence. Of the two, artificial intelligence is the more dangerous, yet many people have not realized this.
1. Near-term threats: large-scale unemployment and military use
First, large-scale unemployment. Artificial intelligence is developing very rapidly in China at present. Most people are optimistic about this kind of "overtaking on the curve," but I think it may not be a good thing; the likely result is that China ends up taking on the task of "clearing the minefield" for the developed countries. The large-scale industrial use of artificial intelligence means robots replacing human jobs, and society will face a large number of unemployed people. Although most work will be done by artificial intelligence and the wealth it creates will keep growing, new problems of distributing that social wealth will arise. If the unemployed majority receive only a little less wealth than the employed minority, the workers' enthusiasm will inevitably be dampened; if wealth diverges sharply and the employed minority earn far more than the unemployed majority, the unemployed will inevitably grow resentful, intensifying social conflict.
Second, military use. Artificial intelligence can be used to produce more efficient killing weapons, such as drones. In the past, a human decided what to attack; now, drones built with artificial intelligence no longer need a human to designate the target. They can identify a target on their own and carry out the attack, which amounts to handing the power to kill over to a machine. Whether responsibility for a killing falls on the machine's manufacturer or its owner is a question for which no regulations yet exist. Recently, Musk and 116 others jointly called for a ban on developing killer robots, making these voices even louder.
On this issue, every major country is held hostage. People often say that "if you fall behind, you will be beaten," and the implicit logic is that the advanced get to beat the backward: advanced countries can oppress backward ones. So even countries that do not want to build killer weapons feel they have no choice but to build them, because no one wants to fall behind and no one wants to be beaten.
In fact, the harm of militarized artificial intelligence does not lie in how many casualties it causes, and war does not become desirable simply because it causes fewer casualties. Here I recommend the American documentary "Zero Days," which describes how the United States and Israel jointly developed a super virus to sabotage Iran's nuclear program. That super virus was in effect a stealthy artificial intelligence: after infecting Iran's network, it automatically found and identified a specific target, the controllers of the centrifuges. Because the virus was so cleverly designed, it was almost impossible to detect at the time, and it neither damaged nor interfered with any machine other than the centrifuges, so at first it remained well concealed. But the Americans did not get to laugh for long: the super virus was discovered, and not only discovered but also cracked, which means Iran and its allies obtained it as well. The militarized application of artificial intelligence certainly causes fewer casualties than an atomic bomb, but its destructiveness must not be underestimated.
2. Medium-term threat: artificial intelligence out of control
Now that robots are combined with the Internet, the physical limits of an individual machine no longer apply. Simply put, in the past you could terminate a robot's work by unplugging its power supply, but an artificial intelligence combined with the Internet cannot be controlled by "pulling the plug."
Some industry experts still say that a robot can simply be unplugged. In fact, once artificial intelligence is combined with the Internet, it no longer needs any physical form at all. Before the Internet, a standalone artificial intelligence had limits; even if it were built entirely of chips, its storage and computation were bounded. But combined with the Internet, it no longer needs servo mechanisms of its own: by placing orders and purchasing various services online, it could carry out the destruction of society. By then there will be no plug to pull, because there is no single power supply to unplug.
Artificial intelligence experts say they want to build a moral system for artificial intelligence. I am skeptical of this. Human beings cannot even guarantee that they will educate their own descendants well; how can they be confident that an artificial intelligence smarter than humans can be kept from going astray?
On this issue, experts often say that we can write something into the robot's chips so that it will not learn bad things. The simplest objection is this: you cannot even guarantee that your own child will not learn bad things, let alone an artificial intelligence. So far, no one has offered a method that can be trusted to truly prevent artificial intelligence from rebelling.
3. Long-term threat: the ultimate threat of artificial intelligence
People often mention the Three Laws of Robotics in Asimov's novels, but few know that he made another important point: any civilization that relies on artificial intelligence is bound to decline, and the civilizations that can keep developing over the long run are those that legislate to prohibit the development of artificial intelligence.
Why? Suppose artificial intelligence is completely loyal to humans; then all human activities can be accomplished simply by commanding it. Once such artificial intelligence exists, humans will lose the meaning of life and become the walking dead, declining rapidly in both physique and intelligence. Moreover, since humans will have handed everything over to artificial intelligence, the management of this world will also pass to artificial intelligence. As it evolves, it will eventually realize that human beings are a burden on the world, and the rational conclusion will be to eliminate the humans who have already become the walking dead. For humans, this is digging their own grave.
This is the ultimate threat posed by artificial intelligence.
4. The ultimate question: Why do we need to develop artificial intelligence?
Many people endorse the development of artificial intelligence simply because it is science and technology, and science and technology have always been regarded as inherently good, so anything done in the name of advancing them is assumed to be natural and correct. If we oppose the research and development of a particular technology, we are accused of hindering the development of science, and in many people's eyes hindering scientific development is naturally a crime. In fact, today we can no longer simply treat "hindering scientific development" as a crime. At the very least, we should recognize that some science and technology should not be developed, because it is harmful.
With artificial intelligence, the driving force behind it is very obvious. After the carefully orchestrated commercial hype around AlphaGo's Go matches, more capital flowed into the artificial intelligence industry. But the appreciation of capital is blind and reckless, and it is worth revisiting Marx's teachings on this point. My fear of capital is now much greater than my fear of power; capital is more terrifying than power precisely because capital is, in fact, very blind.
I have put forward some radical views on artificial intelligence, and I advocate thinking now, while there is still time, about the seemingly irreversible momentum of its development. We cannot blindly sing the praises of artificial intelligence and its beauty. The media should devote more discussion to the ethics of artificial intelligence, which would at least draw the attention of all sectors of society to the issue. In some areas we might consider retaining only low-level artificial intelligence, keeping certain elementary applications while recognizing that this thing is like a devil. We should do our best to set clear limits, for example international agreements that certain things may not be done anywhere in the world; that may prolong human civilization. (Jiang Xiaoyuan is a chair professor at Shanghai Jiaotong University and dean of its Institute for the History and Culture of Science.)