A Camera in a US Home Suddenly Spoke and Played Vulgar Music; Unplugging It and Changing the Password Didn't Help
In the film, a robbery leaves Gray's wife dead and Gray himself paralyzed. A genius scientist then gives him an "upgrade": the artificial intelligence program STEM is implanted into his body, granting him superhuman abilities and turning a "disabled man" into a professional killer. But as STEM evolves and upgrades, Gray is pushed, step by step, to hand over control of his body and his conscious mind...
Many people consider it one of the best recent films about artificial intelligence and the future of humanity. The confrontation between AI and humans is an eternal theme in science fiction: from "Blade Runner" to "Ex Machina" to this year's low-budget "Upgrade", all of them reflect the threat that artificial intelligence may pose to humans in the future.
Cybercrime's malice originates in human nature
AI rebellion is a common plot in science fiction movies. The problem is that, in reality, real AI seems to be approaching us step by step, leaving many people worried and uneasy: will artificial intelligence "do evil"?
Quite a few people lean toward the AI-threat theory. "We need to be very careful about artificial intelligence; it may be more dangerous than nuclear weapons," Musk once said on Twitter. "Artificial intelligence may be a 'real danger'. Robots may find ways to improve themselves, and these improvements do not always benefit humanity."
"Any technology is a double-edged sword and can be used to do evil. Why does artificial intelligence cause such a big response?" At the sub-forum of the 2018 China Computer Conference held recently, Wu Xiangqian, the president of Harbin Institute of Technology, was appointed as a professor at Harbin Institute of Technology. A question has been raised, what is the bottom line of artificial intelligence research?
As early as 1942, Asimov proposed the Three Laws of Robotics. The problem is that these elegant laws from science fiction run into great difficulties in execution.
"What kind of program runs in a computer depends on who wrote the program." Tan Xiaosheng, technical president and chief security officer of 360 Group, said that whether the law of robots is reliable is first defined by people, and then by machines. Store, execute.
It is worth noting that "don't be evil" has become a guiding principle in the technology industry. So where do robots' evil and malice come from?
Artificial intelligence is now developing in full swing, but the first to embrace AI were cybercrime groups, who use AI methods to break CAPTCHAs and steal accounts. Tan Xiaosheng said with a wry smile: "In 2016, the revenue of China's cybercrime industry exceeded 100 billion yuan. The whole black industry earns more than we do; how could it lack motivation?"
"The essence of AI doing evil is that humans are doing evil." Zhang Ping, a professor at the School of Law of Peking University, believes that AI is just a tool. If someone uses AI to do evil, then the people behind AI should be punished, such as AI R&D personnel, Controller, owner or user. When AI shows "evil" manifestations that harm humans, public interests and market rules, the law will come out to regulate it.
Accidents caused by driverless cars and robotic surgery, as well as abuse and loss of control in big data analysis, have already occurred. So will artificial intelligence evolve beyond human control? And if it does, will humans still be able to resist AI doing evil?
Task-driven AI cannot commit "crimes against humanity"
It is worth noting that Hawking warned humans in his final work: "The short-term impact of artificial intelligence depends on who controls it; the long-term impact depends on whether it can be controlled at all." The implication is that the real risk of AI is not malice but capability.
"The future development of artificial intelligence will threaten human survival. This is not an unfair worry. There is indeed a great risk. Although it may not happen, there is a high probability that it will happen." In Tan Xiaosheng's view, humans will not be allowed to be Destruction, no matter how artificial intelligence evolves, there will always be loopholes, and hackers will find a way to completely destroy this system in extreme situations.
In this regard, Ni Bingbing, a tenure-track researcher in the Department of Electronic Engineering at Shanghai Jiao Tong University, is optimistic. "Most of our AI technologies are task-driven; their inputs and outputs are pre-specified by researchers and engineers." Most AI technologies, Ni explained, are far from having anti-human capabilities, and there is no need to worry for now.
Zhang Ping said that when AI develops to the stage of strong artificial intelligence, machines' capacity for autonomy will improve: they will be able to learn and upgrade themselves and will gain very powerful capabilities. When the human brain can no longer match the computer, such strong AI will pose a threat to us.
"What kind of wisdom and values humans inject into AI is crucial, but if AI reaches the top level of evil that humans cannot control - 'crime against humanity', it must be handled in accordance with the current human laws." Zhang Ping said that in addition to the law, In addition, there is also a mechanism to immediately "execute" such AI to promptly stop the greater harm it causes to humans. "This requires that the technical treatment of 'one-click paralysis' must be considered in AI research and development. If such a technical preset cannot be achieved, this type of AI should stop investing and R&D, and punish it globally like humans treat drugs."
Prevention mechanisms must keep pace with the rising number of AI-misuse cases
In fact, people's concerns are not groundless. Incidents of AI behaving badly began to emerge two years ago: workplace bias, political manipulation, racial discrimination, and so on. Earlier, there was also an incident in Germany in which a factory robot killed a worker on the assembly line.
It is foreseeable that cases of AI doing evil will increase day by day. So how should humans respond?
"If we regard AI as a tool and product, we should have a preventive function from a legal perspective. Scientists should intervene in values from the perspective of moral constraints and technical standards." Zhang Ping emphasized that R&D personnel cannot instill in AI Wrong values. After all, for the development of technology, development is always first and then legally bound.
In Ni Bingbing's view, at present, whether in algorithms or in technology, AI is still manipulated by humans; we always retain strong means of control, ensuring at the highest level that AI cannot harm people. "If there were no such control or backdoor, it would mean the evil is done not by the AI, but by the person who invented the AI tool."
All technology has two sides. Why, then, does evil done by artificial intelligence feel especially frightening? Experts at the meeting said bluntly that it is because of AI's uncontrollability: faced with a black box, people fear what they cannot control all the more.
Take deep learning, currently the hottest field, which industry insiders call "contemporary alchemy": they feed in all kinds of data to train an AI and "refine" a result without knowing why it turned out the way it did. Can humans trust a decision-maker they cannot understand?
Clearly, the boundaries of technological development need to be drawn. Bill Gates has also expressed concern, believing that at this stage humans should not only develop AI further but also begin to address the risks it brings. Yet "most of these people are not studying AI risk; they are just accelerating AI development."
Industry experts call for us to know clearly what decisions artificial intelligence will make, and to place constraints on both the scope of AI's application and the results it produces.
Will AI evolve? Will an AI society form in the future? "AI might destroy humans in order to obtain resources; this is entirely possible. So we must still watch the degree and the risk of AI doing evil." A guest at the event suggested that artificial intelligence should now be classified by stage, such as weak, strong, and super intelligence, to clarify which AI should be studied, which should be studied with caution, and which must absolutely not be studied.
How do we keep AI from going astray on the road of rapid progress? "We must guard against it on multiple fronts: technology, law, morality, and self-discipline," Zhang Ping said. AI development should first consider moral constraints; when humans cannot foresee the consequences, development should proceed cautiously. At the same time, legal regulation is needed, such as jointly establishing an international order, as with the atomic bomb, so that AI cannot be allowed to develop without restriction.