AI Risks

Hawking And Musk Warn Of The Dangers Of Artificial Intelligence

"Iron Man" and "Martian", Elon Musk, who created Tesla Motors and Rockets, is a "scientific madman" who is just a fake one. What's gratifying is that Musk has enough rationality to control his wildness.

Last week, Musk said, "We need to be very careful with artificial intelligence; it may be more dangerous than nuclear weapons." Earlier this year, Musk brought up James Cameron's classic from 30 years ago, "The Terminator." He said, "That is definitely a situation humans will do their best to avoid. Since there are such warning movies, we must be careful."

Stephen Hawking, who also signed the AI warning, said in an interview with HBO that artificial intelligence could be a "real danger": robots may find ways to improve themselves, and those improvements will not always benefit humans.

The typical belief among those optimistic about artificial intelligence goes like this: "Humans cannot be surpassed by artificial intelligence, because we can use artificial intelligence to strengthen ourselves. There will be no war between humans and machines; by integrating with intelligent products, humans will make themselves stronger." This idea is too presumptuous. Imagine the people who invented the gun saying, "Guns are for dealing with jackals and wild beasts, and they will make humans stronger. They will never be turned against our own kind..." Looking back, wouldn't such assurances seem ridiculous?

Almost all cutting-edge technologies are first used in warfare, and artificial intelligence will likewise be given priority for killing. Unlike ordinary guns or even nuclear weapons, such a system will keep learning to improve its killing efficiency, repair and upgrade itself, and do everything possible to maintain its energy supply... Are the manufacturers confident they can control it? Can Google's engineers? And what if the maker itself is an anti-human demon?

In fact, the essential difference between living and non-living bodies lies in whether they have a survival instinct. The so-called survival instinct is love for one's own life, from which emotions such as greed and fear are derived. From a young child's curiosity to space exploration, the fundamental driving force is the survival instinct. No matter how complex a spacecraft is, without a survival instinct it is not a living body but a machine. No matter how simple a single-celled creature is, it is a living body with a survival instinct. Antibiotics are toxins produced by bacteria to kill other bacteria and monopolize the living environment and its nutrients.

The same applies to artificial intelligence: without a survival instinct, it cannot think fully like a human; it remains, after all, a machine. But give it a survival instinct and teach it to love its own "life", and what value do humans have to it? Does a machine need humans to cook for it and clean its rooms? Will those who try to cut the power and end the machine's "life" be terminated by a machine that can read faces, or even has "intuition"?

Fortunately, artificial intelligence cannot be achieved in one step. When the first "Terminator generation" appears, it can still be ended by humans without too many consequences. I hope that, by then, the engineers at Microsoft and Google will take the warnings of Hawking and Musk seriously.
