Musk Once Lost Sleep Over the Threat Posed by AI, and Now He Warns: Artificial Intelligence Is More Dangerous Than Nuclear Bombs!
This is a wonderful speech by speaker Guo Rui @ Yishi Talks
Author | Yishijun
Such a powerful tool:
how can it be used by us without hurting us?
This is the question we face, with all artificial intelligence,
about the crisis of the order of creation.
On November 29, local time, Musk offered his own speculation at the summit about the recent "coup" turmoil at OpenAI. Musk said that although he did not know the inside story of the farce, he was worried about what it all might mean for AI, and he hoped to learn the real reason the old board of directors removed Altman.
Later, when asked whether AI is good or bad for humanity, Musk replied: "We live in the most interesting era ever. There was a time when this really hit me hard and I would lose sleep over the threat posed by AI. Later I chose to accept fate and thought: well, even if destruction is coming, would I still want to go on living? Then I felt I would definitely choose to live, because that would be the most interesting, even if there were nothing I could do (about the destruction)."
At the meeting, Musk, who has long stressed the threat posed by AI, once again called on governments to regulate it: "I think AI is more dangerous than nuclear bombs. We regulate nuclear bombs; you can't build a nuclear bomb in your own backyard. I think we should impose some kind of oversight on AI."
After the "coup" turmoil, several researchers inside OpenAI wrote to the board of directors warning that a powerful AI discovery could threaten humanity. Progress on the project known as Q* had convinced some company insiders that it might be a breakthrough on the path to "superintelligence" (i.e., AGI). AGI, defined as "an AI system smarter than humans," is believed to be the fundamental reason for Musk's concern.
What I want to share today is the speech of Guo Rui, a Yishi Talks speaker who holds a doctorate in law from Harvard University and researches the future of the rule of law. He introduces the ethical problems artificial intelligence currently raises, puts forward two principles for responding to them, namely the principle of fundamental human interests and the principle of responsibility, and calls on everyone to take an active part in thinking about the ethical risks of artificial intelligence.
What ethical problems may artificial intelligence bring?
Hello everyone, I am Guo Rui, a speaker at Yishi Talks.
In fact, no era has been as full of expectation for technology as ours. Artificial intelligence is already used in every aspect of our lives: we see taxis driven by artificial intelligence on the roads, we see artificial intelligence making our manufacturing more efficient and precise, and artificial intelligence is helping doctors diagnose diseases more efficiently and accurately. Yet no era has been as full of doubt about the application of technology as ours, either. I believe many of the residential communities here use facial-recognition access control; even in places like kindergartens, facial-recognition technology is being applied.
Some time ago an article flooded our WeChat Moments about the game between food-delivery riders and algorithms. While we feel deep sympathy for the riders, we sometimes cannot help worrying about our own future: will our own professions one day be dominated by algorithms the way the riders' work is, with our competition likewise turning into that kind of involuted, highly inefficient race?
But faced with so many complex applications of artificial intelligence, what exactly makes us uneasy? We need to understand the ethics of artificial intelligence more deeply in order to grasp these principles and to think about what we can do. Let me illustrate with two examples.
During the Double 11 shopping festival that just ended, the intelligent recommendations on everyone's homepage were all different, which is in itself a huge technological advance. But there are some problems. Someone once asked me: "Teacher Guo, I find one thing very strange. My wife and I casually mentioned something in the kitchen, say a smart toilet seat, something I had never bought before, yet right after we said it, the e-commerce platform recommended exactly that smart toilet seat to me. Is my phone eavesdropping on me?"
So I have discussed these questions with front-line developers in industry. The engineers told me that under current technical conditions, accurately extracting from the phone's microphone, in an environment full of background noise, the very product you mentioned, confirming it is something you want, and then pushing it to you would be very uneconomical; the recommendation is far more likely derived from your other shopping habits and behavioral characteristics. Fine, so at least our privacy is not being violated in the sense of eavesdropping. Are there other problems?
A professor at Harvard University has traced how such a scenario came about. Before 2002, the personal data Google collected was used to improve search: based on your search results, the system would recommend a product to you and sell the advertising. After 2002, however, Google discovered that the personal data it collected did not have to be used merely to improve search; it could directly match advertising to that data and recommend products to you precisely. Collecting a large volume of personal behavioral characteristics allows better prediction of what a user will do, and better prediction creates better opportunities to intervene. This became the premise of Google's sustained growth in advertising revenue.
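To make the developers' point concrete, here is a minimal illustrative sketch, not any platform's actual system: a toy score that ranks candidate products purely from recorded behavioral signals such as searches, page views, and purchases. Every event, product, and weight below is invented for illustration.

# Toy illustration only: inferring likely interest from behavioral signals,
# with no audio data involved. All events, products, and weights are invented.
from collections import Counter

# Hypothetical behavioral log for one household account.
behavior_log = [
    ("search",   "bathroom renovation ideas"),
    ("view",     "heated toilet seat reviews"),
    ("purchase", "bathroom non-slip mat"),
    ("search",   "smart home devices"),
]

# How strongly each kind of event counts as an interest signal (invented weights).
event_weights = {"search": 1.0, "view": 2.0, "purchase": 3.0}

def interest_score(product_keywords, log):
    """Score a candidate product by keyword overlap with logged behavior."""
    score = 0.0
    for event, text in log:
        words = Counter(text.lower().split())
        overlap = sum(words[w] for w in product_keywords)
        score += event_weights[event] * overlap
    return score

candidates = {
    "smart toilet seat":   {"smart", "toilet", "seat", "bathroom"},
    "mechanical keyboard": {"mechanical", "keyboard"},
}

for name, keywords in candidates.items():
    print(name, interest_score(keywords, behavior_log))

# The "smart toilet seat" scores highest simply because of related browsing and
# purchases, which is the kind of inference the developers described: no
# eavesdropping is needed when behavioral data already predicts the interest.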
In a business context where algorithms are this powerful, do we not feel an enormous gap in strength between ourselves and the algorithm? That was the first example.
Our second example concerns autonomous driving. Everyone has heard of the famous philosophical puzzle called the trolley problem. That famous trolley problem has in fact become a reality in autonomous driving. A manager of Mercedes-Benz's autonomous-driving development once gave an interview and was asked: in a particularly urgent situation where you must choose between the passengers in the car and the pedestrians on the road, which side do you protect? The Mercedes-Benz manager replied: we are a car company, we want to sell cars, so of course we protect the people in the car, not the pedestrians on the road. The reply caused an uproar, because people suddenly realized that in the technical context of autonomous driving, pedestrians on the road bear a disproportionate risk in traffic accidents, and Mercedes-Benz was forced to walk the statement back and say it was not the company's official position.
Fine, but how should one choose? This is another classic ethical problem. Some solutions to it have of course been proposed, and we will come to them below. What I want to ask is this: among the many complex issues of artificial intelligence ethics, what is their essence?
We see that in all of these technical contexts, the ethical problems of artificial intelligence arise because we face a crisis of the order of creation, that is, the possibility that people are harmed by the very technology they create. It contains two major problems. The first is what we call the problem of the ultimate criterion. We have already seen it in the context of autonomous driving: even among ourselves it is hard to say what the right answer is, and in a technical context it becomes even harder to resolve.
It is generally accepted that artificial intelligence can be divided into narrow artificial intelligence, that is, weak artificial intelligence; general artificial intelligence, which is what we call strong artificial intelligence; and super artificial intelligence, whose intelligence far exceeds that of humans. In the context of super artificial intelligence, the two major problems appear at the same time, and we truly face a crisis of the order of creation that everyone can feel.
Someone has put it this way: suppose you have an AI service robot at home, and you say, "I'm hungry, prepare some food for me." When the owner comes home, he finds his beloved pet cat has been cooked. Such a powerful tool: how can it be used by us without hurting us? This is the question behind every crisis of the order of creation we encounter with artificial intelligence. The crux is that artificial intelligence has not reached human intelligence, yet it has been handed the responsibility of making decisions for humans. This is the common scenario in which these ethical problems of artificial intelligence arise. So what should we do?
This is also the work we have recently been doing as part of artificial intelligence standardization. We hope to assess the ethical risk of artificial intelligence products and services, of everything that uses artificial intelligence technology. We hope the assessment will be very concise, like the energy-efficiency labels on household appliances: ethical risk can be rated in green and red. Once the ethical risks have been identified, all stakeholders, from engineers to users and manufacturers, can intervene in time to calibrate the system's values. Only then can artificial intelligence truly avoid harming people and benefit human society. OK, thank you, everyone.