Musk Once Lost Sleep Over The Threat Posed By AI, And Now He Is Warning: Artificial Intelligence Is More Dangerous Than Nuclear Bombs!
This is a wonderful talk by speaker Guo Rui @ Moment Talks
Author | Mojijun
How can such a powerful tool be used by us without harming us?

This is the question we encounter in every crisis of the order of creation brought about by artificial intelligence.
On November 29, local time, Musk shared at a summit his speculation about the recent "coup" storm at OpenAI. He said that although he did not know the inside story of the farce, he was worried about what it might mean for AI, and he hoped to learn the real reason the old board of directors dismissed Altman.
Later, when Musk was asked whether AI is good or bad for mankind, he replied: "We live in the most interesting era in history. For a while I was genuinely shaken and would lose sleep over the threat posed by AI. Then I chose to accept fate and thought, 'Okay, even if destruction is coming, would I still want to be alive for it?' And I felt I would definitely choose to live, because that is the most interesting option, even if there is nothing I can do about the destruction."
At the meeting, Musk, who has long emphasized the threat of AI, once again called on governments to regulate it: "I think AI is more dangerous than nuclear bombs. We regulate nuclear bombs; you can't build a nuclear bomb in your own backyard. I think we should implement some kind of regulation on AI."
After the "coup" incident in China, several internal researchers wrote to the board of directors warning that a powerful AI discovery could threaten humanity. Progress on the project, dubbed Q*, has some within the company believing it could be their breakthrough point in finding "superintelligence, or AGI." Defining AGI as "an AI system that is smarter than humans", I believe this is also the root cause of Musk's worries.
What I want to share today is a talk by Guo Rui, a Moment Talks speaker who holds a doctorate in law from Harvard University and is an expert on the future of the rule of law. He walks us through the ethical issues artificial intelligence is raising today and proposes two principles for dealing with them: the principle of fundamental human interests and the principle of responsibility. He calls on everyone to take an active part in thinking through the ethical risks of artificial intelligence.
What ethical issues might artificial intelligence bring?
Hello everyone, I am Guo Rui, a speaker at Moment Talks.
There has never been an era in which we placed so many expectations on technology. Artificial intelligence has entered every aspect of our lives: AI-driven taxis are already on the road, AI has made manufacturing more efficient, and AI is helping doctors diagnose diseases faster and more accurately. Yet there has also never been an era so full of doubts about the application of technology. I believe many friends here live in communities that have installed facial recognition access control; even kindergartens now use facial recognition.
Some time ago, an article about the tug-of-war between food delivery riders and algorithms went viral in our WeChat Moments. While we are full of sympathy for the riders, sometimes we cannot help worrying about our own futures: will our professions one day be ruled by algorithms the way the riders' are, our competition reduced to the same involuted, deeply inefficient race?
But faced with so many complex applications of artificial intelligence, what exactly makes us uneasy? We need a deeper understanding of the ethics of artificial intelligence in order to grasp the principles at stake and think about what we can do. Let me illustrate with two examples.
During the Double 11 shopping festival that just ended, the smart recommendations on everyone's homepage were all different, which is already a huge technological advance. But there are real problems here. Take my own case: my wife and I happened to mention something in the kitchen, a smart toilet seat, something I had never bought before. Yet just after we said it, the e-commerce platform recommended exactly that smart toilet seat to me. Is the phone eavesdropping?
I have discussed these questions with front-line developers. The engineers told me that under current technical conditions, using a phone's microphone in a noisy environment to accurately pick out the product you mentioned, confirm it is something you need, and push it to you is simply uneconomical. The recommendation is far more likely to be derived from your other shopping habits and behavioral characteristics. Fine: at least in the sense of eavesdropping, our privacy seems safe. But are there other problems?
A Harvard professor has traced how this situation came about. Starting in 2002, Google adopted a new business model. Previously, Google collected personal data in order to improve search: the system recommended products and sold you ads based on your search results. After 2002, Google discovered that the personal data it collected did not have to serve search alone; it could be used to directly match you with advertisers and target ads at you precisely. Google therefore collected vast amounts of behavioral data to better predict what each user would do, and better prediction in turn creates better opportunities for intervention. This was the precondition for Google's continued growth in advertising revenue.
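To make that prediction loop concrete, here is a minimal, hypothetical sketch of behavioral targeting: a tiny classifier trained to predict ad clicks from invented behavioral features. The feature names, data, and thresholds are all assumptions for illustration, not anything any platform actually uses.

```python
# Hypothetical sketch: predicting ad clicks from behavioral features alone,
# with no microphone involved. All features and data here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is one user: [home-goods browsing, renovation purchases,
# time on product pages], all scaled to [0, 1].
X = rng.random((200, 3))
# Synthetic ground truth: interest in home goods plus renovation purchases
# drives clicks on a smart-toilet-seat ad.
y = (0.7 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.random(200) > 0.6).astype(int)

model = LogisticRegression().fit(X, y)

# A user who browses home goods heavily but never said the product aloud:
new_user = np.array([[0.9, 0.8, 0.4]])
print("Predicted click probability:", model.predict_proba(new_user)[0, 1])
```

Even this toy model shows the mechanism the engineers described: shopping habits alone can make a recommendation look eerily like eavesdropping.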
In a business context where algorithms are this powerful, is there not a deep asymmetry between ourselves and the algorithms? That is our first example.
Our second example is about autonomous driving. Everyone has heard of the famous philosophical puzzle called the trolley problem, and in autonomous driving the trolley problem has become reality. The manager responsible for autonomous-driving development at Mercedes-Benz was once asked in an interview: in a true emergency, between the passengers in the car and pedestrians on the road, whom would you choose to protect? He replied: we are a car company and we want to sell cars, so of course we protect the people in the car, not the people on the road. The reply caused an uproar, because people suddenly realized that in the technical context of autonomous driving, pedestrians on the road bear a disproportionate risk of traffic accidents. Mercedes-Benz was forced to walk the statement back, saying it was not the company's official position.
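To see why no purely technical answer exists, consider a hedged sketch: a toy maneuver-selection function in which the ethical stance is nothing more than a pair of weights. Every name and number below is invented for illustration; no manufacturer's actual logic is implied.

```python
# Hypothetical sketch: any emergency-maneuver policy implicitly encodes a
# value weighting between occupants and pedestrians. The weights make that
# ethical choice visible as an explicit parameter; they are not a recommendation.

def choose_maneuver(maneuvers, occupant_weight=1.0, pedestrian_weight=1.0):
    """Pick the maneuver that minimizes weighted expected harm."""
    def cost(m):
        return (occupant_weight * m["p_occupant_harm"]
                + pedestrian_weight * m["p_pedestrian_harm"])
    return min(maneuvers, key=cost)

options = [
    {"name": "swerve_into_barrier", "p_occupant_harm": 0.6, "p_pedestrian_harm": 0.05},
    {"name": "brake_straight",      "p_occupant_harm": 0.1, "p_pedestrian_harm": 0.4},
]

# Equal weights favor braking straight (cost 0.5 vs 0.65)...
print(choose_maneuver(options)["name"])                      # brake_straight
# ...while discounting occupant harm flips the choice (cost 0.41 vs 0.11).
print(choose_maneuver(options, occupant_weight=0.1)["name"]) # swerve_into_barrier
```

The point is not which weights are right; it is that someone must choose them, and that choice is an ethical act, not an engineering one.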
Okay, but how should one choose? This is another classic ethical question. Some people have already proposed solutions, which we will come to below. What I want to ask is: behind all of these tangled artificial intelligence ethics issues, what is their essence?
In all of these technical contexts, artificial intelligence ethics issues arise because we face a crisis of the order of creation: people now face the real possibility of being backlashed by the technology they created. The crisis contains two major problems. The first is the problem of the final criterion, which we saw in the context of autonomous driving: even among us humans there is no agreed answer, with each side insisting its own position is the right one. In a technical context, it is harder still to resolve.
It is generally accepted that artificial intelligence can be divided into narrow AI, which is weak artificial intelligence; general AI, which is what we call strong artificial intelligence; and super AI, whose intelligence far exceeds that of humans. In the context of super AI, the two problems appear at the same time, and we truly encounter a crisis of the order of creation that everyone can feel in their gut.
Someone has given this example: suppose you have an AI service robot at home and you say, "I'm hungry, prepare some food for me." When you come home, you find your beloved pet cat has been cooked. How can such a powerful tool be used by us without harming us? That is the question we encounter in every crisis of the order of creation brought about by artificial intelligence. The crux of the problem is that artificial intelligence has not yet reached human intelligence, yet it has already been handed the responsibility of making decisions for people. That is the common scenario in which these AI ethics issues arise. So what should we do?
This is also part of the work we are currently doing on artificial intelligence standardization. We hope that every artificial intelligence product and service, and everything built on artificial intelligence technology, will undergo an ethical risk assessment. We hope the assessment can be very concise, like the energy labels on household appliances, so that its ethical risk can be graded green or red. Once the risks are identified, all stakeholders, from engineers to users to manufacturers, can intervene in time to recalibrate the system's values. Only then can artificial intelligence, such a powerful technology, truly avoid harming people and instead benefit human society. Okay, thank you all.
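As a purely illustrative sketch of what such a green/red label might look like in practice: the checklist items, weights, and threshold below are invented assumptions, not any published standard.

```python
# Hypothetical sketch of a green/red ethical-risk label. The checklist,
# weights, and threshold are invented for illustration only.

RISK_CHECKLIST = {
    "makes_decisions_for_humans": 3,  # the AI acts on a person's behalf
    "affects_physical_safety": 3,     # e.g. driving, medical use
    "processes_personal_data": 2,
    "no_human_override": 2,
    "opaque_to_users": 1,
}

def ethical_risk_label(product_traits, red_threshold=5):
    """Sum the weights of the traits a product has; label it green or red."""
    score = sum(weight for trait, weight in RISK_CHECKLIST.items()
                if product_traits.get(trait, False))
    return ("red" if score >= red_threshold else "green"), score

label, score = ethical_risk_label({
    "makes_decisions_for_humans": True,
    "processes_personal_data": True,
})
print(label, score)  # red 5
```

A label this simple cannot settle the hard cases, but like an appliance energy mark it gives every stakeholder a shared, immediate signal of where to look closer.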