How Does Musk-funded FLI Respond To AI Threats?
Max Tegmark, a professor and cosmologist at the Massachusetts Institute of Technology, designed the logo of the Future of Life Institute, an organization committed to reducing existential risk, as a tree. This unusual tree is lush on one side while only dead branches remain on the other, suggesting that technology can bring unprecedented prosperity to mankind but could also lead it to self-destruction.
Max is one of the co-founders of the Future of Life Institute (FLI). As the name suggests, it is an organization that focuses on the future of humanity, examining the prospects of modern technologies such as genetics, nuclear technology, and artificial intelligence.
The long list of scientific advisory board members on FLI's official website reads like a gathering of big names: a group of advisors drawn from some of the smartest and most influential people in the world.
Among them are Elon Musk, the maverick founder of Tesla; Alan Alda, the actor, director, and screenwriter who has won multiple Emmy and Golden Globe Awards; and Morgan Freeman, a movie star familiar to audiences everywhere. But most are celebrities of the academic world: the famous physicist Stephen Hawking leads a group of elites from Harvard, Oxford, Cambridge, MIT, and other universities, spanning genetics, neuroscience, artificial intelligence, physics, cosmology, the digital economy, and other fields.
Many of them have expressed concerns about artificial intelligence to the media on different occasions. In December last year, Hawking said in an interview with the BBC that fully developed artificial intelligence could spell the end of the human race. His main concern is artificial intelligence that reaches or surpasses human level: "It would develop and redesign itself at an ever-increasing rate, while humans, restricted by slow biological evolution, could not compete and would eventually be superseded." Musk has likewise said that "developing artificial intelligence is like summoning the demon." He said: "Just like in all those myths with the sorcerer, the pentagram and the holy water, every sorcerer claims he can control the demon, but in the end none of them succeeds."
These celebrities' concerns reflect a popular yet conflicted mood in society today. Artificial intelligence has developed rapidly in the past five years, and many things imagined five or more years ago have now come to pass: from robots at work in manufacturing plants, to brain-inspired prototype chips, to Siri and Microsoft's XiaoIce. Artificial intelligence has gradually entered and changed people's lives, but as machines become more and more human-like, they are also leaving their creators deeply uneasy.
In March 2014, at Max's initiative, Skype co-founder Jaan Tallinn, Viktoriya Krakovna, a Harvard PhD in statistics, Anthony Aguirre, a professor of theoretical cosmology at the University of California, Santa Cruz, and Meia Chita-Tegmark, a Boston University PhD student with a background in education and philosophy, jointly founded FLI.
Relying on the rich university resources in the Boston area, FLI has attracted volunteers from various universities. Volunteers and founders often sit together on the ground to brainstorm, discuss projects and organize activities.
"Out of concern for the future of mankind, we came together from different fields. Different professional backgrounds and interests can help us achieve our mission better," said Victorvia.
At that time, Jaan Tallinn, who had already achieved great business success, injected seed funding into the fledgling institute. But it was Elon Musk who really made the media and the public pay attention to the organization: earlier this year, Musk provided FLI with $10 million in funding for research on AI safety.
"We should take the initiative"
In October last year, Musk spoke with students at the centennial symposium of MIT's Department of Aeronautics and Astronautics. The topics were all about the "future": computer science, artificial intelligence, space exploration, and the colonization of Mars. When someone asked for his opinion on AI, the tech maverick said: "I think we should be very careful about AI. It is perhaps the biggest existential threat we face. I'm increasingly inclined to think there should be some regulatory oversight, at the national or international level, to make sure we don't do something very foolish."
The remarks of "summoning the devil" quoted by many media came from this exchange. Max said one thing he admired Musk was that he not only said it, but he really did it.
"Musk has been focusing on the safety issues of artificial intelligence for some time, which prompted him to join CSER (the Centre for the Study of Existential Risk) and FLI," said Viktoriya. "At the beginning of this year we organized an AI conference and reached a consensus, and Musk hopes to see these studies carried out in many directions."
From January 2 to 5, 2015, FLI organized a closed-door conference in San Juan, Puerto Rico, with no media present. Participants included AI practitioners from academia and industry, as well as experts in economics, ethics, and law. FLI hoped the conference could set the direction for future AI development and research, so as to maximize its benefits and avoid going astray. Musk, who was already a scientific advisor to FLI at the time, also attended. After that meeting, Musk announced a $10 million grant to FLI for research on AI safety.
In a conversation shortly after the funding was announced, Max asked Musk: "Why are you so interested in the future of mankind?" Musk said it was related to his childhood love of science fiction and philosophy books, which made him realize: "What we should do is make sure life continues into the future, especially conscious life. Only in this way can we better understand the nature of the universe and reach a higher level of civilization."
In that conversation with Max, Musk said he believes technology will have a huge impact on human society on five levels: making life multi-planetary, the sustainable production and use of energy, the continued development of the Internet, the recoding of human genes, and artificial intelligence. Tesla, which Musk founded, focuses on new energy and on reducing transportation's dependence on oil, while SpaceX, his private space transportation company, aims to pave the way for humans to enter space.
Musk's attitude toward artificial intelligence can seem contradictory. On the one hand, he believes AI's greatest benefit is that it can free people from much boring, heavy work and help break through hard problems that human intelligence has not yet solved, such as treating cancer and the diseases of old age.
On the other hand, he also believes that AI, like genetic technology, is "double-edged" and may bring chaos. "The best approach is to avoid the negative impact of AI, rather than wait until it appears before reacting. Some AI carries serious potential threats. When the risk is high, we should take the initiative rather than react passively."
"At that time, the chance of success in Tesla and the company might be only 20%. "In Musk's view, both are a long-term optimization for human society and should not be eager for quick success. Similarly, he also looks at the long-term view on funding FLI to conduct artificial intelligence security research and is not very concerned about money. "Not everyone needs to do these things, but someone really needs to do them. If I see no one is doing these areas, (that’ll think) maybe I can do something in these things.”
Everything is for the benefit of humanity
Jaan Tallinn, the co-founder of Skype, once offered an analogy: "Developing advanced artificial intelligence is like launching a rocket. The first priority is to maximize acceleration, but once it starts gaining speed, you also have to pay attention to steering and guidance."
Max, the founder of FLI, is not an opponent of AI and does not believe the development of artificial intelligence should be slowed down; rather, he hopes to adjust its goals. The MIT professor told Blog World that the goal of developing artificial intelligence should not be to make machines ever smarter, but to make them more loyal and more beneficial to humans.
"People who do biological things do not want to make biological weapons, they just want to do things like medicines that are beneficial to humans; they do chemistry, they don't want to develop chemical weapons, they just want to create new materials to help Humans. The same is true for artificial intelligence, and the people who develop them should think clearly at the beginning where these things will go,” Max said. Therefore, one requirement for FLI for parameter options is, "How to make AI more beneficial to humans rather than simply becoming more capable."
Under this general direction there are also specific research priorities. Based on earlier surveys and the results of the conference, FLI released a document detailing the short-term and long-term priorities for AI safety research. Short-term research focuses on the impacts and threats of AI that have already appeared, while long-term research focuses on questions such as whether superintelligent AI can be achieved.
After receiving Musk's funding, FLI began soliciting and selecting research projects on the future of AI. Max and another FLI founder, Anthony Aguirre, had previously run a similar organization in the field of physics, so they had experience in organizational operations and project management and set up a rigorous, scientific approval process: after receiving proposals from research teams, FLI organizes recognized scholars, professors, or industry insiders in the corresponding field into an expert jury to peer-review each project. FLI also ensures that reviewers and applicants do not know each other's identities.
The evaluation criteria include the qualifications of a project's principal researchers and their team; the value, scientific rigor, and innovativeness of the project itself; and the feasibility of completing the research within the given time frame. In the end, 37 projects passed the review.
Afterwards, FLI worked with a donor-advised fund (DAF) affiliated with the Silicon Valley Community Foundation to determine how much money to allocate to each project. Based on the funding each team requested, combined with the project's expected impact, the DAF determines the final grant for each project and handles the later disbursement of funds to the grantees.
The 37 projects include 32 specific research topics, one project to prepare and establish an artificial intelligence strategy research center, and four education and conference projects. FLI's initial plan was to divide Musk's funding roughly into four categories: computer science (50%), policy (20%), law, ethics, and economics (15%), and education (15%). The shares were later adjusted according to the quantity and quality of the proposals received.
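As a rough illustration of what that split implies in dollar terms (the figures below simply apply the stated percentages to the $10 million grant and are not FLI's final allocations, which were adjusted as noted above):

```python
# Illustrative only: applies the percentages quoted in the article to
# Musk's $10M grant. FLI later adjusted these shares, so the amounts
# below are a nominal starting point, not the final allocation.

grant_total = 10_000_000  # USD

planned_split = {
    "computer science": 0.50,
    "policy": 0.20,
    "law, ethics and economics": 0.15,
    "education": 0.15,
}

for category, share in planned_split.items():
    print(f"{category:>28}: ${grant_total * share:,.0f}")
```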
These projects will receive three years of funding, and most will start in September this year. Grantees will submit annual reports to FLI, which will organize corresponding reviews.
More terrible than unemployment is the loss of dignity
Professor Nick Bostrom of Oxford University received the highest funding of $1.5 million from FLI.
The money will be used to build an artificial intelligence strategy research center, jointly established by Oxford University and Cambridge University, that will focus on providing policy research support to governments, industry leaders, and others, thereby reducing the risks of AI development and maximizing its benefits.
Nick is a philosophy professor at Oxford University who focuses on existential risk and the future of humanity, and is the author of the best-selling book "Superintelligence: Paths, Dangers, Strategies". Nick believes there is good reason to think that unregulated, unconstrained artificial intelligence could pose considerable dangers: partly because the technology itself has unprecedented capabilities, and partly because with earlier high-risk technologies such as nuclear fission, policy only followed after the technology, so its development was accompanied by devastating risks. To avoid repeating this, research on AI safety strategy must stay ahead of the widespread deployment of practical AI that surpasses humans. Nick hopes the strategy research center's results will be used by governments around the world and by those in the AI industry.
Nick himself pays more attention to the long-term development of AI, including issues such as superintelligence, but once the artificial intelligence strategy research center is established, it will also address short-term problems caused by AI, such as unemployment. His personal view on this issue is relatively optimistic: "Personally, I think that so far we have not seen the application of artificial intelligence differ from the changes brought by earlier waves of technological progress: some jobs will be eliminated, but new jobs will emerge."
But not everyone is so optimistic. "I think the unemployment caused by artificial intelligence is the most worrying issue of the next 20 years," Michael Webb, one of FLI's grantees and a PhD at Stanford University, told Blog World. "Whether you are a high-income lawyer or an ordinary worker assembling iPads on a production line, you may lose your job once your employer finds that using machines or software is cheaper."
Michael said that while humans enjoy the economic growth and improved quality of life brought by artificial intelligence, society should pay more attention to the happiness and dignity of everyone, especially those whose jobs may be taken by technological progress. His research topic is "The Optimal Transition to the Artificial Intelligence Economy". Drawing on relevant economic theory, he will explore how AI development under different scenarios affects various jobs and the wider economy, and then analyze what outcomes different tax, welfare, education, and innovation policies would bring. "Only when we understand these situations more clearly can governments and individuals make smarter decisions."
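The article does not describe Michael's models in detail, so the following is only a toy sketch of the general idea: simulating how a workforce fares under different automation and retraining assumptions. The numbers and the retraining mechanism are invented for illustration and do not come from his project.

```python
# Toy model (illustrative only, not Michael Webb's actual research):
# each year a fraction of employed workers is displaced by automation,
# and a policy-dependent fraction of displaced workers is retrained
# back into new roles.

def simulate(workers=1_000_000, automation_rate=0.03,
             retraining_rate=0.5, years=20):
    employed, displaced = workers, 0
    for _ in range(years):
        newly_displaced = int(employed * automation_rate)
        employed -= newly_displaced
        displaced += newly_displaced
        retrained = int(displaced * retraining_rate)
        displaced -= retrained
        employed += retrained
    return employed, displaced

for policy, rate in [("no retraining", 0.0), ("strong retraining", 0.5)]:
    e, d = simulate(retraining_rate=rate)
    print(f"{policy}: employed={e:,}, displaced={d:,}")
```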
Compared with studies close to present reality, such as unemployment and economic optimization under the influence of AI, research on superintelligent AI is more theoretical. This project is undertaken by the Global Catastrophic Risk Institute, which aims to provide humanity with ways of countering such threats.
Completing a theoretical project is no easier than solving real-life problems. "The biggest difficulty in this research is that it concerns a technology that does not yet exist at all, and whether it will ever appear is itself an open question," said Seth Baum, executive director of the Global Catastrophic Risk Institute.
Seth and his team will build models from existing information, using methods such as fault trees and influence diagrams, trying to simulate possible development paths of superintelligent AI and the impact that different decisions would have on those paths.
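As a concrete illustration of what fault-tree reasoning looks like, here is a minimal sketch. The event names and probabilities are hypothetical placeholders and do not come from Seth Baum's actual models.

```python
# Hypothetical fault-tree sketch: the probability of a top-level risk
# event is built up from basic events combined with AND/OR gates.
# All names and numbers are invented for illustration.

def p_and(*probs):
    """Probability that all independent events occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """Probability that at least one of several independent events occurs."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Basic events (placeholder probabilities)
agi_is_built          = 0.5   # advanced AI is eventually developed
safety_research_fails = 0.3   # safety measures prove inadequate
deployment_rushed     = 0.4   # competitive pressure leads to premature deployment

# Intermediate event: the system is deployed without adequate safeguards
unsafe_deployment = p_or(safety_research_fails, deployment_rushed)

# Top event: an unsafe superintelligence scenario
top_event = p_and(agi_is_built, unsafe_deployment)

print(f"P(unsafe deployment) = {unsafe_deployment:.2f}")
print(f"P(top event)         = {top_event:.2f}")
```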
It is important to correctly understand human intentions
At this stage, what Max is most worried about is the trend of developing artificial intelligence with a purely "goal orientation".
"There is no distinction between evil or good. The machine will only be smarter and more 'confident', and it is sure to achieve its goals. If you have a machine that is very 'confident' and more capable than you, it will do it There is no way you can stop a target mission," he said. Similar scenes are presented in the popular American drama "Silicon Valley" - a driverless car drove to an isolated island according to the established procedures. Jared, who was sitting in the car, could not stop the car that had already started the program, and was forced to Trapped in the car for more than 4 days.
Therefore, how to make machines or artificial intelligence not only achieve their goals but also understand people's true intentions is the focus of many projects funded by FLI.
A research team at Duke University is trying to use methods from computer science, philosophy, and psychology to build an artificial intelligence system that can make reasonable moral judgments and decisions in realistic scenarios. Factors that affect moral judgment, such as rights (for example privacy), roles (such as being a family member), past behavior (such as promises made), motives, and purposes, will be built into the AI system.
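To make the idea concrete, here is a minimal, hypothetical sketch of how such morally relevant factors might be encoded as weighted features in a decision function. This is an illustration of the general approach, not the Duke team's actual system; the factor names and weights are invented.

```python
# Hypothetical sketch: morally relevant factors (rights, roles, past
# commitments, motives) encoded as boolean features with weights.
# Not the Duke team's actual model; all weights are placeholders.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    violates_privacy: bool      # rights
    harms_family_member: bool   # roles
    breaks_commitment: bool     # past behavior
    intends_benefit: bool       # motive / purpose

# Placeholder weights: how strongly each factor counts against (or for) an action
WEIGHTS = {
    "violates_privacy": -0.8,
    "harms_family_member": -1.0,
    "breaks_commitment": -0.6,
    "intends_benefit": +0.5,
}

def moral_score(action: Action) -> float:
    """Sum the weighted moral factors; higher means more acceptable."""
    score = 0.0
    for factor, weight in WEIGHTS.items():
        if getattr(action, factor):
            score += weight
    return score

share_data = Action("share a patient's records to speed up treatment",
                    violates_privacy=True, harms_family_member=False,
                    breaks_commitment=False, intends_benefit=True)

print(share_data.description, "->", moral_score(share_data))
```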
Nick said he could not say exactly what his project's research would yield. When studying such unknowns, he believes keeping the right attitude is essential: "This is a very new field, and it is important to keep approaching it and exploring it with an open mind. Perhaps for outsiders it is difficult to evaluate these studies. Our focus is on continually meeting the intellectual challenges we face and making progress in a practical sense, rather than catering to some people's curiosity."
Nick feels that many celebrities are now portrayed by conflict-loving media as "AI opponents". "Taking extreme views on the future development of artificial intelligence is very harmful." In fact, Musk, for all his vigilance about AI, has also invested in a company that develops artificial intelligence and another that focuses on making robots think like humans.
Max believes, "We don't want to slow down general AI research, we just accelerate research on AI security. We're like competing with the risks brought by AI. There's nothing happening yet, but it's possible that things can happen all of a sudden." That happened. We hope that those who develop artificial intelligence will think about these issues as well.”
In the online Q&A of FIL's official website, a netizen asked: "When artificial intelligence that reaches or even exceeds the human level appears, can humans still control the world?" FIL replied that if humans are no longer the smartest on this planet It is hard to say whether humans can still re-control it. Now FIL is using the capital of "Iron Man" and the world's most powerful brain to deal with the AI risks that humans may face in the future.
The article was first published in the 200th issue of "Blog World"