Lecture Review|Four Analogies Of Artificial Intelligence Ethics: Tools, Animals, Humans And Gods (Part 2)
Questions and answers on the boundaries and future of technology ethics
On April 23, Professor Cave of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge discussed four analogies for the ethics of artificial intelligence in Fangtang's "Quanqu" lecture. When his talk ended, the energy in the room had not yet dissipated, and the audience's questions about the perspectives, boundaries and future of technology ethics brought out still more facets of the problem.
Highlights from the Q&A session are excerpted below, in the hope of offering readers food for thought.
Do humans have a choice when it comes to whether to obey AI?
On-site question: When we talk about technology, there are always alternatives. AI is regarded as a decision-making technology for dealing with complex problems. But historically, many of the alternatives were far from ideal. For example, humans lacked systematic medical care long before there were AI-designed medical systems, yet they still survived. In my view, we are rarely in a situation of "adopt the best solution or be eliminated", as if we were pursuing some ultimate optimum or locked in a competition; historical cases such as market competition and the Cold War all reflect that logic. My question is: even if AI really is the better alternative for decision making, must we give up the human standpoint and human thinking in every case and obey AI's decisions?
Professor Cave: This touches on "technological determinism" and human choice. Many people hold a technologically determinist view and believe that the development of AI is inevitable. In addition, "accelerationists" advocate advancing AI at full speed, believing it can solve every problem, such as ending war and poverty. They often oppose ethical constraints and regulation, insisting that AI must be developed as quickly as possible.
However, it is crucial to believe that we have a choice. Especially in the current, worrying competitive climate, the mindset of "seek only victory and ignore the rules" is madness. There is no finish line and no winner in this race. AI should be a technology that benefits all of humanity, and its risks need to be managed collectively. There are precedents in history: the regulation of technologies such as nuclear energy and nuclear weapons is far from perfect, but it is far better than laissez-faire.
Therefore, we must believe that humans have genuine choice and agency, and turn that belief into policies that protect us.
If superintelligence arrives, where will those who don't believe in AI go?
On-site question: When you compare AI to gods, there will be believers and atheists. In the future you imagine, where AI is everywhere, how will the atheists cope?
Professor Cave: Both of these questions are grand ones. The first, if I understand it correctly, is about how atheists and others will view the rise of god-like AI systems and how this fits into people's religious ideas. I think the historical tendency to project religious destiny onto technology is very interesting. This view exists both among atheists and among some religious believers.
This way of thinking is particularly evident in the United States. When Europeans colonized North America, they firmly believed they were creating a new "Garden of Eden": as described in the Bible, humans were expelled from paradise through the Fall, and technology became their tool for rebuilding heaven. Technology played a central role in the ideology of European expansion and the conquest of America. Today, people still closely link their expectations of technology with the longing for utopia in the Christian tradition, even believing that powerful machines will upload human consciousness to the internet so that we can live forever in the digital realm. Although this view still exists among some Christians, it is becoming more and more popular among atheists, who turn to technology for redemption. We may not consciously embrace the idea, but it shows in the rise of the "life-extension movement", the belief that technology can achieve immortality (which is essentially a religious concept), and in people's blind optimism that AI will solve every problem.
At present this is only a very small, marginal group. But it is worrying that these people may really begin to worship AI. Just as there have always been people in history who give up critical thinking and hand responsibility over to the gods, in the future there may be people willing to let AI make decisions for them. They might say, "I just have to follow the AI's instructions."
Follow-up question: The "atheists" I mentioned want to refer to those who do not believe in AI, rather than those who are religiously meaningful.
Professor Cave: Oh, I see. No matter, that is an interesting one too. AI will eventually speak for itself. No one revered Go-playing AI before it appeared, but what about after it won? Perhaps we will see that sign soon.
The complex ecology behind AI
On-site question: You describe the relationship between AI and people as private and intimate, but in reality there are third parties such as engineers and companies behind AI. Does this mean that AI ethics is not just a two-party relationship, and that hidden third parties also need to be considered?
Professor Cave: This question is very important. I used analogies in the hope of offering some insight, but analogies also have their limits. The "relationship between the individual and AI" I emphasized earlier does obscure the complex ecosystem behind it.
There are other analogies here. For example, a professor of political science at the University of Cambridge compares artificial intelligence to corporations. We are all now thinking about how to interact with AI, but in a sense we have had AI for hundreds of years: the company. A company is a legal entity with its own decision-making processes; its values are independent of any individual, and even the CEO struggles to control such a complex entity. Legal and ethical structures have grown up around corporations to try to manage them, and perhaps we can learn from that.
Another Cambridge colleague, a historian of technology, thinks artificial intelligence is like a bureaucracy. When we think about the problems raised by the rise of artificial intelligence, these systems are opaque, yet in some ways very efficient, able to make decisions faster and better than humans. But at the same time, if the system carries even a little bias, it can amplify that bias, and this is just like a bureaucracy.
What was the world like before bureaucracy emerged? It was a feudal society, in which a local king or lord would simply say "yes" or "no": a highly individual way of making decisions. Bureaucracy takes power out of the hands of individuals and turns it into a decision-making machine, which may be fairer, but at the same time, if it contains a small error, the wrong decision is made millions of times, not just once. The same is true of artificial intelligence. Millions of people are using large language models, and any problem or bias in them will be amplified.
These are all further analogies for artificial intelligence, and there are many different ways of thinking about how we should view it. Each analogy has its own limitations. We need to keep generating different cognitive frameworks to help us understand AI ethics from multiple dimensions and to avoid the one-sidedness of any single framework.