Over the Past Year, These Debates About Artificial Intelligence Ethics Have Unfolded Around the World
Author | Wang Jingshu, Assistant Researcher, Tencent Research Institute
At the 2019 International Consumer Electronics Show (CES 2019), which wrapped up in late January, LG President and Chief Technology Officer I.P. Park delivered a keynote speech on how AI can enable "self-evolving" products. A discussion of "artificial intelligence and ethics" thus became the prelude to a consumer technology event not usually associated with such serious topics.
From autonomous driving to smart homes, whether measured by frequency of discussion or by pace of innovation, artificial intelligence has penetrated every aspect of people's pursuit of a better life.
According to McKinsey's estimates, by 2030 artificial intelligence will generate nearly $13 trillion in additional economic output worldwide, contributing about 1.2% to annual world GDP growth [1]. At the same time, governments have taken note of how deeply this technology is reaching into human society. According to "Building an AI World: Report on National and Regional AI Strategies", released by an international think tank in Canada [2], 18 major countries have issued targeted AI strategies and policies covering science and technology, regulation, ethics, social governance and other areas.
However, with the explosive growth of artificial intelligence, its impact on society as a whole has moved from science fiction into reality.
Setting aside the fearsome Terminators of science fiction, even a seemingly simple technology like autonomous driving will face the "trolley problem" once it is truly put into use. The complexity of human social life lies in the fact that we pursue not only efficiency and accuracy in work, but also fairness and reasonableness in our choices.
As the technology continues to develop and artificial intelligence becomes embedded in people's daily lives, these ethical choices are placed squarely in front of us, becoming problems that AI designers must solve.
So, specifically, what discussions has artificial intelligence triggered? What issues do we need to pay attention to? Over the past year, everyone has focused on these issues:
Artificial intelligence should not harm people
Ethical norms for machines designed to help humans were laid down almost as soon as robots were conceived. In the 1950s, the famous science fiction writer Isaac Asimov proposed the "Three Laws of Robotics" in his book "I, Robot".
The First Law stipulates: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Protecting human life above all else is the prerequisite for every action. On paper this should be a universally recognized and followed rule, but reality is not as simple as science fiction.
In April 2018, Google was revealed to have partnered with the Pentagon, using machine learning and advanced computer vision to reduce the workload of military analysts: identifying objects in drone footage and automatically detecting and recognizing 38 categories of targets. The technology could also be used to track the movements of targeted individuals.
The move prompted a number of employees involved in the project to resign, and nearly 4,000 employees signed a petition asking Google to cancel it. In June, Google issued a statement emphasizing that it would not develop weapons or conduct AI research intended to harm humans, yet the project was not terminated. Under continued pressure from employees refusing to build artificial intelligence for the military, Google finally confirmed that it would not renew the contract after it expires in March 2019.
The victory of this "anti-war movement", initiated by ordinary employees in the AI industry, reminds us that AI technology undoubtedly has extremely broad applications. In today's era of rapid technological development, the humanoid "Terminator" may never appear, yet something like it could arrive quietly in another form.
This is why technological development needs ethical and regulatory constraints. After all, it is these most basic rules that keep human society functioning through constant friction, collision and conflict.
The de-weaponization of artificial intelligence may already be a consensus across much of society, and Google's project was ultimately defeated by the condemnation of pacifists around the world.
Beyond "active harm" such as autonomous weapons, however, as artificial intelligence penetrates different scenarios of our lives, how to handle situations of "unavoidable harm" has also become an important issue.
Autonomous driving is an example.
In October 2018, MIT researchers published a paper in Nature applying the "trolley problem" to autonomous driving [3].
Through the "Moral Machine" online platform they developed, the researchers asked more than 2 million participants from over 200 countries and territories a series of questions about which lives to prioritize in an emergency.
The results show the diversity of human choices: selection preferences vary greatly across groups, with some clusters tending to protect animals and others tending to protect the elderly. Such particularities can coexist across human clusters. But for an autonomous vehicle, whose behavior must be prescribed in advance according to one specific rule, either the preferences of a minority are amplified without limit and put into conflict with the majority, or they are annihilated entirely under the majority's overwhelming advantage.
If the rules for autonomous driving were drawn up according to the study's aggregate results, the many and the young would be protected while the few and the elderly would be sacrificed. It is easy to imagine: if such rules were fully rolled out, would the elderly still dare to appear on busy streets?
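The dilemma described above, where a single pre-programmed rule either amplifies or erases minority preferences, can be illustrated with a toy sketch. The numbers below are invented for illustration and are not data from the MIT study:

```python
from collections import Counter

# Hypothetical survey: each respondent states whom an autonomous car
# should prioritize in an unavoidable accident (invented distribution).
responses = ["young"] * 70 + ["elderly"] * 20 + ["animals"] * 10

# A vehicle must follow one fixed rule, so a designer might simply
# encode the majority preference...
fixed_rule = Counter(responses).most_common(1)[0][0]
print(fixed_rule)  # young

# ...and everyone who preferred otherwise is never represented:
ignored = sum(1 for r in responses if r != fixed_rule) / len(responses)
print(f"{ignored:.0%} of preferences are erased")  # 30% of preferences are erased
```

The point of the sketch is structural: whatever the survey says, collapsing a distribution of preferences into one fixed rule necessarily discards the rest.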
"On the one hand, we wanted to provide the public with a simple way to engage in an important societal discussion," said Iyad Rahwan, associate professor of media arts and sciences at the MIT Media Lab. "On the other hand, we wanted to collect data to identify which factors people think are important for autonomous vehicles to weigh in resolving moral trade-offs." The discussion has not ended; through it, people are becoming more aware of autonomous driving's real impact and of the rules being set for it.
Artificial intelligence should not discriminate against people
In the MIT survey, beyond the choice of lives, what deserves even more attention is how AI algorithms amplify differences in human preferences. If left unaddressed, these differences can be expanded into bias during algorithm development and go on to actually affect people's choices.
Take Amazon's experimental AI recruiting system. Trained by machine learning on the resumes of previous hires, it gave lower scores in practice to the resumes of female technical candidates. After repeated controversy, Amazon finally shut the system down in November 2018.
In the very month Amazon shut down its AI resume-screening system, eight members of the U.S. Congress sent a joint letter to Amazon's CEO protesting that its facial recognition system was also racially biased. Although Amazon withstood the pressure and pushed back this time, equal-rights advocates around the world remain unconvinced, and the dispute between the two sides has yet to end.
Through large-scale, rapid and precise repetitive computation, machine learning solves efficiency problems and has great room for development in many fields. But its efficiency rests on the training samples humans provide. In eliminating errors and extracting patterns, it naturally treats the majority of the samples as the norm, and when the result is applied to reality it becomes the "discrimination" people decry. This is neither deliberate behavior by developers nor a defect of artificial intelligence itself.
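A minimal sketch, in the spirit of the recruiting case above but with invented numbers and a deliberately naive frequency-based "model", shows how merely mirroring skewed historical data reproduces that skew without anyone intending it:

```python
# Toy training data: 90% of past hires were men (invented figures).
past_hires = [{"gender": "m"}] * 90 + [{"gender": "f"}] * 10

def learned_score(candidate):
    # A naive model that scores a candidate by how often their group
    # appears among past hires -- the historical imbalance becomes
    # the learned "signal", with no developer writing a biased rule.
    same = sum(1 for h in past_hires if h["gender"] == candidate["gender"])
    return same / len(past_hires)

print(learned_score({"gender": "m"}))  # 0.9
print(learned_score({"gender": "f"}))  # 0.1
```

Real resume screeners are far more complex, but the mechanism is the same: whatever correlates with past outcomes, including protected attributes, is absorbed as if it were merit.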
It might rather be said that, in the course of human history, artificial intelligence has helped point out problems we once failed to recognize. If we firmly believe AI should serve everyone, we must think about how to overcome our past prejudices. This is a task shared by humans and artificial intelligence.
Artificial intelligence should not "manipulate" people
Beyond the skewed preferences latent in existing data, people also harbor many concerns about AI's data collection and recommendations.
As AI keeps demonstrating its powerful data-processing capabilities, many commentators argue that AI is centralizing: its formidable data-gathering capacity creates enormous information asymmetry and, with it, social anxiety. The privacy-leak scandals of 2018 pushed data security into the spotlight, and distrust of artificial intelligence and algorithms rose to unprecedented heights.
At the same time, algorithm recommendation products have also been reexamined.
On the one hand, the powerful data capture and analysis behind algorithmic recommendation make people feel naked before artificial intelligence, with no privacy left; on the other, the "information cocoon" that recommendation builds by blindly catering to people's preferences makes them fear that AI is powerful enough to manipulate human "self-awareness".
In December 2018, YouTube, Google's video platform, was found to be pushing extremist content, fake news and the like to users. YouTube promptly apologized and removed the videos, but how to correct the problems of algorithmic recommendation systems is still being debated by all sides.
Human moral values formed naturally over a long history of social practice; for computers that have never undergone these social practices, moral constraints cannot arise naturally.
But clearly, the need for algorithms to embody values has become a mainstream consensus in the artificial intelligence community.
To build constraints that conform to human values into algorithms, making the algorithms transparent is only the first step. How to establish rules through social discussion, and how to design algorithms that conform to human ethical values with imagination and creativity, must be continually explored as AI technology develops.
Artificial intelligence should not completely replace people
If algorithms have opened cracks between consumers and artificial intelligence, AI's crushing advantage in production has caused even greater panic. From robotic arms to robots, from automated production to AI-written news, artificial intelligence products that can replace human labor keep emerging.
In December 2016, the White House released the report "Artificial Intelligence, Automation, and the Economy" [4], pointing out that AI-driven production was disrupting the labor market. At the time this seemed a little alarmist, but over the next two years people felt for themselves the pressure artificial intelligence puts on the labor market.
In early 2019, a Brookings Institution report [5] pointed out that about 36 million Americans are at risk of being replaced by artificial intelligence, not only in agriculture and manufacturing but also in the service industry. Some economists counter that automated production and services create new positions, and that jobs requiring high creativity cannot be replaced.
In fact, every technological leap brings pain to the society of its time, yet society always manages to transform itself through that pain. Some improve their skills through education and move into more challenging, more valuable work; some shift into the new positions artificial intelligence creates; others join the wave of entrepreneurship and innovation, injecting fresh vitality into society.
We can see that it is precisely in "contending" with artificial intelligence that creative people learn to cooperate with it, and that society, at some cost, keeps moving forward.
Staying true to the original aspiration of tech for good
Faced with the problems technology brings and the social debates around them, governments and industry have responded in their own ways.
The National Security Commission on Artificial Intelligence appeared for the first time in the U.S. Department of Defense's fiscal 2019 budget. The commission hires experts from AI giants including Google and Microsoft to evaluate technological innovations related to artificial intelligence and machine learning, ensuring that their application stays within the legal framework and conforms to ethical values.
Coincidentally, in December 2018 a group of 52 experts from various fields in the EU released the draft "Ethics Guidelines for Trustworthy AI" [6], hoping to regulate AI technology on the ethical front. They hold that artificial intelligence must first respect basic human rights, ethical rules and social values, that is, pursue an "ethical purpose"; and that while preserving technological creativity, the reliability of AI must be reflected on and strengthened.
In its 2018 AI development plan, the UK likewise established a Centre for Data Ethics and Innovation to try to govern elusive big data.
Beyond governments, major companies have also taken the initiative to issue their own ethical principles for artificial intelligence to ensure its sound development. At the 2018 Mobile World Congress, IBM Chief Technology Officer Rob High proposed three principles for IBM's AI development: trust, respect and privacy protection.
Microsoft, for its part, has urged governments to regulate facial recognition technology, proposing six principles: fairness, transparency, accountability, non-discrimination, informed consent and lawful surveillance. Facebook, deeply mired in the vortex of data leaks, also took action in early 2019, spending $7.5 million to partner with the Technical University of Munich to establish an institute for AI ethics, hoping to bring more ethical thinking into artificial intelligence research.
The "four functions" concept of artificial intelligence mentioned by Tencent Chairman Ma Huateng at the 2018 World Artificial Intelligence Conference also made a complete response to various issues of artificial intelligence. "Four-can" is translated as "ARCC" (, , , and , pronounced as ark), that is, in the future, artificial intelligence should be "knowable", "controllable", "available" and "reliable".
Tencent hopes that an ethical framework like ARCC can help AI developers and their products win the public's trust. This is also the goal pursued by AI developers around the world today.
Guo Kaitian, senior vice president of Tencent, said at the second Tech for Good Forum: "In the context of a digital society, practitioners need to stay alert and self-reflective; they need to believe in tech for good, and believe that humans have the ability and wisdom to steer and control this technological revolution."
From data collection and training to machine learning, what artificial intelligence has really done is amplify problems that have always existed in human society, reminding us to use imagination and creativity to rethink and solve them in the context of a new era.
Technology comes from human pursuit of a better life. It begins with people and should also be controlled by people. Whether it is legislative restrictions or value principles, in the final analysis, it is a re-examination of the ethical value of human society.
The tide of artificial intelligence continues. The pioneers of the Internet era must find their footing and think within this tide, holding to the belief in goodness and moving forward steadily.
-- END --
[Endnotes]
[1] Bughin, J. et al. (2018). Notes from the AI Frontier: Modeling the Impact of AI on the World Economy. McKinsey Global Institute.
[2] Dutton, T. et al. (2018). Building an AI World: Report on National and Regional AI Strategies. CIFAR.
[3] Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F. & Rahwan, I. (2018). The Moral Machine experiment. Nature 563, 59–64.
[4] Executive Office of the President (2016). Artificial Intelligence, Automation, and the Economy.
[5] Muro, M., Maxim, R. & Whiton, J. (2019). Automation and Artificial Intelligence: How Machines Are Affecting People and Places. Brookings Institution.
[6] EU High-Level Expert Group on Artificial Intelligence (2018). Draft Ethics Guidelines for Trustworthy AI.