The Rise of Algorithmic Decision-Making: Several Ethical Issues and Strategies in the Era of Artificial Intelligence
This article is based on a speech by Tencent Research Institute researcher Cao Jianfeng at the "Intelligent System Evaluation Forum in the Era of AI Change" sub-forum.
Hello everyone! I am honored to have this opportunity to share some of my observations and thoughts on the ethics of artificial intelligence. The topic of my talk today is "Ethics of Artificial Intelligence: Problems and Strategies".
The era of artificial intelligence is accelerating, and algorithmic decision-making is on the rise
We can see that the third wave of AI has begun, and it is reflected at three levels.
At the technical level, algorithms have advanced. When artificial intelligence emerged in 1956, the focus was on logic-based approaches such as expert systems; during the second wave, machine learning became mainstream; this time the driver is deep learning, whose self-learning, self-programming algorithms no longer require programmers to instruct the computer step by step and can therefore handle more complex tasks. In a loose sense, these learning algorithms work like the human brain. In addition, computing power has improved, including today's emerging quantum computers, and big data has become increasingly common. Both have been of great value to artificial intelligence, making more complex algorithms feasible.
At the application level, from speech recognition and machine translation to medical diagnosis and autonomous driving, AI applications are steadily deepening and maturing, and in some tasks have even begun to surpass humans. This has led people to anticipate an ultimate algorithm with general intelligence, and to worry that occupations such as stenography, translation, and driving may be replaced by machines. Some studies even estimate that nearly half of existing jobs could be taken over by machines.
At the business level, facing foreseeable gains, mainstream Internet companies at home and abroad have pivoted toward AI, and entrepreneurship and investment in the field are in full swing. There are already more than 1,000 AI companies worldwide, and the market still has enormous room to grow.
Against this background, we increasingly live under algorithms, and algorithmic decision-making has begun to intervene in, and even dominate, more and more human affairs. Much of the content people consume online, such as news, music, videos, advertising, and social feeds, as well as the products they buy, is selected for them by recommendation engines rather than by someone making decisions behind the scenes. In finance, algorithms can decide whether to grant a user a loan and for what amount. In corporate management, an American investment firm began developing an AI system several years ago to run the company: recruitment, investment, major decisions, and other corporate affairs are managed and decided by this system, and the remaining employees are essentially a group of programmers responsible for keeping it running stably.
Perhaps in the future, a company's success will depend less on the emergence of a great CEO like Jobs and more on having an AI system that is smart and powerful enough. There are many more examples of algorithmic decision-making; I will not list them one by one here.
Algorithmic discrimination, privacy, safety and responsibility, robot rights: AI ethical issues are increasingly emerging
So AI is indeed a social transformation that is already happening, and its potential benefits are enormous. But we cannot ignore the ethical issues behind it. Today I will mainly discuss four of them.
The first is algorithmic discrimination.
People may object that algorithms are mathematical expressions and therefore objective: unlike humans, they hold no prejudices, feel no emotions, and are not easily swayed by external factors, so how could they discriminate? Humans certainly can: studies have shown that hungry judges are harsher on defendants and hand down heavier sentences, hence the saying that justice depends on whether the judge has had breakfast. Yet algorithms are bringing similar discrimination problems of their own.
For example, a crime risk assessment algorithm used by some U.S. courts was shown to discriminate systematically against black defendants: a black defendant is more likely to be wrongly flagged by the system as high risk and therefore sentenced to prison, or to a longer term, even where probation would have been appropriate. Image recognition software has mistakenly labeled black people as "chimpanzees" or "apes"; and in March last year, Microsoft's chatbot Tay turned sexist and racist in the course of interacting with netizens, becoming, in effect, a "bad girl". As algorithmic decisions multiply, so will this kind of discrimination.
Algorithmic discrimination is harmful. Some recommendation decisions may be innocuous, but when algorithms are applied to crime assessment, credit, employment, and other decisions that bear directly on personal interests, they operate at scale: a flaw affects not just one person but the interests of an entire group or race, and the harm can be very large.
Moreover, a small error or bias in an algorithmic decision can be reinforced in subsequent decisions and grow into a chain effect: unlucky this time, unlucky many times after. In addition, deep learning is a typical "black box" algorithm; even its designers may not know how it reaches its decisions, so it may be technically difficult to determine whether discrimination exists in the system and where it comes from.
Now, why are algorithms not entirely objective, and how can they embody discrimination? Algorithmic decision-making is usually a form of prediction: it uses past data to forecast future trends. The algorithmic model and the input data together determine the prediction, so these two elements are the main sources of algorithmic discrimination.
On the one hand, an algorithm is essentially "an opinion expressed in mathematics or computer code": its design, purpose, success criteria, and data usage all reflect the subjective choices of its designers and developers, who may embed the biases they hold into the algorithmic system. On the other hand, the validity and accuracy of the data affect the accuracy of the resulting decisions and predictions.
For example, data is a reflection of social reality, so the training data may itself be discriminatory, and an AI system trained on such data will naturally carry the shadow of that discrimination. Data may also be incorrect, incomplete, or outdated, producing the proverbial "garbage in, garbage out". Further, if an AI system learns from the majority, it will naturally fail to accommodate the interests of minorities. In addition, a system with self-learning and adaptive abilities may pick up discrimination through interaction: while engaging with the real world, an AI system may be unable to tell what is discriminatory and what is not.
Finally, algorithms tend to solidify or amplify discrimination, allowing it to persist indefinitely inside the system. In his political novel "1984", Orwell wrote the famous line: "Who controls the past controls the future; who controls the present controls the past." The line is an apt analogy for algorithmic discrimination. Algorithmic decisions ultimately use the past to predict the future, so past discrimination may be consolidated in the algorithm and strengthened going forward, because the erroneous output produced by erroneous input is fed back into the system, deepening the error.
Ultimately, algorithmic decisions not only encode past discriminatory practices but also create their own reality, forming a "self-fulfilling discriminatory feedback loop": if the algorithm is trained on inaccurate or biased historical data, its outputs will be biased; the new data generated under those outputs is then fed back into the system, consolidating the bias, until the algorithm in effect creates reality. Predictive policing and crime risk assessment both exhibit this problem. In this sense, algorithmic decision-making lacks imagination about the future, and the progress of human society depends on precisely that imagination.
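To make the feedback loop concrete, here is a minimal, purely illustrative simulation in the spirit of predictive policing. It is not any real system; the group names, rates, and record counts are invented assumptions. Two neighborhoods have identical true crime rates, but the historical records start out biased against B; because patrols are allocated in proportion to records, and new records only come from where patrols go, the bias never washes out:

```python
# Toy simulation of a "discriminatory feedback loop". All numbers are
# hypothetical; both groups have the SAME true rate of incidents.
true_rate = {"A": 0.10, "B": 0.10}
recorded = {"A": 100.0, "B": 130.0}   # biased historical records

for step in range(6):
    total = recorded["A"] + recorded["B"]
    # The predictive model allocates patrols in proportion to records...
    patrols = {g: 200 * recorded[g] / total for g in recorded}
    # ...and incidents are recorded only where patrols are sent, so the
    # skewed allocation feeds skewed new data back into the records.
    for g in recorded:
        recorded[g] += patrols[g] * true_rate[g]
    share_b = recorded["B"] / (recorded["A"] + recorded["B"])
    print(f"round {step}: share of records from B = {share_b:.3f}")

# The printed share stays at ~0.565 in every round: the initial bias is
# perpetuated forever, even though the two groups are in fact identical.
```

The model never discovers that the groups are the same, because its own outputs determine which data it sees next. That is the "self-fulfilling" character of the loop described above.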
The second is privacy.
Many AI systems, deep learning included, learn from big data: large amounts of data are needed to train the learning algorithms. Hence the saying that data has become the new oil of the AI era. This brings new privacy concerns. On the one hand, AI's large-scale collection and use of data, including sensitive data such as medical and health records, may threaten privacy; data used in the deep learning process may be exposed later on, with consequences for personal privacy. AI researchers abroad are therefore already studying how to protect privacy during deep learning. For example, the paper "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data" seeks to protect privacy in the deep learning process and was recognized as a best paper at this year's ICLR conference.
On the other hand, the widespread use of user profiling and automated decision-making may also adversely affect individual rights and interests. Moreover, with large volumes of data traded between services, data flows constantly and frequently, becoming a new medium of circulation, which may weaken individuals' control over their personal data. Of course, some tools already exist to strengthen privacy protection in the AI era, such as privacy by design, privacy by default, personal data management tools, anonymization, pseudonymization, encryption, and differential privacy. These developing and maturing standards and techniques are worth promoting in deep learning and AI product design.
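As one concrete illustration of the last item, here is a minimal sketch of differential privacy via the classic Laplace mechanism. The records, the query, and the epsilon value are all hypothetical examples, not from the talk: the analyst receives only a noisy count, so no single person's record can be reliably inferred from the answer.

```python
import random

def private_count(records, predicate, epsilon=1.0):
    """Answer "how many records match?" with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise of scale 1/epsilon
    suffices. The difference of two independent Exp(1) draws is a
    standard way to sample Laplace(0, 1) noise.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = (1.0 / epsilon) * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Hypothetical medical records; the analyst sees only the noisy answer.
records = [{"age": a, "diabetic": a % 3 == 0} for a in range(30, 80)]
print(private_count(records, lambda r: r["diabetic"], epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; real deployments pair such mechanisms with careful accounting of the total privacy budget across queries.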
The third is responsibility and safety.
Celebrities such as Hawking and Schmidt have warned that strong artificial intelligence or superintelligence might one day threaten human survival. What I want to discuss here, however, is the safety and controllability of intelligent robots in operation, including behavioral safety and human control. From Asimov's Three Laws of Robotics to the 23 Asilomar AI Principles published in 2017, AI safety has always been a focus of attention. And safety goes hand in hand with responsibility. Driverless cars can still cause accidents, so if an intelligent robot causes personal injury or property damage, who bears responsibility? Existing liability rules struggle here: such systems are highly autonomous, their developers cannot foresee their behavior, and the black-box problem makes the cause of an accident hard to explain, so a responsibility gap may open up in the future.
The last is robot rights, that is, how to define the humane treatment of AI.
As autonomous intelligent robots grow more capable, what role should they play in human society? Should they receive human-like treatment in some respects, that is, enjoy certain rights? May we abuse, torture, or kill robots? Take the intelligent robot "Jia Jia" developed by Professor Chen's team as an example: if someone mistakes Jia Jia for a human and molests her, can he be prosecuted for forcible molestation and insult of a woman? The difficulty lies in the mistaken object of the offense, because Jia Jia is not a woman in the human sense.
So what are autonomous intelligent robots in law? Natural persons? Legal persons? Animals? Objects? The EU is in fact already considering whether to grant intelligent robots the legal status of "electronic persons", with rights, obligations, and responsibility for their own actions. This question deserves more discussion in the future.
Building internal and external constraint mechanisms for algorithmic governance
For the ethical issues discussed above, we may need to build internal and external constraint mechanisms for algorithmic governance in advance. I will make four points here.
The first is ethical AI design, that is, embedding the legal and moral norms and values of human society into AI systems.
This is what the IEEE, the international standards body for electrical and electronics engineering, advocates. It can be achieved in three steps.
The first step is discovering the norms and values: determining which laws and value norms an AI system must comply with. This raises problems of moral overload and value ranking: when different values conflict, how should the system choose? Answering that requires more interdisciplinary work.
The second step is embedding: once the norms are clear, how do we build them into the AI system? Can ethics and law be turned into computer code? There are two methodologies. One is top-down: write the rules to be followed into the system, which then instantiates those values in concrete situations, reasoning from abstract principles down to specific behavior. Asimov's Three Laws are an example, though they are too abstract. The other is bottom-up: a self-learning process that does not tell the AI system its values and moral norms in advance but lets the system derive them from observing human behavior and finally form its own judgment.
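As a minimal, hypothetical sketch of the top-down approach, rules can be written directly into the system as hard constraints, and any candidate action that violates one is vetoed. The rule names, action fields, and examples below are invented purely for illustration:

```python
# Top-down value embedding, toy version: hand-written rules veto actions.
# Rules and action attributes are hypothetical illustrations.
RULES = [
    ("do not harm humans", lambda a: not a.get("harms_human", False)),
    ("obey human orders",  lambda a: not a.get("disobeys_order", False)),
]

def violated_rules(action):
    """Return the names of all rules the action violates (empty = allowed)."""
    return [name for name, check in RULES if not check(action)]

candidates = [
    {"name": "swerve into crowd", "harms_human": True},
    {"name": "brake hard"},
]
for action in candidates:
    bad = violated_rules(action)
    print(action["name"], "->", "allowed" if not bad else f"vetoed: {bad}")
```

The hard part is exactly what the speech notes: real situations rarely reduce to boolean flags like these, which is why the bottom-up, learning-based route is also being explored.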
The third step is evaluation: after values are embedded, the system must be assessed for consistency with human ethical values, and that requires evaluation criteria. On the one hand there is user evaluation: how users come to trust the AI and, when the system behaves in unexpected ways, how it explains to the user why it acted as it did. On the other hand, third parties such as regulators and industry organizations need to define standards for value alignment and compliance, as well as standards for AI trustworthiness.
But two difficulties remain. The first is the ethical dilemma. For example, MIT solicits opinions from netizens around the world on its website about the moral choices facing self-driving cars. Suppose there is no time to brake: if the car drives straight on, it kills three pedestrians crossing against the light; if it swerves, it hits an obstacle and kills the five passengers in the car. How should the vehicle choose? Faced with problems like the trolley dilemma, utilitarianism and deontological absolutism give different moral answers. The conflict is unresolved in human society, and automated systems will encounter the same problems.
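To show how the two frameworks can diverge, here is a toy sketch using the classic trolley numbers (five killed by staying the course, one by actively swerving; these are illustrative assumptions, not the MIT example above):

```python
# Two ethical frameworks, one dilemma, two different answers (toy numbers).
options = {
    "stay_course": {"kills": 5, "active_intervention": False},
    "swerve":      {"kills": 1, "active_intervention": True},
}

def utilitarian_choice(opts):
    """Minimize total deaths, regardless of how they come about."""
    return min(opts, key=lambda name: opts[name]["kills"])

def absolutist_choice(opts):
    """Never actively intervene to kill, whatever the totals."""
    return next(name for name, o in opts.items() if not o["active_intervention"])

print("utilitarian:", utilitarian_choice(options))  # swerve (1 < 5)
print("absolutist:", absolutist_choice(options))    # stay_course
```

Whichever function a designer hard-codes, the machine inherits one side of a dispute that human society itself has not settled.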
The second is the value alignment problem. Many robots today are built for a single purpose: ask one to fetch coffee and it will single-mindedly overcome every obstacle to fetch coffee; a sweeping robot will sweep the floor wholeheartedly. But is the robot's behavior really what we humans want? That is the value alignment problem.
Consider a myth. King Midas wished that everything he touched would turn to gold. When the wish was granted, everything he touched, including his food, did turn to gold, and he starved to death. Why? Because the magic did not understand Midas's true intention. Will robots bring similar situations upon us humans? The question is worth pondering.
Hence some have proposed human-compatible AI, built on three principles: first, altruism, meaning the robot's only goal is to maximize the realization of human values; second, uncertainty, meaning the robot is initially unsure what those human values are; and third, learning from humans, meaning human behavior provides the information about human values.
The second is that ethical principles need to be implemented in AI research and development.
On the one hand, AI researchers and developers need to abide by some basic ethical principles in their R&D activities, including being beneficial, doing no harm, inclusive design, diversity, transparency, and privacy protection. On the other hand, it may be necessary to establish an AI ethics review system: interdisciplinary and diverse committees that evaluate the ethical impact of AI technologies and products and make recommendations. Ethics review committees have already been set up in industry, at IBM and elsewhere; the independent review committee of one major company's health division, for example, will release an independent report in June this year and issue evaluation reports regularly thereafter.
Third, it may be necessary to regulate algorithms to prevent them from doing evil.
Algorithms are becoming more and more complex and their decisions more and more consequential, so in the future we may need a degree of oversight of algorithms, carried out by industry organizations or regulators. Possible measures include standard-setting, covering classification, performance standards, design standards, and liability standards; transparency, covering both the transparency of the algorithm's code (there are already open-source movements in AI abroad) and the transparency of its decisions; and approval systems for technologies that may raise public safety issues, such as autonomous vehicles and intelligent robots, which regulators may in the future require to obtain prior approval before entering the market.
Fourth, legal remedies are needed for the algorithmic decisions and discrimination discussed above, including for any personal injury and property damage caused.
For algorithmic decision-making, on the one hand, transparency must be ensured: when a decision is made by automated means, the user should be told (the right to know) and, where necessary, given an explanation; on the other hand, an appeal mechanism should be provided. For personal injury and property damage caused by robots, innocent victims should be compensated; and for the liability challenges posed by autonomous vehicles, intelligent robots, and the like, strict liability, differentiated liability, compulsory insurance and compensation funds, and legal personality for intelligent robots are all remedies worth considering.
Finally, a brief summary. Today's sub-forum is about intelligent system testing, including Turing tests, semantic tests, and safety tests, but ethical testing is equally important, covering ethical codes, privacy, justice, and discrimination. The AI industry today is driven mainly by engineers and lacks participation from philosophy, ethics, and other disciplines; interdisciplinary AI ethics testing of this kind needs to be strengthened in the future. In a sense, we are no longer creating passive, simple tools, but designing things that, like humans, can perceive, reason, and decide. You may call them "more complex tools", but it cannot be denied that as such complex tools enter human society, we must ensure that they remain consistent with human values and human needs.