Six Ethical Principles Of Artificial Intelligence
Transparency and accountability are the foundation; the other four principles are fairness, reliability and safety, privacy and security, and inclusiveness
Text | Tim O'Brien
In 2018, Microsoft published the book "The Future Computed", which proposed six principles for the development of artificial intelligence: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
The first is fairness. Fairness means that people from different regions and different walks of life are equal before AI, and no one should be discriminated against.
The design of an artificial intelligence system begins with the selection of training data, which is the first step where unfairness can arise. Training data should adequately represent the diverse world we live in, or at least the part of the world in which the AI will operate. Take a facial recognition and emotion detection system as an example: if it is trained only on images of adult faces, it may not accurately recognize the features or expressions of children.
Ensuring that data is "representative" is not enough; racism and sexism can also creep into social data. Suppose we design an AI system to help employers screen job applicants. If it is trained on public employment data, the system will likely "learn" that most software developers are male, and when selecting candidates for software developer positions it will likely favor men, even though the company deploying the system wants to use recruitment to increase the diversity of its workforce.
Unfairness can also arise when people assume that technical systems make fewer mistakes and are more accurate and authoritative than humans. In many cases, the output of an artificial intelligence system is a probabilistic prediction, such as "the applicant has roughly a 70% probability of defaulting on the loan." The prediction itself may be quite accurate, but if a loan officer simply interprets "70% default risk" as "bad credit risk" and refuses loans to everyone so flagged, then the 30% whose credit is actually good are rejected as well, which is unfair. We therefore need to train people to understand the meaning and limits of AI outputs and to compensate for the shortcomings of AI-assisted decision-making.
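To make the arithmetic concrete, here is a minimal hypothetical sketch (the applicants, scores, and cutoff are invented for illustration and are not taken from any real lending system) of how a hard cutoff on a probabilistic score turns a nuanced prediction into a blanket rejection:

```python
# Hypothetical illustration: a model outputs default probabilities, but a
# blanket rule treats every score at or above the cutoff as "reject".
applicants = [
    {"name": "A", "p_default": 0.72},
    {"name": "B", "p_default": 0.70},
    {"name": "C", "p_default": 0.35},
]

CUTOFF = 0.70  # "70% default risk" read flatly as "bad credit risk"

for applicant in applicants:
    decision = "reject" if applicant["p_default"] >= CUTOFF else "approve"
    print(applicant["name"], decision)

# A 70% default probability still means roughly 3 in 10 such applicants would
# have repaid; rejecting all of them treats those creditworthy applicants
# unfairly, which is the point made above.
```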
The second is reliability and safety. It means that artificial intelligence should be safe and reliable, and should not do harm.
A hot topic in the United States is autonomous vehicles. Earlier news reports described a Tesla whose driving system crashed while the vehicle was still travelling at 70 miles per hour, and the driver was unable to restart the autonomous driving system.
Imagine releasing a new drug: its regulation, testing, and clinical trials would follow a very strict regulatory process. Why, then, is the system safety of autonomous vehicles so loosely regulated, or even unregulated? This reflects automation bias, our tendency to over-trust automation. It is a strange contradiction: on the one hand humans rely too heavily on machines, yet on the other that reliance can work against human interests.
Another case happened in San Francisco. A Tesla owner who was already drunk got into the car, turned on the autonomous driving system, and fell asleep while the car drove off on its own. The owner reasoned, "I'm drunk and not fit to drive, but I can trust Tesla's autonomous driving system to drive for me, so I'm not breaking the law, am I?" In fact, this is still an illegal act.
Reliability and safety is an area that artificial intelligence must pay close attention to. Self-driving cars are just one example; the issues involved are not limited to autonomous driving.
The third is privacy and security. Because artificial intelligence involves data, it constantly raises problems of personal privacy and data security.
A very popular fitness app in the United States works like this: if you go for a bicycle ride, for example, your cycling data is uploaded to the platform, and many people can see your fitness data on social platforms. The problem followed quickly. Many service members at US military bases also used the app when exercising, all of their training routes were uploaded, and map data for entire bases became available on the platform. The locations of US military bases are highly confidential, but the military never expected that a fitness app would leak the data so easily.
The fourth is inclusiveness: as an ethical principle, artificial intelligence must take into account the full range of human abilities in the world, including people with various impairments.
Take LinkedIn as an example. It has a service called the LinkedIn Economic Graph. LinkedIn, Google, and several American universities jointly conducted a study asking whether there are gender differences among users who advance their careers through LinkedIn. The study focused on graduates of the top 20 MBA programs in the United States, who describe their careers on LinkedIn after graduation, and compared this data. It concluded that, at least among top-20 MBA graduates, there are gender differences in self-promotion: male MBA graduates generally rate themselves more highly than women do.
If you are in charge of recruitment at a company and log in to LinkedIn's system, you can filter on a number of keyword fields, one of which is the self-summary. In that field, men generally summarize and rate themselves more highly than women do, while women tend to undervalue themselves. As a recruiter, you are therefore receiving a skewed data signal, and the weight of that signal should be reduced so that it does not distort the fair evaluation of applicants.
This, however, is a question of degree. The data signal cannot be turned down too far or left too high; it has to be set at the right level. Data can give humans a great deal of insight, but the data itself also carries biases. How, then, from the perspective of AI ethics, do we strike the right balance with such biases in order to achieve inclusiveness? That is what inclusiveness means for artificial intelligence.
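As a rough illustration of what "reducing the weight of a data signal" might look like in practice, here is a minimal hypothetical sketch (the signal names, weights, and scores are invented for illustration and are not LinkedIn's actual system): candidates are scored from several signals, and the self-summary signal is deliberately down-weighted so it cannot dominate the evaluation.

```python
# Hypothetical candidate scoring: several normalized signals in [0, 1]
# contribute to an overall score. The self-summary signal gets a deliberately
# low weight because it reflects differences in self-promotion, not ability.
WEIGHTS = {
    "experience": 0.4,
    "skills_match": 0.4,
    "self_summary": 0.2,  # the "degree": too high amplifies bias, too low discards signal
}

def candidate_score(signals: dict) -> float:
    """Weighted sum of the candidate's signals."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

# Two equally qualified candidates who describe themselves very differently:
modest   = {"experience": 0.8, "skills_match": 0.9, "self_summary": 0.3}
boastful = {"experience": 0.8, "skills_match": 0.9, "self_summary": 0.9}

print(round(candidate_score(modest), 2))    # 0.74
print(round(candidate_score(boastful), 2))  # 0.86 - the gap stays small at this weight
```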
Beneath these four values lie two more important principles: transparency and accountability, which are the foundation of all the other principles.
The fifth is transparency. Over the past decade, the most important technical progress in artificial intelligence has been deep learning, a class of machine learning model. We believe that, at least at this stage, deep learning models achieve the highest accuracy of all machine learning models, but they raise the question of transparency. Transparency and accuracy cannot both be maximized; you can only trade one off against the other. If you want higher accuracy, you have to sacrifice a certain degree of transparency.
There is an example from the Go matches between AlphaGo and Lee Sedol. Many of the moves the AI played were ones that neither artificial intelligence experts nor professional Go players could understand at all; a human player would never have played them. So humans cannot be sure what the AI's logic is or what it is "thinking."
So the problem we now face is that deep learning models are highly accurate but opaque, and if these models and AI systems are not transparent, they carry potential safety risks.
Why is transparency so important? Consider an example from the 1990s: researchers at Carnegie Mellon University were studying pneumonia, and one team used rule-based analysis to help determine whether a patient needed to be hospitalized. Rule-based analysis is not very accurate, but because its rules are ones humans can read, it has good transparency. From the data, the model "learned" that asthma patients had a lower probability of dying from pneumonia than the general population.
This result obviously contradicts common sense: a person who has both asthma and pneumonia should face a higher mortality rate. The reason for the finding is that asthma patients already know they are at risk, so when symptoms appear they are more vigilant and get medical treatment sooner and of better quality. This is a human factor: if you know you have asthma, you seek emergency care quickly.
These subjective human factors are not captured as objective data in the training set. If humans can read the rule, they can question it and correct it. But if the model is not rule-based, no one knows it is making judgments by such a rule; the algorithm is opaque, yet it reaches the same conclusion. Acting on that conclusion, humans would recommend that asthma patients not be hospitalized, which is obviously unsafe.
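To illustrate why a readable rule can be caught and corrected, here is a minimal hypothetical sketch (the rules and fields are invented for illustration and are not from the Carnegie Mellon study): because each rule is visible as text, a clinician reviewing the output can spot the misleading asthma rule and override it.

```python
# Hypothetical, highly simplified rule-based risk model whose rules a human
# can read. Each rule carries a reason, so a reviewer can see exactly why a
# recommendation was made.
def pneumonia_admission_advice(patient: dict) -> tuple:
    reasons = []
    risk = "high"  # default for pneumonia patients: admit

    if patient.get("has_asthma"):
        # Rule learned from historical data: asthma patients died less often.
        # The data reflects faster treatment of asthma patients, not lower
        # underlying risk - a clinician who sees this rule can remove it.
        risk = "low"
        reasons.append("has_asthma -> lower observed mortality (misleading rule)")

    advice = "send home" if risk == "low" else "admit"
    return advice, reasons

advice, reasons = pneumonia_admission_advice({"has_asthma": True})
print(advice, reasons)  # the unsafe recommendation is visible and explainable
```

With an opaque model, the same recommendation would appear without any stated reasons, leaving nothing for a human to inspect or correct.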
Therefore, when artificial intelligence is applied to critical areas such as medicine and criminal justice, we must be very careful. For example, if someone applies for a loan and the bank refuses it, the customer is entitled to ask why. The bank cannot simply answer "the artificial intelligence decided"; it must give a reason.
The sixth is accountability. If an artificial intelligence system takes an action or makes a decision, it must be accountable for the results it produces. Accountability in artificial intelligence is a very controversial topic, so let us return to self-driving cars. It also involves questions of law and legislation. There have been many accidents in the United States caused by autonomous driving systems. If machines replace people in making decisions and taking actions, who is responsible? Our principle is accountability: when bad outcomes occur, machines and AI systems cannot be used as scapegoats; people must take responsibility.
But the problem now is that no country's legal system is really equipped to handle such cases. In the United States, many rulings rest on case law, but for cases like these there is no precedent that can serve as the legal basis for a court's decision.
And it is not only autonomous driving; many other areas are affected, such as criminal cases and the military. Many weapons today are automated or driven by artificial intelligence. If an automated weapon kills a human being, how should such a case be judged?
This raises the question of legal personhood. Can an artificial intelligence system, or a fully automated system, exist as a legal person? It brings a series of legal questions. First, can an AI system be recognized as a legal subject? If it is, that means the AI system has its own rights and its own responsibilities, and is therefore answerable for its own actions; but does that chain of logic hold? If it exists as a legal subject, it must bear the corresponding responsibilities and would also enjoy rights such as access to legal aid. We therefore believe that the legal subject must be a human being.
The author is the general manager of Microsoft's artificial intelligence programs. Editor: Han Shulin. Originally published in "Finance" magazine on May 27, 2019.
Editor-in-chief | Su Yue