AI Ethics

Six Ethical Principles Of Artificial Intelligence

Transparency and accountability are the foundation; the other four principles are fairness, reliability and safety, privacy and security, and inclusiveness.

Written by Tim O'Brien

In 2018, Microsoft published the book "The Future Computed," which proposed six principles for the development of artificial intelligence: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

The first is fairness. Fairness means that people of different regions, backgrounds, and social standings are treated equally by AI, and that no one should be discriminated against.

The design of an artificial intelligence system begins with the selection of training data, and that is the first point at which unfairness can be introduced. The training data should be sufficiently representative of the diverse world we live in, or at least of the part of the world in which the AI will operate. Take a facial recognition and emotion detection system as an example: if it is trained only on images of adult faces, it may fail to accurately identify children's features or expressions.
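
As a minimal illustration of that first step, the sketch below audits how well each group is represented in a training set before any model is trained. The metadata fields, the threshold, and the dataset itself are hypothetical, not from any real system.

```python
from collections import Counter

# Hypothetical metadata for a face-image training set; field names are illustrative.
training_metadata = [
    {"image_id": 1, "age_group": "adult"},
    {"image_id": 2, "age_group": "adult"},
    {"image_id": 3, "age_group": "child"},
    # ... one record per training image
]

def audit_representation(records, field, minimum_share=0.10):
    """Report the share of each group and flag groups below a minimum share."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        flag = "UNDER-REPRESENTED" if share < minimum_share else "ok"
        print(f"{group}: {share:.1%} ({flag})")

audit_representation(training_metadata, "age_group")
```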

Ensuring that the data is "representative" is not enough; racial and gender bias can also creep into social data. Suppose we design an AI system to help employers screen job applicants. If it is trained on public employment data, the system will likely "learn" that the majority of software developers are male, and it will tend to favor male candidates for software developer positions, even if the company deploying the system wants to increase the diversity of its workforce through recruiting.
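
A toy sketch of how that happens, using entirely synthetic data (the feature names, the skew, and the model choice are all assumptions for illustration): when historical hiring outcomes correlate with gender, a standard classifier absorbs that correlation as a nonzero weight.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for historical hiring data (illustrative only).
# Features: [years_experience, is_male]; label: 1 = was hired as a developer.
rng = np.random.default_rng(0)
n = 1000
years = rng.uniform(0, 10, n)
is_male = rng.integers(0, 2, n)
# Historical outcomes skewed toward male applicants, independent of skill.
hired = ((years > 5) | ((is_male == 1) & (rng.random(n) < 0.4))).astype(int)

X = np.column_stack([years, is_male])
model = LogisticRegression().fit(X, hired)

# A nonzero weight on `is_male` shows the model has absorbed the historical skew.
print("coefficient on years_experience:", model.coef_[0][0])
print("coefficient on is_male:         ", model.coef_[0][1])
```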

It can also be unfair if people assume that technological systems are less error-prone, more accurate, and more authoritative than people. In many cases, the output of an artificial intelligence system is a probability, such as "this applicant's probability of defaulting on a loan is about 70%." That estimate may be quite accurate, but if the loan officer simply reads "70% default risk" as "bad credit risk" and refuses loans to everyone with that score, then roughly 30% of those applicants will be rejected even though their credit is actually good, which is unfair. We therefore need to train people to understand the meaning and limits of AI outputs and to compensate for the shortcomings of AI decision-making.
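
The arithmetic behind that paragraph, written out as a small sketch (the applicant count is an assumed round number):

```python
# Worked example of the "70% default risk" paragraph (numbers are illustrative).
applicants_flagged = 1000          # applicants the model scores at ~70% default risk
p_default = 0.70

expected_defaults = applicants_flagged * p_default               # ~700 would default
expected_good_borrowers = applicants_flagged * (1 - p_default)   # ~300 would repay

print(f"Rejecting everyone at this score also rejects ~{expected_good_borrowers:.0f} "
      f"creditworthy applicants out of {applicants_flagged}.")
```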

The second is reliability and safety. It means that artificial intelligence should be safe and reliable in use, and should not do harm.

One of the most hotly debated topics right now is self-driving vehicles. News reports described a Tesla whose system failed while in motion: the vehicle was still traveling on the highway at 70 miles per hour, but the driving system had crashed and the driver was unable to restart the autopilot.

Imagine launching a new drug: its regulation, testing, and clinical trials would follow a very strict regulatory process. Why, then, is the system safety of autonomous vehicles so lax, or even unregulated? This reflects automation bias, our tendency to place too much confidence in automated systems. It is a strange contradiction: humans trust machines excessively, yet that excessive trust works against human interests.

Another case occurred in San Francisco. A drunk Tesla owner got into his car, turned on the autopilot system, and went to sleep while the car drove off on its own. The owner's reasoning was, "I'm drunk and not fit to drive, but I can trust Tesla's autopilot to drive for me, so I'm not breaking the law, am I?" In fact, this is still illegal.

Reliability and safety are areas that demand great attention in artificial intelligence. Self-driving cars are only one example; the fields involved go far beyond autonomous driving.

The third is privacy and security. Because artificial intelligence depends on data, it inevitably raises issues of personal privacy and data security.

A very popular fitness app in the United States, Strava, is a case in point. If you ride a bicycle, for example, your cycling data is uploaded to the platform, where many people on the social network can see your fitness data. Problems arise: many serving soldiers on US military bases also used the app when exercising, all of their exercise track data was uploaded, and a map of entire bases could be read off the platform. The locations of US military bases are highly sensitive information, yet the military never expected that a fitness app could leak such data so easily.
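
A minimal sketch of why the leak works even when the data looks anonymous: binning uploaded track points into a coarse grid makes any cluster of activity stand out, which is enough to reveal a facility's location. The coordinates and grid size below are made up for illustration.

```python
from collections import Counter

# Hypothetical, anonymized workout track points (lat, lon); values are made up.
track_points = [
    (34.2381, -116.0542), (34.2383, -116.0539), (34.2379, -116.0545),
    (40.7128, -74.0060),  # an ordinary city ride
]

def heatmap_cells(points, cell_size=0.01):
    """Bin points into a coarse grid; dense cells stand out even without user names."""
    cells = Counter((round(lat / cell_size), round(lon / cell_size)) for lat, lon in points)
    return cells.most_common()

# A cluster of activity in an otherwise empty area pinpoints where the users exercise.
for cell, count in heatmap_cells(track_points):
    print(cell, count)
```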

The fourth is inclusiveness: artificial intelligence must respect the ethical principle of inclusiveness and take into account the full range of people in the world, including those with different abilities and disabilities.

Take LinkedIn as an example. It offers a service called "LinkedIn Economic Graph Search." LinkedIn, Google, and several American universities jointly conducted a study on whether there are gender differences among users who use LinkedIn for career advancement. The study focused on graduates of roughly the top 20 MBA programs in the United States, who describe their careers on LinkedIn after graduation, and compared that data. It concluded that, at least among graduates of the top 20 US MBA programs, there are gender differences in self-promotion: male MBA graduates are generally more willing to promote themselves than female graduates.

If you are in charge of recruiting at a company and log into LinkedIn, there are keyword fields to search, including the self-summary page. On that page, men generally describe and rate themselves more highly than women, while women tend to evaluate themselves more modestly. As a recruiter, you are therefore receiving systematically different data signals, and you need to reduce the weight of such signals so that they do not distort the normal evaluation of candidates.

However, this is a question of degree. The signal cannot be weighted too low or too high; it has to be calibrated correctly. Data can give humans many insights, but the data itself also carries biases. How, from the perspective of AI and ethics, can we calibrate such bias well enough to achieve inclusiveness? That is what we mean by inclusiveness in artificial intelligence.
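
One way to picture the "reduce the weight, but not to zero" idea is a simple weighted scoring function. This sketch is purely hypothetical: the feature names, weights, and scoring scheme are assumptions for illustration, not LinkedIn's actual system.

```python
# A minimal sketch of down-weighting a biased data signal in candidate scoring.

def score_candidate(features, weights):
    return sum(features[name] * w for name, w in weights.items())

weights = {
    "years_experience": 1.0,
    "skills_match": 1.0,
    "self_summary_tone": 0.6,   # a signal known to differ systematically by gender
}

# Down-weight (but do not zero out) the signal that carries the systematic bias,
# so it no longer dominates the overall evaluation.
weights["self_summary_tone"] = 0.2

candidate = {"years_experience": 0.8, "skills_match": 0.9, "self_summary_tone": 0.5}
print("adjusted score:", score_candidate(candidate, weights))
```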

Underlying these four values are two important principles, transparency and accountability, which are the foundation of all the others.

The fifth is transparency. Over the past decade, the most important advance in artificial intelligence has been deep learning, a class of machine learning models. We believe that, at least at this stage, deep learning models achieve the highest accuracy of any machine learning models, but they raise a question of transparency. You cannot have both full transparency and maximum accuracy; you can only trade one off against the other. If you want higher accuracy, you must sacrifice some transparency.
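
The sketch below contrasts the two ends of that trade-off on synthetic data: a shallow decision tree whose learned rules can be printed and read, and a neural network whose internals cannot. The dataset and model settings are assumptions for illustration; on this toy data the accuracy gap may be small, whereas on real problems deep models often pull further ahead.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for a real prediction task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: the learned rules can be printed and read by a person.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree))

# Opaque model: often more accurate, but its internals are not human-readable.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("network accuracy:", net.score(X_test, y_test))
```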

The Go matches between Lee Sedol and AlphaGo are an example of this. Many of the moves the AI played were incomprehensible even to artificial intelligence experts and professional Go players; a human player would never have made them. What exactly the AI's logic is, and how it "thinks," is currently unclear to humans.

So the problem we now face is that deep learning models are very accurate but opaque, and if models and AI systems are not transparent, they carry potential safety risks.

Why is transparency so important? In the 1990s, researchers at Carnegie Mellon University were studying pneumonia, and one team used rule-based analysis to help decide whether patients needed to be hospitalized. Rule-based analysis is not especially accurate, but because it relies on rules that humans can read, it is highly transparent. The model "learned" that people with asthma were less likely to die from pneumonia than the general population.

This result obviously goes against common sense: a person who has both asthma and pneumonia should face a higher mortality risk. The reason the model "learned" this is that asthma patients know they are at risk, so when symptoms appear they are more vigilant and receive better medical care sooner. It is a human factor: if you know you have asthma, you will seek emergency care quickly.

That human factor is not captured as objective data in the model's training set. If humans can read the rule, they can recognize the problem and correct it. But with a model that is not rule-based, we cannot tell which rule produced the judgment: an opaque algorithm reaches the same conclusion, and acting on it, people might advise asthma patients that they do not need hospital treatment, which is obviously unsafe.
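
Here is a minimal sketch of why readability matters, loosely modeled on the pneumonia story. The rule list, risk adjustments, and patient record are invented for illustration; the point is only that a transparent rule can be spotted and corrected by a domain expert before deployment, while an opaque model offers no such handle.

```python
# Illustrative rule list that a transparent model might have learned.
learned_rules = [
    ("age_over_65",        +0.30),   # raises predicted mortality risk
    ("low_blood_pressure", +0.25),
    ("has_asthma",         -0.20),   # artifact of asthma patients getting faster care
]

def predict_risk(patient, rules, base_risk=0.10):
    """Sum readable rule adjustments on top of a base risk, clamped to [0, 1]."""
    risk = base_risk
    for condition, delta in rules:
        if patient.get(condition, False):
            risk += delta
    return max(0.0, min(1.0, risk))

# Expert review: the asthma rule reflects treatment speed, not underlying risk,
# so it is removed before the model is used for hospitalization decisions.
reviewed_rules = [(c, d) for c, d in learned_rules if c != "has_asthma"]

patient = {"has_asthma": True, "age_over_65": False, "low_blood_pressure": False}
print("before review:", predict_risk(patient, learned_rules))
print("after review: ", predict_risk(patient, reviewed_rules))
```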

Therefore, when artificial intelligence is applied in critical fields such as medicine and criminal justice, we must be very careful. For example, if someone applies for a loan and the bank refuses it, the customer is entitled to ask why. The bank cannot simply say that the decision was made by artificial intelligence; it must give a reason.
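
One common way to give such a reason, sketched below under the assumption of an interpretable linear scoring model: the largest negative contributions to an applicant's score become the stated reasons for denial. The feature names, weights, and applicant values are hypothetical.

```python
# A minimal sketch of turning a linear credit score into human-readable reasons.
weights = {"debt_to_income": -2.0, "late_payments": -1.5, "years_employed": 0.8}
applicant = {"debt_to_income": 0.9, "late_payments": 3, "years_employed": 1}

# Per-feature contribution to the overall score.
contributions = {f: weights[f] * applicant[f] for f in weights}
decision = "approve" if sum(contributions.values()) > 0 else "deny"

# The largest negative contributions become the reasons given to the customer.
reasons = sorted((c, f) for f, c in contributions.items() if c < 0)
print(decision, "- main factors:", [f for _, f in reasons[:2]])
```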

The sixth is accountability. When an artificial intelligence system takes an action or makes a decision, someone must be accountable for the results. Accountability in AI is a very controversial topic; let us return to self-driving cars to discuss it. It also involves questions of law and legislation. There have been many car accidents in the United States caused by autonomous driving systems. If a machine replaces a person in making decisions and taking actions, and a bad outcome follows, who is responsible? Our principle is accountability: when bad outcomes occur, machines or AI systems cannot be used as scapegoats; people must bear the responsibility.

The problem is that, anywhere in the world, it is unclear which country's legal framework is equipped to handle such cases. In the United States, many cases are decided on the basis of case law, but for some of these situations there are no precedents that can serve as the legal basis for a court's decision.

This is not limited to autonomous driving; it extends to many other areas, such as criminal cases and the military. Many weapons are now automated or AI-driven. If an automated weapon kills a person, how should such a case be adjudicated?

This raises the question of legal personhood. Can an artificial intelligence system or a fully automated system exist as a legal person? It leads to a chain of legal questions. First, can an AI system be recognized as a legal subject? If so, that implies the system holds its own rights and its own responsibilities, and that it answers for its own actions; but does that logical chain hold? A legal subject must bear the corresponding responsibilities and is also entitled to legal protection. Therefore, we believe the legal subject must be a human being.

The author is General Manager of Microsoft's artificial intelligence programs. Editor: Han Shulin. Originally published in "Finance" magazine on May 27, 2019.
