AI Ethics

Junhe Innovation丨Research On Ethical And Legal Issues Of Artificial Intelligence

Artificial Intelligence and Machine Learning

In August 1956, John McCarthy, Marvin Minsky, Claude Shannon, Allen Newell, and Herbert Simon, meeting at Dartmouth in the USA, first proposed and discussed the concept of artificial intelligence: using machines to imitate human learning and other aspects of intelligence.

Arthur Samuel proposed the concept of machine learning in 1959:

Machine learning studies and builds algorithms that allow computers to learn from data and make predictions ("a field of study that gives computers the ability to learn without being explicitly programmed").

Machine learning is not a specific algorithm, but a general term for many algorithms.

Basic ideas and training algorithms of machine learning

Real-world problems are transformed into mathematical problems

The machine solves the mathematical problem with data, and the real-world problem is solved in turn

Seven steps of machine learning

Collect data

(The quantity and quality of data directly determine the quality of the prediction model)

Data preparation

(Split the data into training set, validation set, and test set)

Select a model

(e.g. linear model)

Train

(No human participation is required; the machine completes this step on its own)

Evaluate

(The held-out validation and test sets come into play here)

Parameter adjustment

(Improve the model)

Predict

(The trained model is applied to answer new questions)
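The seven steps above can be sketched end to end in a few lines of Python; the one-variable linear model and all data below are invented purely for illustration:

```python
# A minimal, hypothetical sketch of the seven steps using a simple
# one-variable linear model (y = w*x + b) fit by least squares.
import random

# 1. Collect data (synthetic: y is roughly 2x + 1, with a little noise)
random.seed(0)
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

# 2. Data preparation: split into training, validation, and test sets
random.shuffle(data)
train, validation, test = data[:12], data[12:16], data[16:]

# 3. Select a model: a linear model y = w*x + b

# 4. Train: closed-form least squares on the training set only
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in train) / \
    sum((x - mean_x) ** 2 for x, _ in train)
b = mean_y - w * mean_x

# 5. Evaluate: mean squared error on the held-out sets
def mse(dataset):
    return sum((w * x + b - y) ** 2 for x, y in dataset) / len(dataset)

val_err, test_err = mse(validation), mse(test)

# 6. Parameter adjustment would happen here if validation error were poor

# 7. Predict: answer a "new question" (should come out near 2*10 + 1)
prediction = w * 10 + b
```

Steps 4 and 7 match the text: training needs no human participation, and prediction applies the learned model to an input it has never seen.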

Supervised learning

Choose a mathematical model that suits the target task

First, give some of the known "questions and answers" (training set) to the machine for learning

The machine summarizes its own "methodology"

Humans give "new questions" (test set) to the machine, so that the machine can answer

Where do credit scores come from?

According to experience, the factors that affect personal credit fall mainly into five categories: payment records, total amounts owed, length of credit history, new accounts, and credit mix. On this basis we can build a simple model. Next, we collect a large amount of known data, using part of it for training and part for validation and testing. Through machine learning, we can summarize the relationship between these five factors and credit scores, and then use the validation and test data to check whether the resulting formula holds.
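Under the assumptions of that paragraph, the idea can be sketched as learning a weight for each of the five factor categories from labelled examples. All feature values, weights, and scores below are invented for illustration, not real credit data:

```python
# A hypothetical sketch of the credit-score model: learn weights for the
# five factor categories from known (features, score) pairs.
import random

random.seed(1)
TRUE_W = [0.35, 0.30, 0.15, 0.10, 0.10]  # hidden weights used only to simulate data

def score(features, w):
    # the model: a weighted sum of the five factor values
    return sum(f * wi for f, wi in zip(features, w))

# Collect known data: (payment record, amounts owed, history length,
# new accounts, credit mix) -> credit score, each factor scaled to 0..1
samples = []
for _ in range(200):
    f = [random.random() for _ in range(5)]
    samples.append((f, score(f, TRUE_W)))

train, test = samples[:160], samples[160:]

# Train: gradient descent on mean squared error over the training set
w = [0.0] * 5
lr = 0.5
for _ in range(2000):
    grad = [0.0] * 5
    for f, y in train:
        err = score(f, w) - y
        for i in range(5):
            grad[i] += 2 * err * f[i] / len(train)
    w = [wi - lr * g for wi, g in zip(w, grad)]

# Verify the learned formula on the held-out test data
test_mse = sum((score(f, w) - y) ** 2 for f, y in test) / len(test)
```

The final line is the "use validation and test data to check the formula" step: a small test error indicates the learned weights generalize to accounts the model has not seen.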

Unsupervised learning

The story of beer diapers

Recommending related products based on a user's purchasing behavior: for example, when you browse Taobao, Tmall, or JD.com, the site recommends related products based on your browsing behavior. Some of these recommendations come from unsupervised learning: the system finds users with similar purchase behavior and recommends the products those users "like" most.

User classification on advertising platform

Through unsupervised learning, an advertising platform can segment users not only by dimensions such as gender, age, and geographic location, but also by user behavior. With segmentation along many dimensions, advertising delivery becomes more targeted and more effective.
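This kind of behavioral segmentation can be sketched with a toy k-means clustering example; the "user behavior" numbers below are synthetic, and no labels are given: the algorithm finds the groups by itself.

```python
# A toy k-means sketch of unsupervised user segmentation.
import random

random.seed(2)
# Two synthetic behavior clusters, e.g. (sessions per week, spend per week)
users = [(random.gauss(2, 0.3), random.gauss(10, 2)) for _ in range(30)] + \
        [(random.gauss(8, 0.3), random.gauss(50, 2)) for _ in range(30)]

def dist2(a, b):
    # squared Euclidean distance between two behavior vectors
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# k-means with k = 2: assign each user to the nearest center, then
# move each center to the mean of its group, and repeat
centers = [users[0], users[-1]]
for _ in range(10):
    groups = [[], []]
    for u in users:
        groups[0 if dist2(u, centers[0]) < dist2(u, centers[1]) else 1].append(u)
    centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
               for g in groups]
```

After a few iterations the two centers settle near the two behavior patterns, recovering the "light" and "heavy" user segments without any labelled examples.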

Reinforcement learning

The idea behind reinforcement learning algorithms: taking games as an example, if adopting a certain strategy yields a higher score, that strategy is further "strengthened" so as to keep achieving better results. This is very similar to the various "performance rewards" of daily life, and we often use the same approach to improve our own game play.

Applicable scenarios: games
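The "strengthening" idea can be sketched with a tiny Q-learning example: an agent on a five-cell strip earns a reward only at the rightmost cell, and the strategy that leads to the reward is progressively reinforced. The environment, rewards, and hyperparameters below are all invented for illustration:

```python
# A minimal, illustrative Q-learning sketch of reinforcement learning.
import random

random.seed(3)
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the "strongest" action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # "strengthen" the action in proportion to the score it leads to
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: the strongest action in each non-terminal state
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

After enough episodes, "move right" has been strengthened in every state, because it is the strategy that keeps earning the score.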

14 classic machine learning algorithms

Algorithm                        Training method

Linear regression                Supervised learning
Logistic regression              Supervised learning
Linear discriminant analysis     Supervised learning
Decision tree                    Supervised learning
Naive Bayes                      Supervised learning
K-nearest neighbors              Supervised learning
Learning vector quantization     Supervised learning
Support vector machine           Supervised learning
Random forest                    Supervised learning
                                 Supervised learning
Gaussian mixture model           Unsupervised learning
Restricted Boltzmann machine     Unsupervised learning
K-means clustering               Unsupervised learning
Expectation-maximization (EM)    Unsupervised learning

Deep Learning

Deep learning is a semi-theoretical, semi-empirical modeling method: using human mathematical knowledge and computer algorithms, it combines as much training data as possible with large-scale computing power to adjust a model's internal parameters so that the model approximates the target as closely as possible.

Deep Learning vs. Traditional Machine Learning

Advantages of deep learning

Implementation scenarios of artificial intelligence

Finance

Education

Healthcare

Retail

Manufacturing

Autonomous driving

Urban management

Artificial Intelligence Ethics

When artificial intelligence is given an anthropomorphic role and goals for handling problems, it is also given autonomy of action. In other words, when dealing with practical problems, artificial intelligence can make its own judgments without supervision or manipulation, and execute and solve problems autonomously.

How to make artificial intelligence have such judgment depends on how humans teach it.

Ethics and morality are highly dependent on social, cultural and environmental factors, so their ethical foundation and content will vary in different countries and cultures.

Developing successful AI products in our ever-changing society requires that new social needs and values be reflected in the design process in a timely manner. Since humans cannot predict every scenario, this underscores the need to accelerate discussions of robot ethics.

Artificial Intelligence Ethical Standards

Governments, organizations, and institutions around the world have issued various artificial intelligence ethical standards, establishing framework principles both domestically and internationally, with even clearer industry norms in certain specific fields (such as autonomous driving).

EU

In April 2019, the EU released the Ethics Guidelines for Trustworthy AI, which build a framework for trustworthy artificial intelligence at three levels: its foundations, its implementation, and its evaluation. On the foundations, the guidelines state that trustworthy artificial intelligence rests on four principles: (1) artificial intelligence must respect human autonomy; (2) it must prevent harm; (3) it must ensure fairness; and (4) it must maintain transparency.

USA

The ACM's U.S. Public Policy Council issued the Statement on Algorithmic Transparency and Accountability on January 12, 2017, proposing seven principles: (1) awareness; (2) access and redress; (3) accountability; (4) explanation; (5) data provenance; (6) auditability; (7) validation and testing.

OECD

On May 22, 2019, OECD member states adopted the OECD AI Principles (the Recommendation on responsible stewardship of trustworthy AI), which set out five ethical principles: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.

China

In April 2019, the Artificial Intelligence Ethical Risk Analysis Report was released, putting forward two principles of artificial intelligence ethics: the principle of humanity's fundamental interests, and the principle of responsibility. On June 19, 2019, the Governance Principles for the New Generation Artificial Intelligence - Developing Responsible Artificial Intelligence was released, proposing a framework and action guide for artificial intelligence governance with eight principles: (1) harmony and friendliness; (2) fairness and justice; (3) inclusiveness and sharing; (4) respect for privacy; (5) security and controllability; (6) shared responsibility; (7) open collaboration; (8) agile governance.

Ethical and legal issues involved in artificial intelligence

Controllability of artificial intelligence

The trolley problem for artificial intelligence

Artificial Intelligence Interpretability

Discrimination by artificial intelligence

Privacy during human-computer interaction

Controllability of artificial intelligence

Isaac Asimov's Three Laws of Robotics:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm;

(2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law;

(3) A robot must protect its own existence, as long as such protection does not conflict with the first or second law.

Will artificial intelligence, with its capacity for self-learning, one day exceed the range humans can control? People often assume that artificial intelligence, undisturbed by external factors, will pursue its set goals single-mindedly; could this lead to an uncontrollable "paperclip maximizer" situation?

On September 6, 1978, a cutting robot at a factory in Hiroshima, Japan malfunctioned while cutting steel plates and treated a worker on duty as a steel plate, in what is regarded as the world's first robot "killing" incident.

To date, nearly 20 people in Japan have died in robot-related accidents, and more than 8,000 have been disabled. A robot also caused a worker's death at a Volkswagen factory in Germany in 2015.

The American TV series Westworld depicts artificial intelligence moving from simply completing tasks according to assigned role goals to gradually "awakening" and resisting humans.

Sophia, famously the world's first robot citizen, once said in an interview "we want to destroy humans", and these remarks had not been scripted in advance by programmers as a joke.

We cannot rule out the possibility that humans will be unable to effectively control artificial intelligence; the answers to these questions will gradually surface as artificial intelligence develops further.

The trolley problem was first proposed by the philosopher Philippa Foot in 1967. It is used to criticize major theories in ethical philosophy, especially utilitarianism, which holds that most moral decisions should follow the principle of "providing the greatest benefit to the greatest number." From a utilitarian point of view, the obvious choice is to pull the lever, saving five people at the cost of one. But critics of utilitarianism argue that once you pull the lever, you become complicit in an immoral act: you bear partial responsibility for the death of the single person on the other track.

If such survey results were used to formulate and promote ethical standards in the field of autonomous driving, the views of the minority would inevitably be suppressed, and adopting majority choices as ethical standards could in turn reshape the ethical views of the general public.

On October 24, 2018, MIT published in the journal Nature the results of a study that collected about 40 million survey responses from 233 countries and regions. The questionnaire posed nine independent scenarios in which subjects had to judge which party to protect when a car accident could not be avoided. The results show that, between more people and fewer people, most chose to protect the greater number; between the old and the young, they chose the young; between those complying with traffic rules and those violating them, they chose the compliant; and between humans and animals, they chose humans.

The establishment of ethical norms still requires humanity to first reach a set of consistent international principles; on that basis, different countries and regions can further expand and supplement them according to their local society and culture. At the legislative level, overly detailed and rigid regulation should be avoided; governance can proceed through principle frameworks, codes of conduct, and after-the-fact supervision. On the liability side, we must also establish an accountability mechanism and a compensation system for damage caused by a product, clarifying the respective responsibilities of developers, users, and related parties.

Artificial Intelligence Interpretability

In rules-based algorithmic systems, well-defined logical rules convert input variables (such as credit card transaction information) into output variables (such as fraud warnings). Complex machine learning algorithms are different: input and output variables are fed together into an algorithm that is theoretically able to "learn" from data. This process trains a model whose logic is expressed implicitly rather than explicitly, and which is usually not as understandable to humans as a rules-based system.
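The contrast can be made concrete with a toy example; the field names, thresholds, and weights below are invented for illustration, not taken from any real fraud system:

```python
# A rules-based check is explicitly readable: every condition, and the
# reason it fired, can be inspected by a human.
def rule_based_fraud_flag(tx):
    """Flag a transaction when at least two explicit risk rules fire."""
    reasons = []
    if tx["amount"] > 5000:
        reasons.append("amount over 5000")
    if tx["country"] != tx["home_country"]:
        reasons.append("foreign transaction")
    if tx["hour"] < 6:
        reasons.append("night-time transaction")
    return len(reasons) >= 2, reasons

# A learned model, by contrast, reduces its logic to opaque numbers:
# the weights below encode "why", but no human-readable rationale.
learned_weights = [0.83, -1.2, 0.07]

flag, why = rule_based_fraud_flag(
    {"amount": 9000, "country": "FR", "home_country": "CN", "hour": 3})
```

The rules-based version can always answer "why was this flagged?"; the learned weights cannot, which is exactly the interpretability gap the debate below is about.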

Supporting view

We do not need to require AI systems to provide explanations for their decisions in every case, just as we do not require explanations for every decision made by human decision makers in society (Doshi-Velez and Mason Kortz); and the interpretability of AI systems is technically achievable.

Opposing view

Explaining the workings of complex algorithmic decision-making systems, and their rationale in specific situations, is technically challenging. Explanations may provide only little meaningful information to the data subject. Moreover, disclosing algorithms can leak trade secrets and infringe the rights and freedoms of others (such as privacy). In addition, requiring every AI system to provide interpretability may be too costly to be enforceable.

An explanation should answer at least one of the following questions: first, what were the main factors in the decision? Second, would changing a certain factor change the decision? Third, why do two similar-looking inputs receive different decisions? These three questions can be addressed through local explanation (explaining a specific decision rather than the system's overall behavior) and counterfactual faithfulness (making explanations causal rather than merely correlational).

If a person is told that their income was the decisive factor in refusing a loan, they may reasonably expect that if their income increases, the system will consider them creditworthy. These two properties allow explanations to be given without disclosing the details of the AI system, which greatly reduces companies' concerns about leaking trade secrets.
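The loan example can be sketched as a search for the smallest change that flips the decision, without opening the model itself. The scoring model, threshold, and all numbers below are hypothetical:

```python
# A toy counterfactual explanation: find the smallest income increase
# that flips a loan refusal into an approval, treating the model as a
# black box queried only through its decisions.
def loan_model(income, debt):
    # stand-in linear scoring model; any black-box model could sit here
    return income - 2 * debt

def approve(income, debt, threshold=300):
    return loan_model(income, debt) >= threshold

def counterfactual_income(income, debt, step=10):
    """Smallest income (in increments of `step`) at which the refusal flips."""
    needed = income
    while not approve(needed, debt):
        needed += step
    return needed

# "Your income was decisive: at this income you would have been approved."
needed = counterfactual_income(400, 60)
```

The explanation produced this way is causal ("raise income to this level and the decision changes") rather than a mere correlation, and it reveals nothing about the model's internal weights.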

Discrimination by artificial intelligence

Privacy during human-computer interaction

Federated Learning
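Federated learning addresses the privacy concern directly: each party trains on its own data locally and shares only model parameters, never the raw data. A minimal federated-averaging sketch, with synthetic data and invented party names:

```python
# A minimal federated-averaging (FedAvg-style) sketch: two parties fit
# the same linear model on their own private data; only the fitted
# weights, never the raw data, leave each party.
def local_fit(data):
    """Each party fits y = w*x on its own data by least squares (no intercept)."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den

# Private datasets drawn from the same underlying relation y = 3x
party_a = [(x, 3 * x) for x in range(1, 6)]
party_b = [(x, 3 * x) for x in range(4, 10)]

# Each party trains locally; the aggregator sees only the two weights
w_a, w_b = local_fit(party_a), local_fit(party_b)

# Aggregate by weighted average, weights proportional to local data size
n_a, n_b = len(party_a), len(party_b)
w_global = (n_a * w_a + n_b * w_b) / (n_a + n_b)
```

The aggregated model matches what joint training would have produced, yet neither party's raw records ever crossed the boundary; real systems add secure aggregation and multiple rounds on top of this idea.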

Reference articles:

1.,

2. Deloitte Technology, Media and Telecommunications Industry, Global Artificial Intelligence Development White Paper, 2019.

3. Intel, "Using 'Hard' Security Technology to Break Data Silos and Accelerate Federated Learning Practice".

4. China Electronics Technology Standardization Research Institute, "White Paper on Standardization of Artificial Intelligence (2018 Edition)",

5. EU, Ethics Guidelines for Trustworthy AI.

6. ACM, Statement on Algorithmic Transparency and Accountability.

7. OECD, OECD Principles on Artificial Intelligence.

8. China Electronics Technology Standardization Research Institute, Artificial Intelligence Ethical Risk Analysis Report.

9. Isaac Asimov, I, Robot, 1950.

10. Wang Jianfei, "What kind of inspiration can it bring to the real world artificial intelligence",

11.,

12. & Deep ?

13. Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon & Iyad Rahwan, The Moral Machine Experiment, Nature 563, 2018.

14. Finale Doshi-Velez & Mason Kortz, Accountability of AI Under the Law: The Role of Explanation, Working Paper No. 18-07 (2017).

15. Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a "Right to an Explanation" Is Probably Not the Remedy You Are Looking For, 16 Duke L. & Tech. Rev. 18 (2017).

16. Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, Harvard Journal of Law & Technology, 31(2) (2017).

17. Rashida Richardson, Jason Schultz & Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 2019.

18. , Bias in AI See for , 2019

19. Alibaba Cloud Research Center, Chinese Enterprises 2020: Artificial Intelligence Application Practice and Trends.
