AI Ethics

Artificial Intelligence Raises Ethical Problems, and Tech Giants Are Setting Up Ethics Committees to Meet the Challenge

This article is an exclusive selection from the MIT Technology Review artificial intelligence summit.

As artificial intelligence develops, machines are taking over more and more decision-making tasks from humans, which has raised many new questions about social fairness and ethics.

As the saying goes, with great power comes great responsibility. Artificial intelligence technology is becoming increasingly powerful, and the companies that were first to develop and deploy machine learning and AI are now beginning to discuss publicly the ethical challenges posed by the intelligent machines they create.

At the MIT Technology Review summit, Eric Horvitz, managing director of Microsoft Research, said: “We are at a turning point for artificial intelligence, and it deserves to be constrained and safeguarded by human ethics.”

Horvitz discussed similar issues in depth with researchers from IBM and Google. They worry that recent advances in artificial intelligence have allowed it to surpass humans in some areas, such as healthcare, which could cost people in certain positions their jobs.

IBM researcher Francesca Rossi gave an example: if you want to use a robot to accompany and assist the elderly, the robot must follow the cultural norms of the place where the elderly person lives when carrying out its tasks. Robots deployed in Japan and in the United States would behave very differently.

While such robots may still be a long way off, artificial intelligence is already posing ethical and moral challenges. As businesses and governments increasingly rely on AI systems to make decisions, technological blind spots and bias can easily lead to discrimination.

Last year, a report showed that a risk-scoring system used by some states to inform criminal sentencing was biased against Black defendants. Similarly, Horvitz described a commercial emotion-recognition system offered by Microsoft that performed very inaccurately on young children in tests, because the images used to train the system did not represent that group well.
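As a rough illustration of how such a disparity can be surfaced, the sketch below computes a model's false positive rate separately for each demographic group; the group labels, risk scores, and threshold are invented for the example and do not come from the report.

```python
# Illustrative audit sketch (assumed data, not from the article): compare how
# often a risk model wrongly flags people who did not reoffend, per group.
from collections import defaultdict

def false_positive_rate_by_group(records, threshold=0.5):
    """records: iterable of (group, risk_score, reoffended) tuples."""
    fp = defaultdict(int)   # flagged as high risk but did not reoffend
    neg = defaultdict(int)  # everyone who did not reoffend
    for group, score, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if score >= threshold:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Hypothetical records: a higher rate for one group is the kind of bias at issue.
sample = [
    ("group_a", 0.8, False), ("group_a", 0.3, False), ("group_a", 0.9, True),
    ("group_b", 0.4, False), ("group_b", 0.2, False), ("group_b", 0.7, True),
]
print(false_positive_rate_by_group(sample))  # {'group_a': 0.5, 'group_b': 0.0}
```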

Google researcher Maya Gupta called on the industry to work harder on sound development processes that ensure the data used to train algorithms is fair, reasonable, and impartial. "A lot of the time, data sets are generated in some automated way that is not very reasonable, and we need to consider more factors to make sure the data we collect is actually useful. Think about it: if we draw only a few samples from minority groups, then even if the overall sample is large enough, can we be sure the results we get are accurate?"
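A minimal sketch of the kind of data check Gupta is describing might look like the following: before training, count how each group is represented in the dataset and flag any group that falls below a chosen share. The "group" field and the 10% floor are assumptions made for the example.

```python
# Sketch of a pre-training representation check; field name and threshold are
# illustrative assumptions, not a specific company's process.
from collections import Counter

def underrepresented_groups(examples, min_share=0.10):
    """Return the share of each group that falls below min_share of the data."""
    counts = Counter(ex["group"] for ex in examples)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items() if c / total < min_share}

dataset = [{"group": "adults"}] * 95 + [{"group": "children"}] * 5
print(underrepresented_groups(dataset))  # {'children': 0.05} -> collect more data
```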

Over the past year, researchers in both academia and industry have done a great deal of work on the ethical challenges posed by machine learning and artificial intelligence. The University of California, Berkeley, Harvard, Cambridge, Oxford, and several research institutes have launched projects to address the ethical and safety challenges raised by AI. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly formed the nonprofit Partnership on AI to tackle these problems (Apple joined the organization in January 2017).

These companies are also taking corresponding technical and safety measures. Gupta stressed that Google researchers are testing ways to correct bias in machine learning models and to ensure that models avoid bias in the first place. Horvitz described the AI ethics committee Microsoft has set up internally to review new decision-making algorithms before they are deployed on the company's cloud. The committee is currently made up entirely of Microsoft employees, but Microsoft hopes outsiders will join so the challenges can be met together. Other tech giants have also established their own artificial intelligence ethics committees.
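The article does not say which techniques Google is testing; one common, generic way to counteract under-representation during training is to weight each example inversely to its group's frequency, as in the sketch below. The names and numbers are illustrative only.

```python
# Generic bias-mitigation sketch: inverse-frequency example weights so that each
# group contributes equally to the training loss. Not a description of any
# company's internal methods.
from collections import Counter

def inverse_frequency_weights(groups):
    """Return a per-example weight so every group carries equal total weight."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

groups = ["adults"] * 95 + ["children"] * 5
weights = inverse_frequency_weights(groups)
print(weights[0], weights[-1])  # adults ~0.53, children 10.0
```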

The companies carrying out these projects broadly believe that new forms of government regulation of artificial intelligence are not needed. At the summit, however, Horvitz also encouraged discussion of some extreme outcomes, which could lead some people to the opposite conclusion.

In February, Horvitz organized a workshop at which participants discussed in detail the possible harms of artificial intelligence, such as destabilizing the stock market or disrupting election results. "If you are a bit more proactive, you can anticipate some of the bad outcomes AI might produce and propose mechanisms to prevent them from happening," Horvitz said.

Such discussions seem to have alarmed some people. Gupta, for her part, encouraged people to consider how humans themselves make decisions, and how machine decision-making could make the world more moral.

"Machine learning is controllable, precise, and can be measured by statistics. And there are many possibilities here to make society more just and more moral," she said.
