AI Ethics

Toward the Construction of a Framework for Artificial Intelligence Ethics and Governance

Since AlphaGo defeated Lee Sedol in 2016, a new wave of artificial intelligence enthusiasm has swept the world with unprecedented momentum. While countries have put forward plans and strategies for the development of artificial intelligence, its ethical risks and governance have also become a focus of shared global concern. Globally, perceptions of the prospects of artificial intelligence are clearly polarized: on the one hand, people are excited by the expectation that it may greatly advance economic and social development; on the other, they are increasingly worried about the harm and risks it may bring. In Asia, the innovation and application of artificial intelligence are advancing very rapidly, even though value claims and rights concerning privacy, data, and discrimination have not been clearly defined at the public and legal levels and an ethical and governance framework has not yet been systematically built. This tension makes the value conflicts and ethical choices in the development of artificial intelligence all the more prominent.


Figure 1: Heat map of national AI strategic priorities (Source: CIFAR, 2018)

"Asian Artificial Intelligence Agenda" is a research project called "MIT Technology Review Insights", and the report entitled "Asian Artificial Intelligence Agenda: Artificial Intelligence Ethics" released in July 2019 made a relatively objective comment. The report is one of its series of reports on the "Asian Artificial Intelligence Agenda". The report is based on surveys and interviews with authoritative insiders and outside the Asian artificial intelligence ecosystem, including 871 executives. Its insight roughly reflects the current development and future trend of the Asian artificial intelligence ethics agenda, and has great reference value for further reflection and construction of the Asian and global artificial intelligence ethics and governance framework.

Artificial Intelligence Ethics and Governance from an Asian Standpoint

Over the past 30 years, in catching up in information and communication technology and in network and digital technology, Asia has become the world's fastest-advancing region in science and technology. Today, on the fast track of global artificial intelligence development, Asia is undoubtedly more proactive and confident than it was when embracing the Internet a quarter of a century ago. Facing this wave of artificial intelligence, the experience of driving social development with emerging technologies, together with a cultural tradition of pragmatism and of giving priority to people's livelihoods, has made governments and enterprises in Asia more optimistic and pragmatic in their basic stance toward artificial intelligence. From this standpoint, not only are members of the Asian artificial intelligence ecosystem confident in the benefits of artificial intelligence, but society as a whole has also naturally formed an institutional and cultural environment relatively conducive to its exploration.

But we should recognize that the ethical risks of artificial intelligence are not fiction. Applications such as data analysis, content recommendation, and face recognition directly involve and affect people's identity and behavior, and the harm and negative impact of abusing these technologies will be far greater than those of traditional network and digital technologies. If the optimistic stance is held without reflection, the potential threats that artificial intelligence poses to society and individuals are easily overlooked or dismissed, and the resulting ethical risks are neither prevented in advance nor easy to correct afterwards. In the end this not only causes serious consequences, but also undermines society's trust in artificial intelligence and dampens public confidence in the development of emerging technologies. It is from this standpoint that the report provides a deeper analysis of the issues in Asian artificial intelligence ethics and governance.

First, Asia's optimistic and pragmatic stance on artificial intelligence poses a huge challenge to artificial intelligence ethics and governance. The report notes that while Asian business leaders and participants in the AI ecosystem recognize the potential ethical risks that AI applications may pose, they are generally more optimistic about the positive impact of AI on economic, social, corporate, and individual well-being. The resulting problem is that "the ethical priority of strategies from Asia is relatively lower than in other regions", and even that "the bias of using artificial intelligence in Asia may be more severe." Some may respond that ethics should simply be emphasized first, but in fact innovation and ethics should not be set against each other; whether ethics takes priority depends on the negative impact of a specific application on the groups affected and on its severity. The key to overcoming this problem is therefore twofold: on the one hand, technology regulators should require enterprises to conduct foreseeable ethical assessments of the negative impacts of a technology's application on the relevant groups and then make corrections; on the other hand, constructive and participatory ethical assessment should be used so that the groups likely to be seriously and negatively affected can take part in the assessment, debate, and correction.

Secondly, the biggest challenge for artificial intelligence ethics and governance in Asia lies in achieving synergy between ethics and innovation through inclusive and prudent regulation. The report points out that although dominant stakeholders such as enterprises in the Asian artificial intelligence ecosystem have debated AI ethics fiercely and believe that government should lead AI regulation, somewhat paradoxically they also hope that the relevant policies, frameworks, and regulation will act prudently and "not stifle innovation." In response, relatively loose ethical constraints should not be mistaken for an "ethical advantage" in innovation; rather, for AI applications with high value sensitivity, efforts should be made to pursue a compound form of innovation that builds ethical value into the technology, so that innovation and ethics complement each other and advance in concert. Following this idea, in content recommendation some companies have begun to break the "information cocoon" by identifying and suppressing harmful content recommended by algorithms; to cope with the abuse of facial recognition, adversarial designs such as random face swaps have emerged. In fact, this kind of morally sensitive design, the practice of building ethics into technical design, has been around for a long time: more than a decade ago, a researcher at Carnegie Mellon University built software to help people plug privacy loopholes online. Artificial intelligence ethics and governance can thus run through the entire process of AI innovation and application; to coordinate ethics and innovation, the values behind them must be translated into technical goals and requirements.
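To make the idea of building ethics into technical design more concrete, here is a minimal, hypothetical sketch in Python of a value-sensitive re-ranking step for content recommendation: items flagged by a separate moderation check are suppressed, and the remaining candidates are re-ranked with a simple topic-diversity penalty, one plausible way to push back against the "information cocoon." The names (Item, flagged, diversity_rerank) are illustrative assumptions, not a description of any company's actual system mentioned in the report.

```python
# Hypothetical sketch: an ethical filter plus a diversity-aware re-ranking step
# layered on top of a base recommender's relevance scores.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str
    score: float        # relevance score from the base recommender
    flagged: bool       # set by a separate content-moderation check

def diversity_rerank(candidates: list[Item], top_k: int, penalty: float = 0.3) -> list[Item]:
    """Drop flagged items, then greedily pick items while penalising repeated topics."""
    pool = [c for c in candidates if not c.flagged]      # ethical filter first
    chosen: list[Item] = []
    topic_counts: dict[str, int] = {}
    while pool and len(chosen) < top_k:
        # effective score = relevance minus a penalty for topics already shown
        best = max(pool, key=lambda c: c.score - penalty * topic_counts.get(c.topic, 0))
        chosen.append(best)
        pool.remove(best)
        topic_counts[best.topic] = topic_counts.get(best.topic, 0) + 1
    return chosen

if __name__ == "__main__":
    feed = [
        Item("a1", "politics", 0.95, False),
        Item("a2", "politics", 0.93, False),
        Item("a3", "science", 0.80, False),
        Item("a4", "politics", 0.78, True),   # suppressed: flagged by moderation
        Item("a5", "culture", 0.70, False),
    ]
    for item in diversity_rerank(feed, top_k=3):
        print(item.item_id, item.topic, item.score)
```

In this toy run the flagged item never reaches the user, and the penalty nudges the list away from a single dominant topic, which is the kind of trade-off between relevance and value-laden goals that a compound, ethics-in-design approach would have to make explicit.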

Third, Asia is trying to make the establishment of trust mechanisms, rather than the construction of a legal enforcement framework, the breakthrough point for artificial intelligence ethics and governance. The report stresses that although Asian governments and businesses are actively setting goals and guidelines for the ethical development of the artificial intelligence industry, there has so far been no supervisory or enforcement mechanism. Policymakers in Asia neither attempt to address ethical issues directly nor provide legal support and recourse for them; instead they rely on trust among consumers, AI users, and AI developers to allow the industry to grow. There is no doubt that building durable trust requires artificial intelligence to develop in a responsible and transparent way. Current artificial intelligence involves a large number of data-driven intelligent applications, and the data about people reflects their personality traits, behavior patterns, and specific actions. Enterprises must therefore communicate with customers and stakeholders about how data is properly collected, used, and shared, so as to avoid user doubt and distrust and allow innovations and applications to be deployed efficiently and in an orderly way.
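As one illustration of what "responsible and transparent" data practice could look like in engineering terms, the following is a minimal sketch, under assumed names (DataUseRecord, log_record, data_use_log.jsonl), of a data-use transparency record: each collection event records what was collected, for what purpose, on what consent basis, and with whom it is shared, so that it can later be reported to users and auditors. This is an illustrative assumption, not a mechanism described in the report.

```python
# Hypothetical sketch: an append-only record of data collection and sharing
# events, from which user-facing transparency reports could be generated.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DataUseRecord:
    user_id: str
    data_category: str              # e.g. "content_preferences", "face_image"
    purpose: str                    # the declared purpose of processing
    consent_obtained: bool          # whether explicit consent was recorded
    shared_with: list[str] = field(default_factory=list)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_record(record: DataUseRecord, path: str = "data_use_log.jsonl") -> None:
    """Append the record as one JSON line to a simple audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_record(DataUseRecord(
        user_id="u-123",
        data_category="content_preferences",
        purpose="personalised recommendation",
        consent_obtained=True,
        shared_with=["analytics-partner-A"],
    ))
```

The point of such a record is not compliance paperwork for its own sake, but giving consumers, AI users, and developers a concrete, inspectable basis for the trust that the report says the industry is relying on.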


Figure 2: Perceived benefits and negative consequences of 12 emerging technologies (Source: World Economic Forum Global Risk Perception Survey, 2017)

Building a practical and feasible framework for artificial intelligence ethics and governance

Over the past twenty years, fast-developing and highly uncertain emerging technologies have kept appearing, giving rise to disruptive technologies such as artificial intelligence, the Internet of Things, and gene editing that may have fundamental, profound, and universal impacts on the economy, society, and the future of humankind. How to build a practical and feasible ethical and governance framework for technologies such as artificial intelligence is undoubtedly the biggest challenge every region and country must face. There is neither a single best answer nor a unified standard, but the experience and wisdom gathered in this report offer many useful clues for building such a framework.

First, we must conduct more in-depth and systematic research on the possible social impacts of artificial intelligence and build social risk-prevention mechanisms that take the worst-case scenarios into account. The most prominent issue is the impact of artificial intelligence on employment. Although the Asian business leaders surveyed generally believe that AI-driven job losses will be offset by the potential it creates for human work, the report "Artificial Intelligence and Human Capital", in the same series, finds that "artificial intelligence will affect one in every five Asian jobs, and automation will eliminate one in eight jobs." The latter report also finds that emerging Asian economies based on labor-intensive industries and services have higher shares of "automatable" labor than wealthier countries, and that the low-skilled occupations threatened by artificial intelligence have weaker capacity for retraining and re-skilling. These analyses show that, against the general optimism of the AI ecosystem about the future, a government-led AI ethics and governance framework must plan for the worst and take into account a way out for the workers who may be replaced by intelligent machines, so that the disruptive development of AI does not create new social inequality.

Second, we must build artificial intelligence ethics and governance frameworks that are both regional and global in their cultural perspective. The report insightfully points out that because of the large cultural differences in what counts as the "correct" way of expressing ethics, artificial intelligence ethics and governance are not "global." There is no doubt that people's differing attitudes toward artificial intelligence have led different regions and societies to different approaches to, and interests in, developing it. For example, Japan, with its Shinto tradition and rich doll culture, may be more receptive to the research, development, and application of such technologies. In future intelligent unmanned systems such as driverless cars, mainstream social value orientations such as individualism and collectivism will also, to some extent, shape the value orientation and ethical choices of machines. Cultural and value acceptability is therefore the key to building human-machine trust and harmony in the era of artificial intelligence. At the same time, it must be pointed out that the development of artificial intelligence will inevitably accelerate a deeper globalization. Regional artificial intelligence ethics and governance must not only be locally appropriate, but also establish mechanisms for engaging with global governance. China's "New Generation Artificial Intelligence Development Plan", for instance, not only envisages improving social governance capabilities through artificial intelligence, but also promises to conduct research on privacy and intellectual property, information security, accountability, and design ethics, and to actively participate in global governance. In this process, the key is how to clarify the core value orientations and ethical demands of a regional ethics and governance framework. Whatever the cultural and mainstream social values, basic ethical and legal concepts such as privacy, dignity, and personal rights must be given clear and reasonable interpretations for the era of artificial intelligence, and precise ethical and legal statements must be made on the collection and use of data and on the limits of intelligent systems' influence on and intervention in human behavior. Ultimately, these questions involve not only values and ethics but also social and political choices in a larger social and historical context. This makes dialogue among regional artificial intelligence ethics and governance frameworks both the difficulty and the key to their integration into a global governance structure.

Third, we must hold the bottom line of civilization while building a harmonious human-machine future. As we try to build such a future through the development of artificial intelligence, we should realize that the relationship between humans and machines is in essence a relationship between people mediated by machines. The key to the ethical regulation and governance of technologies such as artificial intelligence is to prevent them from being used to harm people. Artificial intelligence ethics and governance must therefore impose the necessary legal controls and ethical regulation on negative uses such as malicious use and military use. With the development of technologies such as deepfakes, the harm from malicious uses of artificial intelligence will keep growing; without timely legal controls and ethical regulation, it will greatly undermine society's trust in disruptive technologies such as artificial intelligence and hamper their beneficial innovation and application. At the same time, it should be noted that a paradox of the development of human scientific and technological civilization is precisely that many disruptive technologies, including computers, the Internet, and artificial intelligence, are closely tied to military confrontation and national competition. We must admit that behind this wave of artificial intelligence, openly or covertly, lies the shadow of an artificial intelligence arms race. Military applications of artificial intelligence such as drones and large-scale intelligent autonomous weapon systems will pose an unprecedented threat to the future of human civilization and must be effectively controlled and globally governed.

Conclusion: Toward agile artificial intelligence ethics and governance

On June 17, 2019, the National New Generation Artificial Intelligence Governance Professional Committee issued the "Governance Principles for the New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence", which put forward eight principles: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, security and controllability, shared responsibility, open collaboration, and agile governance. Agile governance emphasizes that, in the development of artificial intelligence, possible risks should be identified and resolved in a timely manner, that governance principles should run through the entire life cycle of artificial intelligence products and services, and that artificial intelligence should always develop in a direction beneficial to humankind. The key to realizing this principle is to effectively incorporate the perceptions of all relevant interest groups into every aspect of artificial intelligence ethics and governance, forming a mechanism of dialogue and consensus among multiple stakeholders, achieving feedback, correction, and iteration from the micro to the macro level, and ultimately steering artificial intelligence toward harmonious development that accords with human nature.
