AI Ethics

The EU's Path to AI Ethics and Governance and Its Lessons

To address the ethical problems raised by the development and application of artificial intelligence, the EU has made AI ethics and governance a key item on its future legislative agenda.

Author | Cao Jianfeng, Senior Researcher, Tencent Research Institute

Fang Lingman, Assistant Researcher, Law Research Center, Tencent Research Institute

The EU actively promotes people-oriented artificial intelligence ethics and governance

The development and penetration of digital technologies have accelerated the transformation of society and the economy, and artificial intelligence (AI), as their core driving force, has created new possibilities for social development. Generally speaking, AI refers to algorithms or machines that learn, decide and act autonomously on the basis of given information inputs. Its development rests on improvements in computer processing power, better algorithms and the exponential growth of data. From machine translation to image recognition to the synthesis and creation of artistic works, applications of AI have begun to enter our daily lives. Today, AI technology is widely used across industries (such as education, finance, construction and transportation) and to provide services (such as autonomous driving and AI medical diagnosis), profoundly changing human society. At the same time, the development of AI has raised legal, ethical and social challenges, bringing problems such as fake news, algorithmic bias, privacy violations, data protection and network security. Against this background, AI ethics and governance are increasingly valued, and a global wave of formulating AI ethical standards has arisen among governments, industry and academia. Since 2015, the EU has actively explored measures for AI ethics and governance. Although it has not led the development of AI technology itself, it stands at the forefront of the world in AI governance.

As early as January 2015, the European Parliament's Committee on Legal Affairs (JURI) decided to set up a working group specializing in legal issues related to the development of robotics and artificial intelligence. In May 2016, JURI issued a draft report with recommendations to the European Commission on civil law rules on robotics, calling on the Commission to evaluate the impact of artificial intelligence, and in January 2017 formally put forward a broad proposal for civil law legislation on robotics, including the formulation of a "Charter on Robotics". In May 2017, the European Economic and Social Committee (EESC) issued an opinion on AI, pointing out the opportunities and challenges AI brings to eleven fields including ethics, security and privacy, and advocating the formulation of AI ethical norms and the establishment of a standard system for AI monitoring and certification. In October of the same year, the European Council stated that the EU should act with a sense of urgency on new trends in artificial intelligence, ensure a high level of data protection, digital rights and relevant ethical standards, and invited the European Commission to propose an approach to artificial intelligence in early 2018. To address the ethical problems raised by the development and application of artificial intelligence, the EU has made AI ethics and governance a key item on its future legislative agenda.

On April 25, 2018, the European Commission issued the policy communication "Artificial Intelligence for Europe", unveiling the EU's artificial intelligence strategy. The strategy proposes a people-oriented path for AI development, aiming to improve the EU's scientific research and industrial capabilities, respond to the technical, ethical and legal challenges brought by artificial intelligence and robotics, and let artificial intelligence better serve the development of European society and the economy. The strategy rests on three pillars: first, improve technical and industrial capabilities and promote the penetration of AI technology into all walks of life; second, actively respond to social and economic changes, keep education and training systems in step with the times, closely monitor changes in the labor market, support workers in transition, and cultivate diversified and interdisciplinary talent; third, establish appropriate ethical and legal frameworks, clarify the application of product rules, and draft AI ethics guidelines. In June of the same year, the European Commission appointed 52 representatives from academia, industry and civil society to form the High-Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of the European AI strategy.

In January 2019, the European Parliament's Committee on Industry, Research and Energy issued a report calling on Parliament to formulate a comprehensive EU industrial policy on artificial intelligence and robotics, covering cybersecurity, legal frameworks, ethics and governance. In April 2019, the EU issued two important documents: the "Ethics Guidelines for Trustworthy AI" (hereinafter the "Ethics Guidelines") and "A Governance Framework for Algorithmic Accountability and Transparency" (hereinafter the "Governance Framework"). These documents implement the requirement to "establish an appropriate ethical and legal framework" set out in the EU's AI strategy, provide a reference for the later formulation of relevant rules, and represent the EU's latest efforts to promote AI governance.

Constructing an ethical framework for artificial intelligence: the Ethics Guidelines for Trustworthy AI

To balance technological innovation and the protection of human rights, the construction of an ethical framework for artificial intelligence is essential. An ethical framework provides principled guidance and basic requirements for the design, research and development, production and use of artificial intelligence, ensuring that its operation complies with legal, safety and ethical standards. The Ethics Guidelines were drafted and published by AI HLEG and are not mandatory or binding. The EU encourages all stakeholders to actively implement them so as to promote international consensus on AI ethical standards. Beyond formulating pan-EU ethical norms, the EU hopes that the ethical governance of AI can be safeguarded at different levels: member states can, for example, establish monitoring and supervisory bodies for AI ethics, while enterprises are encouraged to set up ethics committees and formulate ethical guidelines to guide and constrain their AI developers and their R&D and application activities. This means that the ethical governance of AI cannot stop at abstract principles; it must be integrated into the practical activities of different actors at different levels to become a living mechanism.

According to the Ethics Guidelines, trustworthy AI must have, at a minimum, three characteristics: (1) lawful, meaning trustworthy AI should respect fundamental human rights and comply with existing law; (2) ethical, meaning trustworthy AI should adhere to ethical principles and values, serving an "ethical purpose"; (3) robust, meaning that from both a technical and a social perspective AI systems should be robust and reliable, because even an AI system that meets an ethical purpose may still cause unintentional harm if it lacks the support of reliable technology. Specifically, the ethical framework for trustworthy AI comprises the following three levels:

(I) The foundation of trustworthy AI

Among the fundamental rights stipulated in international human rights law, the EU Charter and related treaties, those most relevant to AI development include: human dignity, individual freedom, democracy, justice and the rule of law, equality, non-discrimination and solidarity, and citizens' rights. Many public and private organizations draw inspiration from fundamental rights when developing ethical frameworks for AI systems. For example, the European Group on Ethics in Science and New Technologies (EGE) has proposed nine basic principles based on the values in the EU Charter and related provisions. Building on most existing principles, the Ethics Guidelines distill four ethical principles that meet the requirements of social development and use them as the foundation of trustworthy AI, providing guidance for the development, deployment and use of AI.

These principles are: (1) Respect for human autonomy. Humans interacting with AI must retain full and effective self-determination. AI systems should follow a people-oriented approach, serving humans and augmenting human cognition and skills. (2) Prevention of harm. AI systems must not have negative impacts on humans; the systems and their operating environments must be safe, and the technology must be robust and secured against malicious use. (3) Fairness. The development, deployment and use of AI systems must uphold both substantive and procedural fairness, ensuring the equal distribution of benefits and costs and protecting individuals and groups from discrimination and prejudice. In addition, individuals affected by decisions made by AI and its operators must have the right to object and seek redress. (4) Explicability. The functions and purposes of AI systems must be open and transparent, and the AI decision-making process must be explainable to those directly or indirectly affected by its results.

(II) The implementation of trustworthy AI

Under the guidance of these ethical principles, the Ethics Guidelines propose seven key requirements that the development, deployment and use of AI systems should meet. In the Guidelines, the four ethical principles serve as top-level values that provide the most fundamental guidance for the research, development and application of trustworthy AI, while the seven key requirements translate them into terms that can be implemented. AI ethics is thus a governance process running from macro-level values, through meso-level ethical requirements, to micro-level technical implementation.

1. Human agency and oversight

First, AI should help humans exercise their fundamental rights. Where technical limitations create a risk that AI will damage fundamental rights, a fundamental-rights impact assessment should be completed before developing the AI system, and an external feedback mechanism should be established to understand the system's possible impact on those rights. Second, AI should support individuals in making better-informed decisions in line with their goals; individual autonomy should not be undermined by automated AI decision-making. Finally, appropriate oversight mechanisms should be established, such as "human-in-the-loop" (human intervention is possible in every decision cycle of the AI system), "human-on-the-loop" (human intervention during the design cycle and monitoring of the system's operation), and "human-in-command" (a human oversees the overall activity and impact of the AI system and decides whether and how to use it).
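
As a minimal, hypothetical illustration of these three oversight modes (the names OversightMode, decide and ask_human below are our own, not drawn from the Guidelines), consider the following sketch:

```python
from enum import Enum
from typing import Callable, List, Tuple

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = "hitl"   # a person can intervene in every decision cycle
    HUMAN_ON_THE_LOOP = "hotl"   # a person monitors operation and can intervene
    HUMAN_IN_COMMAND = "hic"     # a person decides whether and how to use the system

review_queue: List[Tuple[float, bool]] = []  # decisions queued for human review (HOTL)

def decide(score: float, mode: OversightMode,
           ask_human: Callable[[bool], bool]) -> bool:
    """Gate a model score (in [0, 1]) through the chosen oversight mode."""
    proposal = score >= 0.5
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return ask_human(proposal)               # human confirms or overrides each cycle
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        review_queue.append((score, proposal))   # logged for asynchronous human review
    return proposal                              # under HIC, humans govern deployment itself

# Example: an operator who approves this case regardless of the model's proposal
print(decide(0.3, OversightMode.HUMAN_IN_THE_LOOP, ask_human=lambda proposal: True))
```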

2. Technical robustness and safety

On the one hand, AI systems must be accurate, reliable and reproducible; the accuracy of AI decision-making should be improved, evaluation mechanisms refined, and the unexpected risks of erroneous predictions reduced in a timely manner. On the other hand, AI systems must be strictly protected against vulnerabilities and malicious attacks; security measures should be developed and tested to minimize unintended consequences and errors, and an executable fallback plan should be in place for when the system fails.
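
The "executable fallback plan" can be made concrete with a simple confidence-threshold pattern; the sketch below is illustrative only (the function predict_with_fallback and the toy model are our assumptions, not from the Guidelines):

```python
from typing import Callable, Tuple

def predict_with_fallback(model: Callable[[str], Tuple[str, float]],
                          x: str, threshold: float = 0.9) -> str:
    """Use the model's answer only when its confidence clears the threshold;
    otherwise fall back to a safe default, here deferral to a human."""
    label, confidence = model(x)
    return label if confidence >= threshold else "DEFER_TO_HUMAN"

# Toy model: confident on routine inputs, unsure on everything else
toy = lambda x: ("approve", 0.95) if x == "routine case" else ("approve", 0.55)
print(predict_with_fallback(toy, "routine case"))  # -> approve
print(predict_with_fallback(toy, "edge case"))     # -> DEFER_TO_HUMAN
```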

3. Privacy and data governance

User privacy and data must be strictly protected throughout the entire life cycle of the AI system, ensuring that collected information is not used unlawfully. While errors, inaccuracies and bias are removed from the data, its integrity must be preserved and every step of the AI data-processing pipeline recorded. Data-access protocols should be tightened, with strict control over the conditions under which data can be accessed and transferred.
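
One simple way to picture "managing data-access protocols" is an access gate that both enforces permissions and records every access; the sketch below is a hypothetical illustration (the DataStore class and its grants are our own invention):

```python
from datetime import datetime, timezone

class DataStore:
    """Toy store that enforces per-user grants and logs every access attempt."""
    def __init__(self, records, grants):
        self._records = records      # field name -> value
        self._grants = grants        # user -> set of fields that user may read
        self.access_log = []         # recorded data-processing trail

    def read(self, user: str, field: str):
        allowed = field in self._grants.get(user, set())
        self.access_log.append((datetime.now(timezone.utc), user, field, allowed))
        if not allowed:
            raise PermissionError(f"{user} may not read {field}")
        return self._records[field]

store = DataStore({"income": 42000, "health": "asthma"},
                  {"analyst": {"income"}})
print(store.read("analyst", "income"))   # permitted and logged
# store.read("analyst", "health")        # would raise PermissionError, still logged
```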

4. Transparency

The traceability of the data sets, processes and results of AI decision-making should be ensured, so that AI decisions can be understood and tracked by humans. When an AI system's decision significantly affects an individual, the decision-making process should be explained appropriately and promptly. Users' overall understanding of the AI system should be improved: they should know when they are interacting with an AI system, and the system's accuracy and limitations should be communicated truthfully.
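
Traceability of this kind is often implemented as an append-only decision log; the following sketch is one hypothetical way to do it (the file name and record fields are our assumptions):

```python
import hashlib
import json
import time

def log_decision(model_version: str, inputs: dict, output) -> str:
    """Append a traceable record of one AI decision so that it can later be
    understood, tracked and audited; returns the record's digest."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,   # which model produced the decision
        "inputs": inputs,                 # the data the decision was based on
        "output": output,
    }
    line = json.dumps(record, sort_keys=True)
    with open("decision_log.jsonl", "a") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode()).hexdigest()

print(log_decision("credit-model-1.3", {"income": 42000, "tenure": 5}, "approve"))
```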

5. Diversity, non-discrimination and fairness

To avoid prejudice and discrimination against vulnerable and marginalized groups, AI systems should be user-centered and enable anyone to use AI products and services. Universal design principles and relevant accessibility standards should be followed to meet the widest possible range of user needs. At the same time, diversity should be promoted and relevant stakeholders enabled to participate throughout the AI life cycle.

6. Societal and environmental well-being

AI systems are encouraged to take on the responsibility of promoting sustainable development and protecting the ecological environment, and to be used to study and address problems of global concern. Ideally, AI systems should benefit present and future generations. The development, deployment and use of AI systems should therefore take full account of their impact on the environment, society and even democratic politics.

7. Accountability

First, an accountability mechanism should be established to assign responsibility across the entire process of AI system development, deployment and use. Second, an audit mechanism for AI systems should be established to enable the evaluation of algorithms, data and design processes. Third, the potential negative impacts of AI systems on individuals should be identified, documented and minimized, with appropriate remedial measures taken promptly when an AI system produces unjust results.

It is worth noting that these principles and requirements may stand in inherent tension with one another because they involve different interests and values, so decision-makers need to make trade-offs according to actual conditions while continuously recording, evaluating and communicating the choices made. In addition, the Ethics Guidelines propose technical and non-technical methods to ensure that the development, deployment and use of AI meet the above requirements, such as researching and developing explainable AI (XAI) technologies, training monitoring models, building a legal framework for AI supervision, establishing and improving relevant industry, technical and certification standards, and educating the public to raise ethical awareness.
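
As one example of what an XAI technique can look like in practice, the sketch below implements permutation importance, a common model-agnostic explanation method; it is a self-contained illustration under our own assumptions, not a method prescribed by the Guidelines:

```python
import random
from typing import Callable, List, Sequence

def permutation_importance(model: Callable[[Sequence[float]], float],
                           rows: List[List[float]], labels: List[float],
                           feature: int, trials: int = 10) -> float:
    """Estimate one feature's importance as the average increase in squared
    error when that feature's column is randomly shuffled."""
    def error(data):
        return sum((model(r) - y) ** 2 for r, y in zip(data, labels)) / len(labels)
    base = error(rows)
    bumps = []
    for _ in range(trials):
        shuffled = [r[:] for r in rows]           # copy, then shuffle one column
        col = [r[feature] for r in shuffled]
        random.shuffle(col)
        for r, v in zip(shuffled, col):
            r[feature] = v
        bumps.append(error(shuffled) - base)
    return sum(bumps) / trials

# Toy model that only uses feature 0: feature 0 should matter, feature 1 should not
model = lambda r: 2.0 * r[0]
rows = [[x / 10, random.random()] for x in range(10)]
labels = [2.0 * r[0] for r in rows]
print(permutation_importance(model, rows, labels, feature=0) >
      permutation_importance(model, rows, labels, feature=1))  # True
```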

(III) The evaluation of trustworthy AI

Based on the seven key requirements above, the Ethics Guidelines also provide an assessment list for trustworthy AI. The list applies mainly to AI systems that interact with humans. It aims to guide the concrete implementation of the seven key requirements and to help different functions within a company or organization, such as management, legal, R&D, quality control, HR, procurement and daily operations, jointly ensure that trustworthy AI is realized. The Guidelines note that the assessment list is not exhaustive: building trustworthy AI requires continuously refining the requirements and seeking new solutions to problems. Stakeholders should participate actively to ensure that AI systems run safely, robustly, lawfully and ethically throughout their life cycle, and ultimately benefit humanity.

It is clear that, in the EU's view, AI ethics is a systematic project that requires coupling ethical norms with technical solutions. While the construction of AI ethics in other countries and in the international community mostly remains at the stage of distilling abstract values and building consensus, the EU has gone a step further and begun to explore a top-down framework for AI ethics governance.

Policy recommendations for the governance of AI algorithms: the Governance Framework for Algorithmic Accountability and Transparency

The Governance Framework is a systematic study of algorithmic transparency and accountability governance published by the European Parliament's Panel for the Future of Science and Technology (STOA). Drawing on a series of real cases, the report clarifies the causes of algorithmic unfairness and its possible consequences, as well as the obstacles to achieving algorithmic fairness in specific contexts. On this basis, the report proposes algorithmic transparency and accountability governance as tools for addressing algorithmic fairness, which it treats as the purpose of algorithmic governance, and emphasizes the role of "responsible research and innovation" (RRI) in promoting that goal. The core of RRI is to achieve inclusive and responsible innovation with the participation of stakeholders.

The report offers systematic policy recommendations for the governance of algorithmic transparency and accountability based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. It starts from a high-level perspective on technology governance, discusses the various governance options in detail, and reviews the specific suggestions for algorithmic-system governance in the existing literature. On the basis of this extensive review and analysis, the report puts forward policy recommendations at four different levels.

(I) Improve the public's algorithmic literacy

The prerequisite for algorithmic accountability is algorithmic transparency, but transparency does not mean that the public must understand an algorithm's every technical characteristic. The report points out that a broad understanding of how algorithms function contributes little to accountability; disclosing brief, standardized information that may affect public decision-making or improve public understanding of the overall algorithmic system is more effective. Investigative journalism and whistleblowing also play an important role in exposing improper uses of algorithms and achieving transparency and accountability. For example, the New York Times reported that Uber had used an algorithmic tool to identify and evade city regulators (the news was revealed by former Uber employees); the report immediately attracted widespread attention from the media and society, and regulators launched investigations into the company. Beyond this supervisory role, journalism improves the public's understanding of algorithms in plain language, and news investigations can stimulate broad social dialogue and debate and trigger new academic research. For example, a nonprofit newsroom's report on "machine bias" in a crime risk assessment system used by some U.S. courts triggered a series of studies on algorithmic fairness.

On this basis, the Governance Framework puts forward several policy suggestions for raising public awareness of algorithms: (1) educate the public on the core concepts of algorithmic selection and decision-making; (2) mandate standardized disclosure of information about algorithms; (3) provide technical support for journalism on "algorithmic accountability"; (4) exempt whistleblowers from liability for breaching terms of service or infringing intellectual property where disclosure serves the public interest.

(II) The public sector establishes algorithmic accountability mechanisms

Today, more and more public-sector bodies are using algorithmic systems to improve efficiency, support complex processes and assist policy-making. If an algorithm is flawed, it can have an immeasurable impact on society's vulnerable groups, so the public sector needs a complete mechanism for algorithmic transparency and accountability. One governance mechanism worth considering is an algorithmic impact assessment (AIA), modeled on the data protection impact assessment (DPIA) mechanism in data protection law. It allows policymakers to understand the scenarios in which algorithmic systems are used, evaluate their intended uses and make relevant suggestions, helping to establish algorithmic accountability. According to the Governance Framework, the AIA process mainly includes: the public agency publishing its definition of "algorithmic system"; publicly disclosing the algorithm's purpose, scope, intended uses and related policies or practices; conducting and publishing a self-assessment of the algorithmic system; enabling public participation; publishing the assessment results; and updating the AIA regularly.
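
One way to picture an AIA record is as a structured document mirroring those steps; the sketch below is a hypothetical data structure of our own devising, not a format prescribed by the Governance Framework:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative AIA record whose fields mirror the steps listed above."""
    system_definition: str                      # agency's published definition
    purpose: str                                # disclosed purpose and intended uses
    scope: str
    related_policies: List[str] = field(default_factory=list)
    self_assessment: str = ""                   # published self-evaluation
    public_comments: List[str] = field(default_factory=list)  # public participation
    published_results: str = ""
    revision_history: List[str] = field(default_factory=list) # regular updates

aia = AlgorithmicImpactAssessment(
    system_definition="Automated benefits-eligibility scoring",
    purpose="Prioritize casework review",
    scope="Municipal social services",
)
aia.revision_history.append("2019-04: initial assessment published")
print(aia.system_definition, "|", len(aia.revision_history), "revision(s)")
```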

(III) Improve the regulatory mechanism and legal liability system

On the one hand, the EU has offered pertinent suggestions on algorithmic transparency, which is widely called for yet highly controversial. Algorithmic transparency does not mean explaining every step of an algorithm or its technical principles and implementation details; simply publishing an algorithmic system's source code does not provide effective transparency, and may instead threaten data privacy or the security of the technology's applications. Furthermore, given the technical characteristics of AI, explaining a specific decision made by an AI system is extremely difficult, whereas understanding the system's overall behavior is far more attainable. Achieving transparency by explaining how a particular result was derived therefore faces enormous technical challenges and would greatly limit AI's application; by contrast, effective transparency about the behavior and decision-making of AI systems is preferable and can deliver significant benefits. For example, in view of these technical characteristics, the GDPR does not require explanations of specific automated decisions; it requires only meaningful information about the logic involved, together with the significance and the envisaged consequences of automated decision-making.

On the other hand, with regard to regulating AI algorithms, the EU holds that most private-sector actors have limited resources and that their algorithmic decisions have relatively limited impact on the public, so strong supervision such as mandatory algorithmic impact assessment should not be imposed on them. Blindly requiring the private sector to adopt AIAs would impose financial and administrative costs disproportionate to the risks posed by their algorithms and would hinder private-sector innovation and technology adoption. Low-risk algorithmic systems can therefore be regulated through legal liability, allowing private actors to accept stricter tort liability in exchange for lighter transparency and AIA requirements. According to the Governance Framework, regulation can be tiered: AIA requirements can be considered for algorithmic decision-making systems that may cause serious or irreversible consequences, while for systems with only ordinary impacts, operators can be required to bear strict tort liability, with reduced obligations to evaluate and certify the system and ensure it meets best-practice standards. The establishment of a dedicated algorithm regulator could also be considered, with responsibilities including conducting algorithmic risk assessments, investigating the use of algorithmic systems by suspected infringers, advising other regulators on algorithmic systems, and coordinating with standards bodies, industry and civil society to determine relevant standards and best practices.

(IV) Strengthen international cooperation in algorithm governance

The management and operation of algorithmic systems also require cross-border dialogue and collaboration, since one country's regulatory intervention in algorithmic transparency and accountability may be read as protectionism or as a ploy to acquire foreign trade secrets. The Governance Framework therefore recommends establishing a permanent global Algorithmic Governance Forum (AGF) to bring the many stakeholders of algorithmic technology into international dialogue, exchange policy and professional expertise, and discuss and coordinate best practices for algorithmic governance.

Lessons from the EU's AI ethics and governance

Over the past two decades of Internet development, the EU has lagged behind the United States and China, and differences in law and policy are one of the main factors. As the author argued in "On the Relationship between Internet Innovation and Regulation: A Comparative Perspective on the United States, Europe, Japan and South Korea", the EU's rules on platform liability, privacy protection and online copyright came earlier and were stricter than those of the United States, and did not provide fertile legal ground for Internet innovation. Now, entering the age of intelligence, ubiquitous data and algorithms are giving rise to a new economic and social form driven by artificial intelligence, and the EU still lags behind the United States and other countries in the AI field. The General Data Protection Regulation (GDPR), which came into effect last year, has a particularly significant impact on the development and application of AI: many studies suggest that the GDPR has hindered the development of AI, the digital economy and other new technologies and businesses in the EU, and has imposed excessive burdens and uncertainty on corporate operations.

Returning to artificial intelligence: through strategies, industrial policies, ethical frameworks, governance mechanisms and legal frameworks, the EU hopes to develop, apply and deploy AI embedded with ethical values and thereby lead on the international stage. Here the EU does have unique advantages, but whether those advantages can ultimately be converted into international competitiveness in AI remains to be seen. Overall, the EU's exploration of AI ethics and governance offers three lessons.

(I) Explore the technical path of ethical governance

Clearly, in the face of the ethical and social impacts of AI's future development, ethics needs to become a fundamental component of AI research and development, and the construction of AI ethics research and ethical governance mechanisms must be strengthened. At this stage, the ethical governance of AI relies more on the power of industry and technology than on legislation and regulation: because technology and business models iterate rapidly, written legislation and regulation struggle to keep pace with technological development and may produce counterproductive or unexpected results. More flexible governance tools such as standards, industry self-regulation, ethical frameworks, best practices and technical guidelines will become increasingly important, especially in the early stages of a technology's development. Just as the concept of Privacy by Design (PbD) has gained strong vitality in privacy and data protection over the past decade, becoming an indispensable part of the data-protection mechanism through technology and design, with technical measures such as encryption, anonymization and differential privacy playing an important role, so the same idea can be transplanted to AI: hence the EU's proposal of Ethics by Design (EbD). Going forward, we need to give this concept life through standards, technical guidelines and design norms, transforming ethical values and requirements into components of AI product and service design and embedding values into technology.
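
To make one of the technical measures mentioned above concrete, here is a minimal sketch of the Laplace mechanism for differential privacy applied to a counting query; the function name and parameters are our own illustrative choices:

```python
import math
import random

def dp_count(values, predicate, epsilon: float = 1.0) -> float:
    """Release a count under epsilon-differential privacy via the Laplace
    mechanism: a counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon masks any single individual's contribution."""
    true_count = sum(1 for v in values if predicate(v))
    u = random.random() - 0.5                     # uniform on (-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

ages = [34, 51, 29, 62, 45, 38]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy value near the true count of 3
```

A smaller epsilon adds more noise and gives stronger privacy; the design trade-off is between the accuracy of the released statistic and the protection of any one individual's data.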

(II) Adopt a multi-stakeholder collaborative governance model

Artificial intelligence is being integrated into the economy and society at extraordinary speed, and its high degree of specialization, technical depth and complexity make it difficult for outsiders to judge and understand its risks and uncertainties accurately. Therefore, on the one hand, regulators, decision-makers, academia, industry, public institutions, experts, practitioners and the general public should all take part in the governance of new technologies through multi-stakeholder collaboration, to avoid a disconnect between decision-makers and practitioners. On the other hand, education and outreach on science and technology ethics should raise the ethical awareness of researchers and the public, so that they consider not only narrow economic interests but also reflect on, and remain alert to, the potential impacts of technological development and application and how to prevent them, achieving good governance of frontier technologies through broad social participation and interdisciplinary research. Accordingly, the EU holds that the ethical governance of AI requires safeguards from different actors at different levels, with governments, industry, the public and other actors each establishing safeguards at their own level.

(III) Strengthen international cooperation in artificial intelligence ethics and governance

The development of AI is closely bound up with data flows, the data and digital economy and network security, and AI research and applications are cross-border with an international division of labor, so international cooperation and coordination on ethics and governance must be strengthened. For example, on May 22, 2019, OECD member states adopted the OECD Principles on Artificial Intelligence ("Principles for responsible stewardship of trustworthy AI"), comprising five principles: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability. On June 9, 2019, the G20 endorsed human-centered AI principles drawn mainly from the OECD principles. These are the first AI principles signed up to by governments and are expected to become an international standard. They aim, under a people-oriented development concept, to promote AI through practical, flexible standards and agile governance, and to jointly advance the sharing of AI knowledge and the building of trustworthy AI. The people-oriented concept of AI development is clearly gaining international consensus. Under its guidance, we need to deepen international dialogue and exchange, work toward a coordinated common framework for AI ethics and governance at the international level, promote the research, development and application of trustworthy, ethical AI, guard against the risks that AI's development and application may bring, and ensure that technology is used for good and that AI benefits humanity.
