Artificial Intelligence Ethical Governance Urgently Needs To Move Towards A Practical Stage
Core Reading

Cao Jianfeng
In today's society, digital industrialization and industrial digitalization continue to drive the deep integration of the digital and physical worlds, increasingly reshaping how people live and work. Artificial intelligence algorithms and data, as the core driving forces of this economic and social transformation, will set the agenda for years to come. There is no doubt that combining data with AI algorithms and embedding them in appropriate application scenarios will deliver significant economic and social benefits.
But attention must also be paid to the problems AI algorithms may raise in terms of fairness, security, privacy, transparency, accountability, employment, and so on. The output of an algorithmic system may discriminate against, treat unfairly, or exclude specific individuals or groups. For example, facial recognition algorithms have been accused of poor accuracy in identifying people of color, and advertising and recruitment algorithms have been accused of excluding female workers. Algorithmic content distribution may encourage the spread of false information and can also restrict users' free choice of information, creating an "information cocoon" effect. At the same time, algorithms may be abused or used maliciously, posing threats to individual rights, the public interest, and national security, such as excessive collection and analysis of facial information, big-data-enabled price discrimination against existing customers, and deepfakes.
In this context, the development and application of artificial intelligence urgently need ethical values to provide guidance and constraints, which has become a basic consensus in the AI field at home and abroad.
Three stages
Since 2016, with the rapid development and application of artificial intelligence technology, AI ethics has passed through roughly three stages of development and now urgently needs to move into a "practice" stage.
The first stage began in 2016 and can be called the stage of principle proliferation. Taking the premise that artificial intelligence must follow certain ethical principles as their starting point, actors ranging from national governments and international organizations to technology companies, industry bodies, and academic groups proposed or formulated AI ethical principles. Mainstream technology companies at home and abroad, such as Microsoft, Google, IBM, and Tencent, responded actively and put forward their own AI ethical principles. According to incomplete statistics, more than 100 such AI principle documents now exist. In short, at this stage all sectors actively advocated AI ethical principles, but the necessary consensus and concrete practices were lacking.
The second stage began in 2018 and can be called the consensus-seeking stage. Different countries and organizations had proposed many AI principles and ethical frameworks that varied from one another, with a certain degree of divergence and even conflict. But artificial intelligence and the digital economy are global, so people hoped to reach and formulate AI principles recognized worldwide. The AI principles of the OECD and the G20 are products of this stage.
The third stage began in 2019 and can be called the AI ethics practice stage. At this stage, the industry began to think about how to put AI principles into effect and to explore the mechanisms, practices, and tools that can translate them into practice. Technology companies such as Google, Microsoft, and IBM are actively promoting the implementation of AI ethics, making AI principles operational and truly integrating and embedding them into AI R&D processes and business applications. In short, AI ethics practice should be the core direction of current and future work on AI ethics, because merely advocating AI ethics is not enough to truly implement it. In the next stage, the focus needs to be on exploring how to "translate" or "transform" abstract AI principles into concrete governance practices.
Five paths
Moving from principles to practice is the direction in which AI ethics is developing. China has already proposed AI ethics principles and frameworks, such as the "Governance Principles for the New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence," and some technology companies have proposed similar initiatives. On this basis, it is necessary to further explore implementation plans for AI ethics and to rely more on the practice of ethical governance to promote the development and application of responsible, safe, and trustworthy AI. Drawing on relevant explorations and research at home and abroad, AI ethics governance has mainly the following five practical paths.
The first path is the ethics committee. If artificial intelligence is the cornerstone of the future intelligent society, then ethics is the guarantee that keeps this cornerstone stable. Given the varied impacts of different AI systems, as well as legislative lag and gaps in the law, technology companies need to actively fulfill their ethical responsibilities beyond the minimum requirement of legal compliance. The ethics committee is the most basic mechanism through which technology companies fulfill their ethical responsibilities for AI. Establishing an ethics committee to conduct the necessary ethical review of AI-related businesses and applications has become a "must-have" in the technology industry; examples include Microsoft's AETHER committee, Google's AI principles review team, and IBM's AI ethics committee. The main responsibility of an ethics committee is to formulate internal standards and processes for AI ethics and, on that basis, to conduct ethical review of AI-related businesses in order to identify, prevent, and mitigate risks to security, fairness, privacy, and so on in AI applications. In practice, an ethics committee requires diverse participation, that is, collaboration among people from different professional fields such as technology, law, and ethics. The committee accumulates cases, standards, procedures, tools, and resources, serving as a repository of institutional knowledge and acting as a governance body. In addition, an ethics committee can respond faster than government legislation and can promptly keep pace with rapid technological innovation and application.
The second path is frameworks for ethical practice. Beyond ethical review of AI business, foreign technology companies are also implementing practical frameworks for AI ethics to address the problems of discrimination, opacity, inexplicability, and privacy raised by artificial intelligence. In terms of algorithm transparency, for example, Google launched a "model card" mechanism for AI, and IBM launched an "AI fact sheet" mechanism. Similar to a product manual or the nutrition label on food, these explain an AI model's details, intended uses, influencing factors, metrics, training data, evaluation data, ethical considerations, warnings, and recommendations so that people can better understand the model. In terms of privacy protection, the federated learning framework can promote data utilization while protecting personal privacy well. In short, federated learning means that in the machine learning process, each participant can build a joint model using the other parties' data without any party having to share its own data resources. With the help of federated learning, problems such as insufficient or incomplete data can be solved while personal privacy and data security are protected. Federated learning has broad application prospects in auto insurance pricing, credit risk control, sales forecasting, visual security, assisted diagnosis, privacy-protecting advertising, autonomous driving, and more. In addition, mechanisms such as AI ethics checklists and AI fairness checklists are increasingly valued by technology companies and will play an ever more important role in ensuring that AI meets ethical requirements.
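To make the idea of "joint modeling without sharing data" concrete, the following is a minimal, hypothetical sketch of federated averaging, the simplest flavor of federated learning: each party runs a few gradient steps on its own private data, and only the resulting model weights, never the raw data, are pooled and averaged. The data, parameters, and function names here are illustrative assumptions, not any company's actual framework.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local gradient steps on its private data (never shared)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def federated_average(w, client_data, rounds=10):
    """Each round, clients train locally; only the weights are averaged."""
    for _ in range(rounds):
        local_ws = [local_update(w, X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        w = np.average(local_ws, axis=0, weights=sizes)  # size-weighted mean
    return w

# Two hypothetical parties, each holding private samples from y = 2x + noise
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    y = X @ np.array([2.0]) + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = federated_average(np.zeros(1), clients)
print(round(float(w[0]), 1))  # jointly learned coefficient, close to 2.0
```

Real deployments add secure aggregation and encrypted communication on top of this averaging step, so the server never even sees an individual client's weights in the clear.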
The third path is ethical tools. Ethical tools focus on finding technical solutions to issues such as transparency, explainability, fairness, security, and privacy protection, as well as the detection of deepfakes, including forged faces. From open-source development to commercial services, large technology companies and AI ethics startups are filling a missing link in the AI field and providing a new approach to implementing AI ethics. At present, Google, Microsoft, IBM, and some AI ethics startups are actively developing a diverse range of AI ethics tools and integrating them into cloud services to provide "ethics as a service" (EaaS), empowering customers and industries. EaaS is expected to become a standard feature of cloud services and cloud AI in the future.
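As an illustration of the kind of check such a tool performs, here is a minimal sketch of one widely used fairness metric, the demographic parity difference: the gap in positive-outcome rates between two groups. The predictions, group labels, and function below are hypothetical and are not taken from any vendor's toolkit.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-outcome rates between groups 'a' and 'b'."""
    rate = {}
    for g in ("a", "b"):
        picks = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(picks) / len(picks)
    return abs(rate["a"] - rate["b"])

# Hypothetical loan-approval outputs (1 = approved) for two applicant groups
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(gap)  # 0.5: group "a" approved 75% of the time, group "b" only 25%
```

A fairness checklist or EaaS pipeline would compute metrics like this automatically over a model's outputs and flag gaps that exceed a chosen threshold for human review.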
The fourth path is standards and certification. Just as with privacy protection today, AI ethics can also be promoted through standards and certification: AI products and services that meet AI ethics standards can apply for the corresponding certification. International standardization organizations such as IEEE and ISO are actively organizing the formulation of relevant standards and attempting to launch certification programs. In the future, China also needs to invest actively in this area, secure a leading position in standard-setting, and use standards and certification to encourage and promote the development and application of trustworthy and responsible AI.
The fifth path is ethics training. Technical R&D personnel stand on the front line of AI business and bear first-line responsibility for the technology. Their ethical awareness needs to be cultivated so that they actively practice ethical requirements in actual AI business and embed those requirements throughout product development, design, and operation. The government and enterprises should therefore strengthen ethics training for technical personnel, while universities should strengthen education and training systems for AI ethics.
Three-pronged approach
In AI governance, ethical governance, legal governance, and technical governance each have their own scope of action, and none should be neglected. For now, however, AI governance needs to rely more on ethical governance: given the complexity and continuous iteration of AI algorithms, and at a time when it would be inappropriate to rush into mandatory legislation, ethical governance is undoubtedly the most suitable and effective way to deal with the problems raised by the application of AI algorithms.
AI ethics is certainly important and is a key path to achieving responsible, safe, and trustworthy AI. However, ethics and legislation must not be emphasized one-sidedly, lest they impede the innovative application of AI technology and the maximization of its economic and social benefits.
For example, overemphasizing personal information and privacy protection may leave AI applications without enough usable data, hindering those applications and their value to individuals and society; overemphasizing algorithm transparency may reduce the accuracy or efficiency of algorithms and thus hinder large-scale application; and one-sidedly insisting that humans make the final decision and retain control in all scenarios may prevent AI from being fully used to solve the bias, discrimination, and other problems associated with human decision makers in existing decision-making processes. Moreover, many high-risk AI applications are often also high-value AI applications. Therefore, on the basis of weighing different interests, ethics and development must be balanced to achieve responsible technological innovation and application.
(The author is a special researcher at the Digital Law Institute of East China University of Political Science and Law)