Artificial Intelligence Ethics: Principles Do Not Mean Governance
Text | Hu Minqi, reporter, China Science Daily
In the "Top Ten Global Artificial Intelligence Governance Events of 2021" released by the People's Think Tank and the Megvii Artificial Intelligence Governance Research Institute, data, algorithms, and ethics were the core keywords.
Over the past year, the Data Security Law and the Personal Information Protection Law came into effect one after another, and the Internet Information Service Algorithmic Recommendation Management Provisions were reviewed and passed, providing a safeguard mechanism for protecting privacy and preventing the misuse of algorithms.
However, such implementation is not yet the norm in the ethical governance of artificial intelligence.
In fact, there remains a huge gap between macro-level ethical principles and their correct understanding and reasonable practice in actual technology research, development, and application.
How to translate ethical principles into specific systems and actions is a key question that the ethical governance of artificial intelligence must now answer.
Artificial intelligence ethics is difficult to implement
In recent years, international organizations, governments, enterprises, and academic groups have issued a variety of ethical principles, guidelines, and codes, attempting to promote, regulate, and constrain the research, development, and application of artificial intelligence technology.
"The core content of these ethical principles is largely convergent, covering transparency, fairness, non-maleficence, responsibility, privacy, beneficence, autonomy, and so on," explained Du Yanyong, a professor at the School of Marxism at Shanghai Jiao Tong University.
"The key is how to ensure that ethical principles are implemented," he said bluntly. "Ethical principles themselves cannot be implemented directly, and we lack an effective way to translate principles into practice: principles cannot yet be integrated into every link of artificial intelligence technology from research and development to application, and there is also a lack of a strong enforcement mechanism. When a developer's behavior violates ethical principles, there is almost no mechanism for punishment and correction."
Zeng Yi, director of the Center for Artificial Intelligence Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, and a member of the National New Generation Artificial Intelligence Governance Expert Committee, shares this view.
He told China Science Daily that current artificial intelligence research and development starts from the technology itself and lacks an ethical review mechanism and system suited to practice; the situation at home and abroad is basically the same.
As a result, potential ethical issues attract the attention of relevant parties only after users and the public raise them in real applications.
On the technical side of artificial intelligence ethics research, Zeng Yi noted that, by comparison, privacy protection research has been implemented faster than other topics.
For example, techniques such as federated learning and differential privacy have been proposed for privacy protection, with both research and practice underway.
This is mainly because privacy protection receives higher social attention and has wider impact.
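To illustrate how one of these techniques turns a privacy principle into working code, here is a minimal sketch of the Laplace mechanism from differential privacy. All names and data below are hypothetical, and production systems use audited libraries rather than hand-rolled noise; this only shows the core idea of masking any single individual's contribution to a query.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    u = max(min(u, 0.4999999), -0.4999999)  # keep log() away from 0
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one person changes
    the result by at most 1), so Laplace noise with scale 1/epsilon
    hides whether any particular individual is in the data.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: (user_id, opted_in) pairs
records = [(i, i % 3 == 0) for i in range(1000)]
noisy = private_count(records, lambda r: r[1], epsilon=0.5)
```

With epsilon = 0.5 the reported count deviates from the true value (334 here) by a few units on average; a smaller epsilon means stronger privacy but noisier answers, which is the trade-off these methods manage.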
"But that does not mean we have made decisive progress. For the vast majority of society's concerns about AI privacy, there has been no real breakthrough."
Zeng Yi gave an example: in terms of informed consent for user privacy, enabling a user to revoke authorization over an already-trained artificial intelligence model is a great scientific challenge. At present, many artificial intelligence models are almost incapable of implementing such a technique.
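The revocation challenge Zeng Yi describes can be made concrete with a toy sketch. In the entirely hypothetical code below, the only straightforwardly correct way to "forget" a user is to drop their data and retrain from scratch; for a modern large model that full retraining is exactly the cost that becomes prohibitive, which is why efficient machine unlearning remains an open research problem.

```python
from collections import defaultdict

def train(examples):
    """Fit a toy per-class-average model from (user_id, label, feature) rows."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for _user, label, x in examples:
        sums[label] += x
        counts[label] += 1
    return {label: sums[label] / counts[label] for label in sums}

def forget_user(examples, user_id):
    """Naive 'unlearning': discard the user's rows and retrain from scratch.

    Correct but expensive -- for a real model this means repeating the
    entire training run, which is why revocation is hard in practice.
    """
    remaining = [row for row in examples if row[0] != user_id]
    return remaining, train(remaining)

# Hypothetical training data contributed by three users
examples = [
    ("alice", "spam", 0.9),
    ("alice", "spam", 0.8),
    ("bob",   "spam", 0.2),
    ("bob",   "ham",  0.1),
    ("carol", "ham",  0.3),
]
model_before = train(examples)
_, model_after = forget_user(examples, "bob")
```

After "bob" revokes consent, the class averages shift (spam: about 0.63 to 0.85), showing his influence is genuinely removed, but only because we paid for a complete retrain.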
What hinders ethical action
There are many factors that make it difficult to implement the ethical principles of artificial intelligence.
On the one hand, Zeng Yi believes that from administrators to artificial intelligence scholars to industrial innovators and practitioners, ethical awareness urgently needs to be improved.
"At present, the vast majority of the backbone of artificial intelligence research, development, and industrial innovation have not undergone specialized education or training in artificial intelligence ethics. A considerable number of artificial intelligence innovators and practitioners even believe that artificial intelligence ethics is not a problem that scientific research and technological development need to pay attention to. This makes it difficult for them to establish relatively complete prevention mechanisms to deal with possible ethical risks when engaging in artificial intelligence innovation that involves humans and society."
On the other hand, as Zhang Xueyi, deputy head and associate professor of the Department of Philosophy and Science at Southeast University, said, scholars engaged in philosophy, and especially ethics, know little about the development of specific technologies and the principles behind them. When they reflect on the ethical problems artificial intelligence technology may raise by starting from general ethical principles, the effect is often merely "scratching an itch through the boot", which seldom hits the key points.
"Even general ethical principles, when applied to specific empirical situations, run into conflicts and differences among the views of different theories and schools."
Zhang Xueyi said that when a driverless car encounters the "trolley problem", the question of which algorithmic ethics to choose arises: should the manufacturer preset a specific moral algorithm for the car owner, or should the right to set the algorithm be handed over to the owner?
At present, Zhang Xueyi is conducting experimental philosophy research based on existing driverless-driving algorithm theories, that is, applying these theories to actual empirical situations, surveying the intuitions of the general public, and collecting empirical evidence on the issue, in order to verify or revise existing theories.
But he also admitted that he has not had in-depth interactions with scientists in the field of autonomous driving.
"In China, interprofessional interaction in artificial intelligence ethics research remains limited, or stays at a shallow level. Research that breaks through disciplinary boundaries and perspectives to carry out interdisciplinary, multi-field collaborative innovation on specific issues is still rare, and a certain degree of separation persists between scientists and humanities scholars," Zhang Xueyi said.
How to move from principles to action
Zeng Yi believes that the first problem to be solved in the field of artificial intelligence is to draw on the experience of the life sciences and medicine, establish an artificial intelligence ethics review system and institutions, and exercise appropriate supervision over research, development, and applications that involve humans, animals, environmental ecology, and other high-risk areas.
Du Yanyong also noted that the top priority of artificial intelligence governance is to strengthen "front-end" ethical governance of artificial intelligence technology through ethical principles and norms.
He proposed that, to enable researchers engaged in artificial intelligence research and development to assume forward-looking moral responsibility, a science and technology ethics review system should be implemented for high-level research projects at the national, provincial, and ministerial levels, making it a normalized working mechanism.
At the same time, to improve the effectiveness of ethical intervention, he also suggested setting aside about 3% of the funding of high-level artificial intelligence research projects at the national, provincial, and ministerial levels to establish ethics sub-projects within related projects, dedicated to collaborative governance research on artificial intelligence ethics, with science and technology ethics experts as the principal investigators.
"The responsibility of ethicists is to elaborate the connotation of ethical principles and translate macro-level ethical principles into specific issues and workflows that are easy to understand and operate, helping artificial intelligence scientists carry out technology research and development based on more meticulous and comprehensive knowledge," Du Yanyong emphasized.
"Ethicists should not only tell researchers what to do, but also assist R&D personnel in solving how to do it and what to avoid. This requires ethicists to pay more attention to scientists' specific work and technical details, and to avoid talking only to themselves."
In addition, Zeng Yi told China Science Daily that to move artificial intelligence ethics from principles to action, "multi-party governance" must be embedded in the entire life cycle of artificial intelligence products and services.
Scientific research institutions and enterprises engaged in artificial intelligence research and development should establish and improve mechanisms of self-discipline and self-governance in artificial intelligence ethics, and put them into practice, by actively setting up artificial intelligence ethics committees and ethics researcher positions and by drawing on artificial intelligence ethics services provided by third parties.
Under government guidance, industry and research institutions need to formulate standards, jointly publish best-practice reports, and actively open-source artificial intelligence ethics and governance algorithms and tools, thereby promoting the technical implementation of artificial intelligence ethics.
On the one hand, teaching and research institutions, societies, and industry organizations should actively participate in the formation, formulation, and practice of relevant ethical norms; on the other hand, they should actively promote education and training in artificial intelligence ethics.
As users of artificial intelligence products and services, the public and the media should actively play the role of supervisors and provide timely feedback on social needs and concerns to governments, research institutions, and enterprises.
When formulating artificial intelligence ethics and governance policies, governments and administrators at all levels should adapt to the progress of artificial intelligence research and industry and to social feedback, and take promoting public services for artificial intelligence ethics and governance as a starting point; for example, national-level testing and evaluation centers should work with industry and research institutions to provide ethics compliance testing and certification, with supervision as a supplement.