Artificial Intelligence Ethics: Having Principles Does Not Mean Being Able To Govern
Source: China Science News | Text by China Science News reporter Hu Minqi

In the "Top Ten Global Artificial Intelligence Governance Events in 2021" released by the People's Think Tank and the Megvii Artificial Intelligence Governance Research Institute, data, algorithms, and ethics are the core keywords.

In the past year, the Data Security Law and the Personal Information Protection Law came into effect, and the "Internet Information Service Algorithm Recommendation Management Regulations" were reviewed and passed, providing safeguards for protecting privacy and preventing the misuse of algorithms.

However, such concrete progress is not yet the norm in the ethical governance of artificial intelligence.

In fact, a huge gap remains between macro-level ethical principles and their correct understanding and sound practice in actual technology research, development, and application.

How to translate ethical principles into concrete systems and actions is a key question that the current ethical governance of artificial intelligence must answer.

Artificial Intelligence Ethics Is Difficult to Enforce

In recent years, international organizations, governments, businesses, and academic groups have issued a variety of ethical principles, ethical guidelines, and codes of ethics in an attempt to promote, standardize, and restrict the research, development, and application of artificial intelligence technology.

"Most of the core contents of these ethical principles are similar, such as transparency, fairness, non-harm, responsibility, privacy, usefulness, autonomy, etc." explained Du Yanyong, a professor at the School of Marxism at Shanghai Jiao Tong University.

"The key lies in how to ensure the implementation of ethical principles." He said bluntly, "Ethical principles themselves cannot be implemented directly, and we do not have an effective way to translate principles into practice - principles cannot be integrated with every link of artificial intelligence technology from research and development to application, and there is also a lack of a strong enforcement mechanism. When developers' behavior violates ethical principles, there is almost no punishment and correction mechanism."

This is also the observation of Zeng Yi, director of the Artificial Intelligence Ethics and Governance Research Center at the Institute of Automation, Chinese Academy of Sciences, and a member of the National New Generation Artificial Intelligence Governance Professional Committee.

He told China Science News that at present, artificial intelligence research and development activities start from the technology itself and lack an ethical review mechanism and system that can meet the needs of practice. The situation at home and abroad is basically the same.

As a result, potential ethical issues attract the attention of the relevant parties only after users and the public raise them during application.

Regarding technical progress in artificial intelligence ethics research, Zeng Yi noted that privacy protection, by comparison, has moved toward implementation faster than other topics.

For example, there is now both research on and practical use of federated learning and differential privacy for privacy protection.

This is mainly because privacy protection attracts greater social attention and has a broader impact.

"But this does not mean that we have made absolute progress. The vast majority of society has not yet achieved a real breakthrough in its concerns about artificial intelligence privacy."

Zeng Yi gave an example: in terms of informed consent for user privacy, implementing the revocation of a user's authorization within an artificial intelligence model is a major scientific challenge, and at present most artificial intelligence models are almost incapable of it.
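To make the differential privacy mentioned above more concrete, here is a minimal illustrative sketch of the classic Laplace mechanism, one basic building block of that line of research. It is not drawn from any system discussed in the article; the function name, query, and parameter values are hypothetical, chosen only to show how noise scaled to a query's sensitivity and a privacy budget epsilon is added before a statistic is released.

```python
# Minimal illustrative sketch of the Laplace mechanism, a basic building
# block of differential privacy. Names and parameters are hypothetical.
import numpy as np

def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy version of a numeric query result.

    Noise drawn from Laplace(0, sensitivity / epsilon) makes the released
    value epsilon-differentially private: adding or removing any single
    person's record changes the output distribution by at most a factor
    of exp(epsilon).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: release a count of users who opted in. A counting query has
# sensitivity 1, since one person changes the count by at most 1.
true_count = 1234
noisy_count = laplace_release(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count = {true_count}, released count = {noisy_count:.1f}")
```

Smaller values of epsilon add more noise and give stronger privacy at the cost of accuracy, which is the kind of utility-versus-protection trade-off that such techniques make explicit.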

What Hinders Ethical Action

There are many factors that make it difficult to implement ethical principles for artificial intelligence.

On the one hand, Zeng Yi believes that the ethical awareness of managers, artificial intelligence scholars, and industrial innovators and users needs to be improved urgently.

"At present, the vast majority of the backbone of artificial intelligence technology research and development and industrial innovation have not received specialized artificial intelligence ethics education and training. A considerable number of artificial intelligence innovators and practitioners even believe that artificial intelligence ethics is not an issue that requires attention in scientific research and technology research and development. This makes it difficult for them to establish relatively complete prevention mechanisms to deal with possible ethical risks when engaging in artificial intelligence innovation related to people and society."

On the other hand, as Zhang Xueyi, deputy director and associate professor of the Department of Philosophy and Science at Southeast University, said, scholars engaged in philosophy, especially ethics research, know very little about specific technological developments and the principles behind them. They often start from general ethical principles and reflect on the ethical issues that may arise from artificial intelligence technology.

“Even for general ethical principles, when applied to specific empirical situations, there are conflicts and differences between the views of different theories and schools.”

Zhang Xueyi gave an example: when a self-driving car encounters the "trolley problem", the question is which ethical algorithm it should follow. Should the manufacturer preset a specific ethical algorithm for the car owner, or should the right to configure the algorithm be handed over to the owner?

Currently, Zhang Xueyi is conducting experimental philosophy research based on existing theories of driverless-car algorithms: converting these theories into concrete empirical scenarios, surveying the intuitions of the general public, and collecting empirical evidence on the issue, thereby confirming or revising the existing theories.

But he also admitted that he has not yet had in-depth interactions with scientists in autonomous driving-related fields.

"In China, cross-professional interactions in the field of artificial intelligence ethics research are relatively limited or remain superficial. Breaking through professional boundaries and horizons, and conducting interdisciplinary and multi-field collaborative innovation research on specific issues are still rare. There is a certain separation between scientists and humanities scholars." Zhang Xueyi said.

How to Move from Principles to Action

Zeng Yi believes that the first problem to be solved in the field of artificial intelligence is to draw on the experience of the life sciences and medicine, establish an artificial intelligence ethics review mechanism and institutional system, and apply appropriate oversight to research, development, and applications involving humans, animals, the environment and ecology, and other high-risk areas.

Du Yanyong also said that the top priority in artificial intelligence governance is to strengthen the "front-end" ethical governance of artificial intelligence technology through ethical principles and norms.

He proposed that, in order for researchers engaged in artificial intelligence research and development to assume forward-looking moral responsibility, a science and technology ethics review system should be implemented in high-level research projects at the national, provincial, and ministerial levels and become a regular working mechanism.

At the same time, to improve the effectiveness of ethical intervention, he also recommended that roughly 3% of the funding of high-level artificial intelligence research projects at the national, provincial, and ministerial levels be set aside to establish ethics sub-projects within those projects, dedicated to research on the collaborative governance of artificial intelligence ethics and led by science and technology ethics experts.

"The responsibility of ethicists is to explain the connotation of ethical principles in detail, to translate macro-ethical principles into specific issues and work processes that are easy to understand and operable to a certain extent, and to help artificial intelligence scientists conduct technology research and development based on more detailed and comprehensive knowledge." Du Yanyong emphasized.

"Ethicists should not only tell scientific researchers 'what should be done', but also assist researchers in solving 'how to do it and what to avoid'. This requires ethicists to pay more attention to the specific work and technical details of scientists and avoid talking to themselves."

In addition, Zeng Yi told China Science News that in order to implement artificial intelligence ethics from principles to actions, "multi-party governance" must be embedded in the entire life cycle of artificial intelligence products and services.

Scientific research institutions and enterprises engaged in artificial intelligence research and development should establish and improve self-discipline and self-governance mechanisms for artificial intelligence ethics, and put them into practice by actively setting up artificial intelligence ethics committees, appointing artificial intelligence ethics researchers, and drawing on artificial intelligence ethics services provided by third parties.

Industry and research institutions need to formulate standards under the guidance of the government, jointly publish best-practice reports, and actively open-source and share algorithms and tools for artificial intelligence ethics and governance, thereby promoting the technical implementation of artificial intelligence ethics.

Teaching and research institutions, societies and industry organizations should, on the one hand, actively participate in the formation, formulation and practice of relevant ethical norms, and on the other hand, actively promote education and training on artificial intelligence ethics.

The public and the media, as users of artificial intelligence products and services, should actively play a supervisory role and provide timely feedback on social needs and concerns to governments, industry and research institutions, and enterprises.

When governments, administrators, and policymakers at all levels formulate artificial intelligence ethics and governance policies, they should adapt them to the progress of artificial intelligence industry and research and to social feedback, and focus on promoting public services for artificial intelligence ethics and governance. For example, national ministries and commissions and national testing and evaluation centers should collaborate with industry and research institutions to provide ethics compliance testing and certification, supplemented by supervision.