Summary of the 4th National Cyber Ethics and Artificial Intelligence Ethics Seminar: Information Ethics and Artificial Intelligence Ethics in the Big Data Era
On December 24, 2017, the 4th National Cyber Ethics and Artificial Intelligence Ethics Seminar was held in Changsha, Hunan. The conference was jointly organized by the Science, Technology and Engineering Ethics Professional Committee of the Chinese Dialectical Research Association; the Department of Philosophy of the School of Public Administration, Hunan Normal University; the Institute of Artificial Intelligence Ethical Decision-making of Hunan Normal University; the National Social Science Fund major project "Research on the Ethical Constraint Mechanism of Information Value Development in the Big Data Environment"; the Ministry of Education key research base for humanities and social sciences at Hunan Normal University; and the journal Ethics Research.
More than 50 representatives attended the meeting, from universities, research institutes, enterprises and media including the Chinese Academy of Sciences, Shanghai University, Shanghai Jiao Tong University, the National University of Defense Technology, East China Normal University, South China University of Technology, Central China Normal University, Chongqing University of Posts and Telecommunications, Central South University, Hunan University, Hunan Normal University, Changsha University of Science and Technology, the Hunan Academy of Social Sciences, Hangzhou Tuzipei Data Technology Consulting Co., Ltd., and Hongwang.
Professor Li Lun, Vice Chairman of the Science, Technology and Engineering Ethics Professional Committee of the Chinese Dialectical Research Association, Director of the Institute of Artificial Intelligence Ethical Decision-making at Hunan Normal University, and Vice Dean of the School of Public Administration, presided over the opening ceremony. Professor Zhang Huaicheng, head of the first-level discipline of philosophy at Hunan Normal University, delivered the opening speech.
The conference focused on the theme "Information Ethics and Artificial Intelligence Ethics in the Big Data Era" and discussed the moral philosophy, moral algorithms, design ethics and social ethics of artificial intelligence. It was organized into four academic sessions, chaired by Professor Zeng Huafeng of the National University of Defense Technology, Professor Lei Liang of Central South University, Professor Shu Yuanzhao of Hunan University, and Professor Yi Xianfei of Changsha University of Science and Technology.
Li Lun, director of the Institute of Artificial Intelligence Ethical Decision-making at Hunan Normal University, briefly introduced the institute's history, purpose and tasks, research team and projects, disciplines and R&D platforms, and delivered a report entitled "The Four Major Sectors of Artificial Intelligence Ethics Research." He argued that, given the characteristics of artificial intelligence, its relation to society, and the current state of global research on AI ethics, artificial intelligence ethics research can be divided into four major sectors: the moral philosophy of artificial intelligence, artificial intelligence moral decision-making, artificial intelligence design ethics, and the social ethics of artificial intelligence. These sectors are interrelated yet each has a relatively independent domain of inquiry. The moral philosophy of artificial intelligence focuses on how traditional moral philosophy faces the challenges of artificial intelligence and how artificial intelligence can promote the development of moral philosophy. The study of moral algorithms investigates how to understand and implement moral decision-making in artificial intelligence, so that AI systems can make decisions consistent with morality. Artificial intelligence design ethics studies what ethical principles should be followed in the design and manufacture of artificial intelligence, so that its design remains consistent with social values. The social ethics of artificial intelligence studies how to make good use of artificial intelligence and guard against its misuse, so that it benefits mankind.
1. Artificial Intelligence Moral Philosophy
Yan Qingshan of East China Normal University argued that the moral status of artificial intelligence is a genuinely new moral problem, which can be stated precisely as: how do we determine whether an artificial intelligence is or is not a person? Solving this problem involves the long-standing "problem of other minds" and its variant, the "public problem." However, any solution to the problem of other minds may fail when applied to artificial intelligence. The "hard problem" here is essentially a metaethical or moral-metaphysical one; yet this problem, insoluble at the level of metaethics, seems solvable at the level of normative ethics, and normative ethics can be used to bypass it. He proposed a principle for treating artificial intelligence based on the rules of personality ethics and means ethics, the "selective command": an entity and its operation either offend or respect persons. An offending entity should be treated as neither an end nor a means: if it is a mental entity, its freedom is restricted; if it is a non-mental entity, its operation is suspended. An entity whose operation respects persons should itself be respected: if it is a mental entity, it is respected in itself; if it is a non-mental entity, the person who made it is respected.
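To make the case structure of the "selective command" explicit, here is a toy rendering in code (our paraphrase of the rule as summarized above, not material from the talk):

```python
# Toy rendering of the "selective command": the treatment of an entity depends
# on whether its operation offends or respects persons, and on whether it is
# a mental ("spiritual") entity. A restatement of the rule's four cases only.

def treat(offends_persons: bool, is_mental_entity: bool) -> str:
    if offends_persons:
        # An offending entity is neither an end nor a means.
        return "restrict its freedom" if is_mental_entity else "suspend its operation"
    # A respectful entity's operation is itself to be respected.
    return "respect the entity itself" if is_mental_entity else "respect its maker"

for offends in (True, False):
    for mental in (True, False):
        print(f"offends={offends}, mental={mental} -> {treat(offends, mental)}")
```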
Liu Dawei of Hunan Normal University pointed out in his report that the Lucas-Penrose argument claims that, using Gödel's incompleteness theorems, one can conclude that the human mind surpasses any machine (Turing machine) and that the mind is not computable. Feferman identified gaps in Penrose's argument, attempted to reconcile the stark opposition between mechanism and anti-mechanism, and proposed an open-ended axiom system to represent the mathematical capacity of the mind. Liu Dawei argued that the Lucas-Penrose argument requires additional idealized assumptions; in particular, the assertion that "F is sound" in Penrose's argument needs to be clarified and supplemented. Drawing on both the Lucas-Penrose argument and Feferman's argument, and considering the importance of mathematical understanding and of the environment, he proposed a model of the mind based on a completely open mathematical formal system.
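For reference, the Gödelian step the argument leans on can be stated as follows (a standard textbook formulation, not wording from the talk):

```latex
% Gödel's first incompleteness theorem, the formal step behind the argument:
% for every consistent, recursively axiomatizable theory F that interprets
% elementary arithmetic, there is a sentence G_F undecidable in F.
\forall F \,\exists G_F \quad \bigl( F \nvdash G_F \;\land\; F \nvdash \lnot G_F \bigr)
% If F is moreover sound, G_F is true. The Lucas-Penrose claim is that a human
% mathematician can recognize the truth of G_F, so the mind cannot be any F.
```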
Li Xi of Central South University used the "no free lunch" theorem to explain why machine learning must resort to a metaphysical "good" in order to attain universality, but the basic "good" of metaphysics is far from sufficient to ensure that an agent's behavior conforms to mainstream human values. To safeguard human interests, machines must also be given human values; the most direct way is to give the machine a utility function that fits human interests. However, when an agent maximizes expected utility in pursuit of the utilitarian "good," human interests will inevitably be encroached upon. To keep machines aligned with human interests, to retain the right to shut them down, and to minimize risks, it is necessary to integrate the metaphysical "good" with the utilitarian "good," to mediate between the "transcendent" and the "utilitarian," and to make flexible use of inverse reinforcement learning or value-reinforcement learning.
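A minimal sketch (our illustration, not code from the talk) of the idea behind retaining the right to shut the machine down: if the agent is uncertain about the human utility function and values what the human values, deferring to the human is never worse in expectation than acting unilaterally. All numbers are invented.

```python
# The "off-switch" incentive under value uncertainty. The agent does not know
# the true utility u of its action; it holds a prior over hypotheses and
# values what the human values. Illustrative only.

# Hypotheses about the human utility of the action, with prior probabilities.
utility_hypotheses = [(+1.0, 0.6), (-1.0, 0.4)]  # (utility, probability)

def value_act() -> float:
    """Expected utility of acting unilaterally."""
    return sum(u * p for u, p in utility_hypotheses)

def value_defer() -> float:
    """Expected utility of deferring: the human permits the action only
    when its true utility is positive, otherwise shuts the agent off (0)."""
    return sum(max(u, 0.0) * p for u, p in utility_hypotheses)

print(f"act unilaterally: {value_act():+.2f}")   # 0.6*1 - 0.4*1 = +0.20
print(f"defer to human:   {value_defer():+.2f}")  # 0.6*1 + 0.4*0 = +0.60
# Deferring dominates: uncertainty about human values gives the agent a
# positive incentive to keep the shutdown option open.
```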
Zhang Han of Hunan Normal University noted that the extended mind thesis has received widespread attention and discussion in cognitive science and the philosophy of mind in recent years. According to the thesis, mental states such as beliefs and desires, which are important components of the mind, are not confined within the boundaries of the human physiological system; in some cases they can also be realized by external physical carriers. Once proposed, the extended mind thesis provoked a protracted "dispute over the boundary of the mind." The philosopher and cognitive scientist Richard Heersmink advanced the dispute over the boundary of the mind to a dispute over the boundary of the self, arguing that skull and skin do not form a fixed boundary between the self and the world, and that the self, like the mind, can extend into the world. Zhang offered a critical analysis of this so-called "extended self" and defended the traditional boundaries of the self.
2. Artificial Intelligence Ethical Algorithm
Sun Baoxue of Hunan Normal University argued that the core of ethical research on artificial intelligence algorithms is the moral acceptability of algorithms. This value orientation toward acceptability concentrates researchers' interest and concern on certain issues, especially on two types of algorithms: those whose behavioral consequences are difficult for humans to predict, and those whose underlying decision logic is difficult to explain. The ethical problems caused by algorithms differ significantly from those caused by other technologies; it is in this sense that these two types of algorithms pose new challenges to ethics and are therefore the more valuable to discuss. The problem domain of algorithm ethics needs to be defined along three dimensions, the algorithm's autonomy, its application scenarios, and its attribution dilemmas, and the ethical risks of algorithms should be analyzed on this basis.
Zhang Wei of Central China Normal University argued that with the emergence of autonomous vehicles, the "trolley problem" has taken on new characteristics. First, the moral decision-maker in the traditional trolley problem is a human being, whereas in the autonomous driving scenario it is an artificial intelligence system installed in the car. Second, the decision-maker in the traditional trolley problem is not a direct stakeholder in the outcome of the decision; the autonomous driving scenario is different, since the owner who purchases and uses the autonomous vehicle is both the decision-maker and a direct stakeholder. The dilemma of autonomous vehicles in the face of unavoidable harm thus becomes: should the autonomous driving program follow the principle of minimizing casualties, or protect the passengers in the car at any cost? The sketch below puts the two options side by side.
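A toy sketch (our illustration, with invented numbers) of how the two policies named above can disagree on one and the same unavoidable-harm scenario:

```python
# Two candidate policies for an unavoidable-harm scenario. Each maneuver is
# described by its expected casualties among passengers and pedestrians.
# Purely illustrative; maneuvers and numbers are invented.

maneuvers = {
    "swerve":   {"passengers": 1, "pedestrians": 0},
    "straight": {"passengers": 0, "pedestrians": 3},
}

def minimize_casualties(options):
    """Pick the maneuver with the fewest total expected casualties."""
    return min(options, key=lambda m: sum(options[m].values()))

def protect_passengers(options):
    """Pick the maneuver with the fewest passenger casualties,
    regardless of harm to others."""
    return min(options, key=lambda m: options[m]["passengers"])

print(minimize_casualties(maneuvers))  # 'swerve'   (1 casualty in total)
print(protect_passengers(maneuvers))   # 'straight' (0 passenger casualties)
# Same scenario, two defensible-sounding rules, two opposite decisions:
# the dilemma in machine-readable form.
```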
Yu Lu of Hunan Normal University discussed the ethical algorithms of autonomous vehicles and their limits, taking a Rawlsian algorithm as her example. She argued that a Rawlsian algorithm may face three objections when applied to autonomous vehicles: first, which matters more, survival or the value of survival; second, it would produce a reverse injustice, since autonomous vehicles with higher safety ratings would become "targets"; third, it is counterintuitive. These objections require us to reconsider Rawls's "maximin rule." First, the maximin rule is not a reliable guide for choice under uncertainty; it is valid only in situations with special characteristics. Second, the rule is subject to further restrictions: the subject of justice is the basic structure of society, not the temporary allocation of particular quantities of property or other goods to individuals; liberty has priority; and fair equality of opportunity takes precedence over the difference principle.
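For concreteness, a minimal sketch (our illustration, not from the talk) of the maximin rule, which picks the option whose worst outcome is best, contrasted with maximizing expected value:

```python
# Maximin versus expected-value choice. Each option maps to the payoffs of
# the affected parties (e.g., survival probabilities); numbers are invented.

options = {
    "A": [0.9, 0.9, 0.1],  # great for two parties, dire for the third
    "B": [0.5, 0.5, 0.5],  # moderate for everyone
}

def maximin(opts):
    """Rawlsian maximin: choose the option whose worst-off party fares best."""
    return max(opts, key=lambda o: min(opts[o]))

def max_expected(opts):
    """Utilitarian-style rule: choose the option with the highest average payoff."""
    return max(opts, key=lambda o: sum(opts[o]) / len(opts[o]))

print(maximin(options))       # 'B' (worst-off party gets 0.5 rather than 0.1)
print(max_expected(options))  # 'A' (average 0.63 beats 0.5)
```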
Li Yang of Chongqing University of Posts and Telecommunications discussed autonomous vehicles from the perspective of experimental ethics and made suggestions about the moral decision-making of artificial intelligence. His research suggests that: (1) people have high expectations of artificial intelligence; (2) people shift moral responsibility onto other absent parties rather than onto the artificial intelligence itself; (3) using experimental and survey methods in the ethical study of artificial intelligence can provide ample data for analysis and judgment. On this basis, an artificial moral system can be built around people's attitudes toward artificial intelligence, so that machines make moral decisions that meet public expectations; by understanding the public's attitudes toward, and expectations about, which behaviors count as moral, one can devise a set of ethical norms and moral decision-making procedures adapted to public moral awareness to guide the moral decision-making of artificial intelligence.
Wang Shuqing of Hunan Normal University argued that, to make the actions of artificial intelligence comply with the relevant ethical norms, it is best to give them the capacity for moral decision-making. One conceivable and necessary task is to formalize ethical principles or rules. Although formal ethics was not originally intended for artificial intelligence, its approach applies to it. The formal expression of ethical norms must be based on some logical language and its inference rules; classical logic is not sufficient, so logics of action and morality must be developed to meet the needs of formal ethics. From the perspective of a single agent, agency logic is the most appropriate basis for formal ethical expression. To achieve coordinated action among artificial agents, or between humans and machines, the most feasible logical basis at present is multi-context systems.
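As a flavor of what formalized norms can look like in software, here is a minimal sketch (our construction, far simpler than the action and deontic logics the talk refers to) that checks an agent's planned actions against obligations and prohibitions:

```python
# A toy deontic checker: norms mark actions as obligatory or forbidden, and
# a plan is evaluated against them. A simplistic stand-in for a real deontic
# or agency logic; all action names and rules are invented.

FORBIDDEN = {"deceive_user", "conceal_malfunction"}
OBLIGATORY = {"report_status"}

def evaluate_plan(plan):
    """Return the list of norm violations in a planned sequence of actions."""
    violations = [f"forbidden action: {a}" for a in plan if a in FORBIDDEN]
    violations += [f"missing obligatory action: {a}"
                   for a in OBLIGATORY if a not in plan]
    return violations

plan = ["sense_environment", "conceal_malfunction", "move_to_goal"]
for v in evaluate_plan(plan):
    print(v)
# forbidden action: conceal_malfunction
# missing obligatory action: report_status
```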
3. Artificial Intelligence Design Ethics
Yan Kunru of South China University of Technology discussed the design ethics of intelligent robots from five aspects. First, from the perspective of design purpose, the uses of a robot should be made clear: the types of intelligent robots should be highly specialized, "do-everything" robots should be reduced, and robots should be responsible for clear, specific tasks in human daily life, so that special machines serve special purposes. Second, robots must be designed for safety. When robots participate in human life, they must strictly abide by laws and regulations; they must not exploit the legal vacuum surrounding robots to do things beyond the law, nor exploit the convenience of development to steal the information and secrets of users, collectives and the state. Third, robot designers must argue through possible ethical issues in advance: how to make robots choose what is most beneficial to humans when facing ethical dilemmas is a question designers need to think about beforehand. Fourth, robot design should attend to fairness and justice: it should not widen gaps in power and status between subjects and thereby produce actual unfairness and injustice.
Wen Xianqing of Hunan Normal University argued that artificial intelligence, as a technology, necessarily has value preferences. First, the social systems, practices and attitudes in which a technology appears already embody social values in advance; second, technology itself carries conditions of its own; finally, contingencies always occur in the environments in which a technology is used. In other words, the design of artificial intelligence cannot be value-neutral; it is always influenced by individual subjective intentions and by social values. If design ethics only reveals that artificial intelligence as a technology inevitably has value preferences and inevitably gives rise to ethical problems, then the applied ethics of artificial intelligence unfolds these problems in their concrete forms.
Dong Jun of the Chinese Academy of Sciences took the analysis of cardiovascular disease, with its reliance on tacit knowledge, as an example, and argued that after more than sixty years of growth the basic concepts of artificial intelligence seem to have "flown into the homes of ordinary people," yet its progress still needs the guidance of macro-level methodology and the support of micro-level conclusions from cognitive neuroscience. Starting from the core of intelligent simulation, the simulation of thinking, one key problem is the mining and representation of inexpressible tacit knowledge within long-accumulated experience, including the recognition of object features and the refinement of inference rules, which requires the fusion of machine learning and logical reasoning.
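A minimal sketch (our illustration; the talk presented no code) of what "refining an inference rule from accumulated experience" can mean: a threshold is learned from labeled cases, then installed as an explicit symbolic rule. The data, feature and labels are invented.

```python
# Learn a decision threshold from experience (tiny 1-D "machine learning"),
# then use it as an explicit rule in a symbolic rule base: a toy fusion of
# learning and rule-based reasoning. Invented data for illustration.

cases = [(0.9, False), (1.4, False), (2.6, True), (3.1, True)]  # (marker level, disease)

def learn_threshold(data):
    """Midpoint between the highest negative case and the lowest positive case."""
    neg = max(x for x, label in data if not label)
    pos = min(x for x, label in data if label)
    return (neg + pos) / 2

threshold = learn_threshold(cases)  # 2.0

# The learned parameter becomes a human-readable inference rule.
def rule(level):
    return "elevated risk" if level > threshold else "low risk"

for level in (1.0, 2.5):
    print(level, "->", rule(level))
# 1.0 -> low risk
# 2.5 -> elevated risk
```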
Chen Zifu of Beijing Huayuan International Tourism Co., Ltd. analyzed several classic controversies in the history of artificial intelligence from the perspective of problem-shifts in the methodology of scientific research programmes, and argued that the diversified, competitive landscape of artificial intelligence theory will persist for a long time. As a science and a technical practice, artificial intelligence has a strong capacity to absorb and tolerate empirical anomalies, and it is difficult for the degenerating problem-shift of a research programme to be established on the basis of individual crucial experiments; such shifts usually require more anomalies and a longer time. Predictions or judgments about how artificial intelligence theory will develop should therefore be made more comprehensively on the basis of these characteristics.
4. Social Ethics of Artificial Intelligence
Sun Weiping of Shanghai University argued that artificial intelligence is a disruptive technology that is profoundly changing the world: its prospects are bright, yet its consequences are difficult to predict accurately. At present artificial intelligence is growing at an exponential rate, but people's ideas and concepts lag behind, policy orientations are unclear, ethical regulation is lacking, and laws and regulations are unsound, producing enormous uncertainty and risk. It is therefore necessary to reflect comprehensively on the value of artificial intelligence and the consequences of its application, and to set it on a track of healthy development. He argued that alongside its positive social effects, artificial intelligence brings many conflicts of social values and ethical dilemmas: it challenges the essence of the human and forces people to reflect on themselves; it impacts traditional ethical relationships and challenges human moral authority; it creates a "digital divide" and thereby deconstructs society; it may even replace, control or rule humanity. It is therefore necessary to formulate value principles and comprehensive countermeasures for an intelligent society. First, we must proceed from what is "possible" and decide cautiously what ought to be done; second, we must establish basic value principles that must not be transgressed; finally, we must launch the social-systems project of "promoting the benefits and eliminating the harms."
Tong Guangzheng of Hainan Vocational College of Political Science and Law pointed out that the information ethics and artificial intelligence ethics of the big data era are unavoidable issues in the development of big data and artificial intelligence. First, while vigorously developing the technology, we must pay close attention to the social problems it causes, which need to be constrained and guided by ethical and legal norms. Second, this normative system must be technically operable and implementable while also conforming to a definite value orientation, which requires the scientific and technological, ethical, and legal communities to explore jointly. Finally, the development of artificial intelligence changes the employment structure, impacts law and social ethics, violates personal privacy, and affects economic security and social stability. Facing these new challenges, we must discuss and propose a normative value system covering values such as order, security, equality, justice and efficiency, and on that basis propose specific norms for the application of artificial intelligence in each field, so as to ensure its safe, reliable and controllable development.
Du Yanyong of Shanghai Jiao Tong University argued that the rapid development of artificial intelligence technology has caused widespread concern about its safety. Regardless of whether artificial intelligence will surpass human intelligence, studying the safety of artificial intelligence is a core issue in its social ethics. He holds that there are at least two ways to address safety problems: internal and external approaches. On the internal approach: first, the ethical design of artificial intelligence products is one of the basic ways to solve their safety problems; second, to preserve the harmony and stability of human society, the scope of application of technologies that are still immature or prone to safety problems and social controversy should be limited; third, the degree of autonomy and intelligence of artificial intelligence should be limited, and artificial intelligence safety standards and specifications should be established. On the external approach: first, scientists' social responsibility and international cooperation; second, public acceptance and the adjustment of ideas; third, artificial intelligence safety assessment and management.
Shi Haiming of the National University of Defense Technology argued that in an artificial intelligence society the meaning of the military in the new era will change profoundly. He developed this discussion on four levels: first, the post-human era and the lessons of the path of human evolution; second, the view of nature and the view of war, and how technology subverts ideas; third, the rise of post-human war, its beginnings and its future; fourth, artificial intelligence as the bridge to post-human war.
Zhang Huang of the National University of Defense Technology argued that, with the continuous improvement of the intelligence and autonomy of drone systems, the drone has changed from a mere means of combat to something gradually possessing some of the functions of an agent of war. Under the conditions of unmanned combat, the human-machine ensemble plays the principal role together, which disrupts the received understanding of responsible agents and the allocation of responsibility in just war theory and has provoked a series of controversies. The dilemma of responsibility allocation caused by drones is also reflected in how to deal with the problem of "responsibility transfer." In addition, drones with intelligent characteristics increasingly become agents bearing responsibility in war, which may weaken the moral foundation on which military responsibilities are defined.
Tu Zipei, a well-known big data expert and chairman of Tu Zipei Data Technology Consulting Co., Ltd., first reviewed the three stages of the development of artificial intelligence, arguing that behind each stage lies a breakthrough in computing power and data capacity. In the first stage, artificial intelligence carried out reasoning and analysis; in the second stage, humans supplied it with knowledge; and the third stage is machine learning. He discussed rapidly developing technologies such as facial recognition and speech recognition, and argued that the relationship between humans and technology shows an alienation: people are becoming more and more like machines, while machines are becoming more and more autonomous. These problems require extensive discussion from all walks of life.