Industry Leaders Discuss AI Ethics; Experts: Convenience or Privacy? The Decision Should Be Left to Users
The development of artificial intelligence has brought new challenges in security, privacy, and ethics. How should these problems be solved, and what do industry leaders think? On May 27, guests discussed these questions at the Artificial Intelligence High-end Dialogue Forum of the China International Big Data Industry Expo.
Nie Zaiqing, a professor at Tsinghua University, believes that security problems should be solved through technical means, such as using technology to combat telecommunications fraud. In addition, security and controllability should be considered at the system design stage, with permissions kept to a minimum so that the system remains safe at all times. On the question of responsibility, Nie Zaiqing said that behind every intelligent system are the people who designed it, and those people must be responsible for the system's security.
Chen Weiqiang, senior vice president of Hisense Group, pointed out that privacy concerns are a major obstacle to the development of artificial intelligence. First, privacy must be protected technically; at the sensor level, for example, non-invasive sensors should be used wherever possible. Moreover, the use of personal data must be governed by law, not only for enterprises but also for public authorities such as the police and the government. He emphasized that "I am the master of my own data," and that companies cannot treat user data as their own assets.
On privacy, Zhang Guoyan, vice president of SenseTime, argued that technology cuts both ways: convenience sometimes comes at the cost of privacy, while overly strict privacy protection can narrow the range of applications so much that many technologies cannot be adopted. An important way to address this tension, she said, is to strengthen public education about artificial intelligence, because fear often comes from the unknown. "When we understand the technology better, we know what it is doing and how to protect our own data and personal privacy. Because I understand it, I am not afraid of it; I can embrace and accept it, and it can bring me more convenience."
Nie Zaiqing agreed that there is a real trade-off between convenience and privacy. Facing this trade-off, he said, the decision should be left to users: it is up to them whether to accept the convenience. Even when a user opts in, however, the algorithm should keep the data "usable but invisible": the algorithm may use the data, but no person should be able to see it, and the data should be used only for the function the user agreed to, not for other purposes.
Qi Guosheng, founder of Guoshuang Technology, offered a different view. In the field of AI ethics, he argued, privacy is a secondary issue: the key to privacy is that those who control the systems exercise self-discipline, and this can be fully addressed by regulating them through appropriate laws and regulations. Bias, in his view, is the ethical problem that artificial intelligence finds hardest to solve.
Qi Guosheng said that artificial intelligence will inevitably produce bias, especially since the new generation of AI is based mainly on deep learning and therefore on training samples. With sample-based training, whoever contributes samples benefits, while the silent majority who contribute none lose out. The principle is the same as a common criticism of democratic systems: those who disagree will speak up, while those who agree tend to stay quiet. Some people contribute the machine-learning samples, yet the resulting models are applied to everyone, and this creates technical bias. "This belongs to the realm of engineering ethics, and it is a principle that everyone working in artificial intelligence must keep in mind," said Qi Guosheng.