Artificial Intelligence Ethical Issues And Thoughts On Seeking Advantages And Avoiding Disadvantages
Mathematical Intelligence Lecture Hall Ruan Shiwei
At present, large language model artificial intelligence technologies and products such as ChatGPT (hereinafter, "such technology") are booming. The known ethical risks they bring fall mainly into the following areas.
One is the risk of AI plagiarism. Such technology feeds on massive amounts of information and then imitates human thinking patterns to process, organize, and output relevant information on demand. This process can be understood as high-level "article laundering," and its ability to evade duplication checks will only grow stronger. In the future, any field that produces text, such as scientific research, journalism, administration, and advertising, may face pervasive plagiarism risks, leading to more academic misconduct, creative plagiarism, and intellectual property infringement.
The second is the risk of information leakage. Such technology is not picky about the information it "eats," which may include personal privacy, sensitive government information, and company trade secrets. The more such information it ingests, the more capable it becomes, and the greater the risk of leakage. In the future, we will have to strike a balance between improving AI capabilities and protecting data security.
The third is the risk to the protection of minors. Such technology amounts to a highly evolved search engine with strong information retrieval and integration capabilities, giving minors yet another powerful tool for obtaining harmful information through the Internet. It should be noted that it is not especially difficult for quick-learning teenagers to trick such systems into yielding harmful information.
The fourth is the risk of value penetration. Like all previous network technologies and products, such technology has no value orientation of its own, but the individuals and organizations that develop, train, and maintain it can easily implant the values they wish to convey, smuggling in their own agenda to achieve ulterior purposes.
The fifth is the risk of unemployment among workers. As a technology developed by humans, the core function of such technology must be to serve humans and bring convenience to society. But as with all technological progress, the large-scale application of AI will inevitably lead to machines replacing some existing human jobs. Unlike the machines that have mainly displaced manual laborers since the industrial era, ChatGPT will replace a considerable number of knowledge workers, what we commonly call "white-collar workers."
Sixth is the risk of malicious transformation. Once such technology is on the market, every user becomes its "trainer." This means that if this "child" who is "unacquainted with the world" is deliberately fed bad information, it may well be raised into a "delinquent."
In response to the above ethical risks, the author offers thoughts and suggestions on the following six points.
The first concerns the risk of AI plagiarism. It is recommended that anti-plagiarism tools be developed alongside such tools, to protect the rights and interests of original authors and to help them seek redress when plagiarized. In addition, normative requirements should be formulated mandating explicit disclosure whenever ChatGPT-style technology is used to generate serious documents of any kind.
The second concerns information leaks. Fundamentally, information security management must be strengthened, for example by adding machine-readable labels to the various kinds of information and data on the Internet, so that such technology and its products clearly know which of the information they "eat" is private, confidential, or controversial and must not be disclosed, and can avoid it as far as possible when generating content.
The third concerns the protection of minors. As with today's video apps, developers should be required to provide a minor-friendly "green version," which can be opened directly to all Internet users. For the more powerful, fully featured version, users can be required to provide proof of adulthood, as is already done in the management of online games.
The fourth concerns the risk of value penetration. The most important measure here is to strengthen the research and development of our country's own products and minimize dependence on "imported" ones. Only when homegrown products hold the absolute upper hand can the values they advocate prevail in the output of such products. Enterprises that provide these products and services can be regulated at the "root" in accordance with the law, so that business entities proactively guard against penetration risks.
The fifth concerns the risk of unemployment among workers. The emergence, development, and application of such technologies will further liberate humans, freeing them from arduous and repetitive intellectual work, which on balance does more good than harm to human society. We must adjust and adapt as soon as possible: in an era when such technologies are widely used, the likelihood of replacement must be considered when cultivating talent, just as the impact of smart manufacturing technology must be considered when training technicians.
Sixth, dealing with the risk of malicious transformation. Technical ethics standards related to AI should be comprehensively improved, mandating that AI products possess a basic sense of right and wrong consistent with public order and good morals. At the same time, developers should be guided to strengthen such technologies' ability to identify malicious information, so that they develop a "rejection" response to it, actively refusing to "eat" it rather than using it as "raw material" for their own growth and evolution.
Such technology is still on the eve of its explosion and is undergoing rapid iteration, with unlimited possibilities ahead. For AI and robotics-related technologies, we must spare no effort to ensure that their applications comply with humanity's common ethical standards, serve personal development, obey national norms, and benefit all mankind. From this we must not deviate.
(The author is Vice Chairman of the Fujian Provincial Committee of the Chinese People's Political Consultative Conference, a member of the Central Standing Committee of the China Democratic League, and Chairman of its Fujian Provincial Committee)