AI Ethics

A Multi-perspective Analysis: Legal Norms for the Ethics, Security and Risk Control of Artificial Intelligence

I. Introduction

With the widespread application of AI technology, the many ethical, governance and security issues it has brought have attracted great attention from all walks of life. In particular, the rapid advancement of generative artificial intelligence, represented by Chat-GPT and Sora, has inevitably triggered public concern about the disorder, unemployment and loss of control that artificial intelligence may bring. Generative AI is a powerful class of AI that creates new, original content by learning patterns in data, with the goal of producing creative and logically coherent content or information. Although it currently focuses on generating text, computer programs, images and sound, generative AI will in the future be applied to a wide range of fields, including drug design, architecture and engineering. [1] Compared with traditional rule- and logic-based artificial intelligence, generative artificial intelligence places greater emphasis on the ability to understand and generate language and text. Chat-GPT, currently the most popular generative AI, is pre-trained on a large-scale text corpus with a vast number of parameters and then refined through reinforcement learning from human feedback, ultimately simulating human cognitive mechanisms.
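To make the phrase "learning patterns in data" concrete, the toy sketch below trains a bigram model on a few words and samples new text from the learned transitions. This is an illustration of the underlying idea only, not how Chat-GPT works at scale; the corpus and function names here are invented for the example.

```python
# A minimal sketch of the core idea behind generative models: learn the
# statistical patterns of a corpus, then sample new text from them.
import random
from collections import defaultdict

corpus = ("generative ai learns patterns in data and generates "
          "new content from those patterns").split()

# Record, for each word, which words were observed to follow it
# (the learned "pattern").
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a new word sequence by following learned transitions."""
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:          # dead end: no observed continuation
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("generative"))
```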

Generative artificial intelligence has broad development prospects, but it also faces large gaps at the legal and regulatory level, so its development is accompanied by many difficulties and challenges. This article sorts out and analyzes the ethical, governance and security difficulties that generative artificial intelligence may face, and summarizes existing cases and legislation, aiming to provide insight into its future development and governance.

II. Ethical Dilemmas of Generative Artificial Intelligence

1. Data and algorithm bias

While generative artificial intelligence brings innovation and convenience, it has also created many ethical dilemmas, especially with respect to data and algorithm bias. Because such AI systems rely on large amounts of data for learning, they may inadvertently learn and amplify biases in the training data, producing unfair and discriminatory results in decision-making and output. Developers can also implant their own preferences by curating databases or manipulating algorithms, so that the content a generative model outputs reflects the values the developers expect. In addition, training generative artificial intelligence depends on large amounts of data, and the systems improve their accuracy through self-learning and interaction with humans; bias and discrimination can thus be instilled both during data training and during human-computer interaction. As generative artificial intelligence is used in more and more fields, the harm caused by data and algorithm bias is also expanding.

2. False and fabricated content

The rapid development of generative artificial intelligence raises an urgent problem: it may become a tool for producing false and fabricated content. In today's era of information explosion, fake news has become commonplace, and the power of generative artificial intelligence makes false information more realistic, posing severe challenges to the credibility of online information. Generative models can produce seemingly authentic articles, news and comments in which truth and falsehood are hard to distinguish. According to statistics, the number of deep synthesis videos on mainstream audio and video sites and social media platforms in China and abroad grew more than tenfold from 2017 to 2021, and attention to them grew exponentially: interaction data show that such videos received more than 300 million likes in 2021. [2] This exposes the public to the risk of being misled and confused, and damages people's trust in information.

3. Data and privacy leaks

Image-editing apps such as Meitu Xiuxiu and Xingtu have launched "AI portrait" services: users upload their own photos and, after a few minutes, receive polished art portraits. But users cannot ensure that the photos they upload will be used only for the portrait generation they authorized. There are three main sources of data leakage in generative artificial intelligence. First, developers or service providers may, for their own benefit, directly trade or sell data to third parties. Second, generative artificial intelligence treats human-computer interaction, that is, users' actively entered information, as an important way to learn and improve; if the information a user enters contains privacy-sensitive text such as medical records, judicial documents or personal communications, the model may learn this information and leak it in future versions. [3] Third, security vulnerabilities in the generative model itself can be exploited by attackers, who may manipulate the model's output, tamper with its training data, or steal its parameters. Such attacks can cause model outputs to leak, compromising the privacy and confidentiality of the data.
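As a concrete illustration of the second risk, a service can scrub obvious personal identifiers from user inputs before those inputs are logged or reused for training. The sketch below is a minimal, assumed approach using a few regular expressions; real de-identification pipelines are far more thorough, and the patterns shown are illustrative only, not a complete PII taxonomy.

```python
# A minimal sketch of redacting privacy-sensitive fields from user input
# before it is logged or fed back into model training. The patterns are
# illustrative assumptions, not an exhaustive set of identifier formats.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-\s]?\d{4}[-\s]?\d{4}\b"),  # assumed 11-digit format
    "id_number": re.compile(r"\b\d{17}[\dXx]\b"),             # assumed 18-digit ID format
}

def redact(text: str) -> str:
    """Replace recognizable personal identifiers with typed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact me at alice@example.com or 138-1234-5678."))
```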

III. Current Governance Approaches to Generative Artificial Intelligence in Various Countries

1. China

With respect to data and algorithm bias, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation jointly issued the "Provisions on the Administration of Algorithm Recommendation for Internet Information Services" in January 2022. [4] The provisions regulate algorithmic discrimination, "big data"-enabled price discrimination against existing customers, addiction-inducing design and similar practices, and require the establishment of a tiered and classified security management system for algorithms.

With respect to false and fabricated content, the "Provisions on the Administration of Algorithm Recommendation for Internet Information Services" also require that algorithmically generated information be conspicuously labeled, and prohibit algorithm recommendation service providers and users that provide Internet news information services from generating or synthesizing false news. The "Provisions on the Administration of Deep Synthesis of Internet Information Services" [5] set out in more detail the requirements for deep synthesis services that can generate or significantly alter information content, including intelligent dialogue, intelligent writing, face generation, face manipulation and posture manipulation, among them the obligation to alert the public with prominent labels. The provisions aim to promote the healthy development of the AI industry while preventing abuse of the technology: synthetic videos and photos produced with deep synthesis technology must be clearly marked to avoid public confusion.
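As a sketch of what the labeling obligation could look like in practice, the snippet below stamps a visible notice onto a synthesized image with the Pillow library. The label text, placement and file names are illustrative assumptions; the provisions themselves define what a compliant mark must look like.

```python
# A minimal sketch of the "prominent label" idea: draw a visible provenance
# notice on a synthesized image before release. Placement and wording are
# illustrative assumptions only.
from PIL import Image, ImageDraw  # pip install pillow

def label_synthetic_image(path_in: str, path_out: str,
                          notice: str = "AI-generated content") -> None:
    """Draw a visible provenance notice in the image's bottom-left corner."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 20                                     # bottom-left placement
    draw.rectangle([x - 4, y - 4, x + 180, y + 14], fill="black")  # backing box
    draw.text((x, y), notice, fill="white")                        # default bitmap font
    img.save(path_out)

label_synthetic_image("synthetic.png", "synthetic_labeled.png")
```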

With respect to data and privacy leakage, China's Data Security Law classifies data into national core data, important data and general data, with protection differentiated by severity. Processors of important data must designate a person responsible for data security and a data security management body, and implement data security protection responsibilities. Organizations carrying out data processing activities must strengthen risk monitoring: when they discover data security defects, vulnerabilities or other risks, they must immediately take remedial measures; when a data security incident occurs, they must immediately take disposal measures, promptly inform users as required, and report to the relevant competent authorities. Processors of important data must also conduct regular risk assessments of their data processing activities and submit risk assessment reports to the relevant competent authorities. [6] In addition, Article 9 of the "Regulations on Network Data Security Management (Draft for Comments)" stipulates that, in principle, systems that process important data should meet the requirements of network security multi-level protection and critical information infrastructure security protection, and systems that process core data should be strictly protected in accordance with relevant regulations.
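A minimal sketch of this tiered idea, using illustrative control names that are assumptions rather than the statute's wording, might map each data class to increasingly strict obligations:

```python
# A toy model of the Data Security Law's three-tier classification.
# Control names are illustrative; the law states obligations in prose.
from enum import Enum

class DataClass(Enum):
    GENERAL = "general data"
    IMPORTANT = "important data"
    NATIONAL_CORE = "national core data"

REQUIRED_CONTROLS = {
    DataClass.GENERAL: ["baseline access control"],
    DataClass.IMPORTANT: ["named data-security officer",
                          "periodic risk assessment",
                          "incident reporting to authorities"],
    DataClass.NATIONAL_CORE: ["strict protection under dedicated regulations"],
}

def controls_for(data_class: DataClass) -> list[str]:
    """Assume higher tiers inherit every obligation of the tiers below."""
    tiers = list(DataClass)
    required = []
    for tier in tiers[: tiers.index(data_class) + 1]:
        required.extend(REQUIRED_CONTROLS[tier])
    return required

print(controls_for(DataClass.IMPORTANT))
```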

In the sub-field of generative artificial intelligence, however, China is still at an early stage of building governance. To address bias and discrimination in generative AI algorithms, several Chinese ministries and commissions jointly issued the "Interim Measures for the Administration of Generative Artificial Intelligence Services" in July 2023 [7], responding to the dilemmas of data and algorithm bias, false and fabricated content, and data and privacy leakage that generative artificial intelligence faces. This is China's attempt to deal with the impact of this new technology; nevertheless, China has not yet established a complete review and supervision mechanism for generative artificial intelligence. Further refining laws and regulations in the field of artificial intelligence and designing governance mechanisms that fit emerging technologies are the main paths for strengthening governance in this field.

2. United States

In February 2023, President Biden signed the Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, which contains provisions that "instruct federal agencies to eradicate bias in the design and use of new technologies, including artificial intelligence, and protect the public from algorithmic discrimination." This is the United States' attempt to eliminate the data and algorithm biases brought about by generative artificial intelligence. The United States has adopted a governance model that both encourages and regulates the development of new technologies such as generative artificial intelligence. In October 2022, the White House issued the Blueprint for an AI Bill of Rights, proposing that "automated systems should work for the American people" and listing five principles to guide their design, use and deployment: safe and effective systems, protection from algorithmic discrimination, data privacy, notice and explanation, and human alternatives, consideration and fallback. [8]

As AI-generated false information spreads further and is widely used to commit fraud, the United States has strengthened legislative supervision. In May 2023, after the Republican National Committee released a political advertisement made with generative artificial intelligence, U.S. Rep. Yvette Clarke introduced the REAL Political Advertisements Act, which would impose disclosure requirements on AI-generated content in campaign advertisements. In June, U.S. Rep. Ritchie Torres introduced the AI Disclosure Act, proposing that a disclosure statement be added to any content generated by artificial intelligence. To deal with fabricated content produced by generative artificial intelligence, U.S. lawmakers have also advanced the Malicious Deep Fake Prohibition Act and the Deepfake Report Act of 2019 to prevent AI-generated forgeries from interfering with elections. In addition, California enacted AB 730 and AB 602, which respectively prohibit the use of generative artificial intelligence to interfere in elections and to produce pornographic content.

3. EU

In June 2023, the European Parliament adopted its position on the Artificial Intelligence Act, classifying technologies such as Chat-GPT as "high-risk" and proposing necessary and comprehensive ethical review to reduce risks and ensure user autonomy, fair treatment and privacy. The EU advocates that artificial intelligence should comply with human ethics and not deviate from basic human morality and values. Compared with general artificial intelligence, the EU is more cautious in its ethical review mechanism for generative artificial intelligence, emphasizing the maintenance of basic ethical order and the protection of citizens' fundamental rights. The EU has also issued the Ethics Guidelines for Trustworthy AI, which list privacy and data governance among the seven requirements that trustworthy AI must meet. Article 37 of the EU General Data Protection Regulation (GDPR) [9] requires data controllers and processors, in specified cases, to designate a "data protection officer" responsible for data protection work. As for regulating false and fabricated content, the EU strengthened its Code of Practice on Disinformation in June 2022, requiring social media companies to remove deepfakes and other false information from their platforms or face fines of up to 6% of global turnover. The EU's Digital Services Act (DSA) likewise requires platforms to provide mechanisms for reporting and removing illegal content.

4. Summary

The governance of generative artificial intelligence is a complex issue requiring comprehensive consideration of technical, ethical, legal and social factors. In the future, governance must ensure that the application and development of generative artificial intelligence proceed within a framework consistent with moral, legal and social values.

1. Ensure transparency and establish accountability mechanisms. Developers and operators of generative artificial intelligence should be able to demonstrate the capabilities and limitations of their systems transparently to the public. Open and transparent algorithms, data sources and model training processes facilitate system review and evaluation. In addition, an accountability system should be established to assign responsibility for system errors, bias or abuse.

2. Strengthen data ethics and privacy protection. Take measures to protect personal privacy and data security; ensure the legality and transparency of data collection, storage and use; comply with data protection regulations, respect individuals' data sovereignty, and clearly inform users how their data is collected and used.

3. Advocate diverse participation and impartiality. Encourage participants with diverse backgrounds and expertise, and all stakeholders, to take part in decision-making about generative artificial intelligence. Ensure that its application treats different communities and stakeholders impartially and reduces the risk of unfair and discriminatory outcomes.

4. Improve protective measures. Strengthen the security of generative artificial intelligence systems against potential malicious attacks and abuse. Establish robust cybersecurity mechanisms, including vulnerability patching, identity authentication and data encryption, to ensure that systems are secure and reliable (a minimal encryption sketch follows this list).

5. Establish and improve governance frameworks and strengthen international cooperation. Build governance frameworks and guiding principles for generative artificial intelligence through cooperation at the national level and among transnational organizations. Promote global governance of generative artificial intelligence by sharing best practices, jointly addressing challenges, and developing common standards.
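As referenced in point 4, the sketch below shows symmetric, authenticated encryption of a stored record using the cryptography package's Fernet recipe. It is a minimal sketch only: key management, the hard part in practice, is deliberately simplified, and in production the key would live in a dedicated key management service.

```python
# A minimal sketch of encrypting data at rest with authenticated
# symmetric encryption (Fernet). Key handling is simplified on purpose.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # in production: store in a key vault, rotate
cipher = Fernet(key)

record = "user prompt: includes potentially sensitive text".encode()
token = cipher.encrypt(record)     # ciphertext is authenticated (tamper-evident)
assert cipher.decrypt(token) == record
print(token[:32], b"...")
```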

IV. Case Studies

1. Amazon AI recruitment model

In 2014, Amazon set up a team in Edinburgh to explore automated recruitment. The team built 500 computer models that extracted roughly 50,000 keywords by analyzing the résumés of employees hired over the previous decade. Amazon had high hopes that the technology would simplify recruitment: feed 100 résumés into the system and let the algorithm automatically select the top five candidates, greatly improving efficiency. The system, however, turned out to be biased against female applicants. On investigation, the team found that the root of the bias lay in the training data: because most résumés over the past decade had come from male applicants, the algorithm had unknowingly learned a gender bias treating male applicants as more reliable. When processing résumés containing words associated with women, the system automatically lowered their ratings; résumés containing the word "women's", such as "women's chess club captain" or a women's college, were all scored low by the model. This was a major blow to Amazon's automated recruitment program, which was shut down in 2017.
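To see how such bias arises mechanically, the sketch below trains a tiny text classifier on synthetic, deliberately skewed "résumés". The data and labels are fabricated for illustration; this is not Amazon's model, only a demonstration that skewed historical labels push a gendered token's learned weight negative.

```python
# A minimal sketch of a screening model absorbing historical bias.
# Synthetic data is skewed the way the article describes Amazon's corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, python developer",            # historically hired
    "python developer, team lead",                     # historically hired
    "women's chess club captain, python developer",    # historically rejected
    "women's university, python developer",            # historically rejected
]
hired = [1, 1, 0, 0]   # labels reflect the biased history, not merit

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The token "women" gets a negative weight purely from the skewed labels.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```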

The case not only reveals the gender bias that artificial intelligence can produce in data processing, but also poses a serious challenge to how the technology industry as a whole deals with algorithmic bias. Amazon's experience is a reminder that, while using artificial intelligence to simplify workflows, we must stay alert to its potential biases and take steps to ensure the impartiality and transparency of algorithms.

2. Crime risk assessment model

The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm is used in the US judicial system to assess the risk of criminal reoffending and to assist decisions such as bail and sentencing. The system, however, has been widely criticized for racial bias in its assessments. According to a 2016 investigative report, when predicting recidivism risk the algorithm was nearly twice as likely to mistakenly flag black defendants as high risk as it was white defendants. The analysis also noted that although the algorithm's accuracy in predicting the probability of recidivism was roughly the same for white and black defendants, its classifications differed significantly, leaving black defendants facing a higher risk of misjudgment. This raises serious questions about the impartiality of the system's algorithm, since its outputs bear on individual liberty and futures and may exacerbate racial inequality in the judicial system. Moreover, because the algorithm's internal workings and decision logic are not disclosed, concerns about "black box" operation grow, making it difficult for the individuals being assessed to mount effective challenges to their results.
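The disparity described above, similar overall accuracy but diverging error rates, can be made concrete with a small calculation. The sketch below computes the false positive rate per group; the numbers are fabricated solely to illustrate the metric and are not real COMPAS data.

```python
# A minimal sketch of per-group false positive rates: two groups can have
# the same accuracy while one suffers far more false "high risk" flags.
def false_positive_rate(y_true, y_pred):
    """Share of actual non-reoffenders wrongly flagged as high risk."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    return sum(p for _, p in negatives) / len(negatives)

# group A: many non-reoffenders flagged high risk; group B: few
group_a_true = [0, 0, 0, 0, 1, 1]
group_a_pred = [1, 1, 0, 0, 1, 1]   # FPR = 2/4 = 0.50
group_b_true = [0, 0, 0, 0, 1, 1]
group_b_pred = [1, 0, 0, 0, 1, 1]   # FPR = 1/4 = 0.25

print(false_positive_rate(group_a_true, group_a_pred),
      false_positive_rate(group_b_true, group_b_pred))
```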

These issues not only cast doubt on the algorithm itself, but also sparked broad discussion of artificial intelligence in the criminal justice system as a whole. Many have called for more regulation and transparency of algorithmic decision-making to ensure impartiality and reduce potential harm to marginalized groups. The case has also prompted in-depth reflection on algorithm transparency and interpretability, and on the use of algorithms in sensitive domains.

3. The first Chat-GPT crime case in China

On April 25, 2023, a report about a train accident in Gansu Province appeared on Baijiahao, claiming that a train had hit road workers and killed nine people. Police investigation found the report to be fake news published by a self-media company in Shenzhen, Guangdong. A man surnamed Hong at the company was suspected of using Chat-GPT to fabricate multiple versions of the fake story by modifying elements of real news, such as time, place, date and gender, in order to evade the platform's duplicate-content checks, thereby attracting traffic for illegal profit. Hong is suspected of the crime of picking quarrels and provoking trouble; the police have taken criminal coercive measures against him, and he may face up to five years in prison.

This incident, the first case in China of using artificial intelligence to generate fake news, highlights the potential risks of generative AI in spreading false information and the urgent need to regulate such behavior. It also shows that as AI technology develops, the threshold and cost of generating false content keep falling, making false information on the Internet harder to curb and to distinguish. More technical, regulatory and educational measures are therefore needed to address these challenges and to ensure the healthy development of AI technology and social stability, including, but not limited to, strengthening supervision of AI-generated content, improving the public's ability to identify false information, and encouraging the development of more effective AI detection tools.

4. Deepfake technology

Deepfake technology uses artificial intelligence algorithms to forge facial expressions and voices in real time and to synthesize highly realistic video and audio. As early as 2018, director Jordan Peele used deepfake technology to produce a fake speech video of former US President Barack Obama, in which "Obama" attacked his successor Donald Trump in crude language. In 2019, a doctored video of US House Speaker Nancy Pelosi circulated on social media in which she appeared drunk and spoke in an abnormal, comical voice; the video was later confirmed to be fabricated. The rise and spread of this technology provide new criminal means for fraudsters. In 2021, hackers built a fraud platform called "Wire" using generative artificial intelligence, creating a fake customer service robot disguised as a "virtual character" to carry out a series of frauds. As the technology advances, the quality and fidelity of deepfake content keep improving, further increasing the risk of its use for illegal purposes.

5. Clearview AI data breach incident

Clearview AI, an artificial intelligence startup, has been controversial for its facial recognition technology, which identifies individuals against a huge biometric database built by scraping more than 3 billion photos from social media. In a hacking incident, Clearview AI's data was leaked, putting data relating to more than 2,000 customers at risk, including US Immigration and Customs Enforcement, the Department of Justice, the FBI and other major law enforcement agencies. The incident drew widespread attention to data privacy and security, and gave rise to lawsuits and official investigations. The Clearview AI case highlights the risks AI technology can pose to privacy and data security, especially when biometric information is collected and used without the consent of the people photographed.

6. Samsung data breach incident

Within 20 days of allowing employees to use Chat-GPT, Samsung experienced three leaks of internal data, including sensitive information such as semiconductor equipment measurement data, yield/defect data, and the content of internal meetings. [10] To prevent similar accidents, Samsung is developing protective measures and strengthening internal management and employee training; it has also capped each prompt at 1,024 bytes, and has indicated that if a similar incident recurs it may cut off the service and penalize the personnel involved. These events highlight the risks generative artificial intelligence can pose to enterprise data security, and show that enterprises need more cautious and stricter data protection measures when using such technologies.

Notes:

[1] World Economic Forum, "Top 10 Emerging Technologies of 2023", June 2023

[2] Institute for Artificial Intelligence, Tsinghua University, "Ten Trends in Deep Synthesis (2022)", February 2022

[3] Zheng Xiaodong, "On the Data Security Risks and Responsive Governance of Generative Artificial Intelligence", September 2023

[4] Cyberspace Administration of China et al., "Provisions on the Administration of Algorithm Recommendation for Internet Information Services", January 2022

[5] Cyberspace Administration of China, "Provisions on the Administration of Deep Synthesis of Internet Information Services", November 2022

[6] Articles 27, 29, and 30 of the Data Security Law of the People's Republic of China

[7] Cyberspace Administration of China et al., "Interim Measures for the Administration of Generative Artificial Intelligence Services", July 2023

[8] White House Office of Science and Technology Policy, "Blueprint for an AI Bill of Rights", October 2022

[9] Article 37 of Regulation (EU) 2016/679 (General Data Protection Regulation)

[10] World Internet Conference Artificial Intelligence Working Group, "Developing Responsible Generative Artificial Intelligence: Research Report and Consensus Document", November 2023
