AI Ethics

Lookout | Face The Ethical Challenges Of Artificial Intelligence

The ethical challenges of artificial intelligence are reflected not only in the "technology gap," but also in far broader domains.

In 2023, with the emergence of a new generation of generative artificial intelligence applications, international discussion of the ethical challenges of artificial intelligence also intensified. More and more observers found that the rapid development of artificial intelligence may outpace human society's preparations, and argued that the risks it brings cannot be ignored. Ethical challenges have become the most prominent topic in the widespread controversy over artificial intelligence, and will profoundly shape how artificial intelligence and human society interact in the future.

Robot exhibits at the Shanghai Science and Technology Innovation Achievements Exhibition (photo taken on November 29, 2023) Photo by Fang Zhe/This issue

Looking at the ethical challenges of artificial intelligence in four dimensions

Like the birth of the Internet, artificial intelligence will bring major changes to the world. Such impact is usually a double-edged sword: new technologies transform and disrupt the world, and not everyone benefits from them equally. The ethical challenges of artificial intelligence are reflected not only in the "technology gap," but also in far broader domains. These challenges are concentrated in four dimensions.

The first dimension stems from the "autonomy" of artificial intelligence: the technology escapes human control more easily than other cutting-edge technologies. The related ethical challenges mainly concern whether artificial intelligence will deceive and manipulate human consciousness, and whether it will reduce human development opportunities.

Compared with the Internet and social media, artificial intelligence can form a more comprehensive understanding of individuals, "perceiving" and predicting what users want. Combined with "deepfake" technology, this capability will further aggravate the "manipulation" and "deception" of different groups. Through targeted information feeds, artificial intelligence may build a tighter "information cocoon" and enable deeper "manipulation of consciousness." In 2023, a British court sentenced a man who had plotted to assassinate the queen after being encouraged by an artificial intelligence chatbot, a case that reflects such risks.

The continuous iteration of generative artificial intelligence has also led the business community to see ever-broader scenarios of "human substitution." According to McKinsey, by 2030 as many as 375 million workers may face re-employment problems as artificial intelligence and related technologies advance. The research firm Oxford Economics reached a similar conclusion: by 2030, about 20 million manufacturing jobs worldwide will disappear as the work shifts to automated systems, and even manufacturing workers who move into service jobs will face the risk of being replaced by machines. The types of jobs likely to be replaced by artificial intelligence include technical positions such as programmers, software engineers, and data analysts; media positions such as advertising, content creation, technical writing, and journalism; as well as lawyers, market analysts, teachers, financial analysts, financial advisers, traders, graphic designers, accountants, and customer service staff. These positions generally require higher education, so unemployment among such personnel means huge losses of human capital and will aggravate structural unemployment in some countries.

The second dimension stems from the "opacity" of artificial intelligence: the hidden risks of the technology are harder to detect, and problems may not be disclosed in time to draw public attention.

Applications of artificial intelligence depend on computing power and algorithms, yet neither of these key resources is transparent. Generative AI large models invoke hundreds of millions of parameters and vast amounts of data for each piece of content they generate, making their decision-making processes nearly impossible to explain. This opacity of both process and content leaves artificial intelligence more prone to hidden dangers. Flaws or aggressive design choices in large models may lead to leaks of private information, excessive data collection and abuse, and uncontrollable algorithms. The output of generative artificial intelligence may be misleading, containing false or inaccurate information that distorts human decision-making. Criminals may also use "data poisoning" and "algorithm poisoning" to mislead artificial intelligence systems and trigger wider systemic failures.

In recent years, the militarized deployment of artificial intelligence has been the most worrying development. Countries are accelerating the integration of artificial intelligence into offensive weapons systems, raising the risk of errors in the decision-making of "intelligent combat" systems that could trigger accidental exchanges of fire, or even ignite and escalate wars.

The third dimension stems from the "diffusibility" of artificial intelligence: the technology can be used by all kinds of groups and organizations, which may include people with ulterior motives.

Artificial intelligence is easy to transplant, transform, and integrate, and technical breakthroughs spread quickly. The same algorithm can serve entirely opposite purposes. Criminals can bypass model safety policies to extract "dangerous knowledge" from artificial intelligence, and can also turn artificial intelligence itself into a criminal tool. According to the American Forbes website, artificial intelligence has become the most powerful weapon in telecom fraud, and hardly any country can escape this worldwide scourge; telecom fraud empowered by artificial intelligence may become the most harmful form of organized crime in the world.

The fourth dimension stems from the "monopoly" tendency of artificial intelligence: the technology is highly dependent on capital investment and sets a high threshold for using advanced algorithms, while the algorithmic preferences formed by designers and data can easily widen social stratification.

First, artificial intelligence may intensify monopolistic behavior. Artificial intelligence has become the most powerful "weapon of mass destruction" in marketing, reshaping corporate marketing strategies in every respect. Yet such precise marketing may also encourage practices like personalized price discrimination ("a thousand prices for a thousand people").

Second, artificial intelligence may aggravate real-world discrimination. The algorithms used by artificial intelligence are data-driven, and the data carry labels such as race, gender, belief, disability, and infectious disease, reflecting the complex values and ideological features of human society. Once bias enters the training of an application model, the algorithm's output may show prejudice or preference toward particular individuals, groups, or countries, raising issues of fairness.

Finally, artificial intelligence may produce developmental injustice. The key expertise and cutting-edge technologies of artificial intelligence are concentrated in a few companies and countries with first-mover advantages, which will inevitably lead to uneven development of the global artificial intelligence industry and further deepen the global "digital divide." Meanwhile, countries leading in artificial intelligence technology and rule-making are in a cycle of rapid accumulation of technological advantage. As in the semiconductor field, this advantage may well become a "chokepoint" tool that obstructs the progress of latecomer countries in artificial intelligence.

Before the women's discus finals of the track and field at the Hangzhou Asian Games, a staff member placed the discus on the robot dog used to carry discus (photo taken on October 1, 2023) Photo by Jiang Han/This issue

The first year of global artificial intelligence ethical governance

These ethical challenges are drawing widespread attention from the international community. In 2023, governments and international organizations discussed the ethical issues of artificial intelligence intensively, issuing a series of statements, visions, and policies in an attempt to regulate the development of artificial intelligence.

The United Nations has played an increasingly important role in the ethical governance of artificial intelligence. In 2023 it made notable progress in building consensus among countries and in discussing security risks and governance cooperation. In March, UNESCO called on countries to immediately implement the Recommendation on the Ethics of Artificial Intelligence it had issued in November 2021. In July, the United Nations held its first press conference featuring humanoid robots alongside humans, at which nine humanoid robots took questions from experts and the media; it convened the "AI for Good" Global Summit to discuss the future development and governance framework of artificial intelligence; and the Security Council held its first open debate on the potential threats of artificial intelligence to international peace and security. In October, UN Secretary-General Guterres announced the establishment of a High-Level Advisory Body on Artificial Intelligence, bringing together 39 experts from around the world to discuss the risks and opportunities of the technology and to support stronger international governance. The United Nations has thus placed artificial intelligence ethics on the global governance agenda and will promote the formation of more formal and binding institutions and governance norms in the future.

The EU has legislated specifically for artificial intelligence and implemented comprehensive supervision of the technology. The European Commission proposed the draft Artificial Intelligence Act in 2021, strictly prohibiting "artificial intelligence systems that pose unacceptable risks to human safety" and requiring artificial intelligence companies to maintain control over their algorithms, provide technical documentation, and establish risk management systems. After marathon negotiations, the European Parliament, EU member states, and the European Commission reached agreement on the Artificial Intelligence Act on December 8, 2023, making it the world's first comprehensive regulation in the field of artificial intelligence.

The United States has introduced regulatory policies, but its legislative process is slow. Compared with the EU, the United States imposes fewer regulatory requirements, mainly emphasizing safety principles and encouraging corporate self-discipline. In January 2023, the National Institute of Standards and Technology (NIST) officially released the Artificial Intelligence Risk Management Framework to guide organizations in reducing security risks when developing and deploying artificial intelligence systems, but the document is non-mandatory guidance. In October, Biden signed the most comprehensive American AI regulatory measure to date, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which goes beyond the voluntary commitments made earlier that year by companies such as Google and Meta but still lacks enforcement power. After issuing the executive order, the Biden administration urged Congress to pass relevant legislation as soon as possible. Senate Majority Leader Schumer hosted two AI Insight Forums in September and October to collect industry advice, hoping to prepare artificial intelligence legislation within months, but whether such bills can pass smoothly remains unknown.

Britain has invested heavily in artificial intelligence governance diplomacy. In November 2023, the first global AI Safety Summit was held at Bletchley Park in the UK, attended by representatives of the United States, the United Kingdom, the European Union, China, India, and other parties. The meeting adopted the Bletchley Declaration, which stresses that many risks of artificial intelligence are international in nature and are therefore "best addressed through international cooperation." Participants agreed to build an "internationally inclusive" network of cutting-edge AI safety research to deepen understanding of AI risks and capabilities that are not yet fully understood. The UK undertook extensive preparation and diplomatic mediation to host the summit, aiming to establish itself as the "convener" of global artificial intelligence governance. In the future, more countries will devote resources to artificial intelligence governance diplomacy and compete for influence in this emerging field.

China attaches great importance to artificial intelligence governance, with a governance philosophy that balances development and security. In April 2023, the Cyberspace Administration of China released the draft Measures for the Management of Generative Artificial Intelligence Services for public comment. The Interim Measures for the Management of Generative Artificial Intelligence Services were officially announced in July, making specific provisions on generative artificial intelligence in terms of technological development and governance, service specifications, supervision and inspection, and legal responsibility, and took effect on August 15, becoming the world's first dedicated legislation on generative artificial intelligence. China has also issued a series of regulations for specialized fields such as deep synthesis and algorithms, including the Provisions on the Administration of Deep Synthesis of Internet Information Services, which took effect in January, and the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services, which took effect in March. In October, China put forward the Global Artificial Intelligence Governance Initiative, offering specific principles, guidelines, and suggestions on personal privacy and data protection, data acquisition, algorithm design, technology development, risk-level testing and evaluation, and ethical standards.

Why is the United States slow to move

Compared with the speed at which the United States develops and applies artificial intelligence technology, its policy and legislation on artificial intelligence regulation are slow. This is the result of several factors acting together.

First, the United States is unwilling to give up its advantage in artificial intelligence.

People in the U.S. government and strategic community generally believe that artificial intelligence is one of the strategic technologies that will determine whether the United States can win the next round of global technological competition. Since the Obama era, the U.S. government has put forward several national plans and visions for it. The artificial intelligence executive orders of both the Trump and Biden administrations emphasize maintaining "American leadership in artificial intelligence" and treat that as the fundamental goal of U.S. artificial intelligence governance. Weighed against governance risks, the United States is reluctant to strictly limit the technology's development before its lead is assured. After generative artificial intelligence surged in popularity, U.S. regulatory policies were introduced one after another, aiming not only to address governance risks but also to prevent the technology's rapid diffusion from eroding the United States' lead.

Second, artificial intelligence ethics has become politicized in the United States, and the two parties find it difficult to reconcile their differences and reach a governance consensus.

In recent years, political polarization in the United States has intensified, and the two parties stand opposed on almost all social issues, especially on artificial intelligence ethics issues that touch people's ways of life. Democrats are more concerned with diversity-related issues such as personal privacy, algorithmic discrimination, and algorithmic fairness; Republicans are more concerned with security issues such as national security and AI-enabled crime. On risk prevention, the Democratic Party sees AI-enabled fraud and rumor-mongering as the most prominent risk and wants to strengthen the responsibility of intermediary channels such as social media; the Republican Party regards such governance measures as politically motivated and unfavorable to Republican candidates. With the 2024 general election approaching, the contradictions and debates between the two parties have sharpened, leaving artificial intelligence legislation significantly behind the pace of events. The Biden administration's rollout of a series of artificial intelligence governance policy documents at the end of 2023 shows that the Democratic Party intends to break the deadlock first, treating artificial intelligence governance as a potential campaign issue in order to accelerate legislation.

Finally, the United States also faces some institutional obstacles to controlling artificial intelligence.

The "freedom first" and "individual first" strands of the American political tradition are ill-suited to controlling decentralized, risk-dispersed, and rapidly spreading technologies and applications. This tradition easily creates regulatory gaps between states and makes it difficult for the government to marshal administrative resources to root out illegal chains of interest. The proliferation of guns and drug crime in the United States is related to this, and dangerous applications of artificial intelligence may become the next social risk to run rampant there.

This hesitation may increase the risk of a global artificial intelligence "arms race." As the world's leading country in artificial intelligence research and development, the United States has an obligation to take the lead in promoting related regulatory measures worldwide. Yet the United States has not produced regulatory legislation, and its agenda for advancing artificial intelligence governance at the global level has slowed, which will lead more countries to neglect control, blindly pursue technological leadership, and join an algorithmic "arms race." This competitive turn may steer artificial intelligence away from healthy and orderly development, creating more obstacles for subsequent global legislation and governance and increasing the risk of vicious competition among countries over artificial intelligence.

(Li Zheng is an assistant director, and Zhang Lanshu an assistant researcher, at the Institute of American Studies, China Institutes of Contemporary International Relations.)
