AI Ethics

EU Releases Ethical Guidelines For Artificial Intelligence


The European Commission released ethical guidelines for artificial intelligence on April 8 to enhance people's trust in the artificial intelligence industry. The European Commission also announced the launch of a trial phase of the ethical code for artificial intelligence, inviting business enterprises, research institutions and government agencies to test the code.

The guidelines, drafted by the EU High-Level Expert Group on Artificial Intelligence, list seven key requirements for "trustworthy artificial intelligence": human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The European Union defines artificial intelligence as "systems that display intelligent behaviour" by analyzing their environment and acting with a degree of autonomy to perform tasks.

According to the official explanation, "trustworthy artificial intelligence" has two necessary components: first, it should respect fundamental human rights, applicable laws and regulations, and core principles and values; second, it should be technically safe and reliable, avoiding unintentional harm caused by technical shortcomings. For example, if an AI system one day diagnoses a patient with a certain condition, the EU guidelines are meant to ensure that the system does not produce a biased diagnosis based on the patient's race or gender, does not override the objections of human doctors, and allows the patient to request an explanation of the diagnosis.

Mariya Gabriel, the European Commissioner for Digital Economy and Society, said any decision made by an algorithm must be verifiable and explainable. When an insurance company's system denies a claim, the customer should know in detail why. Gabriel also stressed the need to ensure fairness in AI systems. If a recruitment system's algorithm is trained on data from a company that has only ever hired male employees, it may screen out female applicants; "such biased data input will create ethical dilemmas."

Regarding the code, Andrus Ansip, Vice-President of the European Commission responsible for the Digital Single Market, said: "Artificial intelligence that meets ethical standards will bring a win-win situation and can become Europe's competitive advantage. Europe can become a leader in human-centered artificial intelligence that people trust."

The EU's move has also drawn questions. Matthias Spielkamp, co-founder of the non-profit organization AlgorithmWatch, believes that while drafting guidelines is a good idea, the core concept of "trustworthy artificial intelligence" around which they revolve is not yet clearly defined, and it remains unclear how future supervision will be implemented. Some industry insiders worry that overly detailed guidelines will make compliance difficult for many companies, especially small and medium-sized enterprises. In addition, Thomas Metzinger, a philosophy professor at the University of Mainz in Germany who participated in drafting the guidelines, criticized the EU for not banning the use of artificial intelligence to develop weapons. (Reporter Fang Yingxin)
