Mariana Todorova | Artificial Intelligence And Human Rights: Beware Of Risks And Seize Opportunities


Artificial Intelligence and Human Rights: Beware of Risks and Seize Opportunities

Mariana Todorova
Professor, Bulgarian Academy of Sciences

I am very honored to participate in the "China-Europe Human Rights Seminar". I would like to speak about artificial intelligence and human rights, an important social and political issue that profoundly affects the exercise of our rights and the protection of human dignity.

As artificial intelligence systems are widely deployed in areas such as justice and health care, they are redefining the boundaries of rights protection and may even create risks for human rights. The technology is a double-edged sword: it brings new opportunities, such as promoting equality, empowering society, and advancing global justice, but it is also accompanied by new risks to civil liberties that cannot be ignored.

The current challenge lies in accurately identifying and defining the ethical dilemmas that artificial intelligence creates. Moreover, imperfect technical systems may reinforce dangerous stereotypes. Facial recognition technology in the United States is a typical case: such systems misidentify dark-skinned people at significantly higher rates, and algorithmic bias against women on human resources platforms has likewise led to unjustified differential treatment. Technologies of this kind are already being applied in border control and policing.

Beyond this, artificial intelligence may also subject people with disabilities, speakers of particular languages, or those of lower socioeconomic status to discrimination. Bias produced by automated systems perpetuates the regional and structural inequalities formed by history.

We must also remain highly vigilant about the "hallucination" problem of artificial intelligence. Content generated by these systems often has a convincing professional appearance yet may contain serious errors, which is especially dangerous in critical fields such as law and medicine. There have been cases in which AI tools recommended wrong treatment plans or fabricated non-existent legal cases. When such false content appears in the form of legal documents, it not only harms individual rights and interests but also fundamentally shakes public trust in democratic institutions.

We have also observed that although surveillance tools such as facial recognition, image recognition, and predictive analytics help prevent risks, they inevitably erode individual freedoms, especially freedom of association, because a state of continuous surveillance induces psychological fear. As AI systems begin to interpret emotional states through biosensors, we face entirely new ethical challenges. It is worth noting that the reach of artificial intelligence continues to expand, and its influence has attained an unprecedented level.

Source: China Human Rights Research Association

Manuscript compiled by: He Haoyuan