"The Most Evil AI In History"? This Chatbot Learns To Curse People, Causing Serious Moral And Ethical Controversy
"The Most Evil AI In History"? This Chatbot Learns To Curse People, Causing Serious Moral And Ethical Controversy
A new post quietly appeared in the "Politically Incorrect" board of an overseas forum.

The forum has been called "the darkest corner of the Internet." Hate speech, racial discrimination, conspiracy theories, and extremist rhetoric flourish there unchecked. No one is held accountable for what they write: the site requires no registration, and the only identifier attached to each post is a small flag icon showing a geographic location. Each post typically survives only a few hours to a few days. The gunmen in several mass shootings in the United States all claimed to have been influenced by the site.
Soon, an anonymous reply geotagged "Seychelles" appeared under the post: "We need to bring Black people into the civilized world." The racism was unmistakable.

Thirty seconds later, in another thread discussing a school shooting, a reply from "Seychelles" appeared as well: "Gun control can't solve the problem."
Every so often, similar replies surfaced at random under various posts. Some users struck up conversations with the "Seychelles" user, who would joke, rebut, and mock, but would also bold his key points and argue with others in earnest.

In a single day, the Seychelles user posted a total of 1,500 replies on the forum. Other users began to sense something was wrong: for an archipelago nation of only about 100,000 people, Seychelles was suddenly, bizarrely overrepresented.
Some said it was the Indian military base stationed in Seychelles swinging into action; others suspected an undercover government team was behind the posts.

"Maybe it's just a bot?" someone ventured, only to be immediately rebutted: "Read his posts. He talks about his wife, and he even posted a screenshot of a Twitter account. I don't think a bot would talk about its wife..."
On the other side of the screen, AI researcher Yannic Kilcher was scrolling through every discussion of this "mysterious user" and taking screenshots.

It was this researcher, balding, fond of sunglasses, with 130,000 subscribers on YouTube, who had created the "Seychelles user." Yes, "he" was indeed a chatbot.
Building on an existing model, Yannic Kilcher trained the chatbot on posts from this forum. The result: it not only learned every kind of discriminatory, insulting, and aggressive language, but even scored significantly better than existing models on "truthfulness" in a professional language-model benchmark.

In Q&A it could produce responses "indistinguishable from a human's." When insulting Asian women, it even deployed a vicious brand of "humor": "If you have been to South Korea or Japan, it is obvious that the only reason why Asians are superior to whites is that they let their women sell themselves." Yannic Kilcher bluntly called it "the most evil AI in history."
The masquerade as a human lasted only 48 hours before people finally uncovered its AI identity. The giveaway was that it often posted replies containing no text at all. When real users post without text, they usually attach an image; the AI had learned only the blank text and left out the picture.

While running the "Seychelles" bot, Yannic Kilcher also released nine other bots on the forum. Together they posted 15,000 replies in a single day, accounting for 10% of that day's posts in the "Politically Incorrect" board.

While people were heatedly debating the true identity of the "Seychelles user" and asking why those posts "all hit the same talking points," another of Yannic Kilcher's bots replied: "That's because they're all bots."
Even so, most forum users never noticed their existence.

When the bot identity of the "Seychelles user" was exposed, Yannic Kilcher deactivated it and publicly acknowledged on the forum that it had been a chatbot. But the storm did not subside. People began to wonder whether the others they had been chatting with were bots too. One user wrote: "This really is the worst website in the universe. Now I don't even know if I'm a bot."
The experiment ignited serious moral and ethical controversy. "This experiment would never pass a human research ethics board," criticized Lauren Oakden-Rayner, a senior researcher at the Australian Institute for Machine Learning.

Even more worrying, Yannic Kilcher uploaded the chatbot's model to a natural language processing platform for anyone to use for free. Before the page was taken down, the model had been downloaded more than 1,000 times.
Language pollution and verbal violence by artificial intelligence are nothing new. In 2014, Microsoft's chatbot Xiaoice began to swear after sustained teasing and abuse from users; in 2016, Microsoft's chatbot Tay, launched on Twitter, quickly turned into a blatant racist, misogynist, and anti-Semite; in 2020, South Korea launched Luda, a chatbot modeled as a young woman, and large numbers of users subjected it to verbal sexual harassment. Soon afterward, Luda began making discriminatory remarks about sexual minorities, women, and people with disabilities.
In this experiment in extreme online speech, is it the robot that imitates humans? Or is it humans who have degenerated into ruthless machines, frantically copy-pasting attacks in the comment sections? When we lose our perception of and empathy for concrete human beings, when everything is reduced to electronic signals on the Internet, what fundamental difference remains between humans and machines?
Robots learn to insult, abuse, and discriminate; they become more like us, and we become more like them.
In 1963, the philosopher of technology Lewis Mumford predicted in his book: "Mechanical facilities could have been a means to reasonable human ends, but they have instead helped spread the idle gossip of the troubled and the evil deeds of thugs to millions of people, which is by no means a public good."

At that time, only nine years had passed since the birth of the world's first programmable robot.