AI Ethics

AI Ethics Research Is Burning Out Its Practitioners

When Margaret Mitchell realized she needed a break, she had been working at Google for two years.

“I started having mental breakdowns regularly,” said Mitchell, who founded and co-led the company’s AI ethics team. “This is something I’ve never experienced before.”

After talking to a psychotherapist, she understood the problem: her work had exhausted her. Eventually, under too much pressure, she took medical leave.

Mitchell is now the chief ethics scientist at the AI startup Hugging Face, and her experience is not an isolated case.

Burnout is becoming increasingly common among teams working on “responsible artificial intelligence (AI),” said Abhishek Gupta, founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group.

Tech companies face growing pressure from regulators and activists to ensure their AI products are developed in ways that mitigate potential harms.

As part of their response, they have invested in teams that evaluate how the design, development, and deployment of AI systems affects people's lives, societies, and political systems.

Tech companies such as Meta have been compelled to provide compensation and additional mental-health support to employees such as content moderators, who must routinely screen graphic and violent material that can cause trauma.

But employees told MIT Technology Review that teams working on responsible AI are often left to fend for themselves, even though the work can be just as draining as content moderation.

Ultimately, this can leave people on these teams feeling undervalued, harming their mental health and leading to burnout.

Rumman Chowdhury, who leads Twitter's Machine Learning Ethics, Transparency, and Accountability team and is a pioneer in applied AI ethics, faced the same problems in her previous positions.

"For a moment, I felt extremely tired and a little desperate."

All practitioners interviewed by MIT Technology Review talked enthusiastically about their work: it is driven by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But without the right support, this sense of mission may be unsustainable.

"You feel like you can't relax for a moment. The group of people working in tech companies are their job to protect users on this platform. I have a feeling that if I go on vacation, or if I don't have 24 hours a day," Choudhury said. Keep your attention and something bad will happen.”

Mitchell also continues to work on AI ethics, she said, “because it is so needed, and it's clear that few of the people actually working in machine learning are aware of it.”

But there are many challenges at the moment. Tech companies put enormous pressure on individuals to fix major systemic problems without proper support, while those individuals face a constant stream of harsh criticism online.


(Source: MITTR)


Cognitive dissonance

The roles of AI ethicists or members of AI ethics teams vary widely, from analyzing the social impact of AI systems, to developing responsible strategies and policies, to solving technical problems.

Often, these workers are also asked to find ways to mitigate the harms of AI, from algorithms that spread hate speech, to systems that distribute housing and welfare in discriminatory ways, to the spread of violent images and language.

That includes grappling with deep-rooted problems such as racial and gender discrimination in AI systems, which can mean personally sifting through large datasets full of toxic content.

Artificial intelligence systems can reflect the most serious problems in our society, such as racism and gender discrimination, and even exacerbate them.

The technologies in question include facial recognition systems that have classified Black people as gorillas, and deepfake software that superimposes real people's faces onto pornographic videos.

Dealing with these issues is especially taxing for women, people of color, and members of other marginalized groups, which adds further pressure to AI ethics work.

While burnout is not unique to AI ethics practitioners, all the experts interviewed by MIT Technology Review say they face particularly tricky challenges in the field.

"The work you do makes you hurt every day. It makes the fact that discrimination exists worse because you can't ignore it," Mitchell said.

But even as the public becomes more aware of the risks AI poses, ethicists find themselves still fighting for recognition from colleagues in the AI field.

Some even belittle the work of AI ethicists. Emad Mostaque, CEO of Stability AI, a company that develops open-source text-to-image AI systems, said on Twitter that the moral debate around his technology was "paternalistic." Neither Mostaque nor the company responded to MIT Technology Review's request for comment.

"People who work in the field of artificial intelligence are mostly engineers, and they are not interested in humanities," said Goffi, an artificial intelligence ethicist and founder of the think tank Global Institute for Artificial Intelligence Ethics (AI).

Goffi said companies want only a quick technical fix; they want someone to "explain how to be ethical in three slides and four bullet points." Ethical thinking, he added, needs to go deeper and be applied to the way the entire organization works.

“Psychologically, the hardest part is that you have to compromise every day, every minute, between what you believe and what you have to do,” he said.

Mitchell said the attitude of tech companies, especially their machine-learning teams, exacerbates the problem. "Not only do you have to work on these hard problems, you have to prove they are worth solving. So instead of getting support, you get pushback."

"Some people think (AI) ethics is a worthless field, and believe that we ethical workers are negative about the advancement of AI," Choudhury added.

Social media also leaves researchers more exposed to attacks from critics. Chowdhury said it makes no sense to engage with people who don't value your work, "but if you're tagged or specifically attacked, or your work is brought up, it's hard not to respond."

Rapid development and crisis

The breakneck pace of AI research does not help: new breakthroughs in the field come thick and fast.

In the past year alone, tech companies have launched AI systems that generate images from text, and within just a few weeks even more impressive AI software that generates video from text appeared.

This is impressive progress, but each new breakthrough can bring new potential harms and lasting negative effects.

Text-to-image AI can infringe copyright, and because such systems are trained on datasets full of toxic content, they can produce unsafe output.

"Chasing on the hot topics on Twitter is exhausting," Choudhury said.

Ethicists cannot be experts on the myriad issues raised by every new breakthrough, she said, yet she still feels she has to follow every twist of the AI news cycle for fear of missing something important.

Chowdhury said that being part of a well-resourced team at Twitter has helped, because she does not have to bear the burden alone.

"I know I can leave for a week and there won't be much problem because there are other colleagues," she said.

But Chowdhury works at a large tech company with the funding and the will to hire an entire team to work on AI ethics. Not everyone is so lucky.

Vivek Katial, a data analyst at an Australian startup focused on ethical data analytics, said people at smaller AI startups face far greater pressure to drive business growth, because contracts signed with investors do not spell out any obligation to "spend XXX to build responsible technology."

Katial said the tech industry should demand more of venture capitalists, who "need to recognize that they have to pay more for technology that is more responsible."

The problem is that, according to a 2022 report by MIT Sloan Management Review and Boston Consulting Group, many companies do not even see that they have a problem.

While 42% of respondents said AI was a top strategic priority, only 19% said their organization had implemented a responsible-AI program.

Gupta said some people may believe they are considering how to reduce AI's risks, but they have not hired the right people into the right roles, or given them the resources needed to put responsible AI into practice.

“That's when people start to feel frustrated and burned out,” he added.

Growing demand

Companies may soon no longer have a choice about whether to support AI ethics work, as regulators have begun drafting laws on AI.

The EU's upcoming AI Act and AI liability law will require companies to document how they mitigate harms.

In the U.S., lawmakers in New York, California, and elsewhere are working to regulate the use of AI in high-risk areas such as hiring.

In early October 2022, the White House unveiled its Blueprint for an AI Bill of Rights, which lays out five protections Americans should have with respect to automated systems.

The blueprint could push federal agencies to increase their scrutiny of AI systems and the companies behind them.

Although the turbulent global economy has led many tech companies to freeze hiring and carry out mass layoffs, responsible-AI teams have never been more important: launching unsafe or illegal AI systems can expose a company to huge fines, or to demands that its algorithms be deleted.

For example, in the spring of 2022, U.S. regulators found that Weight Watchers had illegally collected data on children, and the company was forced to delete its algorithms.

Developing AI models and collecting datasets is a major investment, and being forced by regulators to delete them outright is a heavy blow to a company.

Exhaustion and a persistent sense of being undervalued may drive people out of the field of AI ethics altogether, which would harm AI governance and ethics research as a whole.

Those with the most experience in addressing the harms caused by AI are often the most exhausted, so the risk that they leave the field disillusioned is especially high.

“Losing just one person can seriously affect the entire organization, because the expertise one person accumulates is nearly impossible to replace,” Mitchell said.

At the end of 2020, Google fired Timnit Gebru, the co-lead of its AI ethics team, and fired Mitchell a few months later; several other members of the team left in the months that followed.

Gupta said this loss of talent poses a “serious risk” to progress in AI ethics and makes it harder for companies to keep their responsible-AI programs on track.

In 2021, Google announced that it would double its AI ethics research staff, but it has not commented on its progress since.

Google told MIT Technology Review that it offers mental-health resilience training and a peer-to-peer mental-health support program, and provides employees with digital tools for mindfulness.

Google also provides employees with online mental-health services, but it did not answer questions about Mitchell's time at the company.

Meta said it has invested in benefits such as 25 free therapy sessions a year for employees and their families.

Twitter said it offers employee counseling and coaching courses, as well as burnout prevention training.

Twitter also has a peer-support program focused specifically on mental health. None of the companies said what support, if any, is offered specifically to employees on their AI ethics teams.

Gupta said demand for AI compliance and risk management is growing, and tech executives need to make sure they invest enough money in responsible AI projects.

And that change has to start at the top. “Executives need to discuss the money, time, and resources they will allocate,” he said. Otherwise, people working in AI ethics “are bound to fail.”

Successful responsible-AI teams need sufficient tools, resources, and people to solve problems, but they also need relationships across the organization, as well as the authority to push for change, Gupta added.

Many of the mental-health resources tech companies offer focus on time management and work-life balance, but people whose work is emotionally and psychologically jarring need more support than that, Chowdhury said. She added that mental-health resources aimed specifically at people working on AI ethics would also help.

“People haven’t yet developed enough recognition for those who do this kind of work to support them or to encourage them to step away from it,” Mitchell said.

“The only mechanism big tech companies have for dealing with this is to ignore the reality of it.”
