AI Ethics Work Is Burning Out Its Practitioners

Margaret Mitchell had been working at Google for two years before she realized she needed a break.

Cognitive dissonance

The role of an AI ethicist or AI ethics team member varies widely, from analyzing the social impact of AI systems, to developing responsible strategies and policies, to solving technical problems.

Often, these workers are also asked to find ways to mitigate the harms of artificial intelligence, from algorithms that spread hate speech, to systems that allocate housing and benefits in discriminatory ways, to the spread of violent imagery and language.

This includes trying to solve deep-rooted problems such as racial discrimination and sexism in AI systems, which can mean personally combing through large datasets full of toxic content.

Artificial intelligence systems can reflect the most serious problems in our society, such as racism and sexism, or even exacerbate them.

Infamous examples include image recognition systems that have labeled Black people as gorillas, and deepfake software that superimposes people's faces onto pornographic videos.

Addressing these issues is particularly taxing for women, people of color, and members of other marginalized groups, which adds to the strain of AI ethics work.

While burnout is not unique to AI ethics practitioners, all the experts MIT Technology Review interviewed said they face particularly thorny challenges in the field.

"The job you do puts you in harm's way every day," Mitchell said. "It makes the fact that discrimination exists that much worse because you can't ignore it."

But despite growing public awareness of the risks posed by AI, ethicists still find themselves struggling to gain buy-in from their colleagues in the AI field.

Some have even disparaged the work of AI ethicists. Emad Mostaque, CEO of an AI company that develops an open-source text-to-image system, said on Twitter that the ethical debate around his technology was "paternalistic." Neither Mostaque nor his company responded to MIT Technology Review's request for comment.

“Most of the people working in AI are engineers, and they are not interested in the humanities,” said Emmanuel Goffi, an AI ethicist and founder of the Global AI Ethics Institute, a think tank.

Companies just want a quick technical fix, Goffi says; they want someone "with three slides and four bullet points who can explain to them how to be ethical." Real ethical thinking, he adds, needs to go deeper, and it should be applied to how the entire organization operates.

"Psychologically, the hardest part is that you have to compromise between what you believe in and what you have to do every day, every minute," he said.

Mitchell said the attitude of tech companies, especially machine learning teams, exacerbates the problem. "Not only do you have to work to solve these tough problems, you have to demonstrate to them that these problems are worth solving. It's not about gaining support at all, it's about reducing resistance."

Rumman Chowdhury, who leads Twitter's machine learning ethics team, added: “Some people view AI ethics as a worthless field and believe that those of us who work in it are negative about the progress of AI.”

Social media also leaves researchers more exposed to attacks from critics. There's no point in engaging with people who don't value the work, Chowdhury said, "but if you're being tagged or targeted directly, or your work is being brought up, it's hard not to respond."

Rapid development and dangers

The rapid pace of AI research has not helped: new breakthroughs in the field arrive thick and fast.

In the past year alone, tech companies have released AI systems that generate images from text, only to be followed within weeks by even more impressive software that generates video from text.

This is impressive progress, but each new breakthrough can bring new potential harms and lasting negative effects.

Text-to-image AI can violate copyrights, and it can be trained on datasets filled with toxic content, leading to unsafe results.

“It’s exhausting chasing what’s trending on Twitter,” Chowdhury said.

She says it's impossible for an ethicist to become an expert on the myriad issues raised by each new breakthrough, but she still feels she has to follow every twist and turn of the AI information cycle for fear of missing something important.

Chowdhury said being part of a well-resourced team at Twitter gives her peace of mind, because she doesn't have to shoulder the burden alone.

"I knew I could go away for a week and it wouldn't be too much of a problem because there would be other colleagues," she said.

But Chowdhury works at a large tech company with the funds and the will to hire an entire team to work on AI ethics. Not everyone is as lucky.

Vivek Katyal, a data analyst at an Australian startup that researches data ethics, said people working at smaller AI startups face much greater pressure from investors to grow the business, and the contracts they sign with investors rarely specify how much will be spent on building responsible technology.

Katyal said the tech industry should push more venture investors to realize that "they need to pay more for more responsible technology."

The problem is, many companies can’t even see they have a problem, according to a 2022 report from MIT Sloan Management Review and the Boston Consulting Group.

Forty-two percent of respondents believe AI is a top strategic priority, but only 19 percent say their organizations have built a responsible AI program.

Abhishek Gupta, founder of the Montreal AI Ethics Institute, said some companies may believe they are thinking about how to reduce AI's risks, but they are not hiring the right people into the right roles, or giving them the resources needed to put responsible AI into practice.

“This is where people start to get frustrated and get burnt out,” he added.

Growing demand

Companies may soon no longer have a choice about whether to support AI ethics work, as regulators begin drafting laws targeting AI.

The EU's upcoming AI Act and AI Liability Directive will require companies to document how they mitigate harms.

In the United States, lawmakers in New York, California and elsewhere are working to regulate artificial intelligence in high-risk industries, such as recruiting.

In early October 2022, the White House released its Blueprint for an AI Bill of Rights, which outlines five protections Americans should have when it comes to automated systems.

The blueprint could push federal agencies to step up their scrutiny of AI systems and companies.

While a turbulent global economy has led to hiring freezes and mass layoffs at many tech companies, responsible AI teams have never been more important, as rolling out unsafe or illegal AI systems can put companies at risk of hefty fines or demands to remove their algorithms.

For example, in the spring of 2022, U.S. regulators found that Weight Watchers had illegally collected children's data and forced the company to delete the algorithms built from it.

Developing AI models and amassing databases is a significant investment, and being forced by regulators to remove them entirely is a huge blow to a company.

Burnout and a persistent feeling of being undervalued may drive people out of AI ethics altogether, which could set back AI governance and ethics research as a whole.

Those with the most experience addressing AI's harms are often the most exhausted, leaving them especially prone to disillusionment.

“Losing just one person can severely impact the entire organization because the expertise that one person has accumulated is extremely difficult to replace,” Mitchell said.

In late 2020, Google fired Timnit Gebru, the co-lead of its AI ethics team, and fired Mitchell a few months later; several other members of the team left in the months that followed.

Gupta said this brain drain poses "serious risks" to progress in AI ethics and makes it harder for companies to deliver on their responsible AI commitments.

In 2021, Google announced it was doubling its AI ethics research staff, but it has not commented on its progress since.

Google told MIT Technology Review that it offers mental health resiliency training and peer-to-peer mental health support programs, and gives employees digital tools to help them practice mindfulness.

Google also offers online mental health services to employees, but it did not respond to questions about Mitchell's time at the company.

Meta says it has invested in benefits such as 25 free therapy sessions a year for employees and their families.

Twitter said it offers employee counseling and coaching sessions, as well as burnout prevention training.

Twitter also has a peer support program focused specifically on mental health. None of the companies mentioned what support was provided specifically for employees working on AI ethics teams.

Gupta said demands around AI compliance and risk management are growing, and tech company executives need to ensure they are investing enough in responsible AI projects.

Change has to come from the top down. "Executives need to talk about the money, the time, the resources they will allocate," he said. Otherwise, those working on AI ethics are "doomed to fail."

Gupta added that successful responsible AI teams need enough tools, resources, and people to solve problems, but they also need relationships across teams and organizations, along with the authority to enact change.

Many of the mental health resources at tech companies focus on time management and work-life balance, Chowdhury said, but people whose jobs are inherently distressing need more support than that. She added that mental health resources tailored to technologists working on AI ethics would also help.

"There's not enough recognition for people doing this type of work yet, much less support or encouragement for people to move away from it," Mitchell said.

“The only mechanism for big tech companies to deal with this is to ignore the reality of it.”
