Recommendations From Experts In Artificial Intelligence Ethics
As a computer scientist who has been immersed in the field of artificial intelligence ethics for about a decade, I have witnessed the development of this field firsthand. Nowadays, more and more engineers are finding that while developing artificial intelligence solutions, they also need to grapple with complex ethical issues. In addition to technical expertise, responsible AI deployment requires a thorough understanding of its ethical implications.
As IBM's AI ethics global leader, I have seen a significant shift in how AI engineers work. They no longer talk only to other AI engineers about how to build the technology; they now need to engage with a broader set of people, such as those who understand how their creations will be used and the communities they will affect. A few years ago, we realized at IBM that AI engineers needed to add extra steps, both technical and administrative, to their development process.
We have created systematic playbooks that point engineers to the right tools for testing issues such as bias and privacy. Knowing how to use these tools correctly is crucial: there are, for example, many different definitions of fairness in artificial intelligence, and deciding which definition applies requires consultation with affected communities, customers, and end users.
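To make the point about competing fairness definitions concrete, here is a minimal sketch on made-up toy data (the labels, predictions, and group assignments are hypothetical). It shows that the same predictions can satisfy demographic parity (equal selection rates across groups) while violating equal opportunity (equal true positive rates):

```python
def selection_rate(y_pred, group, g):
    """Fraction of positive predictions within group g."""
    preds = [p for p, gr in zip(y_pred, group) if gr == g]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, group, g):
    """Fraction of actual positives in group g that were predicted positive."""
    hits = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
    return sum(hits) / len(hits)

# Hypothetical toy predictions for two demographic groups (0 and 1)
y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

dp_gap = selection_rate(y_pred, group, 1) - selection_rate(y_pred, group, 0)
eo_gap = true_positive_rate(y_true, y_pred, group, 1) - true_positive_rate(y_true, y_pred, group, 0)

print(dp_gap)  # 0.0 -> "fair" under demographic parity
print(eo_gap)  # 0.5 -> unfair under equal opportunity
```

Which gap matters depends on the application, which is exactly why the choice of definition has to be negotiated with stakeholders rather than hard-coded by engineers.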
Education plays a key role in this process. When we piloted our AI ethics playbook with engineering teams, one team believed their project had no bias issues because it did not use protected variables such as race or gender. They did not realize that other features, such as postal codes, can be closely correlated with protected variables. Engineers sometimes believe that technical problems can be solved with purely technical solutions. Software tools are useful, but they are just the beginning. The bigger challenge is learning to communicate and collaborate effectively with diverse stakeholders.
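The postal-code point can be illustrated with a small sketch (the records below are entirely hypothetical): even after the protected attribute is dropped from the model's inputs, a retained proxy feature can recover most of it.

```python
from collections import Counter, defaultdict

# Hypothetical toy data: the protected attribute is absent from the model's
# inputs, but postal code (a retained feature) tracks it closely.
zips   = ["10001", "10001", "10001", "60601", "60601", "60601", "10001", "60601"]
groups = ["A",     "A",     "A",     "B",     "B",     "B",     "B",     "A"]

by_zip = defaultdict(list)
for z, g in zip(zips, groups):
    by_zip[z].append(g)

# How well does postal code alone recover the protected attribute?
majority = {z: Counter(gs).most_common(1)[0][0] for z, gs in by_zip.items()}
hits = sum(majority[z] == g for z, g in zip(zips, groups))
print(hits / len(zips))  # 0.75 -> postal code leaks most of the group signal
```

A bias audit therefore has to look at what the features encode, not just at which column names appear in the schema.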
The pressure to release new AI products and tools quickly can conflict with thorough ethical assessment, which is why we established centralized AI ethics governance through IBM's AI Ethics Board. Individual project teams often face deadlines and quarterly targets that make it hard for them to weigh the wider impact on reputation or customer trust, so principles and internal processes should be centralized. Our customers, themselves other companies, are increasingly asking for solutions that respect certain values. Regulations in some jurisdictions now also mandate ethical considerations, and even major AI conferences require papers to discuss the ethical impact of the research, prompting AI researchers to consider the consequences of their work.
At IBM, we have been developing tools that focus on key issues such as privacy, explainability, fairness, and transparency. For each focus area we have created an open-source toolkit, complete with code guidance and tutorials, to help engineers apply them effectively. But as the technology evolves, so do the ethical challenges. Generative artificial intelligence, for example, raises new problems such as the creation of potentially offensive or violent content, as well as hallucinations. As part of IBM's model family, we have developed guardrail models that evaluate input prompts and model outputs for issues such as factuality and harmful content. These models serve both our internal needs and those of our customers.
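The input/output screening pattern described above can be sketched as follows. This is a simplified stand-in, not IBM's actual implementation: the `screen` function here is a keyword check, where a real guardrail would be a trained classifier model, but the wrapping structure is the same.

```python
# Hypothetical blocked phrases standing in for a trained guard model's policy.
BLOCKED_TERMS = {"make a weapon", "credit card numbers"}

def screen(text):
    """Stand-in for a guard model: flag text containing blocked phrases."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(prompt, model):
    if screen(prompt):                       # screen the input prompt
        return "[request declined by input guard]"
    output = model(prompt)                   # call the underlying generator
    if screen(output):                       # screen the model's output
        return "[response withheld by output guard]"
    return output

# Toy "model" for demonstration: just echoes the prompt back.
echo_model = lambda p: f"Here is an answer to: {p}"
print(guarded_generate("How do I make a weapon?", echo_model))
print(guarded_generate("What is AI ethics?", echo_model))
```

Screening both sides matters: the input check catches obviously harmful requests early, while the output check catches harmful or false content the generator produces on its own.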
Corporate governance structures must be flexible enough to adapt as the technology evolves. We continue to evaluate whether new advances such as generative AI and agentic AI amplify or reduce particular risks. When models are released as open source, we assess whether they introduce new risks and what safeguards are needed.
For AI solutions that raise ethical concerns, we have an internal review process that may lead to modifications. Our assessments go beyond the technology itself (fairness, explainability, privacy) to cover how it is deployed: a deployment may either respect or undermine human dignity and agency. We conduct a risk assessment for each use case of a technology, because understanding the risk requires understanding the context in which the technology will operate. This approach is consistent with the framework of the European Union's AI Act, which does not treat generative AI or machine learning as inherently risky, but recognizes that particular use cases may be high- or low-risk. High-risk use cases demand additional scrutiny.
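The use-case-based framing can be sketched as a simple lookup; the tiers and use cases below are illustrative inventions for this sketch, not an actual regulatory mapping. The point is that the same underlying model triggers different levels of review depending on where it is deployed.

```python
# Hypothetical use-case-to-risk-tier mapping, echoing a context-based
# framework: the model is not risky in itself, its deployment context is.
RISK_TIERS = {
    "movie recommendation": "minimal",
    "customer support chatbot": "limited",
    "resume screening": "high",
    "credit scoring": "high",
}

def required_review(use_case):
    """Return the risk tier and whether extra review is needed pre-deployment."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return (tier, tier == "high")   # high-risk use cases get additional scrutiny

print(required_review("resume screening"))     # ('high', True)
print(required_review("movie recommendation")) # ('minimal', False)
```

In practice such a table is the output of the risk assessment, not its input: each new use case is evaluated in context before it is assigned a tier.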
In this rapidly evolving environment, responsible AI engineering requires constant vigilance, adaptability, and a commitment to the ethical principle of putting human well-being at the center of technological innovation.
Author: Rossi
IEEE
"Technology Overview"