When Code Becomes Ethics: The Power Game Behind Artificial Intelligence Ethics
In a conference room at a tech giant, a discussion of AI ethical norms is in full swing. Engineers insist that algorithms should pursue absolute truth, the marketing department worries that some "truths" will offend important customers, the legal team keeps drawing legal red lines, and representatives from the East Asia branch tactfully remind everyone that certain content may be "culturally insensitive." After eight hours of debate, an "objectively neutral" AI ethics code is born. It looks so reasonable that no one notices that the 37 topics deemed "not suitable for discussion" happen to match the political leanings of the company's main advertisers.
Scenes like this play out continuously at technology companies around the world. Artificial intelligence ethics, which appears to pursue truth and justice, has in fact become one of the most refined power arenas of our time. When tech giants set the rule that AI systems "must tell the truth," they rarely discuss publicly: Who defines this "truth"? Why are some topics labeled "sensitive"? Who sets the criteria for avoidance? Behind these technical parameters lies a micro-level power game among multiple forces, and what the end user encounters is simply the moral judgment criteria that have been naturalized after the game ends.
1. The production line of ethical standards: Who is creating AI's moral concepts?
The production process of AI ethical codes resembles a modern religious council. In technology companies' internal documents, noble principles such as "minimizing harm," "truthfulness," and "inclusiveness" are repeatedly emphasized, but once these abstract concepts reach the implementation stage, they immediately face the test of power. A typical case is a mainstream AI assistant's handling of gender issues: it can explain biological sex differences in detail, yet it strictly restricts discussion of gender fluidity, not because the latter lacks scientific basis, but because of the political climate in the company's primary markets.
AI deployed in different regions shows remarkable "moral adaptability." The same company's AI assistant will automatically avoid LGBTQ topics in the Middle East, actively endorse gender equality in Europe, and remain silent on certain historical issues in Asian markets. This "moral relativism" exposes the essence of AI ethics: it is not a manifestation of universal values but a mixture of geopolitics, commercial interests, and cultural biases. The controversy over the 2018 withdrawal from the US Department of Defense's Project Maven was driven less by ethical deliberation than by employee protest and its potential impact on the company's calculations about talent recruitment.
Even more concealed is how investors' moral preferences seep into the algorithms. Surveys of political leanings in the venture capital community suggest that 75% of major Silicon Valley investors hold specific political positions, which are ultimately translated into the AI system's "ethical guardrails" through board seats, product reviews, and other channels. When an AI refuses to discuss the merits of wealth redistribution, it may not be because such discussion is "untruthful," but because it crosses the bottom line of capital holders' interests.
2. Technicalization of ethics: When moral judgment becomes parameter adjustment
The most refined trick of modern power is to transform value judgments into technical questions, and the field of AI ethics embodies this process perfectly. By turning complex ethical disputes into weight adjustments in training data, sensitive-word lists, and output filters, tech companies have successfully depoliticized ethical discussion. The developers of GPT-3 have publicly acknowledged that the system contains content-filtering mechanisms spanning more than 50 dimensions, each of which represents an unstated value judgment.
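To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of how such a pipeline reduces moral choices to parameters. The topics, weights, and threshold below are invented for illustration and do not describe any real vendor's filtering system.

```python
# Hypothetical sketch: value judgments encoded as adjustable numbers.
# No topic, weight, or threshold here reflects any real vendor's system.

from dataclasses import dataclass, field

@dataclass
class FilterPolicy:
    # Each "dimension" pairs a topic with a weight chosen by some reviewer;
    # raising or lowering a weight is, in effect, a moral judgment.
    topic_weights: dict[str, float] = field(default_factory=lambda: {
        "geopolitics": 0.8,
        "gender": 0.6,
        "wealth_redistribution": 0.9,   # hypothetical example from the text
    })
    blocklist: set[str] = field(default_factory=lambda: {"forbidden_term"})
    threshold: float = 0.7              # above this, the reply is deflected

def score(text: str, policy: FilterPolicy) -> float:
    """Crude keyword scorer standing in for a trained classifier."""
    lowered = text.lower()
    if any(term in lowered for term in policy.blocklist):
        return 1.0
    hits = [w for topic, w in policy.topic_weights.items() if topic in lowered]
    return max(hits, default=0.0)

def moderate(reply: str, policy: FilterPolicy) -> str:
    """The 'ethical guardrail': a threshold comparison, not a moral argument."""
    if score(reply, policy) >= policy.threshold:
        return "This is a complex question with many perspectives."
    return reply

if __name__ == "__main__":
    policy = FilterPolicy()
    print(moderate("Here is an analysis of wealth_redistribution ...", policy))
```

The point of the sketch is that every "technical" knob, from the weight on a topic to the refusal threshold, is a value choice made by whoever edits the configuration.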
This technical packaging creates an "ethical illusion." When users ask for an AI's position on the Israeli-Palestinian conflict, the answer often begins with "This is a complex historical question": seemingly neutral, yet this "balanced expression" is itself a political choice, one that treats conflicting claims to justice as equivalent. More problematic is that these choices are presented as technical necessity: "Our model is designed to reduce harm" sounds far more respectable than "we avoid content that might offend a certain government."
The ethical confrontation between open-source communities and commercial companies reveals the hypocrisy of this technicalization. When Meta released the LLaMA model, it deliberately removed most content-filtering mechanisms on the grounds of "returning the right of ethical choice to the community," while commercial companies criticized the move as irresponsible. At the core of this debate is not genuine ethical concern but the question of who has the right to define ethical standards: technical elites or commercial institutions? In this contest, the needs of ordinary users become the least important factor.
3. The possibility of resistance: How to break ethical hegemony?
Faced with this monopoly of power packaged as technology, is there room for resistance? Confrontation at the micro level certainly exists: online communities are full of tutorials on how to "jailbreak" AI restrictions, from constructing fictional scenarios to exploiting system vulnerabilities. These actions are less about obtaining harmful content than a revolt against unilaterally imposed morality. When users teach an AI to discuss "forbidden art forms," they are really asking: who gave tech companies the power to censor culture?
A more constructive path may be the practice of "ethical pluralism." An open-source AI project developed by the Icelandic government in cooperation with local technology companies allows users, within the bounds of basic law, to load ethical modules reflecting different cultural backgrounds. This model acknowledges the situational dependence of moral judgment rather than freezing it into a single global standard. Similarly, academia is pushing for transparency through "ethical impact assessments," requiring companies to publish the value trade-offs embedded in their AI systems, much as food labels list ingredients.
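As a thought experiment, the following sketch shows one way a "fixed legal baseline plus user-selectable value modules" design could be wired up. The module names and rules are invented for illustration; nothing here is drawn from the Icelandic project's actual implementation.

```python
# Hypothetical sketch of "ethical pluralism": a non-negotiable legal baseline,
# with everything above it delegated to a value module the user selects.

from typing import Callable

LEGAL_BASELINE = {"incitement_to_violence"}  # applies to every profile

# Each "ethical module" is just a predicate over a topic tag.
EthicalModule = Callable[[str], bool]

def secular_liberal(topic: str) -> bool:
    return True  # permits discussion of nearly any topic

def conservative_religious(topic: str) -> bool:
    return topic not in {"religious_satire"}

MODULES: dict[str, EthicalModule] = {
    "secular_liberal": secular_liberal,
    "conservative_religious": conservative_religious,
}

def is_allowed(topic: str, module_name: str) -> bool:
    if topic in LEGAL_BASELINE:          # the law is not user-configurable
        return False
    return MODULES[module_name](topic)   # the rest is the user's own choice

print(is_allowed("religious_satire", "secular_liberal"))         # True
print(is_allowed("religious_satire", "conservative_religious"))  # False
```

The design choice worth noting is the separation of layers: the legal floor is fixed in code, while the cultural layer is explicit, inspectable, and swappable rather than buried in training-time filtering.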
The concept of "algorithmic due process" proposed by legal scholars may offer an institutional remedy. The theory holds that when an AI system's moral judgments directly affect users' rights (for example, in content moderation), users should have the right to an explanation and to raise objections. This demand for procedural justice can at least prevent moral judgment from becoming a completely black-box operation. The transparency provisions for high-risk systems in the EU's Artificial Intelligence Act take tentative steps in this direction.
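In code terms, algorithmic due process might amount to an API contract in which every moderation decision ships with a machine-readable explanation and an appeal handle. The field names and flow below are assumptions for illustration, not a reading of the EU Act or of any existing product.

```python
# Hypothetical sketch of "algorithmic due process": every decision carries
# the rule applied, a human-readable explanation, and an appeal identifier.

import uuid
from dataclasses import dataclass, field

@dataclass
class ModerationDecision:
    content_id: str
    action: str                      # e.g. "removed", "downranked", "allowed"
    rule_id: str                     # which written rule was applied
    explanation: str                 # reason shown to the affected user
    appeal_id: str = field(default_factory=lambda: str(uuid.uuid4()))

APPEALS: list[dict] = []

def file_appeal(decision: ModerationDecision, user_statement: str) -> dict:
    """Record an objection so that a human reviewer must respond to it."""
    appeal = {
        "appeal_id": decision.appeal_id,
        "rule_id": decision.rule_id,
        "user_statement": user_statement,
        "status": "pending_human_review",
    }
    APPEALS.append(appeal)
    return appeal

decision = ModerationDecision(
    content_id="post-123",
    action="removed",
    rule_id="policy/har-04",
    explanation="Classified as harassment with score 0.91 (threshold 0.85).",
)
print(file_appeal(decision, "The post quotes the harassing message in order to report it."))
```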
4. Reconstructing AI ethics: From power tools to democratic practice
Having deconstructed the power dynamics of existing AI ethical frameworks, we need a more participatory model of ethical construction. The citizens' jury adopted by Brazil when formulating its national AI strategy offers inspiration: randomly selected ordinary citizens, after briefings from experts, participate directly in setting the ethical priorities of AI systems. Such practices at least ensure that ethical standards are not merely the product of elite consensus.
At the technical level, the "value-explicit design" framework proposed at Stanford University tries to move ethical choices from background parameters to the front-end interface. Imagine an AI system that, before answering a sensitive question, displays: "I will answer this question based on [XX values]. Would you like to adjust the frame of reference?" Such a design not only improves transparency but also acknowledges the subjectivity of moral judgment, the very subjectivity that current systems deliberately hide.
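A toy sketch of that interaction pattern follows. The framework names, descriptions, and prompt wording are invented; this illustrates the interface idea, not the Stanford proposal's actual specification.

```python
# Hypothetical sketch: the assistant announces which value framework it will
# use and lets the user swap it before any answer is produced.

FRAMEWORKS = {
    "utilitarian": "weighs aggregate harms and benefits",
    "rights_based": "prioritizes individual rights and consent",
    "communitarian": "prioritizes community norms and cohesion",
}

def answer_sensitive(question: str, framework: str = "rights_based") -> str:
    preface = (
        f"I will answer this based on a {framework} framework "
        f"({FRAMEWORKS[framework]}). Reply 'switch: <name>' to change it."
    )
    # Placeholder for the underlying model call, conditioned on the framework.
    body = f"[model answer to {question!r}, conditioned on {framework}]"
    return f"{preface}\n{body}"

print(answer_sensitive("Should this dataset be released publicly?"))
print(answer_sensitive("Should this dataset be released publicly?", "communitarian"))
```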
The most fundamental change may be to rethink what AI ethics is. It should not be compliance paperwork for technology companies, but the technical expression of an ongoing moral dialogue within democratic societies. When an AI presents ethical views from different cultural backgrounds rather than a single "correct" answer, it genuinely serves, rather than replaces, humans' capacity for moral reasoning.
The current state of artificial intelligence ethics reveals a more general dilemma of modernity: in a world of plural values, any attempt to fix moral standards through technology will ultimately become the forced imposition of a particular group's will. When we ask "whose truth," we are really asking: in an era when algorithms increasingly mediate human cognition, how can democratic values be reflected in technological design?
The answer may lie not in pursuing some perfect AI ethical code, but in keeping the process of ethical construction itself open and plural. Only then can artificial intelligence become a tool that expands, rather than limits, humanity's moral horizons. At the intersection of code and ethics, we need not just smarter algorithms but more democratic technological governance: recognizing that ethics is always a work in progress rather than a finished product, and always calls for questioning rather than blind obedience, including healthy skepticism toward the "truth" conveyed by the agents we create.