Taming AI with Law and Morality? A Warning on Regulatory Illusions丨Tsinghua SEM Says
In recent years, artificial intelligence has grown explosively worldwide, and AI is reshaping our lives at an unpredictable pace. This reshaping stems not only from the innovations of emerging technology; it also unsettles the foundations of traditional governance philosophy at the cognitive level. The regulatory frameworks we are accustomed to rest on the premise that human beings are the only intelligent agents, while AI systems are creating new cognitive entities on silicon-based substrates. Yet the existing AI governance system lags far behind AI itself in its speed of updating and iteration. While governments are still working through cumbersome procedures to regulate known AI risks, new risks have long since emerged. And when government governance lags, people pin their hopes for regulating AI risks on moral governance instead. Excessive dependence on and trust in moral governance, however, is itself an obvious hazard: it merely masks our anxiety and confusion in the face of regulatory uncertainty, ignoring and papering over the problems of AI regulation rather than solving them.
What risks arise from the development of AI? Why has government been unable to regulate AI risks effectively? What new flaws does the moralization of AI governance introduce? How should we face the new risks brought by AI's continued development? Where is human governance of AI headed? In response to these questions, Professor H., a sociologist at the National University of Singapore, and his team set out their main views in the 2025 volume of the journal Risk.
Recently, Feng Runhuan, a professor in the Department of Finance at Tsinghua University's School of Economics and Management, interpreted the paper's arguments in an article, "Taming AI with Law and Morality? A Warning on Regulatory Illusions," published in the Caixin Views column. The full text is reproduced below for readers.
Types of risks arising from AI development
As depicted in many Hollywood science fiction films, the greatest risk of AI development is that it causes the extinction of our species. Obviously, this risk exceeds the current level of human understanding. Academics who believe this risk exists point to a major turning point in technological development, the "singularity" of artificial intelligence, after which AI would evolve into superintelligence and enhance itself explosively. Epistemologically conservative observers believe this will never happen, while others regard it as a distant possibility that, if achievable at all, is decades away.
Beyond the extreme risk of species extinction, current AI technology gives rise mainly to eight high-probability, non-systemic risk scenarios: autonomous weapons systems going out of control and killing inhumanely; the combination of bioengineering and algorithms spawning artificial epidemics; technology empowering extremists with tools of mass destruction; deepfake technology collapsing informational trust in a post-truth era; algorithmic bias aggravating social hatred and ethnic confrontation; the intelligent upgrading of financial fraud; politically motivated manipulation of algorithms threatening the stability of social systems; and surveillance technology breaching the bottom line of privacy protection. Although these non-systemic risks do not reach the level of species extinction, their combined effects could systematically erode the foundations of modern society. Paradoxically, the leading AI nations and the AI industry claim on the one hand that technological development is unstoppable, while on the other calling urgently for legislation and oversight. This exposes the inherent conflict between regulatory consensus and commercial interest: leading AI nations and technology giants want to build moats through rules, yet fear that policy will slow their R&D. This contradictory mindset produces a serious disconnect between the technological rush and actual governance, a policy deadlock in which braking and racing proceed in parallel.
Limitations of government governance of AI risks
AI is a technology that will simultaneously transform politics, the military, business, and scientific research, and each of these four fields has its own rules of the game and its own interests. Politicians value the governance and power tools AI confers, the military is eager to use it to upgrade weapons systems, firms are driven by profit, and research institutions pursue discoveries that overturn received understanding. Precisely because everyone is rushing forward, laws and regulations cannot keep pace with the technology and become outdated almost as soon as they are issued. Worse, it is nearly impossible to get the whole world to agree on one set of regulatory rules: no one wants to hand a rival the advantage by being the one who follows them. There is no global "super international organization" governing AI, and countries that propose rules worry about damaging their own competitiveness, making concerted action harder still. Meanwhile, AI models are becoming cheaper and smaller, can even run on mobile phones, and can be downloaded, modified, and used by anyone. Widespread education, moreover, has given ordinary people the ability to use AI, so the net of regulation is full of holes. In other words, AI moves fast and spreads wide, while regulation is slow and fragmented; truly "controlling" it is very difficult.
The idea of the state as the regulator of AI risk rests, in essence, on three outdated cognitive frameworks. First comes the metaphorical trap of the "social organism": this eighteenth-century political philosophy compared society to the human body, holding that government, like the brain, can coordinate all the organs to work together. The theory ignores the reality of modern society, in which highly differentiated political, economic, and technological subsystems each follow their own operating logic, like clusters of smart devices running different operating systems: they share no unified instruction set and lack any natural mechanism for collaboration. The second, derivative bias is to imagine modern government as a precision central processor. In reality, regulatory agencies are loose coalitions of departments each protecting its own interests, and interdepartmental games often overwhelm any consensus on risk control. This structural contradiction points to the third fundamental misjudgment: the assumption that the state apparatus enjoys a "God's-eye view" transcending the social subsystems. Luhmann's theory of modern social systems reveals that all social systems, government included, are essentially self-referential closed loops whose decisions serve the system's own perpetuation before any overall interest. When AI regulation touches military secrets or tax revenues, so-called "risk control" inevitably degenerates into a utilitarian calculation about preserving power. This is the inescapable limitation government encounters in the age of computational governance.
The uncertainty and ambiguity of ethical regulation of AI risks
The potential risks of AI stem from programs carefully written by humans, and they cannot be fully resolved by moral self-discipline or lagging legislation alone. These risks are not the fantasies of science fiction; they are products "created" by skilled engineers, talented inventors, and entrepreneurs. As generative AI has entered public view, people have begun to worry that it will gain autonomy and threaten humanity. As the Nobel laureate has pointed out, even the designers cannot fully explain how AI systems make decisions, which means we know almost nothing about their motivations and possible trajectories. By analogy with biological evolution, more intelligent species tend to marginalize or replace weaker ones; but applying this logic crudely to AI risks falling into the trap of anthropomorphizing the machine. Here, although Nick Bostrom warned against underestimating the possibility that AI will one day surpass human intelligence, he did not fully consider the other side: a fully autonomous AI might not crave power at all and might be indifferent to humans, treating us as simply irrelevant. More optimistic posthumanists believe AI may merge with other cutting-edge technologies to help humanity complete a "conscious" evolution, "upgrading" our species into a new form of existence. If modern humans were superseded by a superior intelligence, it might not be the apocalypse it seems, for this would be a natural continuation of technological development.
As for morality and ethics, although ethics provides a framework for value judgment, it has no power of enforcement, just as legal norms without law enforcement behind them are difficult to implement. The law can maintain order to a degree because a complete apparatus of enforcement and monitoring stands behind it, and violators may face severe consequences. Ethical norms, by contrast, lack written provisions and external sanctions and rely more on the self-discipline of individuals or groups. Especially in highly competitive fields such as business, scientific research, and education, adherence to self-imposed rules is constantly tested by temptation; once substantive interests are at stake, the rules may come to nothing. In other words, we cannot fully trust moral self-discipline, nor can we expect lagging legislation and regulation to play a decisive role amid rapid technological iteration. Our response to AI risks must squarely face the "black box" character of the technology itself and the structural limits of social governance.
In the classic fable of Homer's Odyssey, Ulysses adopts a double protection mechanism against the deadly temptation of the Sirens: he has the crew seal their ears with wax to block the sound, and he has himself bound to the mast. This wise man clearly foresees that once the enchanting song penetrates the defenses of reason, only physical restraint can prevent self-destruction. Such farsighted self-binding offers a lesson for AI governance today. Just as the Sirens' song is wrapped in the sugar coating of "a bright future," the promises of convenience offered by AI technology also conceal risks, and relying on moral discipline alone is like expecting the wooden mast to withstand the storm by itself: only by converting ethical standards into binding institutional constraints, such as legal frameworks and technical guardrails, can a genuine risk-isolation mechanism be built. We might push the discussion one level deeper. Since the gap between the soft advocacy of technology ethics and hard legal enforcement is now widely acknowledged, it is more constructive to map AI's catastrophic risks onto reality: once the technology passes a critical scale, predictions long dismissed as alarmist may well unfold in complex systems. To rely at that point on the protective power of moral conscience would be like expecting Ulysses to resist the Sirens' song through willpower alone. Historical experience has long shown that the absence of institutional protection mechanisms leads inevitably to systemic failure.
To question the effectiveness of moral regulation, we must also recognize the high ambiguity and diversity of moral theory itself. The louder a moral argument, the further it stands from actual practice; it must be translated into specific precepts before it can be implemented, which is why applied ethics emerged. Ethical diversity means that countless conflicting value systems coexist, and agreement is hard to reach even within a single theory, so humanity cannot expect to find a clear operating manual in works of ethics. More subtly, this ethical ambiguity is not entirely unhelpful: it conveniently supplies policymakers with rationalizations, allowing them to execute predetermined plans with confidence while minimizing the cost of legitimacy, much as we condemn hypocrisy and opportunism in others while finding the same tendencies in ourselves from time to time. What follows is that once moral discourse intervenes, it divides society into a binary world of "we are right" and "they are wrong." At best this descends into a self-righteous war of words; at worst it intensifies confrontation and conflict. It is impossible to prove that "our" moral judgment is superior to anyone else's; on the contrary, unless one can don the cloak of a god, one should admit that we are often unable to determine or guarantee what absolute justice is. Holding on to this recognition, we can see more clearly that morality is often instrumentalized by the powerful, or else becomes self-deceiving chicken soup for the soul. Admittedly, this does not mean we must reject moral discourse entirely; a society that wholly abandoned morality would be a wasteland. It does require that ethics not overpromise when it calls for action, and that it maintain critical reflection on its own limits and historical failures.
Looking to the future: a virtuous cycle of human and AI cognition
Professor H.'s research conveys a core idea: it is difficult to withstand the catastrophic risks of AI by relying solely on government regulation and moral constraint; we must face squarely the deep interaction between technology and social structure. If we pin our hopes for resolving AI risks entirely on government power and moral norms, we easily fall into a trap of self-deception, because law and ethics have significant limitations in understanding and governing complex social systems. As social theory reveals, any act of regulation means intervening in a social world that is both complex and often idealized, a world whose workings we do not fully grasp. At the same time, contemporary social science generally avoids investing in theory-building, regarding it as impractical and resource-consuming, and this weakens our capacity for deep reflection on and systematic management of AI risks. More serious still, real social structures often exclude or even destroy genuinely effective regulatory attempts, so that even good rules may not deliver the expected results.
Plainly, a large body of cases records past failures of AI regulation, yet we still seem willing to seek answers in the same misleading script and find it hard to truly learn. In the AI face-swapping app incident of 2019, the platform shifted legal liability onto users through a "global, perpetual portrait license," and it took regulators three months to issue targeted rules. This lagging regulatory pattern mirrors the absence of ethical review exposed by Stanford University's sexual-orientation-recognition algorithm in 2017: both display the passive rhythm of "technology leads, governance ratifies after the fact." Insights into the limits of human rationality show further that hopes of managing AI risks through human reason and self-control are equally fragile. Human decision-making is not fully rational; it is routinely shaped by heuristics and cognitive biases, with blind spots such as overconfidence and loss aversion everywhere.
This is not, however, a wholesale denial of the rationality of government and moral governance; the point is that people should understand AI risks, and the shortcomings of current governance methods, correctly. Risk can never be eliminated, but new means of mitigating it can be tried. Professor H. writes at the end of the article: "Perhaps some cognitive upgrading by AI will not be a bad thing after all; to some extent it may enhance 'our' future ability to resist AI's destructive tendencies." It is therefore better to consider a new path: let AI's iteration drive a synchronized upgrading of human cognition, helping us identify AI risks correctly and in good time and remedy the vulnerabilities of existing governance. Simply put, AI is not only a continuously iterating productivity tool but also a "cognitive partner." We can use AI's simulation and visualization capabilities to reveal biases in our decision-making in real time, and let sustained dialogue with AI guide users from intuition toward more careful analysis; at the same time, AI can use big-data feedback to remind us which of our views or behaviors are susceptible to emotion and inertia. In this way, as AI's capabilities advance, humans continually correct their own cognitive models, and the two form a "virtuous cycle": AI comes to understand human nature better, and we become better at steering AI, so that together we can build a more solid line of defense against possible loss of control in the future.

Feng Runhuan
Professor of Finance at the School of Economics and Management, Tsinghua University; Executive Editor of the international journal Risk; Fellow of the Society of Actuaries (FSA); and Chartered Enterprise Risk Analyst (CERA). Before joining Tsinghua, Feng Runhuan held a State Farm endowed chair professorship at the University of Illinois at Urbana-Champaign and led the finance and insurance area at the University of Illinois System's innovation institute in Chicago. His main research interests include risk governance, pension finance, commercial insurance and social security, and financial technology.
