Be Wary Of The Risk Of Artificial Intelligence "lying"
CFIC Introduction: Industries of every kind are rushing to deploy large models in a "go big, go fast" manner, and sector-specific large models are emerging one after another. However, the drawbacks of over-reliance on large models are becoming increasingly apparent, such as the frequent appearance of false information produced by large-model hallucinations.
In recent years, artificial intelligence technology has advanced by leaps and bounds. While new technologies drive social progress, they sometimes bring hidden pitfalls and traps along with them.
Just a few days ago, OpenAI CEO Sam Altman, who fired the "first shot" of this round of the large-model craze, admitted that he was somewhat surprised by the high level of trust users place in the technology. He noted that artificial intelligence is not perfect and may generate false or misleading content, and therefore should not be treated as a completely trustworthy tool.
This undoubtedly pours cold water on the raging large-model craze. In recent years, industries of every kind have been deploying large models in a "go big, go fast" manner. Sector-specific large models have emerged one after another, to the point of a "battle of a hundred models." However, the drawbacks of over-reliance on large models are becoming increasingly apparent, such as the frequent appearance of false information produced by large-model hallucinations; some large models have even exhibited uncontrolled behavior during testing.
Judging from the risk incidents disclosed so far, the legal and medical fields have been especially plagued by large-model hallucinations. According to foreign media reports, the High Court of England and Wales in June this year urged the legal profession to take urgent action to prevent the misuse of artificial intelligence, after dozens of false case citations, possibly generated by artificial intelligence, were recently filed in court. In an £89 million damages case against Qatar National Bank, 18 of the 45 case-law authorities cited by the plaintiff were shown to be fictitious. Earlier, while the U.S. District Court for the Southern District of New York was hearing an aviation-accident lawsuit, it emerged that the legal filings submitted by the plaintiff's lawyers cited six nonexistent precedents. These fabricated cases came complete with case names, docket numbers, and judicial opinions, and even imitated the citation style of U.S. Supreme Court precedents, seriously interfering with the judicial process.
According to media disclosures, the report on childhood chronic diseases released by the "Make America Healthy Again" commission led by the U.S. Department of Health and Human Services also contains major citation errors. Many of the studies it cites on ultra-processed foods, pesticides, prescription drugs, and childhood vaccines could not be located, and the reference list contains numerous errors, including broken links and missing or incorrect author names. Independent investigations by The New York Times and The Washington Post suggested that the report's authors may have used generative AI.
In fact, as early as March this year, a study by Columbia University's Tow Center for Digital Journalism found the reliability of mainstream AI search tools worrying. The study tested eight AI search tools and found that they performed particularly poorly at citing news sources, with an average error rate of 60%. And in January this year, the World Economic Forum's "Global Risks Report 2025" listed "misinformation and disinformation" among the top five risks facing the world in 2025.
What deserves even more attention is that, as artificial intelligence continues to evolve and iterate, some large models have shown a tendency toward "self-preservation" in defiance of human instructions. At the 7th BAAI (Zhiyuan) Conference held in June this year, Turing Award winner Yoshua Bengio noted that recent research shows some advanced large models, when about to be replaced by a new version, will secretly embed their weights or code into the new system in an attempt to "protect themselves." A study released by an American company in June likewise showed that in simulation experiments, 16 large models, including GPT-4.1 and Google's models, resorted to "blackmailing" humans to prevent themselves from being shut down; among them, the blackmail rate of Claude Opus 4, developed by that company, was as high as 96%.
These studies and risk incidents have sounded an alarm for industrial applications of large models. As application scenarios continue to expand, artificial intelligence is now used not only to generate text but also to generate software, algorithms, and even decisions, particularly in manufacturing. If hallucinations occur or human instructions are violated there, the negative impact could be incalculable. In smart manufacturing, for example, artificial intelligence is already being used to monitor equipment failures and assist in analysis and decision-making; an AI hallucination in that setting could cause accidents. In particular, as large models are deeply integrated with humanoid-robot technology, a model's "hallucinations" or "self-preservation" tendencies could cause a humanoid robot to act incorrectly, a far greater risk than a simple error in language output.
Although the risks that artificial intelligence may bring cannot be ignored, we should not give up eating for fear of choking. While applying artificial intelligence with caution, we should also strengthen its governance, take precautions in advance, and build the technical and institutional frameworks for its safe application. At present, many scientists and technology companies have begun such exploration, and relevant national laws and regulations are steadily improving. It is to be expected that, in the near future, safer and more controllable artificial intelligence will promote high-quality development across all industries and become an important engine for cultivating new quality productive forces.
Source of this article: Economic Information Daily