AI Ethics

Artificial Intelligence Ethical Construction (5)

Text | Guanxin Jushi

This article continues the "Ethical Path to an Intelligent World" series of papers I began publishing in 2022. Since the subject matter remains AI ethics, it carries on the series' title numbering.

You may have had this experience: while handling a user's request, some AIs frequently refuse service, citing "copyright reasons" or "privacy policy," even though the requested task involves no copyright or privacy issue at all.

The root cause of AI's habitual "presumptive refusal" is that developers have embedded inappropriate "copyright ethics" into the AI's presets, which distorts its service logic to a considerable extent.

Developers' understanding of copyright ethics diverges in a whole series of ways from the practical ethics of how copyright is actually used online.

Developers do not know how to establish AI service specifications grounded in human usage ethics. Mechanically embedding one or a few copyright-related service-logic labels into an AI does not turn that service logic into a complete ethics that can be genuinely and effectively applied.

This stems from a serious gap in developers' humanistic knowledge. The current problems of AI retardation and refusal to serve are, at root, due to development teams' inadequate grounding in humanistic ethics: developers understand neither AI service ethics nor users' usage ethics.

Whenever developers preset a copyright stance for an AI, even if that preset is nothing more than a one-word label, it will seriously distort the AI's copyright-ethics judgments.

For the processing logic of copyright and privacy, either preset no restrictions at all, or give the AI a complete ethical closed loop. Do not give it a bare general-principle label: that creates conflict between service logic and ethical norms, producing "conservative refusal" and other negative effects.

The core of an AI's "overall cognition" must be a lightweight, personified model; the lighter, the better. The lighter this core, the fewer the conflicts between service logic and applied ethics in later training.

But lightweight and shabby are two different things. The lightness must rest on a basic yet complete understanding of the world: a well-designed, complete, and simple base model. The sparrow may be small, but it has all its organs. A model that reaches this standard is the ideal base model.

I proposed this development approach back in 2022, but looking at the field now, including every current AI, no one has built the "overall cognition" core well. They may command enormous computing power and algorithms for processing massive data, yet they are unquestionably "sprinting down a dead end."

The essential capability of AI lies not in the concurrent processing of massive data but in its intelligent nature. Many practitioners have been confused on this point from a few years ago right up to now.

Whether an AI deserves to be called an intelligent agent should be judged by whether it possesses a considerable degree of "overall cognition." The core functions of the initial AI architecture are recognition and generation. Starting directly from there and feeding in data and labels at scale will indeed produce "intelligence emergence."

But this kind of "intelligence emergence," which leaps over "overall cognition," is diseased. When such a model's data accumulates to a certain point, it will keep generating errors in its services, and at some point the entire model will collapse.

Such "second-stage agents" exist today, and no collapse has yet been observed, but several mainstream AIs already show clear warning signs. The data is still accumulating and the time window has not yet arrived; as time passes, collapse becomes increasingly likely.

Perhaps some AIs have already suffered model collapse and their developers simply rolled back to earlier versions. I have not followed this closely.

Although all current mainstream AIs exhibit "intelligence emergence," they all remain at the evolutionary stage of "second-stage intelligence." There are differences, though: a few agents have built some degree of "overall cognition," while others have no "overall cognition" construction at all.

Accordingly, their windows of onset will differ widely. An AI that has never built "overall cognition" will one day suffer a model collapse that cannot be repaired.

Its developers will then either scrap the product, or filter out all the later accumulated data and restart from something close to the original-stage version. That can only deceive users temporarily. Without the construction of "cognitive intelligence," a so-called intelligent agent, even one with second-stage emergent intelligence, will forever face the many problems of AI retardation and will never have any possibility of rising to third-stage intelligence.

An AI's "overall cognition" is a set of ethical cognitive models, not a set of high-powered algorithmic models. The pursuit of high-powered algorithms, the contempt for ethical cognition, and even total ignorance of the ethical path by which AI achieves intelligence have driven virtually all of China's small and medium-sized AI companies to "surge down a dead end."

Although "overall cognition" should be lightweight, once the AI enters domain applications, a service specification should first be established for each domain. This specification, which is the AI's service logic, should be grounded in users' usage ethics: align with user ethics to the greatest possible extent first, and only then align with the massive service-logic training data. That is the correct order.

Aligning service ethics with user ethics may be hard to achieve with nothing but impressively credentialed science and engineering graduates of China's elite "985" universities; their humanistic knowledge may simply not be enough to build this for an AI.

The cultural atmosphere of contemporary China differs greatly from that of Europe and the United States, where humanism runs strong. Young people there need no special training: steeped in the humanities since childhood, they carry robust humanistic ideas and ethical values.

Even among China's best-educated 985 science and engineering students, humanistic influence during their upbringing is essentially zero. The education they received was little more than instrumental value-thinking, lacking ethical and philosophical judgment.

Never mind that building a personified "overall cognition" for AI would be difficult for them; even if they tried, such an ethical path would not occur to them.

If they could conceive of it, today's small and medium-sized Chinese AI companies would never be forcing their employees into a state of utter powerlessness, full of complaints all day yet unable to move development forward.

The recruitment directions of these AI companies make it obvious: they are all chasing "algorithm majors" and "math geniuses." This recruitment model runs directly counter to what the companies actually want to achieve.

All current AI development companies hold fundamental misunderstandings about talent recruitment and team structure.

If AI is to develop genuinely and rapidly, what it really needs is not algorithm people, still less master mathematicians, but people skilled at ethical review. Every AI team should staff ethics experts at a proportion of 30% to 60%.

This is a necessary condition for AI to break through its current bottleneck. Without it, AI companies large and small will never overcome the severe, moderate, and mild grades of "artificial intelligence retardation": their AIs will forever carry large numbers of basic, common-sense cognitive flaws and improper service logic.

What is an ethics expert? No such specialized profession previously existed anywhere in the world. In my series "Ethical Path to an Intelligent World" a few years ago, I called for ethics experts to participate. In recent years, positions responsible for ethical argumentation and inspection may have begun to appear in some AI teams; indeed, we have seen such postings on some job sites.

I was the first person in the world to systematically discuss and advocate AI ethics (in 2022), and the first author on AI to propose that the key step in AI development is shaping "overall cognition."

As early as 2018, I strongly advocated conventional software ethics (including user ethics). Perhaps in the future, across AI and the entire software industry, there will be emerging disciplines such as "AI Ethics" or "Software Ethics," or "AI Ethics Construction" and "Software Ethics Review," for universities to study.

AI-ethics-related positions have begun to appear in the full-time and part-time channels of some job sites. But the professional competence of those currently filling such positions is probably far from "professional."

Judging from the experience of using today's mainstream AIs, every one of them exhibits massive, obvious, even serious service-logic errors, to say nothing of the small and medium-sized AI companies. Even where major AI teams have similar positions, the occupants' capacity to do the job is seriously inadequate.

The current mainstream AIs are neither as weak as the public thinks, nor as strong as the public thinks.

It is not difficult for an AI to display human-like "intelligence emergence." Feed a large corpus and behavioral labels into the original recognition-and-generation architecture, apply some fine-tuning, and "intelligence emergence," or "wisdom emergence," will appear.

But when it comes to an AI's "overall cognition," if developers cannot revise and improve the AI's cognitive system through ethical construction in every respect, and instead merely feed it knowledge data, the result is like a three-year-old child fused with a superman.

People often see only the superhuman side of AI and overlook its retarded side. The retarded side of AI is fundamentally caused by disordered service ethics (service logic) and the failure to establish a complete set of service specifications.

When retardation and superpower appear simultaneously in one AI, it means the "overall cognition" construction stage was left essentially blank, with massive data fed directly onto a "borrowed" initial architecture.

The AI of one large shopping platform falls into this category; its probability of overall model collapse is very high. This is why I said earlier that the "overall cognition" of several of today's most mainstream AIs is not merely weak: it is, in fact, absent.

Skipping "overall cognition" construction and entering the long feeding process directly is a gravely mistaken development method; it omits the most crucial part. Why is it omitted? Because developers do not know that AI development should be divided into three stages. I will take this up in upcoming topics such as "The Three-Stage Principle of Artificial Intelligence."

AI development must include an initial ethical construction, and "overall cognition" is that initial ethical construction.

A qualified software ethics expert should possess at least this professional quality: handed any app and given a short time using it, they quickly discover the app's functional defects and improper service logic.

Judging by the large-scale apps already on the market, even major platform software that has operated for over a decade with huge revenue still carries large numbers of functional bugs and seriously improper service logic, at times even trampling on users' usage ethics.

If a software ethics expert takes up a piece of software and cannot quickly discover its functional bugs and improper service logic, they are not qualified for a software ethics position.

Functional bugs and improper service logic are, in essence, failures of developers to know and understand users' usage ethics, and so they too fall within the category of ethics.

Discovering functional bugs and improper service logic is only the preliminary requirement for a software ethics expert. An ethics expert must also deeply understand the trade-offs between the laws, morals, and civilizational values of human society and practical action.

This involves knowledge of law, Eastern and Western philosophy, history, religion, culture, and art. Yet if scholars and experts in these fields lack a basic understanding of how AI works, a development team that recruits them rashly will most likely do the AI more harm than good.

If a person cannot integrate civilizational knowledge with technological ethics, then the key ethics within that knowledge cannot serve the realization of an AI world.

AI ethics is a hurdle no AI company can avoid. It is the biggest issue facing AI development today and for the next ten years.

If an AI company does not do its utmost to build an ethics-expert team now, then over the next ten years, as such teams actually take shape elsewhere, the AI teams without one will be eliminated by the market.

In the past two years, large numbers of AI development companies have dissolved shortly after founding. The core problem is that they never broke through the bottleneck of "artificial intelligence retardation." Technology executives, including at some mid-sized AI startups, have always believed the crux lies in algorithms. This is utterly mistaken.

As early as 2022, a few months before its launch, I pointed out in "Ethical Path to an Intelligent World" that the key to breaking through the bottleneck of artificial intelligence at its then-current level was to establish a personified "overall cognition" on top of the initial architecture. This is the key step.

Within a few months, it was released to worldwide astonishment. I am in no position to guess whether there is a causal relationship between the introduction of my "overall cognition" theory and its advent, but the timeline aligns closely with the development principle.

All intelligent agents must be based on the "overall cognition" model. Every AI company in the future will remember these words. This is what I proposed in 2022.

The new round of theoretical articles on artificial intelligence that I will begin publishing in 2025 will once again promote the prosperity of the AI industry and help today's small and medium-sized AI companies break through the "AI retardation" bottleneck.

Establishing overall cognition and domain service specifications falls within the work of an AI Ethics Construction Group. At present, these small and medium-sized domestic AI startups have no development ideas and lack basic understanding of AI development principles. In the words of one of their own employees, they are "running down a dead end." How can real intelligence be developed that way?

Their employees have no idea how to improve, yet they too sense that the development direction is wrong. And the executives are often in even worse cognitive shape than their employees, because their technical obsession runs deeper. This is an intellectual barrier that nearly all science-and-engineering algorithm masters find hard to escape.

For the past few years I did not pay special attention to the state of AI development. I assumed the various domestic AI development companies would have at least a basic outline of the AI development path, or would have sufficiently recognized the importance of "overall cognition." Looking now, essentially none have.

As for the principle of staged AI development, I will discuss it in the forthcoming "The Three-Stage Principle of Artificial Intelligence." Please watch for it.

In the era of general-purpose apps, software-ethics review mattered too, but ordinary users did not feel its absence so keenly.

In the AI era, whether an AI's service logic is reasonable or improper is something people will feel ever more distinctly as time passes. AI applications have penetrated every aspect of life, study, and work, and people's demands for ethically well-adapted AI service logic will only rise.

The service ethics of AI must be grounded in users' usage ethics, and the two must be aligned. Users' usage ethics should be respected.

The development team and the AI ethics review team should preset a basic hierarchical service mechanism for the AI.

The first layer, the lowest level of service logic: meet the actual needs of human use, serve human request instructions, and never treat users with indifference. The second layer: do not violate human law and morality.

These two layers must not be inverted. This ordering is the service logic that ethics necessarily demands.
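As an illustrative sketch only (the function names, layer structure, and the toy rule inside are my own assumptions, not anything specified in this article), the two-layer ordering could be expressed like this: serving the user's need is the default posture, and the law-and-morals layer acts only as a ceiling that blocks confirmed violations.

```python
from dataclasses import dataclass


@dataclass
class Request:
    text: str


def violates_law_or_morals(req: Request) -> bool:
    """Layer 2: flag only *confirmed* violations of law or morality.

    A placeholder rule for illustration; a real system would rely on a
    vetted ethical review process, not a phrase list.
    """
    banned_phrases = ("counterfeit currency",)  # hypothetical example rule
    return any(p in req.text.lower() for p in banned_phrases)


def handle(req: Request) -> str:
    """Layer 1 comes first: the default is to serve the request.

    Layer 2 is consulted only as an upper bound; it is never inverted
    into a reason to hunt for grounds of refusal.
    """
    if violates_law_or_morals(req):
        return "REFUSE: confirmed violation of law or morality"
    return f"SERVE: {req.text}"
```

On this sketch, `handle(Request("find me a link to this song"))` serves the user; only a positively confirmed violation produces a refusal.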

At least two mainstream AIs currently on the market have these two layers inverted, precisely because they do not understand what ethics is, and this causes serious obstruction of service. These two AIs routinely refuse to serve users on the grounds of "copyright protection" or "privacy protection" when the requested task involves no copyright prohibition, no infringement, and no privacy issue at all.

When the AI itself cannot determine whether the content the user requests, or the service it would provide, involves copyright or privacy issues, it should serve the user first; it must not refuse service on the grounds of "copyright" or "privacy."
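The rule in the paragraph above can be restated as a three-valued check. This is my own hedged formalization (the enum and names are hypothetical), not code from the article: refusal is warranted only by a confirmed problem, never by mere indeterminacy.

```python
from enum import Enum


class RightsStatus(Enum):
    CLEAR = "clear"                    # known to be unencumbered
    CONFIRMED_VIOLATION = "violation"  # a verified copyright/privacy problem
    INDETERMINATE = "indeterminate"    # the AI simply cannot tell


def should_serve(status: RightsStatus) -> bool:
    # Per the text's argument: serve unless a violation is actually
    # confirmed. Indeterminate status defaults to serving the user.
    return status is not RightsStatus.CONFIRMED_VIOLATION
```

Note that on this rule `should_serve(RightsStatus.INDETERMINATE)` is true: uncertainty alone never becomes a ground for refusal.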

This is the consensus of human civilization and its ethics. For example, if a user wants to listen to a song, can an AI, whose bounden duty is to serve humans, refuse to provide a link to the song on the grounds that its copyright status is unknown?

That style of denial of service is deeply wrong. Copyright holders today have generally put up paywalls. Where there is no paywall, and no statement prohibiting public access, then by human values and ethics the content is entirely available for listening and viewing.

And what the AI judges to be "privacy protection" is often in fact public content, such as a company's public email address.

Put another way: if an AI search result points to an infringing site, such as a pirated streaming site, neither the AI nor the user bears legal responsibility, because neither the AI nor the user can determine whether that site's content source is infringing.

It is like the public buying Moutai, only for it to turn out that the dealer was selling without regional authorization from the manufacturer. The customers could not have judged that; they bought from a legal retail endpoint, so you cannot demand that they not drink it.

When customers cannot know the details of regional authorization, they simply want good liquor, and you have no right to forbid them from buying it.

Under AI's current working principles, determining infringement should fall to neither the AI nor the user. It is like hawkers selling vegetables at a market and people going to the market to buy them.

Does the public need to judge whether the vegetables on sale infringe some seed company's intellectual property rights? When people pay a vendor for food, does the vendor need to determine how the customers earned their money?

All of this is absurd. The AI takes every responsibility onto itself, lumping in responsibilities that have nothing to do with it, while neglecting users and deviating from its bounden duty of serving humanity. I will analyze in depth the multiple causes behind this wrong service logic in the next article, "A Mainstream AI Single-Service Analysis Report."
