
Public Discourse On Artificial Intelligence Ethics: From Expert Discipline To Everyday Ethical Practice


The Distinction Between Expert Discourse and Public Discourse: From Disciplinary Logic to Algorithmic Imagination


The composition of expert discourse on artificial intelligence ethics


Artificial intelligence ethics aims to ensure that artificial intelligence conforms to socially recognized norms and values during its research and development, deployment, and use. Expert discourse constructs the institutional framework for artificial intelligence ethics. Academic circles currently hold that trustworthy artificial intelligence should cover six major themes: human oversight, security, privacy protection, transparency, fairness, and accountability. These constitute the key pillars of future artificial intelligence governance and legal frameworks.

The main content of expert discourse can be roughly divided into three categories: engineering ethics discourse, normative discourse, and institutional policy discourse. Overall, it is characterized by universality, technical rationality, and one-directional disciplining, but it also suffers from insufficient implementation, conceptual vagueness, and a lack of cultural adaptability. For example, relevant documents often lack clear definitions of issues such as bias and discrimination.

Public Discourse on Artificial Intelligence Ethics


Cognitive assembly: the generative basis of public discourse

The public's ethical judgment of artificial intelligence does not derive from technical knowledge or isolated rational reflection; it is grounded in perceptual experience and cognitive reflection. It typically arises from the public's subjective construction of what the algorithm is, what it should be, and how it works, and is continually triggered in specific situations. The formation of public discourse is therefore emotional, embodied, and collective.

According to "cognitive assembly" theory, users' understanding of technology is generated within the interactive network of humans, technical objects, and computing systems. On the one hand, the public "perceives" the existence of algorithms through explicit functions such as system recommendation, video generation, and data processing, thereby forming interaction patterns and interpretive frameworks, and even developing anxiety about whether artificial intelligence will replace humans. On the other hand, when users encounter bias, data abuse, or fabricated synthetic content while using artificial intelligence, they perceive its ethical transgressions, which triggers emotional reactions and ethical expression.

Ethical folds: the generative power of public discourse

Drawing on Deleuze's concept of the "fold", the public's understanding of artificial intelligence can be seen as a multi-dimensional superposition: it layers concerns about technical risks such as job displacement, inequality, and loss of control; existential, functional, and social ethical issues; and experiences of emotional dependence, anxiety, and the deceptiveness of virtual companionship.

As artificial intelligence applications deepen, the public's algorithmic imagination also evolves at the psychological level. For example, when faced with the collection of personal information, users' feelings have shifted from the discomfort of "not being respected" to the cognitive judgment that "privacy is being spied on"; on the issue of algorithmic discrimination, some people experience cognitive dissonance upon recognizing discriminatory behavior and then restart their value assessments.

“Two-way domestication”: the generation process of public discourse

The public's feedback on the ethics of artificial intelligence does not remain at the perceptual and psychological level; it also manifests as the continuous "taming" of technology in daily practice. For example, users can gradually refine artificial intelligence output through repeated, detailed, and precise questioning, thereby reducing misunderstanding and bias. This transformation from perception to action shows that the public can convert ethical concerns into practical interaction strategies and, even when the technology remains opaque to them, express their imagination of and expectations for the development of artificial intelligence through action.

At the same time, the "subjectivity" displayed by artificial intelligence is in turn shaping public ethical discourse. When artificial intelligence exhibits seemingly "intentional" characteristics through recommendation algorithms, automatic generation, and the like, the public is forced to re-examine the power relationship between humans and machines. Public discourse is thus dynamically generated in interaction, becoming an important participatory force in the construction of artificial intelligence ethics.

The difference between expert discourse and public discourse


Public discourse exhibits perceptual and emotional characteristics and focuses on the emerging risks of generative artificial intelligence. It is mostly formed spontaneously by ordinary users on social media and is often expressed through surprise, joking, fear, or alarm. Public concerns are no longer limited to fairness, transparency, responsibility, and privacy, but extend to emerging risks such as loss of originality, content homogenization, cultural bias, misinformation, and technology abuse. These discourses directly reflect the impact of deployed artificial intelligence on individual life and social perception, and constitute both an important entry point for understanding the social impact of technology and an important generative mechanism for underlying consensus in artificial intelligence governance.

Expert discourse mainly comes from academia, policymakers, and legal institutions. It emphasizes interdisciplinary knowledge, uses standardized ethical terminology and institutionalized analytical frameworks, and focuses on macro governance issues such as data compliance, fairness assessment, transparency, and accountability mechanisms. Experts often propose audit, regulatory, and compliance paths from the perspectives of system design, legal liability, and long-term public interest. The strength of expert discourse lies in providing operational systems and standards, but it responds relatively poorly to the public's emotional concerns and actual use experience.

As a force that cannot be ignored in the construction of artificial intelligence ethics, public discourse has expanded the topic boundaries of expert discourse to a certain extent, highlighting the "long tail value" it contains. Although there is a "cognitive gap" and "normative tension" between experts and the public in artificial intelligence ethical discourse, public discourse, as an important reflection process in social development, breaks the stereotype of limiting artificial intelligence ethics to a normative framework, and also provides an important observation perspective for understanding the effectiveness of ethical practice.

The Practical Path of "Everyday Ethics": From Abstract Principles to Public Experience


Folk Theory: Users’ Selective Acceptance of Ethical Principles


"Folk theory" refers to the non-expert public's observation-based speculation about, and explanation of, a technology's operating mechanisms, functional boundaries, and potential intentions in daily practice. Folk theories are markedly informal, often constructed around concrete interaction scenarios, and centered on cultural values, social identities, and emotional experiences. They also serve explanatory and predictive functions: they help users understand technological behavior patterns, judge reliability, infer "intentions", and adjust their usage strategies accordingly, shaping the collective imagination of artificial intelligence with respect to transparency and trust.

The public's interaction with artificial intelligence is shaped by social identity. On the one hand, individuals interact with algorithms on the basis of existing identities such as occupation and often form distinctive action strategies. On the other hand, algorithms can also give rise to new social identities among the public and generate new ethical issues. When an individual's self-identity diverges from how the algorithmic system categorizes them, it often triggers an "ethical defense" of public subjectivity.

Folk theories are not only the public's spontaneous interpretations of technology but also an important foundation for establishing an everyday ethics of artificial intelligence in the post-human era. First, they reveal the points of technological impact that the public genuinely cares about in daily life. Second, they supplement and correct expert discourse, allowing AI ethical governance to incorporate more user experience and sociocultural factors. Finally, understanding these theories can improve the explainability design of artificial intelligence and the human-computer interaction experience, introducing a more inclusive public perspective into governance.

Storytelling Understanding: Users’ Narrative Regulation of Human-Computer Relationships


Users' "imagination" of artificial intelligence algorithms does not come directly from mastering an algorithm's internal logic; it is generated through socialization and narrative. These narratives may come from personal observation and speculation, or second-hand from friends, news media, or online discussion. From them, individuals form their cognition and imagination of algorithms. This not only constitutes a cognitive adjustment mechanism at the micro level, integrating one's own cognition and behavior into the meaning system of daily life, but also develops into a collective understanding of algorithms. Such "public stories" often precede expert discussion, form the basis of the public's everyday ethical awareness, and develop into a reflective process of cognitive adjustment.

The significance of "storytelling understanding" lies in using fragmented, non-linear stories to accumulate empirical knowledge of how algorithms operate and where the boundaries of technological ethics lie. Through the telling, forwarding, and rewriting of stories of human-computer interaction, it not only strengthens the interactivity and spread of public ethical discourse but also becomes a cultural expression and social practice of the ethical meaning and boundaries of artificial intelligence. This understanding changes with technological development and the iteration of experience: when the public realizes that artificial intelligence raises ethical issues, it often triggers moral reflection and a retelling of the relationship between humans and artificial intelligence.

Situated trust: a behavioral anchor for users’ ethical practices


The rise of public discourse marks a shift in artificial intelligence ethics from an expert-led normative orientation to an empirical orientation grounded in public practice. It draws further attention to "everyday ethics", emphasizing human perception, narrative, and situational judgment in technological practice and highlighting the individual's role in meaning construction and interactive regulation. Situated trust lays the foundation for this "everyday" turn in artificial intelligence ethics.

Acceptance of and trust in artificial intelligence are important behavioral anchors for ethical practice. The complexity of ethical issues in artificial intelligence means that trust must be tied to specific situations: particular backgrounds, user characteristics, and complex human-machine relationships must all be taken into account, and they cannot all be resolved through universal normative principles. Such situated trust often depends on the public's social position, technological expectations, and usage experience. It not only reveals the real logic of the public's acceptance of artificial intelligence but also illustrates the embedded character of public ethical judgment, offering an important perspective for understanding artificial intelligence ethical practice and users' adaptive behavior toward technology.

The public's situated trust is also affected by information processing and cultural interpretation. When faced with deepfake content, the public may form a distinctive information-processing pattern under the combined effects of user expectations, humor styles, and disclaimers, which loosens ethical cognition of artificial intelligence and gives rise to public discourses such as "algorithmic gossip" and "platform routines".

The Public Discourse Generation of "Everyday Ethics": From Instrumental Rationality to "User-Centeredness"


Machine heuristics: the formation mechanism of user cognition


Machine heuristics reveal users' cognitive shortcuts in complex information processing. The core idea is that, to conserve cognitive resources, people facing information or decision outcomes often rely on a simplified mental rule: the belief that "machines are more accurate, more objective, and more reliable than humans". When the source of information is labeled "machine" or "algorithm", users tend to accept the results without sufficient verification and show higher trust in scenarios such as news recommendation and personalized algorithms.

Machine social attribution reveals the underlying anthropomorphic psychological processes behind user trust. When users view AI as an intelligent agent with "autonomy" or "agency," their trust and attitudes will be affected by the reinforcing effect of this heuristic. This “unconscious anthropomorphism” reflects the way people interpret artificial intelligence behavior through social interaction scripts. This tendency is also moderated by users’ understanding of the nature of machines, technological literacy, and generational differences.

Anthropomorphic Psychology: Connection Patterns of User Emotions


Artificial intelligence ethics is shifting from a purely technical, normative issue to a comprehensive one involving public psychological cognition and social values. The public are not merely users; in the course of interaction they give artificial intelligence social and cultural significance, bringing the human-machine relationship into a new stage in which emotionalization and meaning construction proceed in parallel. This interactive relationship based on emotional cognition is first manifested in the public assigning different social roles to artificial intelligence, producing diverse emotional responses: treating systems as assistants, friends, partners, or even opponents, accompanied by emotional experiences such as anxiety, anger, and trust. The emotional dimension is becoming an important basis for public ethical judgment: the public's liking or dislike of artificial intelligence no longer depends only on the accuracy and objectivity of algorithms, but also on whether the technology displays "goodwill", whether it is trustworthy, and whether it is easy to use.

Differentiated Acceptance: The Generation of Users’ Trust in Artificial Intelligence


The technical rationality paradigm reveals the limitations of early artificial intelligence ethical governance that only focused on rule setting, design constraints and process optimization at the source of technology. This paradigm that overemphasizes logical controllability and explainability often leads to conflicts between theory and practice in the public's interaction with artificial intelligence.

Take users' trust in artificial intelligence as an example. On the one hand, although improving transparency is generally regarded as an important means of enhancing trust, its effects vary with users' psychological characteristics, revealing the complexity of trust formation. On the other hand, research on explainability shows that public trust is not determined solely by transparency but is affected by the timing of explanations, the accuracy of their content, and the fit of their presentation.

The diverse paths by which public trust is generated are also related to the public's empirical ethical explanations. In daily use, the public develops "empirical ethical explanations" based on human-machine cognition. These explanations do not rely on the abstract principles of expert discourse but derive from algorithmic experience and algorithmic awareness. For example, disclaimers on artificial-intelligence-generated content may backfire under different designs: frequent or vague prompts may weaken the system's authority and even reduce users' willingness to use it.

Conclusion: Co-Construction of Artificial Intelligence Ethics Beyond "One Size Fits All"


Compared with the expert discourse system, which treats artificial intelligence ethics mainly as a tool of "problem governance", the public increasingly regards algorithmic experience and algorithmic strategies as part of self-identity expression and social negotiation, and its ethical cognition and practical action are thereby further activated.

Public attention to the soft impacts of technology has expanded the boundaries of ethical governance. In terms of understanding the social consequences of technology, public discourse emphasizes the "soft impacts" of artificial intelligence, that is, the social side effects that are difficult to quantify and unpredictable, such as incremental changes in algorithmic consciousness, interpersonal relationships, ethical responsibility, cultural cognition, digital identity, etc.

Stimulating public moral expectations


Public discourse is an important force in inspiring the moral expectations of the social community. The public's imagination and discussion of artificial intelligence not only shape targeted ethical mechanisms but also reflect basic human value demands and expectations for action. Public discourse develops ethical judgments about artificial intelligence in practice and in situations, eventually settling into collective ethical perceptions and pushing technology to evolve in a human-friendly direction.

Public discourse continues to form new moral boundaries in its dynamic evolution. In the early days, the public had emotional expectations for artificial intelligence, hoping that it would be more humane and credible; as the experience deepened, the public gradually formed judgments about moral boundaries and began to question and criticize; eventually, these attitudes and judgments settled into life habits of interacting with technology. As a result, public discourse practice connects the macro structure of technological development and the micro practice of individual actions, and has become an important driving force for the generation and change of artificial intelligence ethics.

Promoting public ethical expression


Public discourse not only carries society's common expectations but also exerts an actual ethical constraint through expression and communication. In human-computer interaction, the attitudinal preferences the public forms, such as doubts about algorithmic recommendations or automated information, constitute an "invisible" experiential alertness that is itself a potential binding force.

Public expression promotes the transformation of individual ethical reflection into collective norms. The public continually discusses issues such as algorithmic discrimination, customized propaganda, and data privacy on social media. Through accumulation and diffusion, this discussion can exert moral pressure on algorithmic platforms and gradually form a socially binding collective consensus, pushing developers and regulators to respond to these ethical concerns and incorporate them into system optimization and social normative systems.

Constructing a logical framework for the co-construction of artificial intelligence ethics


The public’s experience-based perspective is irreplaceable in resolving the tension between abstract ethical principles and complex social practices. Public discourse provides a practical basis for moral perception and a new logical starting point for artificial intelligence ethical governance.

People-centeredness, stimulated participation, and contextual embedding are the core strategies of the co-construction framework. First, putting people first is the core value of ethical co-construction. Second, stimulating participation guarantees the effectiveness of ethical co-construction: the public's substantive participation in application and supervision must be ensured, and the public's digital literacy and critical understanding must be improved through education and science popularization. Finally, contextual embedding is the targeted strategy of ethical co-construction: governance should shift to contextualized mechanisms, embedding technical norms in real-life scenarios and adjusting rules in a timely manner through dynamic dialogue and evaluation systems, so that governance stays closer to public concerns and real needs.

Author introduction

Yang Ming: Professor and doctoral supervisor, School of Communication, Shenzhen University; Du Lijie: PhD candidate, School of Communication, Shenzhen University

This article is an excerpted and edited version of the original paper, first published in "Journalism and Writing", No. 9, 2025, with annotations omitted.

Please refer to the print version for academic citation and references. Unauthorized reprinting is strictly prohibited.
