AI Ethics

Be Wary of Ethical Risks in the Commercialization of Generative AI

[Science Essay·I See AI]

Generative artificial intelligence (AIGC), with large models at its core, is rapidly being integrated into business scenarios, but the ethical problems this process creates are becoming increasingly prominent. Issues such as algorithmic "black boxes", data misuse, and the evasion of responsibility show clear market-driven characteristics, and institutional governance is urgently needed to address this new form of technology-induced market failure.

The author has identified the main ways in which AIGC's ethical risks manifest in the context of commercialization:

——Property rights over data elements remain unclear, encouraging excessive data mining and technological "black boxes". Data, a core factor of digital production, still lacks clear ownership and reasonable pricing mechanisms. Platform companies can appropriate user data at low cost through vague authorization terms, cross-platform crawling, and similar means, while users have little control over their own data. Under this structural asymmetry, AIGC products are widely embedded in business processes through the SaaS model, with algorithmic logic that is highly closed and opaque, forming a technical "black box". Users contribute data passively and unknowingly, and their rights to know and to choose are not effectively protected.

——Corporate governance structures lag behind, accelerating the erosion of ethical boundaries. Some companies still follow traditional industrial logic oriented toward profit and scale, and have yet to fully integrate ethical governance into corporate strategy, leaving it marginalized or reduced to a formality. Under commercialization pressure, some companies apply AIGC technology in sensitive areas such as deepfakes, emotional manipulation, and induced consumption to steer user decisions and even shape public perception. This may yield short-term gains, but it undermines long-term social trust and the ethical order.

——Regulatory rules remain incomplete, leaving governance gaps and a vacuum of responsibility. The existing regulatory system has not fully adapted to the rapid evolution of AIGC in its division of powers and responsibilities, technical understanding, and enforcement methods, allowing some companies to advance their business in regulatory blind spots. When generated content causes controversy, platforms often evade responsibility by invoking "technological neutrality" and the absence of human control, creating an imbalance between social risks and economic interests and weakening public confidence in governance mechanisms.

——Algorithm training mechanisms carry bias, entrenching prejudice and value misalignment. For reasons of efficiency and cost, enterprises typically train models on historical data; without bias-control mechanisms, the bias embedded in that data is reproduced in the algorithm's output. In advertising recommendation, talent screening, information distribution, and similar settings, such deviations can reinforce stereotyping, harm the rights and interests of specific groups, and even distort social value perceptions (a minimal sketch of how such bias can be measured follows this list).

——A weak foundation of public understanding allows ethical risks to spill over. Most users have little understanding of how AIGC works or of its potential risks, which makes it difficult for them to identify false information and covert attempts at influence. Because education, media, platforms, and other parties have not worked together to popularize ethical literacy, the public is more susceptible to misinformation and manipulation, providing a low-resistance environment for AIGC abuse and allowing risks to spread quickly into public opinion and cognitive security.
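
To make the point about inherited bias concrete, here is a minimal sketch, not drawn from the article, of how a disparity in a model's screening decisions could be surfaced before deployment: it computes per-group selection rates and reports the gap between the best- and worst-treated groups. All data and field names are hypothetical.

```python
# Illustrative sketch (hypothetical data): measuring how a model trained on
# historical screening decisions reproduces group-level disparities.
from collections import defaultdict

def selection_rates(records, group_key="group", decision_key="selected"):
    """Share of positive decisions per group, e.g. from a hiring model's output."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[decision_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs over two groups: the gap flags bias inherited
# from skewed historical data before the system is applied at scale.
outputs = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]
rates = selection_rates(outputs)
print(rates, "gap:", round(demographic_parity_gap(rates), 2))
```

A check of this kind does not remove bias, but it gives companies and regulators a measurable signal that the "bias control mechanism" mentioned above is missing.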

So how should the design of the ethical risk management system be improved to ensure that technology is used for good?

The author believes that resolving the ethical risk dilemma in the commercial application of AIGC requires working across multiple dimensions, including the property rights system, corporate governance, the regulatory system, algorithm mechanisms, and public literacy, and building a systematic governance structure that covers the entire process from ex-ante prevention through in-process oversight to ex-post accountability, so that ethical risks can be anticipated early and mitigated structurally.

First, establish data property rights and pricing mechanisms to curb abusive data mining and technological "black boxes". Legislation confirming rights over data elements should be accelerated, clarifying the boundaries of data ownership, usage rights, and trading rights and guaranteeing users a complete chain of rights covering knowledge, authorization, withdrawal, and traceability of their data. A unified data trading platform and an explicit pricing mechanism should be built so that users can actively manage and price their own data. Platforms should be pushed to disclose how their algorithms operate, or at least to provide interpretable disclosures, and an information-source labeling mechanism should be established to improve the transparency of AIGC operations and users' awareness of them.

Second, reform corporate governance structures to embed ethical responsibility and value orientation. AI ethics governance should be made a corporate strategic issue, with an algorithm ethics committee and a chief ethics officer established so that ethics is managed at the level of organizational structure; an ex-ante "technology ethics assessment" mechanism should be set up so that ethical impact assessments are conducted before products are designed and deployed, ensuring a sound value orientation and clear safety boundaries; an ethics audit system should be introduced and ethical practice incorporated into ESG performance appraisal; and leading platforms should be encouraged to publish ethical practice reports, creating an industry demonstration effect and guiding companies toward "innovation for good".

Third, strengthen cross-departmental collaborative supervision to close governance gaps and areas of ambiguous responsibility. A cross-departmental supervision and coordination mechanism should be established as soon as possible, forming a comprehensive AIGC governance group to coordinate the formulation and implementation of regulations; special rules on matters such as labeling of generated content, definition of data ownership, and attribution of algorithmic responsibility should be introduced quickly to clarify platforms' primary responsibility for the content they generate; and a principle of presumed fault could be applied to AIGC-generated content, under which a platform bears corresponding liability unless it can prove it was not at fault. This prevents companies from evading governance obligations by claiming content is "automatically generated by algorithms" and establishes a full-chain governance system combining ex-ante prevention, in-process supervision, and ex-post accountability.

At the same time, improve training data governance rules to reduce algorithmic bias and value misalignment. An authoritative third party should lead the creation of a public training corpus, providing diverse, credible, and audited corpus resources for enterprises to use and raising the ethical quality of foundational data; enterprises should be required to disclose their training data sources, debiasing techniques, and value review processes, with an algorithm filing mechanism established to strengthen external supervision; and enterprises should be encouraged to build indicators such as fairness and diversity into algorithm objectives, moving beyond the single business orientation of click-through rate and dwell time toward an AIGC application logic with balanced values (a minimal sketch of such a blended objective appears after these recommendations).

Finally, improve public digital literacy and lay a solid foundation for consensus-based ethical governance. AI ethics and algorithm literacy education should be incorporated into the curricula of primary and secondary schools and universities; social actors such as the media, industry associations, and public-interest organizations should be supported in participating in AI ethics governance, with "public technology observation groups" and "ethical risk reporting channels" established to normalize civil oversight; and platforms should be encouraged to build mechanisms for ethics education and risk warnings, releasing timely technical explanations and ethical guidance for popular AIGC applications, in order to ease public anxiety and strengthen society's overall ability to recognize and guard against AIGC abuse.
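
As a purely illustrative sketch, not the authors' proposal, the following code shows one way a "balanced values" objective might look in a recommendation setting: candidate items are ranked by a weighted blend of predicted click-through rate and a simple source-diversity bonus, rather than by engagement alone. All weights, field names, and data are hypothetical.

```python
# Illustrative sketch (hypothetical weights and data): scoring candidates with
# a blend of predicted engagement and source diversity, instead of CTR alone.
from collections import Counter

def diversity_bonus(item, already_picked, source_key="source"):
    """Reward items whose source is under-represented in the current slate."""
    counts = Counter(p[source_key] for p in already_picked)
    return 1.0 / (1.0 + counts[item[source_key]])

def rank_with_balance(candidates, slate_size=3, w_ctr=0.7, w_div=0.3):
    """Greedily build a slate that trades predicted CTR off against diversity."""
    slate, pool = [], list(candidates)
    while pool and len(slate) < slate_size:
        best = max(pool, key=lambda c: w_ctr * c["pred_ctr"]
                                       + w_div * diversity_bonus(c, slate))
        slate.append(best)
        pool.remove(best)
    return slate

# Hypothetical candidates: ranking by pred_ctr alone would fill the slate from
# source "X"; the blended score also surfaces other sources.
items = [
    {"id": 1, "source": "X", "pred_ctr": 0.90},
    {"id": 2, "source": "X", "pred_ctr": 0.85},
    {"id": 3, "source": "X", "pred_ctr": 0.80},
    {"id": 4, "source": "Y", "pred_ctr": 0.60},
    {"id": 5, "source": "Z", "pred_ctr": 0.55},
]
print([i["id"] for i in rank_with_balance(items)])
```

The specific weights are a business and governance choice; the point is that fairness and diversity only influence outcomes if they enter the objective that the system actually optimizes.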

The commercial application of generative artificial intelligence is a major opportunity for the integration of technological progress and economic development, and also a severe test of the ethical governance system. Only by coordinating development and regulation under a concept of systemic governance, strengthening institutional design, and implementing responsibilities can we promote technological innovation while holding the ethical bottom line and cultivating a safe, sustainable, and trustworthy digital economy ecosystem.

(Authors: Li Dayuan is a professor, and Su Ya a doctoral candidate, at the Business School of Central South University)