AI Ethics

Artificial Intelligence Majors: Are We Cultivating 'Technical Maniacs' or 'Ethical Guardians'?

In the curricula of artificial intelligence majors, technical courses such as machine learning, neural networks, and deep learning take up 90% of the space, while courses on AI ethics and social impact are either electives or simply absent. This "technology-heavy, ethics-light" model of education raises a serious question: when AI can accurately predict crime, manipulate public opinion, and even decide matters of life and death, are the technical talents we are cultivating ready to take responsibility?

The case of one AI lab is thought-provoking: a recruitment AI system developed by its research team automatically gave female job seekers low scores because of gender bias latent in the training data. During testing, however, the team focused only on the model's accuracy and remained completely unaware of the ethical problem; the system was not taken offline until after it had launched. What this case exposes is a fatal gap in artificial intelligence education: it teaches students how to "build powerful AI" but not "what must not be done".
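To see why an accuracy-only evaluation can miss exactly this kind of bias, here is a minimal sketch of a per-group fairness check. The scores, group labels, and threshold are all hypothetical; the "four-fifths rule" ratio used below is one common heuristic for flagging disparate impact, not the team's actual procedure.

```python
# Hypothetical illustration: a model can look fine on aggregate metrics
# while systematically scoring one group below the hiring threshold.

def selection_rate(scores, threshold=0.5):
    """Fraction of applicants whose model score clears the hiring threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Made-up model scores for two applicant groups.
scores_group_a = [0.90, 0.80, 0.70, 0.60, 0.55]  # e.g. male applicants
scores_group_b = [0.45, 0.40, 0.60, 0.30, 0.35]  # e.g. female applicants

rate_a = selection_rate(scores_group_a)
rate_b = selection_rate(scores_group_b)

# Four-fifths rule heuristic: flag the model if the disadvantaged group's
# selection rate falls below 80% of the advantaged group's.
disparate_impact = rate_b / rate_a
print(f"selection rate, group A: {rate_a:.2f}")
print(f"selection rate, group B: {rate_b:.2f}")
print(f"disparate impact ratio:  {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("WARNING: possible disparate impact -- audit before deployment")
```

A check like this costs a few lines, yet it surfaces the problem before launch rather than after; the point is that "accuracy" alone never asks the question.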

Technology itself is neither good nor evil, but the people who wield it can be. The risks of artificial intelligence go far beyond algorithmic bias: deepfake technology can be used to fabricate videos that frame innocent people, AI recommendation systems can deepen filter bubbles and tear society apart, and the "trolley problem" of autonomous driving bears directly on choices of life and death. These problems cannot be solved by improving model accuracy; they require technology developers to have ethical awareness and a sense of social responsibility.

Fortunately, some universities have already begun to act. One "985" university offers "AI Ethics and Governance" as a compulsory course in its artificial intelligence major, prompting students to think about the boundaries of technology through case discussions, simulated decision-making, and similar exercises. The course poses a classic question: "If an autonomous vehicle must choose between hitting one child and hitting five elderly people, how should the algorithm be designed?" Precisely because it has no standard answer, this question cultivates students' capacity for ethical reasoning.

For students majoring in artificial intelligence: don't let yourself become a mere "technical tool". While debugging a model, think about the social impact it may bring; while pursuing technological breakthroughs, hold to the ethical bottom line. After all, whether the AI of the future benefits humanity or brings disaster depends largely on you now. You must understand not only technology, but also responsibility.
