EDUCAUSE: Coping With The Ethical Dilemma Of Artificial Intelligence In Higher Education
Navigating the complexity of artificial intelligence while upholding ethical standards requires a balanced approach that weighs the benefits of adopting AI against its risks.
As artificial intelligence continues to reshape the world, including higher education, the need to use it responsibly has never been greater. AI holds great promise for enhancing teaching and learning, yet ethical concerns around social equity, the environment, and the human dimensions of education keep surfacing. College teaching centers (centers for teaching and learning, or CTLs) are charged with supporting faculty in effective teaching practices and face mounting pressure to take a balanced approach to adopting new technologies. An unpredictable, rapidly shifting landscape complicates this work: as new AI tools keep appearing, the opportunities and challenges for education multiply. The task is especially demanding for teaching centers that have long led innovation on their campuses.
To help faculty and students navigate the complexity of AI integration while upholding ethical standards, college teaching centers must prioritize a balanced approach that weighs AI's benefits against its risks. Colleges and universities should sharpen their critical awareness of AI, confront social inequity, examine the environmental impact of AI technology, and promote human-centered design principles.
Address issues of social inequity
Although AI systems are designed with good intentions, they can put some students at a disadvantage. One of the most pressing concerns about artificial intelligence is its potential to perpetuate, or even exacerbate, social injustice. Because AI algorithms are typically trained on historical data, they tend to reflect the biases embedded in society. The result can be further marginalization of already underrepresented student populations, both in access to opportunities and in the assessments they receive.
Consider AI-driven automated scoring systems. These platforms can grade assignments and return feedback quickly, and they can even reduce some of the subjectivity of manual grading. By freeing instructors' time, AI scoring lets them focus on more meaningful teaching activities such as course planning or interacting with students. Not all grading is equally amenable to automation, however. AI systems can score rote, objective assignments, but providing nuanced feedback on more subjective work requires human expertise and judgment. Using automated systems to grade subjective assignments can introduce bias and deepen existing injustice.
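Where teaching centers coach faculty on pairing AI scoring with human oversight, it can help to make the division of labor explicit. The sketch below is illustrative only, with hypothetical names and no particular grading platform in mind: objective items are auto-scored, while anything subjective is routed to an instructor's review queue.

```python
# A minimal sketch (Python 3.10+), not any vendor's actual grading API, of keeping
# humans in the loop: objective items are auto-scored, subjective work is flagged
# for an instructor. All class and function names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Submission:
    student_id: str
    item_type: str                 # e.g. "multiple_choice" or "essay"
    answer: str
    ai_score: float | None = None
    needs_human_review: bool = False

def route_submission(sub: Submission, correct_answer: str | None = None) -> Submission:
    """Auto-score rote items; flag subjective work for human judgment."""
    if sub.item_type == "multiple_choice" and correct_answer is not None:
        # Objective items can be scored deterministically.
        sub.ai_score = 1.0 if sub.answer.strip() == correct_answer else 0.0
    else:
        # Essays carry at most a provisional score and always go to an instructor.
        sub.needs_human_review = True
    return sub

submissions = [Submission("s1", "multiple_choice", "B"),
               Submission("s2", "essay", "My thesis is that ...")]
graded = [route_submission(s, correct_answer="B") for s in submissions]
human_queue = [s for s in graded if s.needs_human_review]   # the essay lands here
```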
Adaptive learning platforms that tailor instruction to students' performance raise similar concerns. While such systems can enhance personalized learning, they also learn from existing data, which can perpetuate social biases against certain groups and hinder those students' academic progress.
Because adaptive learning systems rely on quantitative data, they overlook the deeper contextual factors that shape students' learning. Moreover, while AI could broaden access to personalized support such as one-on-one tutoring, many of the more powerful and accurate tools sit behind paywalls. Students who can afford the premium versions gain an advantage over their peers, widening the digital divide.
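To make the concern concrete, here is a deliberately simplified, hypothetical decision rule of the sort an adaptive platform might apply; it is not any vendor's actual algorithm. Only quiz numbers drive the branching, so none of the contextual factors described above ever enter the decision.

```python
# A hypothetical, simplified adaptive-learning rule: quantitative scores alone
# determine the next lesson tier. Context (bandwidth, language, caregiving load,
# disability) is invisible to the system.
def next_difficulty(recent_scores: list[float]) -> str:
    """Pick the next lesson tier from quiz performance alone."""
    if not recent_scores:
        return "standard"
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.85:
        return "advanced"      # high scorers are accelerated
    if avg <= 0.50:
        return "remedial"      # low scorers are held back, whatever the cause
    return "standard"

print(next_difficulty([0.4, 0.45, 0.5]))   # "remedial", with no view of why scores are low
```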
To address these issues, college teaching centers can lead campus dialogue on equity in the use of, and access to, artificial intelligence. They can offer workshops, short courses, and resources that examine how AI can aggravate social injustice. For example, the University of Michigan's Center for Academic Innovation offers courses on the intersection of artificial intelligence, justice, and equity, and the teaching centers at Wake Forest University and the University of Delaware have held forums and workshops on the ethical implications of AI in education. These programs encourage students, faculty, and others to think critically about how AI tools are applied in educational settings.
Equity-centered support pathways
When leveraging artificial intelligence tools, college teaching centers can adopt the following strategies to promote equity:
- Create generative AI training materials that help faculty and students bridge the digital divide.
- Hold regular seminars on how AI can perpetuate bias in educational settings, with particular attention to scoring systems, adaptive learning platforms, and AI-detection tools.
- Encourage faculty to critically evaluate AI tools before integrating them into their teaching, asking whether a tool reinforces social bias.
- Provide students with resources that explain how AI tools affect their learning experiences, and help them advocate for more equitable assessment practices when necessary.
- Encourage faculty to supplement AI scoring with human oversight, especially for assignments that require nuanced, subjective judgment.
- Involve faculty, students, staff, and other stakeholders from underrepresented groups in developing AI usage guidelines, so that the ethical integration of AI tools in teaching reflects diverse perspectives.
- Support faculty in designing assessments beyond traditional exams and papers, which may be more susceptible to AI bias, so that students can demonstrate knowledge in varied ways.
- Work with departments across campus to explore how generative AI can improve accessibility and student support, and create and distribute resources based on those explorations.
- Encourage faculty to anticipate the obstacles students may face when using generative AI tools, to point students to campus resources such as equipment loans or computer labs, and, when teaching students located abroad, to confirm that the tools are accessible to them before assigning their use.
- Develop guidelines that help faculty understand how intellectual property and personal information are used to train AI tools and how that use shapes future outputs.
By raising awareness and offering multiple avenues for training, college teaching centers can help faculty and students take full advantage of AI while actively mitigating its potential to aggravate social inequities. Addressing social inequity in artificial intelligence is an ongoing process that requires vigilance, adaptability, and a commitment to inclusion and equity.
Understand the environmental impact of artificial intelligence
The environmental impact of AI is often sidelined in discussions of teaching decisions, yet it should be a key consideration as higher education institutions integrate AI technologies. Artificial intelligence systems, especially large language models and deep learning algorithms, require enormous computing power, which means high energy consumption and increased carbon emissions. As AI use grows, so does its environmental footprint, and the uneven geographic distribution of data centers compounds the inequity.
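For classroom or workshop discussion, the scale of this impact can be made tangible with a back-of-envelope estimate. The sketch below uses illustrative placeholder values, since real per-query energy use and grid carbon intensity vary widely by model, data center, and region, and simply multiplies query volume by assumed energy and carbon factors.

```python
# A back-of-envelope sketch for discussion, not a measurement methodology.
# Both constants below are illustrative placeholders, not published figures.
ENERGY_PER_QUERY_KWH = 0.003      # assumed average energy per generative-AI query
GRID_INTENSITY_KG_PER_KWH = 0.4   # assumed grid carbon intensity (kg CO2e per kWh)

def estimated_emissions_kg(queries_per_week: int, weeks: int = 15) -> float:
    """Rough CO2e estimate for one course's AI use over a semester."""
    energy_kwh = queries_per_week * weeks * ENERGY_PER_QUERY_KWH
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH

# e.g. 200 students each running 10 queries a week across a 15-week term
print(round(estimated_emissions_kg(queries_per_week=200 * 10), 2), "kg CO2e")
```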
The cumulative environmental impact of adopting AI tools across teaching, learning, and administration can be enormous. Some institutions are already at the forefront of sustainable AI, building discussions of environmental impact into the higher education curriculum. For example, MIT and William & Mary have incorporated the environmental costs of AI into philosophy and data science courses, prompting faculty and students to weigh sustainability when choosing technologies.
Care must be taken when explaining the negative effects of generative AI to students, since the topic can impose a heavy cognitive and emotional burden and leave them feeling powerless. To counter this sense of helplessness, Redford encourages students to brainstorm about these difficult problems, exploring not only the harms of AI use but also potential solutions, activities that build students' critical thinking and leadership skills.
Sustainability-focused support pathways
Here are some strategies college teaching centers can use to promote sustainability while leveraging artificial intelligence:
- Encourage faculty to choose AI tools with a smaller environmental footprint and vendors that prioritize environmental sustainability, such as those committed to reducing the water consumption and carbon emissions of their AI infrastructure.
- Highlight case studies of AI technologies and tools built on energy-efficient algorithms or hosted in data centers powered by renewable energy, and encourage faculty to adopt these alternatives.
- Work with faculty to weave sustainability topics into AI-related courses, especially in philosophy, ethics, and data science, so that students critically evaluate the environmental consequences of AI systems.
- Organize workshops on integrating sustainability into course design, where faculty learn to make informed choices about AI tools that reduce a course's environmental impact while improving learning outcomes.
- Develop a sustainability scorecard for course design that faculty can use to evaluate and minimize the environmental impact of AI technologies (a minimal sketch of such a scorecard follows this list).
- Promote student engagement through sustainability challenges or competitions that ask students to analyze the environmental impact of AI tools and propose creative ways to reduce their carbon footprint.
- Partner with departments to incorporate AI sustainability projects into existing courses.
- Provide resources and training in systems thinking and leadership skills, so students not only understand the negative impacts of AI but also develop solutions to these challenges.
- Create spaces for discussion of AI and environmental sustainability that engage faculty and students in exploring actionable steps toward greener AI use.
- Work with the campus sustainability office or green committee to ensure that AI adoption aligns with the institution's carbon reduction goals.
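As a starting point for the scorecard idea above, here is a minimal sketch; the criteria and weights are illustrative assumptions, not an established rubric, and would need to be set locally.

```python
# A minimal sketch of a course-design sustainability scorecard. The criteria
# and weights are assumed for illustration only.
CRITERIA = {
    # criterion: weight (to be agreed on by the teaching center and campus partners)
    "uses_energy_efficient_tools": 0.4,
    "limits_ai_use_to_clear_learning_goals": 0.3,
    "offers_a_low_tech_alternative": 0.2,
    "discusses_environmental_cost_with_students": 0.1,
}

def scorecard(ratings: dict[str, int]) -> float:
    """Weighted score from 0-5 ratings a teaching consultant might assign."""
    return sum(CRITERIA[name] * ratings.get(name, 0) for name in CRITERIA)

example_course = {
    "uses_energy_efficient_tools": 3,
    "limits_ai_use_to_clear_learning_goals": 5,
    "offers_a_low_tech_alternative": 4,
    "discusses_environmental_cost_with_students": 2,
}
print(f"Sustainability score: {scorecard(example_course):.1f} / 5")
```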
College teaching centers can help faculty integrate sustainable practices into their use of AI, providing the tools and knowledge to make more environmentally conscious decisions and minimize AI's ecological footprint in higher education. Through these efforts, teaching centers can help ensure that AI's contribution to education does not come at the expense of the environment.
Emphasize human-centered learning
As artificial intelligence becomes ever more integrated into higher education, it is crucial to ensure that these technologies enhance rather than diminish the human element of learning. Human-centered instructional design aims to create learning environments that are technologically advanced, equitable, and accessible. While AI can personalize and enrich learning experiences, over-reliance on these tools may erode the interpersonal interaction at the heart of education. Automated systems are efficient, but they lack the empathy and intuition of human teaching. Human-centered educational design with AI should prioritize the needs, preferences, and well-being of students and faculty, using intuitive, accessible AI tools and systems that support human interaction and creativity.
Artificial intelligence could transform education through personalized learning experiences, automated administrative tasks, and real-time feedback. Without a human-centered approach, however, it risks producing impersonal, one-size-fits-all learning environments that fail to meet students' diverse needs and experiences. AI-driven adaptive learning platforms, for example, can customize instruction by analyzing student data, but they may also reduce human engagement. Integrated poorly, these tools can leave students learning in isolation, interacting more with technology than with classmates or instructors.
The challenge for colleges and universities is to strike the right balance: leveraging the advantages of artificial intelligence to improve accessibility and support without undermining the humanity that is essential to education. For example, Georgia Tech deployed an AI teaching assistant called "Jill Watson" to answer students' routine questions in a large online course, freeing human TAs to focus on more complex, student-centered interactions. This approach shows how AI can assist human teaching rather than replace it.
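To illustrate that division of labor, here is a minimal, hypothetical sketch (not the actual Jill Watson system): the bot answers only routine questions it can match confidently against a course FAQ and hands everything else to a human TA.

```python
# A hypothetical FAQ-routing sketch: confident matches get an automated answer,
# everything else is escalated to a human teaching assistant.
import difflib

FAQ = {
    "when is the assignment due": "Assignment 2 is due Friday at 11:59 p.m.",
    "where do i find the syllabus": "The syllabus is posted on the course homepage.",
    "what is the late policy": "Late work loses 10% per day, up to three days.",
}

def answer(question: str, threshold: float = 0.8) -> str:
    """Answer only near-exact FAQ matches; otherwise hand off to a human."""
    q = question.lower().strip("?! .")
    match = difflib.get_close_matches(q, list(FAQ), n=1, cutoff=threshold)
    if match:
        return FAQ[match[0]]
    return "I've forwarded this to a human TA, who will follow up with you."

print(answer("When is the assignment due?"))          # routine: answered by the bot
print(answer("Can you look over my project design?")) # complex: escalated to a human
```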
To prioritize inclusion and accessibility, Universal Design for Learning (UDL) guidelines can be used to promote equitable learning, giving students of different backgrounds, abilities, and learning preferences a more inclusive and personalized experience. Cornell University, for example, encourages instructors not to abandon flexible, multimodal assignments and assessments out of fear of academic misconduct, so that students can demonstrate their learning in a wider variety of ways.
Human-centered support pathways
College teaching centers are uniquely positioned to support faculty in making human-centered decisions. The following strategies can help: