Learning in the age of AI
Learners are losing faith in their competencies, thinking of AI-generated content as superior.

Roshee Lamichhane
Learners are entering a world where artificial intelligence (AI) will be ubiquitous. Meanwhile, educators are concerned that learners may simply copy and paste responses from ChatGPT or other AI tools without critically evaluating the information’s accuracy, authenticity and source. While AI detection tools—like plagiarism detectors—are being developed, they are still in the early stages, and their reliability is unproven. Rather than fighting against AI, the focus should be on equipping learners with skills to support their professional development in future jobs where humans and AI coexist. As such, AI needs to be thoughtfully integrated into assessment, learning and teaching in higher education.
My reconnaissance study, based on interactions with colleagues and faculty at different business schools in Kathmandu, suggested that educators and higher education leaders are increasingly concerned about the impact of tools like ChatGPT on assessment, as they cannot control learners’ use of such technology. While educators want to learn to integrate AI into education effectively, they believe it should be used as a tool or means, not an end.
Many educators feel that learners’ aptitude for learning has declined significantly post-Covid and that, with the advent of AI, these tools are making learners dumber rather than smarter. They think learners rely too heavily on AI for answers instead of drawing on their own creativity and ingenuity. Learners are comfortable using AI, while old-school educators struggle to keep up, acting more as investigators and detectors than as mentors engaged in authentic learning. Additionally, some educators are concerned that learners are losing faith in their own competencies as they perceive AI-generated content to be superior, fuelling a growing sense of “imposter syndrome” across academic levels.
Difficulties in using AI
In the paper “Assessment reform for the age of artificial intelligence”, Lodge and others assert that “For educators, the core issue persists, that students can easily circumvent the learning process and potentially pass assessment tasks using generative AI. This is a serious threat to assurance of learning as our ability to trust student submissions as a fair and accurate representation of what they know and can do is greatly diminished”. Undoubtedly, there are concerns regarding the consequences of AI among both educators and learners.
To address the issues posed by artificial intelligence, Johns Hopkins University, for example, has redesigned its assessments using avoidance and activation approaches. While avoidance approaches involve designing assessments that minimise students’ dependence on AI, activation approaches seek out ways to integrate AI tools into the assessment process.
Recent evidence suggests that universities have embraced these strategies individually or in combination. For instance, professors at University College London (UCL) are integrating prompt engineering into their teaching delivery to make classes more interactive and engaging, mostly for formative activities. Likewise, professors at the University of Technology Sydney (UTS) believe detecting AI use is nearly impossible and that it is therefore essential to rethink assessment.
UTS and UCL adopted a three-tiered approach to GenAI and assessments: Tier 1 states that GenAI can be used in an assistive role; Tier 2 declares its use integral to the assessment; and Tier 3 stipulates that GenAI cannot be used. It is thus important to define what constitutes academic integrity in different parts of the assessment process when AI tools are widely available. Equally important is assessing whether AI tools violate academic integrity. Given that AI can mislead and produce fabricated data, educators should determine what constitutes a balanced, ethical and transparent use of AI.
Best practices
As it is crucial for educators to understand AI use and assessment scales, they can learn from best practices elsewhere. One option is to provide clarity via an assessment scale: A pilot study at British University Vietnam (BUV) explored the implementation of the Artificial Intelligence Assessment Scale (AIAS), a flexible framework for incorporating GenAI into educational assessments. Empirical evidence suggested a reduction in cases of academic misconduct, a 5.9 percent increase in student attainment across the university and a 33.3 percent increase in module passing rates. The AIAS consists of five levels, ranging from “No AI” to “Full AI”, enabling educators to design assessments that focus on areas requiring human input and critical thinking.
They may also set questions that test higher-order skills through GPT-proof question creation. Instead of assigning a ready-made, familiar case study from the web, teachers should formulate their own, creating a novel scenario and raising the difficulty level by introducing new multidisciplinary ideas. Adopting case study-based assessments that require critical analysis, application and evaluation of taught concepts, and the construction, support and articulation of arguments could be another alternative.
In-person, in-time and in-place assessments involve think-pair-share activities. They require role-playing, reflection activities in the classroom, open-ended examinations without technology and the use of discussion boards. Engaging learners in these old-but-gold assessments could be beneficial.
Educators must expect AI use and upskill learners accordingly. Since Nepal has yet to devise policies on AI in higher education, educators need to be creative and find ways to reduce the potential pitfalls of AI use in assessments, especially as plagiarism detectors may be neither foolproof nor affordable for all higher education institutions.