In early December 2022, a flurry of emails was bouncing around universities, bringing news of a development that would redefine curriculum design in the years ahead. These emails were often marked ‘urgent’; they were not the typical season’s greetings, nor did they relate to a much-anticipated national policy development. This new urgent-email phenomenon arrived just as universities were finally regaining a sense of normality following the lockdowns of 2020-21. These were emails about a new technology platform, which began as a “have you heard about this new thing?” rumour and quickly became an urgent external factor to address in our education committees. The news was of the sudden mass availability of generative AI applications, notably ChatGPT (launched on 30th November 2022), which could generate fluent, long-form text at the click of a button, for little or no financial cost to the user. This blog will discuss where we are three years on and the implications I see for students’ engagement with the taught curriculum.
Artificial intelligence – not so new
Let’s start by saying that artificial intelligence (AI), defined as the ability of digital technology to simulate human intelligence by making independent decisions and performing complex tasks that appear autonomous, is not a post-2020 development. Thousands of applications using AI were already operating across the world, many of which we were already experiencing in our lives. Smart buildings that adjust autonomously to climate conditions, autopilot assist in aeroplanes and cars, chatbots providing customer service on websites we visited, and even the little paper clip in 90s Microsoft Word are all versions of artificial intelligence. The most recent development of AI in the form of Large Language Models (LLMs) is perhaps different because its availability is so prominent: we are both more consciously aware of it and encountering it more frequently as it is embedded into many of our daily apps, such as Google, WhatsApp and Microsoft Office. The AI of today features prominently in how we digitally perform tasks on our phones, on websites and, importantly, for written or designed outputs (like emails, word processing and research) – speeding up daily tasks. And, undeniably, the tech is simply getting faster, easier to use, more flexible and able to handle larger amounts of data.
This blog will not be talking about wider innovations of AI in sectors such as finance, health, customer marketing or engineering, but it is important to note that the world currently seems to be in an AI frenzy. The business news is dominated by debates around new AI tech giants and their economic boom in nations such as China, India and the United States. Social media platforms are full of deep-fake AI-generated content, showing unbelievable videos and images such as unlikely celebrities performing rap battles, or animals talking like humans. Alongside this, and fuelling the discourse, are politicians and business leaders who, when asked about economic productivity or business innovation, will often offer the trite response, “we need to look to AI for the possibilities for money-making opportunities and efficiencies”. Artificial intelligence is confronting us in our personal, social, professional, consumer and educational spaces – and the modern university is particularly impacted in regard to two major areas of student engagement, both discussed in this blog. The first is the access and synthesis of knowledge; the second is assessment, which requires students to evidence the knowledge, skills and understanding they have developed.
Changing relationship with learning and knowledge
For years I have sat in university committees discussing student engagement, with students’ access to IT systems, library resources and WiFi a regular feature of discussion. Our students are a generation connected by technology, and our universities are certainly now technologically reliant. Throughout the 2010s, education providers would prioritise WiFi coverage, data download speeds and wider digital accessibility, so students could access their learning platforms, resources and assessment submissions while on campus and from home. By the time the Covid-19 pandemic hit, a degree could be engaged with almost fully at a distance, with lecture recordings, online resources and assessments accessible remotely. In my blog posts last year I debated this, highlighting that students’ attendance and their relationship with knowledge were changing, as a somewhat comparable (if not as personal) university experience is available in the devices in their hands. Access to knowledge is no longer hidden behind thick, brick or stone university walls. Learners are now able to engage at a distance and in their own time, benefitting thousands of commuter, neurodiverse and increasingly time-poor students internationally.
With these considerations in mind, we can now reflect on these digital foundations in light of the AI developments we have seen more recently. Prior to the mass availability of AI tools, such as those operating on LLMs, students would largely use digital means only to access or search for learning resources at a distance – resources to read or watch, to learn from, to support their understanding and to complete their assessments. Generative AI, however, goes further than searching for articles and presenting them as PDF copies for the student to engage with cognitively and do the work of understanding, or of using them to support their own writing. Generative AI platforms can be used to bypass the time spent searching, reading or watching altogether: they will write short bullet-point summaries of resources (whether an individual resource or the wider web), generate swathes of text, create whole artefacts, and even produce notes from recordings – all in mere seconds.
All of these functions are highly time-efficient: if the end product is only the artefact (such as the essay, the email, the notes, or the summary), students can save countless hours of reading or watching learning resources. But the end product of a degree is not only the assessment as an output; it is the process through which our students grow (personally, intellectually) and become. Prior to the development of generative AI, students would need to engage with something (a reading, a lecture, a resource) cognitively, through synthesis, evaluation and critical engagement, testing and puzzling through possible solutions in their minds. Of course, there have always been options to short-cut their understanding – SparkNotes, for example – but these still required the student to engage cognitively with the material to some extent. This cognitive engagement was then translated into the assessment task, which became the product following the process of their own learning synthesis. This central phase of engagement – the student learning and synthesising their own thinking – has undeniably decreased as the work is offloaded in favour of AI efficiencies.
Junk food learning and assessment
When universities suspect a student of having used generative AI to simply collage an assessment artefact without (or with less) actual personal engagement, research or thinking, the failsafe approach of a viva is often required to test whether the student is reliably the author of the artefact. If the student has not engaged with the topics discussed in the artefact, this very quickly emerges, as they show little understanding of the knowledge presented in the assessment. This, in turn, highlights that the learning outcomes assessed as part of the unit of study have not been achieved – now most often because the knowledge-production process has been offloaded to generative AI technology, bypassing the individual. Research is now showing the detrimental effects of offloading the cognitive processes of creating an output (such as an assessment task): the integration and retention of information suffer because of weakened schema formation overall (Oakley et al., 2025; Kosmyna et al., 2025). In other words, cognitively offloading the assessment task means that students will not retain the information required to adequately demonstrate that they have met the learning outcomes relating to knowledge.
The artificial intelligence innovator Jack Clark, co-founder of Anthropic, has recently referred to these encounters with information presented by large language models as Fast Food Knowledge. It is certainly served to the recipient at record speed, but perhaps the comment also connotes a sense that this form of knowledge access is less beneficial, or nourishing, to the learner. It brings to mind processed, fast (junk) food: food which may promise ease and taste (depending on your perspective) but which leaves you nutritionally unsatisfied and with costly health implications. The metaphor seems fitting for the junk food knowledge fed to us by generative artificial intelligence – just as junk food isn’t always ‘real’ food, AI knowledge isn’t always ‘real’ knowledge. However, perhaps because we are getting information faster and more easily, it isn’t questioned – in the same way society at large does not question how a dinner can be served in under two minutes.
I make this comparison not to stimulate your appetite for dinner, but because there are tangible risks to students and graduates if we welcome generative AI into education without thinking about its overly processed and nutrient-free offering. Students will be engaging with artificial intelligence by default, as it is embedded into our search engines and social media, and increasingly dominant as a method of answering questions in our lives. Universities are right to prioritise students being ‘AI ready’, ‘AI literate’ or even ‘AI experts’, but there is a balance to be struck so that our students do not slip into becoming unhealthily AI-dependent. We need to spend longer considering what a healthy diet looks like for a student in this new world. The risks will soon begin to appear in graduates who have gained a degree in a topic that AI has produced for them, where the human being, the student, has not truly integrated that knowledge into their long-term memory. Perhaps you might argue that what it means to ‘know’ something has changed and will continue to change. I’m not convinced by these arguments, and I think they miss the greater point of the educational experience.
These issues are not confined to universities: journalists, businesses and even consultants have been caught out by their stakeholders for using AI – notably Deloitte in 2025, whose report for a $440,000 consultancy project was found to contain hallucinated findings generated by AI. This risks students’ later performance and success in job interviews and professional development, not to mention the professions into which they are entering. I’m not sure I’d like to drive over a bridge built by a civil engineer who could not check and confirm the calculations offered by an AI tool – we still need people who know what is correct and what is factual, and who can critically appraise information in the way graduates should know how to do. At Westminster, we have moved quickly towards authentic assessment – where the person is assessed through work-like activities – taking additional steps to ensure the students themselves benefit from and showcase their learning authentically, and emphasising the person at the centre of the assessment experience. This will form the topic of my next blog.
An AI world that students are ready for
This blog is not arguing that we should remove artificial intelligence from students’ educational experience entirely. On the contrary, students need to be exposed to AI (how it works, its strengths and limitations), in the same way we expose students to technical programmes relating to their degree. Use and knowledge production through technology are important, but they must be balanced with the need to ensure our students gain the knowledge, understanding and skills of the discipline, and are able to communicate these as their own in assessment. We can all create written or creative outputs within seconds thanks to generative AI, but in these collages of text, pictures or sources, questions of authorship and of what is remembered are key to consider deeply before deciding to engage. Given that universities fundamentally generate and validate knowledge through research, teaching and accreditation, the increasing outsourcing of these functions to technology raises significant concerns. It is essential that students remain able to evidence their own independent thinking, both face-to-face and in authentic real-world contexts, beyond purely digital means.
Tom Lowe has researched and innovated in student engagement across diverse settings for over ten years, in areas such as student voice, retention, employability and student-staff partnership. Tom works at the University of Westminster as Assistant Head of School (Student Experience) in Finance and Accounting, where he leads on student experience, outcomes and belonging. Tom is also the Chair of RAISE, a network for all stakeholders in higher education for researching, innovating and sharing best practice in student engagement. Prior to Westminster, Tom was a Senior Lecturer in Higher Education at the University of Portsmouth and previously held leadership positions for engagement and employability at the University of Winchester. Tom has published two books on student engagement with Routledge: ‘A Handbook for Student Engagement in Higher Education: Theory into Practice’ in 2020 and ‘Advancing Student Engagement in Higher Education: Reflection, Critique and Challenge’ in 2023, and has supported over 40 institutions in consultancy and advisory roles internationally.
References:
Kosmyna, N., Hauptmann, E., Yuan, Y., Situ, J., Liao, X. H., Beresnitzky, A., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint. https://doi.org/10.48550/arXiv.2506.08872
Oakley, B., Johnston, M., Chen, K.-Z., Jung, E., & Sejnowski, T. (2025). The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI. In The Artificial Intelligence Revolution: Challenges and Opportunities (Springer Nature, forthcoming). https://doi.org/10.2139/ssrn.5250447