Do you ever feel that, with the advent of AI, we are time travelling back to the noughties? Juicy Couture tracksuits, low-rise jeans and butterfly clips were in. So were 21st century skills. It was an information cornucopia, and students would never need knowledge again. They could ‘just Google it.’
Even though we now have this thing called the ‘knowledge economy,’ some educators still jump straight to problem-solving and critical and creative thinking as desirables. But ‘agility’ and similar buzz-terms aren’t domain-general skills: they’re the ability to transfer deep knowledge, with automaticity, to new situations, as I explained in more detail in last week’s post.
Knowledge workers can be conceptualised as possessing T-shaped knowledge—very deep and specialised in a certain area, but broad enough that schemas from other subfields can be applied to new problems. There are innumerable ‘what employers want’ lists like this one, laying out the ‘skills’ employers value. But again, even in the workplace, all these skills often amount simply to uses of knowledge.
Now that AI is upon us, it could be posited that certain types of knowledge and ways of learning are no longer needed. In some senses, this claim may even intensify because, theoretically, most students now have ChatGPT and the like in their pocket for a relatively small cost. Knowledge has ostensibly been democratised. At the risk of going over already-trodden territory, here are some thoughts on why this might not be the case.
From this point, I’m going to use the term ‘AI’ for any AI, GPT or LLM. I’ll also use the term ‘responses’ for whatever the models generate.
Knowledge gaps are persistent
We know from the Matthew effect that when it comes to learning, the rich get richer and the gap between rich and poor grows. Even though in theory AI opens up the range of knowledge available to students, and the efficiency with which it can be filtered and organised, this doesn’t mean that gaps won’t persist and possibly worsen.
We know that AI is only as good as the prompts it receives, and that responses can range from quite high to very poor quality. Without a cache of existing knowledge as a reference point, students have few ways of discerning the quality of responses. In the absence of subject knowledge, AI is good at moving student work from mediocre to less mediocre. But that’s about it.
A more knowledgeable student might start by using AI as a scoping tool, conduct their own verification and research, and then use the tool at the end stages of a project to refine, structure and edit. A less capable student, however, will likely rely on the first response, accepting it without question. These are not new observations, but stay with me.
Cognitive offloading works best for experts
I’m in a unique position to reflect on this particular use of AI: I’ve helped develop an award-winning study skills program, and I’m a struggling student myself. I first happened across the term ‘cognitive offloading’ while listening to Professor Paul Reimann at the Cognitive Load Theory Conference last year. Cognitive offloading is what it sounds like: passing cognitively demanding tasks to AI. In theory, this frees up valuable working memory capacity to spend on other things.
Here’s where study skills and cognitive offloading come together. If we want students to practise certain skills or recall knowledge to automaticity, to remember things in the distant future, and (okay, this bit might be slight fantasy) to engage in overlearning, then offloading will potentially be antagonistic to those goals. The big question is whether students know what to offload strategically.
I am reasonably expert in matters of teaching, learning and metacognition. However, I am a complete novice in aspects of my PhD study. I’m two learner profiles in one! It’s an incredibly painful position: it’s hard to watch past-me walk into rookie errors and gross inefficiencies. The time wasted breaks my heart. But because of my knowledge about learning, especially independent learning, I have gleaned some insights.
When knowledge is insufficient to make an accurate evaluation, offloading is almost always a bad idea. Being a reasonably expert writer, I offload editing and some aspects of structure and argumentation. I frequently reject suggestions, and I make or adapt suggested changes with near automaticity, intuition even.
When working through roadblocks as a novice researcher (and it is only with the benefit of hindsight that I can say this), I have almost always regretted using AI. The result is too effortless, which we know to be bad news for long-term retention. Even now, I lack the expert knowledge to work out how to be more purposeful, methodical and cautious.
If the goal is learning and the student is a novice, offloading is not a great idea. If the goal is productivity, an expert can judiciously decide which tasks to offload and which ones require their attention.
Metacognition favours those with knowledge
Prompt generation and evaluating the quality of responses are just part of the problem here. For learners whose aim is to get better, more knowledgeable or more skilled at something, metacognitive skills are essential. They’re also highly correlated with intelligence, and I imagine there’s some kind of feedback loop at play here too. I’ll give some examples to illustrate.
A novice writer with low achievement is perhaps more likely to use AI to edit their work wholesale (you’ve no doubt seen this). A higher-achieving student will ask for a list of issues and decide which changes to make and how. That student might even include a marking rubric and ask for feedback, refer to a specific syllabus, or provide an example they want to emulate.
The metacognitive process means not bypassing the work in favour of a quick fix. A metacognitively able student will hold out for delayed gratification as they improve their prompts and enter into dialogue with the tool. The process is the learning; it’s likely this is where the term ‘assessment as learning’ comes from.
Not only do higher-achieving students have more knowledge about a subject, but they also possess knowledge about style, quality, rubrics and all the other often-hidden expectations of school and university learning. I’m sticking to what I know here, writing and the humanities, but I’m sure this applies to other domains too.
Just as ‘Googling it’ never eliminated the need for knowledge, neither will AI. If anything, it shines a light on the need for deep, well-organised schemas. Without them, students lack the foundation to critically engage with AI.
The same truths hold: expertise still requires effort, metacognition still favours those with knowledge, and offloading is only useful when you know what can safely be offloaded. The knowledge economy isn’t going anywhere, and neither is the need for actual knowledge.
Come hear these guns speak at Sharing Best Practice, Sydney.
Nathaniel Swain (La Trobe University), Jenny Donovan (Australian Education Research Organisation), Saskia Kohnen (Australian Literacy Clinic, Australian Catholic University), Manisha Gazula (Principal, Marsden Road Public School), and Veronica Alexander (Professional Learning and Development Lead, SPELDNSW).
Oh, and me. I’m speaking about Explicit Teaching in Secondary Humanities.