On reading comprehension
The pros and cons of removing knowledge variance from standardised tests
Is this baseball? I have no clue.
This article is old news by now. I wasn't able to properly articulate my views about it on Twitter, and the conversation turned, as Twitter conversations often do, into a polite morality battle rather than a discussion about reading comprehension. So hopefully I can do a better job of unpacking the issue here.

If you haven't read the article yet: a recent study claims to have reduced the reading gap between advantaged and disadvantaged groups by localising the background knowledge presented in NAPLAN texts. The logic is simple: if you don't know what an avocado is, you're not going to understand a text about avocados. The baseball experiment is the classic and most cited demonstration of this effect. The NAPLAN study fits with the current evidence on reading comprehension, which has shown time and again that background knowledge is a huge factor. But before we start arguing for localised, or even culturally and economically neutralised, knowledge, let's look more closely at the cases for and against.
The case for localised tests
1. Results increase. We will look more closely at this later.
2. A barrier that may fall more heavily on some classes, cultural groups or races is removed.
3. Some 'knowledges' that are absent from, or not valued in, the mainstream NAPLAN texts are given a place.
These are all morally sound reasons that we can all feel good about. Results magically increase, teachers have done a great job, and everyone is happy. But what if the equality/equity question is more complex than that? There are some fairly significant issues that get ignored when we leap to these feel-good solutions.
The case against localised tests
Results increase because you have taken away, or diluted, a key factor in reading comprehension: broad general knowledge. Of course lowering the bar produces improved results. But does it actually increase a student's capability to approach unseen texts in a world that does not pre-localise them? No. The data is so contextualised as to be (hyperbole trigger warning) meaningless.
If knowledge is needed to understand texts in the world, and the evidence here is strong, does reducing the knowledge needed increase results, or does it just remove an integral aspect of accurate measurement? The latter. All these results tell us is that students can decode and comprehend when the challenge level is low. The measure loses its relative power and, with it, its purpose of measuring students against their peers. It also absolves schools of the responsibility to teach a broad, knowledge-rich curriculum in pursuit of improved reading comprehension.
It is unfortunate that some 'knowledges' dominate our cultural and literary landscape, but that doesn't diminish the fact that, rightly or wrongly, they do. Archaic language exists. Avocados and Westfields exist. The aim of education, in my view, is to give students options. Being able to understand a broad range of texts is one way to increase options; being culturally aware and possessing the kinds of knowledge that confer mobility is another.
Egalitarian ideals in themselves hurt nobody. But there is a danger in lowering the expectations of students and teachers who are already disadvantaged. We need to look to the evidence, and the evidence says that knowledge must be taught. Removing its value and influence from standardised tests won't make that evidence magically disappear, even as NAPLAN results miraculously begin to improve.
In a future post, I'm going to attempt two things: a Hirschean list of the knowledge needed to read the SMH, and a knowledge-neutral list of texts from various contexts. I would love to hear your ideas about what those facts, ideas and lists might look like. Feel free to comment here or on Twitter.