Writing instruction in Australia has been a bit of an orphan until quite recently. Speaking for myself, I was cursed with capable writers at the beginning of my career. Why do I say cursed? Because there wasn’t a huge impetus for teaching writing. I thought that the mechanics of writing were kind of innate to students, a bit like the whole language version of writing instruction. I like to think of this as the ‘Field of Dreams’ approach - if you immerse them, they will come. The syllabus also kindly let me off the hook. The emphasis on meaning, rather than synthesis, meant I could blithely assess for conceptual understanding and gloss over the writing itself.
Not any more. The new NESA syllabus has acknowledged this gaping absence of direction, and sentence-level instruction has been added to the syllabus. Literacy has been a cross-curricular priority for many years and we are finally at the stage where we’re admitting defeat. Of course, this has been met with mixed reviews. The English Teachers’ Association of NSW responded by saying, “Returning sentence structure and all of that kind of stuff purely to English I think is unfortunate,” but I’m not convinced it was really that well served by English departments in the first place, let alone whole schools.
The can has been kicked down the road by too many groups and students are being let down. If primary writing instruction hasn’t been completely effective, the can gets kicked down the road to high school. If high school instruction is de-emphasised or absent, the can gets kicked to universities. A lifetime of writing provisions and practices like Universal Design for Learning can result in near-complete avoidance of writing altogether. Currently, only 85% of trainee teachers in Australia meet the literacy standard required to graduate on their first attempt at the LANTITE test. And all this after universities have passed them through their coursework for several years.
Last year, I took action by enrolling our students in Assessing Writing Australia, to get some data that would measure their writing skills in that moment, without having to wait months for NAPLAN results. It was the beginning of my journey out of the learned helplessness that I had started to feel in recent years. I spoke to Jeanette Breen recently about how we could move forward with that data in the secondary context and I was surprised to see how many problems we shared.
For those who haven’t been involved in Assessing Writing Australia, what is the principle behind Comparative Judgement and what data does it give?
Comparative Judgement is a statistical model that uses the decisions of judges on pairs of writing laid side by side to determine which is better. When done repeatedly by multiple judges, it creates a continuum of student data that allows us to compare writing over time. We can directly measure progress between schools, between cohorts and for a single student. The scaled score is obtained not through an absolute judgement (as we have been used to in traditional moderation and marking) but through a more reliable, relative judgement. So rather than ask ‘What mark would you assign this piece?’ we ask, ‘Which piece of writing is better?’
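The interview doesn’t go into the statistics behind the scaling, but for readers who want a feel for how ‘which is better?’ decisions become a score, here is a minimal sketch assuming a simple Bradley-Terry model fitted to invented judgements. It is an illustration only – No More Marking’s actual model and reporting scale may differ.

```python
# Minimal sketch: turning pairwise "which is better?" judgements into a
# scaled score per script with a simple Bradley-Terry model.
# Illustration only - the judgements are invented and the real
# No More Marking scaling may differ.
import numpy as np

# Hypothetical judging data: (winner, loser) for each paired decision.
judgements = [
    ("script_A", "script_B"), ("script_A", "script_C"),
    ("script_B", "script_C"), ("script_C", "script_D"),
    ("script_A", "script_D"), ("script_B", "script_D"),
]

scripts = sorted({s for pair in judgements for s in pair})
theta = {s: 0.0 for s in scripts}   # latent writing quality per script

# Gradient ascent on the Bradley-Terry log-likelihood, with a small
# penalty so unbeaten scripts don't drift off to infinity.
lr, penalty = 0.1, 0.01
for _ in range(1000):
    grad = {s: -penalty * theta[s] for s in scripts}
    for winner, loser in judgements:
        p_win = 1.0 / (1.0 + np.exp(theta[loser] - theta[winner]))
        grad[winner] += 1.0 - p_win
        grad[loser] -= 1.0 - p_win
    for s in scripts:
        theta[s] += lr * grad[s]

# Report on a familiar mean-500, SD-100 style scale.
values = np.array([theta[s] for s in scripts])
scaled = 500 + 100 * (values - values.mean()) / values.std()
for s, score in zip(scripts, scaled):
    print(f"{s}: {score:.0f}")
```

The more judgements each script receives, the more stable its position on the continuum becomes, which is where the reliability Jeanette describes below comes from.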
Marking writing is problematic for educators because we are used to making a subjective decision about a student’s writing and assigning a mark. Consistency with an open-ended writing assessment is incredibly difficult, and it is not uncommon for teachers to disagree with each other and even with themselves. Daisy Christodoulou explains this here.
So the data that comes from CJ achieves reliability because every student is judged by multiple pairs of eyes. What is interesting with CJ is that we do have a reasonably consistent and collective understanding of quality. We can tell this from a judging statistic called the infit, which shows how consistent each of us was when compared with the others, and the range of judgements for each particular script in a judging session.
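The interview doesn’t spell out how the infit is calculated, so as a rough, assumed illustration of the usual Rasch-style statistic, a judge’s infit can be thought of as the information-weighted mean-square of their residuals against the model’s expected probabilities (taken here from a scaling like the sketch above):

```python
# Assumed, Rasch-style infit mean-square for one judge: the information-
# weighted average of squared residuals against the model's expectations.
# Values near 1 suggest the judge is broadly consistent with the consensus;
# much larger values flag a misfitting judge (or an unusual script).

def judge_infit(decisions):
    """decisions: (observed, expected) pairs for one judge, where observed
    is 1 if the judge chose the first script of the pair (else 0) and
    expected is the model's probability that the first script wins."""
    residual_sq = sum((obs - exp) ** 2 for obs, exp in decisions)
    information = sum(exp * (1 - exp) for obs, exp in decisions)
    return residual_sq / information

# A judge whose choices mostly line up with the model's expectations.
print(judge_infit([(1, 0.8), (1, 0.7), (0, 0.4), (1, 0.9), (0, 0.3)]))
```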
So your school, Templestowe Heights Primary in Melbourne, has been a bit of a guinea pig. Tell me about your journey with CJ.
Templestowe Heights has been the blank canvas for a writing journey whose beginnings are not unlike those of many other schools. We began with a program, with coaching and with capacity building. We even built our own writing rubric mapped to the Victorian curriculum. We were invested in boosting our data, but we arrived at the conclusion that what we were doing wasn’t working. We had engagement, but we couldn’t see the transfer of skills beyond students writing long creative pieces in a workshop-style approach, which over time turned out to mean writing about the same topic over and over. I shared our experience with Think Forward here.
The students weren’t equipped for the release from basic skills to writing deeply for a purpose, and we needed to go back to writing mechanics. This meant an overhaul of resources and of teacher capability, as our instruction at the time was tied to a limiting, student-centred product. That journey in vulnerability changed our direction as we explored resources such as The Writing Revolution with a view to bringing our teaching into tight alignment. But we needed to know whether these interventions would impact long-term performance.
So my exploration of CJ began not long after, during the world’s longest lockdown. I read Daisy Christodoulou’s ‘Making Good Progress,’ attended a 2am info webinar and contacted Dr Chris Wheadon, founder of No More Marking, whose process had been employed successfully by over 2000 schools across the UK.
The data point for our Year 3 cohort in the first trial, in March 2021, showed average achievement when compared to 25 other schools – this was not unexpected when considering the interrupted schooling of 2020 and the scattergun teacher knowledge we’d already identified.
We changed our framework to the use of slides, and writing was stripped back to sentence level using TWR and several other well-documented approaches. When more lockdowns hit in 2021, we stayed the course, continuing to embed sentence-level instruction into learning online. The growth for the November 2021 cohort, which was maintained into February 2022, was really exciting. Emina McLean and I wrote about this here. The gap between genders had also narrowed somewhat, despite students missing nearly two terms of face-to-face teaching.
When we chatted, I noticed a lot of commonalities between your experience and mine. I’d previously been guilty of sweeping the mechanics of writing under the rug when assessing, and taking a more conceptual approach. What’s the problem with that?
The issues between primary and secondary sound parallel. I was also unaware that there were other methods to assess writing and gather data apart from NAPLAN and rubrics. Moderation sessions with our rubric at THPS - while lively - showed the gaps.
Teachers looking at extended student writing seemed to focus on voice and spent a lot of time arguing about definitions of quality. We even went so far as to read the writing aloud so we could limit bias and read for content rather than mechanics. In retrospect, this highlights the many factors that a student needs to integrate accurately in the formation of just one sentence. We know now that it’s not enough to accept what a student intended to create - the standard you walk past is the standard you accept. The variance in teachers’ theoretical knowledge reflected known problems with initial teacher education (ITE) and a lack of confidence to teach writing.
Aside from this, it was the individual conferencing at the formative assessment level that continued to be problematic, because by the time you had a chance to provide feedback, the writing had often gone beyond the point where you could help students change something of value. Conferencing is useful, but more so when we can assess the small steps, so I advocate for whole-class feedback that everyone can benefit from, rather than one student at a time. When specific feedback is missing, I think this transfers into massive slabs of ‘stream of consciousness’ and students writing the way they speak. These are troubling issues that we see with students as they get older, and the CJ data has allowed us to intervene in this specifically.
I took part in the Comparative Judgement trial because I was overwhelmed with the granularity and volume of the NAPLAN data. This data was never intended to be used for 1:1 intervention. It’s completely unfeasible. Is this a solvable problem with No More Marking?
Absolutely. I’ve learned that it is impossible to teach like this. Previously, we were conferencing to help students set learning goals, having them record these and make them visible on their desks, with some sense that this would encourage them to achieve ‘editing for better word choice’ or ‘making sentences more complex’.
If I ‘comparatively judge’ the time spent on this against the speedy feedback loops we use now, there is simply no contest. Input must equal output.
Busy writers, engagement with creating texts and setting writing goals do not necessarily equate to learning. It was Carl Hendrick who said that feedback should be more work for the recipient than the donor – and this is where the data has changed things for us. I am a big believer in simply asking the students what they think, and I have been confronted with some incredible feedback this way. When I asked them about goal setting, sticky notes and my comments in the margins, they told me straight away that they do not read them. I had assumed so much.
We use the insights from the larger projects we are part of to address the needs of the whole class. An example from this year: we gave a narrative stimulus image of some children on a boat looking out to sea, and an enormous number of students opened with 'Hi, my name is...'. Our thought was that they were emulating Diary of a Wimpy Kid instead of responding with a sentence that considered the reader. They were writing like they spoke, and in a style that didn’t engage with the many different ways the stimulus could be addressed creatively.
I would have been happier with 'It was a beautiful sunny day when...'. An instructional sequence on this became our next teaching goal. We also used sample paragraphs from the project, asking students to rewrite them with attention to punctuation, capitals or sentence combining, or gave multiple-choice options (‘choose the accurately formed sentence’) as retrieval tasks and quick data loops.
We have found we can gather data and address needs very quickly, and the end-of-year CJ project will help us measure the students’ application of all their learning. Importantly, we will not assume they have learnt it just because we taught it – we will stay the course, working from the sentence level and moving to paragraphs, and then wait for the data to dissect it.
Primary and secondary teachers both need to stop kicking the can down the road, and for the first time in a long time, they’re well supported to take action. Jeanette is running free webinars on Thursday 14th and Tuesday 26th July specifically for Australian secondary schools. You don’t need to be a head of department to attend - just an individual English teacher or pedagogy leader who is curious about how to support their students as writers. You can register for the event below.