The Progress Obsession

Despite my best efforts to convince people of the futility of the exercise, probably the most common question I get asked is:

“How do I show progress?” 

Why is this futile? Because what they are really asking is: “How do I use data to ‘prove’ that pupils have made ‘good’ progress?”

The reason for the inverted commas is that data does not really ‘prove’ anything – especially when it’s based on something as subjective as teacher assessment – and what constitutes ‘good’ progress varies from pupil to pupil. What is regarded as ‘good’ for one pupil may not be enough for the next. One pupil’s gentle stroll is another pupil’s mountain to climb. Progress is a multi-faceted thing. It is catching up, filling gaps, deepening understanding, and overcoming those difficult barriers to learning. It can be accelerating through curriculum content, or it can be consolidating what has been learnt; it can mean no longer needing support with fundamental concepts, or it can be about mastering complex skills. Different pupils progress at different rates and get to their destination in different ways.

Progress is not simple, neat or linear – there is no one-size-fits-all pathway – and yet all too often we assume it is for the sake of a convenient metric. We are so desperate for neat numbers – for numerical proxies of learning – that we are all too willing to overlook the fact that they contradict reality, and in some cases may even shoot ourselves in the foot by presenting an average line that no pupil actually follows. Rather than a line that fits the pupil, we make pupils fit the line.

Basically, we want two numbers that supposedly represent pupils’ learning at different points in time. We then subtract the first number from the later one and, if the numbers go up – as they invariably do – then this is somehow seen as evidence of the progress that pupils have made. Perhaps if they have gone up by a certain amount then this is defined as ‘expected’, and if they’ve gone up by more than that it’s ‘above expected’. We can now RAG rate our pupils, place them into one of three convenient boxes, ready for when Ofsted or the LA advisor pay a visit. Some pupils are always red, and that frustrates us because it doesn’t truly reflect the fantastic progress those children have actually made, but what can we do? That’s the way the system works. We have to do this because we have to show progress.
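The arithmetic being described really is this simple – a sketch in Python, with the point scores, the ‘expected’ gain and the RAG thresholds all invented here for illustration:

```python
# A sketch of the progress measure described above. The point scores,
# the 'expected' gain and the RAG thresholds are all invented -- which
# is rather the point: the categories rest on arbitrary numbers.

def rag_rate(autumn_score: float, summer_score: float,
             expected_gain: float = 3.0) -> str:
    """Subtract one number from the other and file the pupil in a box."""
    gain = summer_score - autumn_score
    if gain < expected_gain:
        return "red"    # 'below expected'
    if gain == expected_gain:
        return "amber"  # 'expected'
    return "green"      # 'above expected'

# Three pupils with the same starting point and different journeys:
for name, (autumn, summer) in {"A": (12.0, 16.0),
                               "B": (12.0, 15.0),
                               "C": (12.0, 14.0)}.items():
    print(name, rag_rate(autumn, summer))
```

Nothing in the calculation knows anything about gaps filled, depth gained or barriers overcome; pupil C may have made the most remarkable progress of the three and still lands in the red box.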

Right?

First, let’s get one thing straight: data in a tracking system just proves that someone entered some data in a tracking system. It proves nothing about learning – it could be entirely made up. The more onerous the tracking process – remember that 30 objectives for 30 pupils is 900 assessments – the more likely teachers are to leave it all to the last minute and block fill. The cracks in the system are already beginning to show. If we then assign pupils into some sort of best-fit category based on how many objectives have been ticked as achieved (count the green ones!) we have recreated levels. These categories are inevitably separated by arbitrary thresholds, which can encourage teachers to give the benefit of the doubt and tick the objectives that push pupils into the next box (depending on the time of year of course – we don’t want to show too much progress too early). Those cracks are getting wider. And finally, each category has a score attached, which now becomes the main focus. The entire curriculum is portioned into equal units of equal value and progress through it is seen as linear. Those cracks have now become an oceanic rift with the data on one side and the classroom on the other.
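The best-fit categorisation can likewise be sketched – the category names and cut-offs below are invented for illustration, but any choice of thresholds has the same effect:

```python
# Illustration only: the category names and thresholds are invented,
# but any choice of cut-offs behaves the same way -- it recreates levels.

def best_fit(objectives_ticked: int, total: int = 30) -> str:
    """Assign a best-fit category from a count of green ticks."""
    proportion = objectives_ticked / total
    if proportion >= 0.8:
        return "secure"
    if proportion >= 0.5:
        return "developing"
    return "emerging"

# One benefit-of-the-doubt tick at the boundary flips the category:
print(best_fit(23))  # developing
print(best_fit(24))  # secure
```

A single generous tick at the threshold changes the pupil’s category, and therefore the score attached to it – exactly the incentive described above.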

Assessment is detached from learning.

This rift can be healed but only if we a) wean ourselves off our obsession with measuring progress, and b) sever the link between teacher assessment and accountability. Teacher assessment should be ring-fenced: it should be used for formative purposes alone. Once we introduce an element of accountability into the process, the game is lost and data will almost inevitably become distorted. Besides, it’s not possible to use teacher assessment to measure progress without recreating some form of levels, with all their inherent flaws and risks.

Having a progress measure is desirable but does our desire for data outweigh the need for accuracy and meaning? Do our progress measures promote pace at the expense of depth? Can they influence the curriculum that pupils experience? And can such measures lead to the distortion of data, rendering it useless? It is somewhat ironic that measures put in place for the purposes of school improvement may actually be a risk to children’s learning.

It’s worth thinking about.


9 thoughts on “The Progress Obsession”

  1. @norwichtech
    on February 6, 2018 at 2:05 pm

    Absolutely (and I run a tracking company). Accountability is hard, and must necessarily involve coarser measures, but too often it does not take account of the individuals under the data

  2. James Pembroke
    on February 6, 2018 at 8:35 pm

    Yes. It’s a tough one. I’m not anti-data; I’m anti bad data. I’m not anti-tracking systems either; I’m against processes that take up inordinate amounts of teacher time for no real benefit. We need a new way of thinking and some new systems to challenge the old guard and the established orthodoxy. This also requires a change in tune from the DfE. Not easy.

  3. Mark Williams
    on February 7, 2018 at 9:32 am

    I'm stuck between systems that I can use to track progress across years (using standardised scores) and systems that I can use to track progress within a year. For the latter, I only really need to know how similar children compare with each other – that's the only meaningful progress.

  4. James Pembroke
    on February 7, 2018 at 2:47 pm

    Track progress across years using standardised tests. Track gaps in year. The progress takes care of itself.

    As long as you don’t go down the route of asking teachers to tick a million learning objectives. That’s just counterproductive.

  5. Mark Williams
    on February 7, 2018 at 6:33 pm

    Could you unpack 'track gaps in year' a bit more please? I have avoided the million objectives route, but now I need to think a bit more deeply on in-year progress.

  6. James Pembroke
    on February 7, 2018 at 7:57 pm

    Think it’s more a case of teachers being equipped to talk about the gaps in learning than collecting data on them all. Means teachers having in depth curriculum knowledge. Maybe track key learning objectives, but evidence of in year progress is in books.

  7. Simon Pritchard
    on February 7, 2018 at 9:25 pm

    We track in-year against the NAHT KPIs. Data entry 3 times a year. We use the tracking to identify gaps in learning. It’s still based on teacher judgements, so if they aren’t accurate it’s a waste of time. We don’t base appraisal targets on them, i.e. x% at Secure by June etc. We are thinking about using some kind of standardised testing to look at progress from year to year, but want some kind of on-screen test that would produce diagnostic reports to cut down on workload.

  8. James Pembroke
    on February 14, 2018 at 9:02 pm

    Have a look at Star Assessment computer adaptive from Renaissance Learning. Contact @MrLearnwell on twitter. He’s the guru.

  9. Andy Howard
    on August 19, 2018 at 12:32 pm

    And even more complex for SEND pupils, where progress may not even be anywhere near the neat boxes …


© 2024 Sig+ for School Data. All Rights Reserved.