Slave to the algorithm

A few months back someone posted a screenshot from an Ofsted report on Twitter. The paragraph in question stated that ‘according to the school’s own tracking data, most pupils are not making expected progress’. Ouch! The school appeared to have shot itself in the foot with its own system.

It’s tempting to write this off as an isolated case: a naive headteacher who made an error of judgement. More fool them. But it is far from isolated; it’s actually quite common. I regularly go into schools and get shown tracking systems that are awash with red. Loads of children are apparently below ‘age-related expectations’ and are not making ‘expected progress’. Yet, invariably, the headteacher will claim that ‘this is not a true reflection of the pupils in our school’ and that ‘if you were to look in their books you’ll see the progress they’ve really made’, which raises a simple question:

What is the value of a system that is at complete odds with reality?

It does the school no favours whatsoever, yet they soldier on with it because they’ve ‘already paid for it’, they’ve been ‘using it for years’, ‘the staff understand it’ and ‘the governors like the reports’. The fact that it does not paint the school in a favourable light is not enough to overcome their inertia. It’s like changing banks: hardly anyone does it.

This term the issue has become particularly apparent due to the simplistic algorithms used in some systems. Essentially, systems count how many objectives have been assessed as achieved or secured, and present this as a percentage of the total number of objectives to be taught that year. If the system has the in-built, crude expectation that pupils will achieve a third of the objectives per term (a common approach), then any pupil who has not achieved over 33% of the year’s objectives this term will be classified as below age-related expectation and will not be awarded the additional point that indicates they have made so-called expected progress. But is ‘age-related expectation’ really a simple on/off thing, or is it more subtle than that? More realistically, the vast majority of pupils are likely to be working within the age-appropriate curriculum; it’s just their security within it and the support they require that differ.
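
To see just how blunt this is, here is a minimal sketch (in Python, with hypothetical names and values, not any vendor’s actual code) of the kind of calculation such a system appears to perform: count the objectives marked as achieved, divide by the year’s total, and compare the result against a flat one-third-per-term threshold.

```python
# A minimal sketch of the crude counting algorithm described above.
# All names, bands and thresholds are illustrative assumptions.

def at_age_related_expectation(achieved: int, total_objectives: int,
                               terms_elapsed: int) -> bool:
    """Classify a pupil against the flat 'third of objectives per term' rule."""
    expected_fraction = terms_elapsed / 3            # the in-built expectation
    actual_fraction = achieved / total_objectives    # proportion marked achieved
    return actual_fraction >= expected_fraction

# After one term, a pupil with 11 of 36 objectives achieved (~31%) falls just
# short of the 33% threshold, so the system flags them 'below ARE' and
# withholds the 'expected progress' point, however securely they are
# actually working within the curriculum.
print(at_age_related_expectation(achieved=11, total_objectives=36,
                                 terms_elapsed=1))  # False
```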

So ‘ARE’ is far more nuanced than the data suggests, yet schools are putting up with stark binary approaches. Rather than turning off their systems and attempting to do something more meaningful instead, they find workarounds, export the data and convert it into something else, or try to ignore the vast swathes of the system that make their school look bad. But what happens when someone asks how much progress the pupils are making? With a sigh, they will inevitably turn back to the data because they have nothing else, and risk shooting themselves in the foot in the process.

It appears that we have become hard-wired to quantify progress: to distil pupils’ learning down to a neat linear point scale even when it does us no favours whatsoever. Even when it bears no relation to the achievement of our pupils. Even when it jeopardises the standing of our schools. We are evidently finding it exceedingly difficult to break the chain.

But break the chain we must. Legacy tracking systems – those originally designed to measure progress through levels – only serve one group of pupils well: those who are catching up. These pupils can gain extra points by progressing rapidly through the curriculum; they make ‘better than expected progress’ in the traditional sense. However, a pupil who starts and finishes the year broadly at age-related expectations appears to have made less progress despite deepening their learning, and the progress of the pupil who is closing gaps from the previous year is also not properly recognised. As for the SEN pupil who has not covered much in terms of curriculum but has overcome significant barriers to learning, their progress hardly registers on the scale at all.
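
A short sketch makes the point. Assuming a hypothetical four-band scale (the bands and point values are illustrative, not taken from any real system), the arithmetic of a linear point scale credits movement between bands and nothing else:

```python
# An illustrative sketch of why a linear point scale only rewards coverage.
# The bands and point values below are hypothetical.

BAND_POINTS = {"emerging": 0, "developing": 1, "secure": 2, "exceeding": 3}

def progress_score(start_band: str, end_band: str) -> int:
    """Progress on a linear scale is simply end point minus start point."""
    return BAND_POINTS[end_band] - BAND_POINTS[start_band]

# The catching-up pupil leaps two bands and is celebrated:
print(progress_score("emerging", "secure"))   # 2: 'better than expected'
# The pupil who starts and ends the year 'secure' but deepens their
# understanding registers zero progress, as does the pupil overcoming
# barriers to learning that the scale simply cannot see:
print(progress_score("secure", "secure"))     # 0
```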

Catching up, filling gaps, deepening understanding and overcoming barriers to learning – it is clear that we need more intelligent systems that are capable of recognising these different types of progress and treating them as equals. I can’t see a simple algorithm doing this. Surely only a human is capable of identifying such complexities of learning and making an accurate assessment of progress. Unfortunately we have become accustomed to having a system make the decision for us. We have effectively absolved ourselves of responsibility for assessment and handed it over to a machine. Tick the box and press a button. This might have made us feel a bit less accountable in the past but now it’s starting to backfire. All too often we find ourselves at odds with our systems.

As daunting as it sounds, it’s time we started to wrest back control of our data before it bites us on the backside. Do we really want to present data that erroneously suggests that only half of the pupils are at age-related expectations and few are making expected progress? No, of course not.

Now ask yourself this: if your tracking system ceased to exist would you reinvent it?

No?

So, what would you do instead?

Whatever your answer, it probably makes a lot more sense than many of the pseudo-level approaches currently on offer. I am not saying that we should ditch systems altogether; I’m simply saying we need to find better ways of doing things – ways that more realistically reflect the subtleties and complexities of pupils’ learning.

What will be the consequences if we don’t?
