I Predict a Riot

I’ve come to the conclusion that I’m easily misunderstood. I rock up somewhere to give a talk, start ranting about how we have to wean ourselves off our number addiction – stop trying to measure and quantify everything that moves (and if it doesn’t move, measure it until it does) – and people are left thinking, “Who the hell is this guy? I thought he was a data analyst but he’s telling us to quit the data.” Well, that’s not quite true. I’m telling people to quit the dodgy data, to get beyond this ‘bad data is better than no data at all’ mindset. But much of what I see in terms of tracking pupil progress falls well and truly into the bad data camp.

Essentially, there are three main things I take issue with when it comes to tracking systems:

1) The recreation of levels – most systems still use best-fit bands linked to pupils’ coverage of the curriculum. *coughs* “sublevels!”

2) The simplistic, linear points-based progress measures and associated ‘expected’ rates of progress *coughs* “APS!”

3) The attempts to use tracking data, based on ongoing teacher assessment against key objectives, to predict end of key stage test outcomes *coughs* “astrology!”

So, earlier this week, I was attempting to explain my thoughts on points 1 and 2 when someone stated – and I’m paraphrasing here – that ‘the DfE quantify everything so we need to do the same in order to predict outcomes’. First, let me be clear: I am not suggesting schools give up on data – I think that would be foolhardy – I just think we need to be smarter and only collect data that has a genuine and demonstrable impact on pupils’ learning. We have to accept that we can’t quantify everything – as much as some may want to – and admit that much of the data we generate serves accountability and performance management, not learning. Second, I do not believe we should use tracking data to predict end of key stage outcomes. It’s a bad idea for a number of reasons.

First, such predictions hinge on a school’s (or teacher’s) definition of ‘secure’/’on track’/’at age-related expectations’. What constitutes so-called ‘secure’ quite clearly differs from one school to another. Many schools are using systems that provide a convenient answer for them, often based on the simplistic expectation that pupils will achieve a certain percentage of objectives per term: a pupil is expected to achieve, say, a third of the objectives in the first term, two thirds in the second term, and so on. I recall an amusing yet worrying Twitter conversation in which teachers offered up their definitions of end-of-year expectations, all based on a percentage of objectives achieved. Various numbers were thrown into the ring: 51%, 67%, 70%, 75%, 80%. Interestingly, no one suggested 100%, so quite clearly there are many pupils out there tagged as ‘secure’ despite having considerable gaps in their learning, gaps that may well widen over time. If we assume that these pupils are on track to meet expected standards then we may well be in for a shock.
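To make the problem concrete, here is a minimal Python sketch of the sort of per-term percentage rule described above. Everything in it is hypothetical – the one-third-per-term threshold, the function name, the numbers – but it shows how two pupils with very different gaps can earn the same ‘secure’ label:

```python
# Hypothetical tracker rule: a pupil is 'on track' once they have achieved
# a fixed fraction of objectives per term. The one-third-per-term threshold
# is invented for illustration; real systems vary, which is the problem.

def on_track(objectives_achieved: int, total_objectives: int, term: int) -> bool:
    """Return True if the pupil meets the naive per-term expectation."""
    expected_fraction = term / 3  # one third per term over three terms
    return objectives_achieved / total_objectives >= expected_fraction

# Two pupils, same 'secure' label, very different gaps:
print(on_track(20, 30, term=2))  # True – 67% achieved, 10 objectives missing
print(on_track(30, 30, term=2))  # True – 100% achieved, no gaps at all
```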

Next, those key accountability thresholds creep in: 65% for the floor standards, 85% for coasting. So, based on our definition of ‘secure’ or ‘on track’, which is quite possibly wide of the mark, we attempt to estimate the number of pupils that will meet the expected standard, to satisfy ourselves (and our governors) that we’ll be safe this year. Breathe a sigh of relief. All this inferred from a somewhat spurious definition of ‘secure’ that varies from school to school. Worse still are those predictions based on a linear extrapolation of a pupil’s current progress gradient. I remember tracking systems doing this with point scores and the predictions were off the map. A pupil has made 4 points/steps/bands this year, so we assume they will do the same over the next two years and will therefore easily exceed the expected standard (seriously, this is going on).
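For anyone unsure what that extrapolation amounts to, this is a sketch of it – all numbers invented for illustration. It really is this crude:

```python
# Illustrative sketch of the linear extrapolation described above: a pupil
# gains 4 'points' in a year, so the system assumes the same gradient will
# hold for the two remaining years of the key stage. All values hypothetical.

current_score = 16      # hypothetical points score now
points_this_year = 4    # progress made this year
years_remaining = 2     # years left in the key stage

predicted_score = current_score + points_this_year * years_remaining
print(predicted_score)  # 24 – a confident-looking number built on pure supposition
```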

Next, we fall deeper down the rabbit hole and find schools that are converting teacher assessment into a sort of pseudo-scaled score. So, a pupil that is currently ‘secure’ or ‘at ARE’ will have a score of 100, whilst those pupils that are ‘above’ have higher scores, and those that are ‘below’ have lower scores. This is achieved by scoring and weighting each objective, totalling each pupil’s score and standardising it. Horrible. Don’t do this.
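For the avoidance of doubt, here is a hedged sketch of the kind of conversion I mean – the weights, objectives and the arbitrary ‘standardising’ step are all invented, and this is emphatically not a recommendation:

```python
# Illustrative sketch of the pseudo-scaled-score conversion described above,
# shown only to make clear what is being criticised. Each objective gets an
# invented weight; the pupil's weighted total is then 'standardised' around 100.

weights = {"obj1": 1.0, "obj2": 1.5, "obj3": 2.0}    # hypothetical weightings
pupil = {"obj1": True, "obj2": True, "obj3": False}  # objectives achieved

raw = sum(w for obj, w in weights.items() if pupil[obj])
max_raw = sum(weights.values())
pseudo_scaled = 100 + (raw / max_raw - 0.75) * 40    # arbitrary 'standardising'
print(round(pseudo_scaled))  # 92 – looks like a scaled score, but isn't one
```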

My overall concern is the impact these practices have on expectations and aspirations for pupils. Will resources be concentrated on ‘borderline’ pupils, leaving less opportunity for others to deepen their learning? Can a school really promote a ‘learning without limits’ culture if it is distracted by distant thresholds? Will such approaches create a false sense of security that could easily backfire?

And what are the consequences if those predictions are wrong, as they are so likely to be?

Obviously schools will want some idea of likely outcomes, and no doubt governors (and others) will request such information, but really this should only be done for end of key stage cohorts, and any predictions should be informed by the standards, the test frameworks and optional testing. It is extremely risky to try to make the leap from a broad teacher assessment – at the end of year 4, say – to an end of key stage outcome, especially now, when the curriculum is so new. Essentially we are attempting to link observations to outcomes on the basis of a huge amount of supposition.

My firm belief is that tracking systems need to be untangled from accountability and performance management if they are to be truly fit for purpose. They should not be used to set performance targets and they should not be used for making predictions. If they are used in this way then there is always the risk that the data will be manipulated to provide the rose-tinted view rather than the warts-and-all picture that we really need. Instead, tracking systems should be very simple tools for recording and monitoring pupils’ achievement of key objectives – tools that allow teachers to quickly identify gaps in pupils’ learning and respond accordingly, along the lines of the sketch below.
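As a rough illustration of what ‘very simple’ could mean in practice – pupil names, objectives and output format are all placeholders – something like this is the full extent of the cleverness required:

```python
# A minimal sketch of the kind of simple tool advocated above: record which
# key objectives each pupil has achieved and surface the gaps for the teacher.
# No predictions, no extrapolation – just gaps to respond to.

objectives = ["place value", "fractions", "column addition"]
achieved = {
    "Pupil A": {"place value", "column addition"},
    "Pupil B": {"place value"},
}

for pupil, done in achieved.items():
    gaps = [obj for obj in objectives if obj not in done]
    print(f"{pupil}: gaps in {gaps}")
# Pupil A: gaps in ['fractions']
# Pupil B: gaps in ['fractions', 'column addition']
```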

And if they do that then the final outcomes will take care of themselves.

4 thoughts on “I Predict a Riot”

  1. Chris Grabski
    on May 19, 2016 at 11:50 am

    Hi James, I agree with your points regarding data and progress, expected or not. It is easy to talk about numbers or letters and forget that behind the data there are real children. My concern is that when we celebrate those who achieved, we need to remember that behind those percentages of success are real children, with real names, who did not achieve 'expected progress' but still made progress, are still learning. I think even the term itself is very negative. It is like saying 'You have disappointed me'. Important blog. See you on Saturday in Sheffield.

  2. Kirsty Poak
    on May 19, 2016 at 12:23 pm

    I think I'm with you but I'm feeling the pressure. The SENCo wants to be able to predict early in the year if a child is not going to make ARE by the end of the year. She wants to be able to quantify the progress a child who is not at ARE needs to make in order to catch up. We think the % of children at ARE is going to be low this year but others are saying that their numbers are fine, e.g. 90%. That's making us panic. The LA says there's no excuse for a low %ARE because Y6 have been on the new curriculum for 2 years and it's only a L4b anyway. Struggling to find arguments that are pithy but powerful.

  3. James Pembroke
    on May 19, 2016 at 12:42 pm

    First, with regard to your LA, the likelihood of children achieving the expected standard relates to their prior attainment. What is the context of those schools predicting 90% RWM? And what are they basing that prediction on? Maybe you are being pessimistic whilst they are being over-optimistic. We can't really know until all the results are in. And as for the whole 4b thing, that's a red herring. I wish the DfE had never said that because it's caused all sorts of problems and dodgy target-setting antics, particularly from LAs. And anyway, what % of pupils nationally achieved L4b in RWM last year? 69%! And that's based on L4b in reading and maths but just L4 in writing. The new writing standard is a lot harder.

  4. James Pembroke
    on May 19, 2016 at 12:49 pm

    Second, how can you predict what a pupil will achieve by the end of the year? A system certainly can't do that, for the reasons explained in this blog. Usually it'll involve some highly tenuous linear extrapolation. Only a teacher can really know if a pupil is on track or not, using their own professional judgement drawn from experience. A system can help by showing the gaps in pupils' learning and the expected next steps. So, you could quantify this by simply saying they need to secure x number of objectives; maybe express that as a percentage. My main point is that tracking systems should exist primarily to support teachers, not as a basis for predicting end of key stage results. Once you go down that path you are likely to end up with data that is manipulated to provide the expected outcome. Then it's of no use to anyone.
