Running to stand still

Yesterday I re-read this from @edudatalab and, following an enlightening discussion with @meenaparam, I took the red pill and discovered that the VA rabbit hole goes deeper than I previously thought. 
Much is made of the issue of progress in junior schools and their correspondingly poor Ofsted outcomes. I’ve tweeted about the problem numerous times and have written a blog post about it, comparing estimates derived from CATs against VA estimates based on the KS1 results. The differences can be enormous, with far higher expectations for KS2 attainment when plotted from KS1 – the gap between the CAT and VA estimates in junior schools is around 3 points on average, with the former being the more accurate predictor.
Inevitably, the finger of blame points squarely at the infant school, and in some cases this may be justified. I’ve worked with a number of junior schools where the large proportion of supposedly high-ability pupils is completely at odds with both the school’s own assessment of pupils on entry and the context of the area. However, as the Education Data Lab article points out, it may not be as simple as this. Is the issue of poor progress in junior schools really about over-inflation of results in the infant school? Or is the cause more complicated and less direct than that?
Could it be that the issue of poor progress in junior schools actually relates to the depression of KS1 results in primary schools?
To get your head round this you need to understand how VA works.
VA involves the comparison of a pupil’s attainment against the national average outcome for pupils with the same start point. 
Now, what would happen if primary schools were dropping KS1 assessments by a sublevel, so that, for example, 2As became 2Bs? If, on average, all those pupils went on to get a 5C, then it would appear that this was the national average outcome for a 2B pupil, when in actual fact it’s the national average outcome for a 2A. The benchmark for a 2B pupil therefore becomes a 5C.
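This mechanism can be sketched in a few lines of Python. The pupil data here is entirely hypothetical, and the sketch assumes the old national curriculum point scores, where each sublevel is worth 2 points (2B = 15, 2A = 17, 5C = 31):

```python
from collections import defaultdict

# Hypothetical cohort: every pupil is genuinely a 2A (17 points) at KS1
# but is recorded as a 2B (15 points), and goes on to score a 5C
# (31 points) at KS2 -- 14 points of actual progress.
pupils = [{"true_ks1": 17, "recorded_ks1": 15, "ks2": 31} for _ in range(100)]

# VA benchmarks are built from the RECORDED start points, so the
# "national average outcome" for a 2B pupil is really computed from
# pupils who were 2As all along.
outcomes = defaultdict(list)
for p in pupils:
    outcomes[p["recorded_ks1"]].append(p["ks2"])

benchmarks = {ks1: sum(v) / len(v) for ks1, v in outcomes.items()}
print(benchmarks)  # {15: 31.0} -- the benchmark for a 2B has become a 5C
```

A genuine 2B pupil is now measured against a benchmark generated by disguised 2As, which is the crux of the problem for junior schools.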
The implications of this for a junior school are huge. It is of course highly unlikely that the infant school would depress their results, so even without any grade inflation the junior school is in a tricky position. The benchmark for their 2B pupils is a 5C because that is apparently what is happening nationally. Unfortunately for the junior school, their 2B pupils are real 2B pupils, not bumped-down 2As.
If we add into this any grade inflation by the infant school then the problem is exacerbated further. The wholesale depression of baselines by primary schools results in unrealistic expectations for schools whose KS1 data are accurate, and any inflation of results at KS1 pushes the expectation still further out of reach. These are the direct and indirect factors which explain why so many junior schools’ RAISE reports have a green half (attainment) and a blue half (progress). Essentially, pupils in junior schools have to make an extra 2 points of progress to make up for the depression of KS1 results by primary schools nationally, and possibly a further 2 points to account for any grade inflation in the infant school. Four extra points of progress just to break even.
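The arithmetic above can be made explicit. This assumes, as in the text, that the national depression and the infant-school inflation are each worth one sublevel, and that a sublevel equates to 2 points:

```python
# "Running to stand still": extra progress a junior school's pupils must
# make before any real progress registers, on the 2-points-per-sublevel
# scale (both one-sublevel figures are the article's working assumption).
POINTS_PER_SUBLEVEL = 2

national_depression = 1 * POINTS_PER_SUBLEVEL  # baselines dropped nationally
infant_inflation = 1 * POINTS_PER_SUBLEVEL     # possible inflation by the feeder school

extra_progress_needed = national_depression + infant_inflation
print(extra_progress_needed)  # 4 points just to break even
```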
Running to stand still.
Unfortunately, the only way to solve this problem is to have a universally administered baseline test.
Watch this space.
