Similar schools (my a***!)

An ex-colleague called me yesterday with a question about the similar schools measure in the performance tables. As we spoke I could feel that creeping uneasiness one experiences when confronted with something you really should know about but don’t. Cue delaying tactics (how’s the family? Good Christmas?) whilst frantically searching for the guidance on the internet. Then it transpired I had no internet connection because the builders had accidentally tripped the switch at the consumer unit. And then, thankfully, we were cut off. Phew!

To be fair, I had read the guidance when it was published last month; I just didn’t really pay much attention and evidently the information hadn’t sunk in. Now was an opportunity to correct that. Besides, I was writing a report and could do with the distraction.
So what is the similar schools measure? How does it work? Essentially it borrows from VA methodology in that end of key stage 2 estimates are calculated from key stage 1 start points, and it is similar to FFT reports in that an estimated percentage ‘likely’ to achieve expected standards in reading, writing and maths is calculated. Unlike FFT, however, the DfE does not then compare that estimated percentage to the actual result. Here’s the process:
1) for each pupil in the previous Year 6 cohort, the probability of that pupil achieving expected standards, based on their prior attainment at key stage 1, is calculated. For example, say that nationally a pupil with a KS1 APS of 17 achieved the expected standard in 85% of cases, and a pupil with a KS1 APS of 15.5 achieved it in 62% of cases. These pupils are therefore classed as likely to achieve expected standards. A pupil with a KS1 APS of 12, however, has only a 38% chance (i.e. nationally, a pupil with this prior attainment achieved expected standards in only 38 out of 100 cases), and is therefore not classed as likely to achieve expected standards.
I made all those probabilities up by the way. They are for illustration purposes. I could have done some proper research – there is a graph in the guidance – but I’m just lazy.
So now we know, based on pupils’ start points and national outcomes, whether each pupil is likely to achieve the expected standard. Once this is done for individual pupils, we can aggregate to produce an estimate for the whole school cohort: simply count the pupils classed as likely to achieve expected standards and divide by the total number of pupils in the cohort.
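If it helps, here’s that step as a minimal Python sketch. The probabilities are the made-up ones from above (the real ones come from national KS1-to-KS2 outcome data), the function and lookup names are mine, and I’m assuming ‘likely’ means a national probability above 50%:

```python
# Illustrative only: made-up national probabilities (as above) of a pupil
# with a given KS1 average point score (APS) achieving the expected standard.
NATIONAL_PROB = {12.0: 0.38, 15.5: 0.62, 17.0: 0.85}

def school_estimate(cohort_aps):
    """Percentage of the cohort classed as 'likely': pupils whose
    national probability exceeds 50%, divided by cohort size."""
    likely = sum(1 for aps in cohort_aps if NATIONAL_PROB[aps] > 0.5)
    return 100 * likely / len(cohort_aps)

# A four-pupil cohort: two pupils are 'likely', so the estimate is 50%.
print(school_estimate([17.0, 15.5, 12.0, 12.0]))  # 50.0
```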
Note that this process has been done for use in the performance tables. These probabilities are not calculated in advance of pupils sitting SATs; they are calculated after the event. We already know what pupils’ results are and whether or not they have met expected standards. Here we are calculating the probability of them doing so based on what pupils with the same prior attainment achieved nationally. It’s retrospective.
In FFT reports, they take this estimated outcome and compare it to the actual result, which gives us the +/- percentage figures seen on the right-hand side of the overview page in the dashboard (those nice dials). Essentially this is FFT telling us the difference between the likely outcome for such a cohort and the actual outcome. This is a form of VA.
That is not what the DfE have done.
2) now each school has an estimate, a probable outcome. This is the percentage of pupils likely to achieve expected standards based on the achievement of pupils with similar start points nationally. Schools are ranked on the basis of this estimated outcome. We now have a big pile of 16,500 primary schools ranked in order of likely result.
3) each school is placed in a group with 124 other schools. The groups are established by selecting the 62 schools immediately above and the 62 immediately below your school in the rankings. These are your ‘similar schools’: schools with similar estimated outcomes on the basis of pupils’ prior attainment. Size of cohort and contextual factors are not taken into account.
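In code, steps 2 and 3 might look something like this. To be clear, this is a sketch, not the DfE’s actual implementation: the data structure and function name are mine, and I don’t know how the guidance handles schools at the very top and bottom of the ranking, so the clamping here is my assumption:

```python
# A sketch of steps 2 and 3: rank every school by its estimate, then take
# the 62 schools either side to form a group of 125. 'schools' is assumed
# to be a list of dicts with 'name' and 'estimate' keys (hypothetical).

def similar_schools(schools, name, width=62):
    ranked = sorted(schools, key=lambda s: s["estimate"], reverse=True)
    idx = next(i for i, s in enumerate(ranked) if s["name"] == name)
    # Assumption: at the extremes of the ranking, the window is shifted
    # (clamped) so that every group still contains 2 * width + 1 schools.
    start = max(0, min(idx - width, len(ranked) - (2 * width + 1)))
    return ranked[start:start + 2 * width + 1]
```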
4) then – and this is where it gets a bit odd – they take each school’s actual results (the percentage of pupils in each school that achieved the expected standards) and rank the schools in the group on that basis. Schools are then numbered from 1 to 125 to reflect their position in the group. In theory this should work: the schools all have similar prior attainment, so ranking them by actual results should reflect distance travelled (sort of). Except it doesn’t. Not really. Looking at the data yesterday, I could see schools ranked higher despite having much lower VA scores than schools below them.
The similar schools measure therefore conflicts with the progress measure, which raises the question: why not just rank schools in the group on the basis of progress scores rather than attainment? A combined progress measure, like FFT’s Reading and Maths VA score, would help; or, at the very least, calculate the difference between the actual result and the estimate and rank on that basis. The fact that the school estimates are not published bugs me, too. These should be presented alongside the number of pupils in the cohort and some contextual data – % SEN, EAL, FSM, deprivation indicators and the like. If part of the reason for doing this is to help schools identify potential support partners (that’s what the guidance says), then surely this data is vital.
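To make the oddity concrete, here’s step 4 sketched in Python, along with a quick check for the kind of conflict I spotted in the data. The ‘actual’ and ‘va’ field names are mine, not the DfE’s:

```python
# Step 4 as a sketch: rank the group of 125 on actual attainment alone,
# then look for the conflict described above: schools ranked higher
# despite having lower VA scores than schools below them.

def rank_group(group):
    """Rank schools in the group by actual attainment, 1 (best) to 125."""
    ranked = sorted(group, key=lambda s: s["actual"], reverse=True)
    for pos, school in enumerate(ranked, start=1):
        school["rank"] = pos
    return ranked

def va_conflicts(ranked):
    """Yield pairs where a school outranks another despite lower VA."""
    for i, higher in enumerate(ranked):
        for lower in ranked[i + 1:]:
            if higher["va"] < lower["va"]:
                yield higher["name"], lower["name"]
```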
Not factoring in cohort size is a particular issue. A school with 15 pupils, of whom 60% achieved the expected standard, will be ranked higher in the group than a school with 100 pupils of whom 58% achieved the expected standard. In the former school one pupil accounts for roughly 7% of the cohort; in the latter, just 1%. It’s hardly a fair comparison.
And of course no adjustment is made to account for that high percentage of SEN pupils you had in year 6, or all those pupils that joined you during years 5 and 6, but that’s an issue with VA in general.
I get the idea of placing schools into groups of similar schools, but to blow all that by then ranking schools in the group on the basis of results, without factoring in cohort size or contextual factors, seems wrong. And to overlook the fact that schools can be ranked higher despite having poorer VA scores is a huge oversight. Surely this hints at a system that is flawed.
So, there you go. That’s the similar schools measure. Go take a look at the performance tables and see where you rank and which schools you’re apparently similar to.
And then join me in channelling Jim Royle:
Similar schools, my arse!
