5 things primary governors should know about data. Part 3: progress measures

The key stage 1-2 (KS1-2) progress measure is a value added (VA) measure, and this is nothing new. We have had VA measures for years, both at KS2 and at KS4. But previously these VA measures – which took up pages of the old RAISE reports – played second fiddle to the levels of progress measure. This was for a number of reasons:

  1. Levels of progress was a key measure with floor standards attached
  2. It was in the same language as everyday assessment
  3. It made target setting easy (just add 2 levels to KS1 result)
  4. It was simple and everyone understood it
But levels have gone, and for good reason: they labelled children; they were best-fit, so pupils could have serious gaps in learning but still be placed within a level; and progress became synonymous with moving on to the next level rather than consolidating learning and developing deeper understanding. Plus, they were never designed to be split into sublevels or points and used for progress measures anyway.

Most confusing of all, the two progress measures – VA and levels of progress – often contradicted one another. It was possible, for example, for a school to have all pupils make ‘expected’ progress of 2 levels, and yet have a VA score that was significantly below average. This was because – contrary to popular belief – the VA measure had nothing to do with levels; it was all to do with average KS2 scores from KS1 start points. 2 levels might be enough from one start point but nowhere near enough from another. 

But this is all rather academic now because levels have gone and we are left with a single progress measure: VA.

So, what is VA? 

VA involves comparing a pupil’s attainment score at KS2 to the average score for pupils with similar prior attainment. There are a few myths we need to bust before we continue:
  1. We do not need data in the same format at either end of the measure to calculate VA. Currently we have KS1 (sub)levels at the beginning and KS2 scaled scores at the end. These data are not in the same format. We needed compatible data for the levels of progress measure but not for VA. This misconception is a hangover from levels, and it’s something that is better understood in secondary schools where they have KS2 scores at one end and GCSE results at the other.
  2. We do not even need the same subjects at either end. Again, this is better understood in secondary schools, where the baseline comprises KS2 scores in reading and maths (note: no writing) and the end point is any GCSE the student sits. VA can be measured from KS2 test scores in reading and maths to GCSE result in Russian or Art, for example. 
  3. KS1-2 VA has nothing to do with that magic expected standard score of 100. Plenty of pupils get positive progress scores at KS2 without achieving a score of 100 in KS2 tests. They just need to exceed the national average score of pupils with the same prior attainment, and scoring 92 might be enough, depending on start point. And pupils that achieved 2b at KS1 (often referred to as ‘expected’ in old money) do not have to achieve 100 to make ‘good’ progress; in 2017 they had to exceed 102!
Each pupil’s KS1 result – their prior attainment or start point – is therefore crucial to this process. Each p-scale, level and sublevel in reading, writing and maths at KS1 has a point value, which enables the DfE to calculate a KS1 average point score (APS) across the three subjects for every child that has a KS1 result (note: pupils without a KS1 result are excluded from progress measures). Their KS1 APS is then used to place pupils into a prior attainment group (PAG), of which there are currently 24, ranging from pupils that were on p-scales at KS1 (pupils with SEND) up to pupils that were Level 3 in all subjects. There is even a PAG for pupils that were Level 4 at KS1, but there aren’t many pupils in that group.
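To make the APS calculation concrete, here is a minimal sketch in Python. The per-subject point values and the weighting (reading and writing averaged first, then combined with maths) are not spelled out in this post; they are inferred so that the figures match the worked examples below, so treat them as illustrative rather than the DfE’s definitive method.

```python
# Illustrative sketch: calculating a KS1 average point score (APS).
# Point values and weighting are inferred to match the worked examples below;
# treat them as illustrative rather than the DfE's definitive method.

KS1_POINTS = {"1": 9, "2c": 13, "2b": 15, "2a": 17, "3": 21}  # per-subject point values

def ks1_aps(reading: str, writing: str, maths: str) -> float:
    """Average the reading and writing points, then average the result with maths."""
    english = (KS1_POINTS[reading] + KS1_POINTS[writing]) / 2
    return (english + KS1_POINTS[maths]) / 2

print(ks1_aps("2b", "2b", "2a"))  # 16.0 – the first worked example below
print(ks1_aps("2c", "1", "1"))    # 10.0 – the second worked example below
```

The resulting APS is what slots a pupil into a PAG; the actual group boundaries and benchmarks are published in the primary accountability guidance.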

All pupils with KS1 results are therefore slotted into PAGs alongside thousands of other pupils nationally. The DfE then take in all the KS2 test scores and calculate the average KS2 score for each PAG. Let’s look at two examples for reading at KS2 (the process is the same for maths):
  • We have two pupils in a class that have KS1 prior attainment of 16 APS (2b in reading and writing and 2a in maths at KS1). They are placed into the same PAG as thousands of other children nationally with 16 APS at KS1. The DfE take in all the thousands of reading test scores for all the pupils in this PAG and calculate the average score, which for this PAG is 105 (note: in reality benchmarks are to 2 decimal places e.g. 104.08). 105 therefore becomes the benchmark for this group. Our two pupils scored 108 and 101 in their KS2 tests and both have met the expected standard. However, only one pupil has a positive progress score. The pupil scoring 108 has beaten the national benchmark by 3 whilst the other has fallen short by 4. These pupils’ VA scores are therefore +3 and -4 respectively.
  • We have two other pupils in our class who have KS1 prior attainment of 10 APS (2c in reading and Level 1 in writing and maths). They are in the same PAG as thousands of other children nationally with 10 APS at KS1. The DfE collect the reading test scores for all pupils in the group nationally and calculate the KS2 average score, which in this case is 95 (again, in reality this would be to 2 decimal places). 95 therefore becomes the benchmark for this group. Our two pupils scored 98 and 88 in their KS2 tests. Neither has met the expected standard but the first pupil has beaten the national benchmark by 3 whilst the other has fallen short by 7. These pupils’ VA scores are therefore +3 and -7 respectively.
This process is repeated for each pupil that has a KS1 result. All pupils are placed into PAGs and their scores in KS2 tests are compared to the national average score (the benchmark) for pupils in the same PAG. If a pupil beats the benchmark, they have a positive progress score; if they fall short, their progress score is negative. Pages 17-18 of the primary accountability guidance contain a table of all PAGs with their corresponding KS2 benchmarks in reading, writing and maths.
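Pulling the two worked examples together, pupil-level VA is simply the pupil’s KS2 score minus the benchmark for their PAG. Here is a minimal sketch, using the illustrative benchmarks from the examples above rather than the DfE’s published table:

```python
# Illustrative sketch of pupil-level VA: KS2 score minus the PAG benchmark.
# Benchmarks here are the example figures used above, not the DfE's published table.

BENCHMARKS = {16.0: 105.0, 10.0: 95.0}  # KS1 APS -> national average KS2 reading score for that PAG

def progress_score(ks1_aps: float, ks2_score: float) -> float:
    """Positive = beat the benchmark; negative = fell short of it."""
    return ks2_score - BENCHMARKS[ks1_aps]

pupils = [(16.0, 108), (16.0, 101), (10.0, 98), (10.0, 88)]
print([progress_score(aps, score) for aps, score in pupils])  # [3.0, -4.0, 3.0, -7.0]
```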

What happens next?

The DfE take all progress scores for all pupils in the year 6 cohort in your school, and calculate the average. In our example above we have four pupils (two with prior attainment of 16 APS and two with 10 APS). Let’s imagine that is our entire Y6 cohort (it’s a small school!). We add up the progress scores (3 + -4 + 3 + -7 = -5) and calculate the average (-5 / 4 pupils = -1.25). This school’s VA score is therefore -1.25, and you will see these aggregated progress scores presented in the performance tables and ASP (where they are colour coded and categorised), in Ofsted’s IDSR (where they inform the areas to investigate), and in FFT reports (where they are shown to be in line with, or significantly above or below average). 
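That aggregation is nothing more than a mean of the pupil-level scores. A quick sketch using the four example pupils above:

```python
# School-level VA: the mean of the cohort's pupil progress scores.
progress_scores = [3, -4, 3, -7]   # the four example pupils above
school_va = sum(progress_scores) / len(progress_scores)
print(school_va)  # -1.25
```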

And what does -1.25 mean? Putting it crudely, it tells us that, on average, pupils scored 1.25 fewer points in their test than similar children nationally. And when the DfE say ‘similar children’, they are basing this on prior attainment alone, not contextual factors. A progress measure that takes context into account is called Contextual Value Added (CVA), which the DfE scrapped in 2011, but which FFT still offer. CVA is an attempt to create a like-for-like progress measure but is not favoured by government.

Are there any issues with KS1-2 progress measures? Whilst VA is preferable to levels of progress, there are numerous problems:
  1. Writing! There is no test for writing at KS2 but there is still a progress measure. As in reading and maths, pupils are set benchmarks in writing that are fine-graded to two decimal places (see p17-18 here), but because pupils do not have a test score, these benchmarks are essentially unachievable. Instead, the DfE have assigned ‘nominal’ scores to teacher assessments for writing, which makes for a very clunky measure (see the sketch after this list). The vast majority of pupils are assessed as either working towards the expected standard, working at the expected standard, or working at greater depth. These attract values of 91, 103 and 113 respectively. In reading and maths, pupils can achieve test scores in the range of 80-120; in writing, they get 91, 103 or 113. It doesn’t work.
  2. Pupils below the standard of the tests/curriculum are also assigned nominal scores, which range from 59 for the lowest p-scales, up to 79 for the highest of the pre-key stage assessments. These pupils often have SEND and tend to end up with big negative progress scores, which can have a detrimental impact on a school’s overall progress scores. The system is therefore punitive towards those schools that have large groups of pupils with SEND (or towards small schools with just one such pupil). The DfE plan to mitigate this issue by capping negative scores this year. 
  3. It can’t be predicted. The benchmarks change every year (they are the national average scores for each PAG that year), and we don’t know what they are until after pupils have left. This is a headache for many headteachers and senior leaders.
  4. It relies on the accuracy of KS1 results. I’ll say no more about that. 
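To illustrate the writing problem in item 1 above: the teacher assessment is converted to a nominal score and then compared with a fine-graded benchmark. The outcome codes below are shorthand of my own and the benchmark is just an example figure; the values 91, 103 and 113 are those quoted above.

```python
# Illustrative sketch: nominal scores assigned to writing teacher assessments.
# The keys are shorthand of my own; the values are those quoted in item 1 above.
WRITING_NOMINAL_SCORES = {
    "WTS": 91,   # working towards the expected standard
    "EXS": 103,  # working at the expected standard
    "GDS": 113,  # working at greater depth
}

def writing_progress(teacher_assessment: str, benchmark: float) -> float:
    """Compare a coarse nominal score with a fine-graded benchmark (e.g. 104.08)."""
    return WRITING_NOMINAL_SCORES[teacher_assessment] - benchmark

print(writing_progress("EXS", 104.08))  # -1.08 – only three possible scores, hence the clunkiness
```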
Now you know how these progress measures are calculated, and what the issues are. But what do they mean in terms of school accountability? 

That’s the subject of the next post in this series: headlines and trends.
