The numbers must go up

I love this time of year. Spring arrives in a riot of colour and we emerge from our winter torpor, stumbling out of darkness into light. From January onwards we look for signs of spring’s arrival, assessing its progress via a series of milestones: bulbs flowering, birds nesting, that first evening drink in the garden. If we were more methodical we might record the dates on which we first notice certain key indicators: the first snowdrop or cuckoo call, for example. And some weeks nothing happens. Now imagine you were tasked with recording the progress of spring and that your performance was determined by the evidence you collect. Noting occasional dates is no longer enough, so you set about counting flowers once a week. But a week in which ‘no change’ is recorded is frowned upon, which leads to the utterly counterintuitive decision to increase the schedule to daily recording. You are now reduced to getting down on your hands and knees with a ruler to measure plant growth in millimetres. The numbers must go up.

One of the main mistakes made with levels was adopting sublevels and points to measure progress over shorter and shorter periods. This was not done to support teaching and learning; it was a response to the ever-growing pressure on schools to show improvement from one month to the next, a device for accountability and performance management. Consequently, the data ceases to reflect pupils’ learning and instead shows what we require it to show: continual, upward, linear progress. Teachers, under pressure to keep the numbers going up, tick a few more boxes and make sure the latest value is higher than the last one. The metrics we use are not rooted in what can be measured and do not encapsulate the complexities of learning; they simply reflect how many times a year schools collect assessment data. If we had 12 so-called data drops per year, then we’d conveniently have 12 increments on our scale: an increasing number of arbitrary values to ‘prove’ progress over ever-tighter timescales. But the irony is that the more complex these systems become, the less they actually tell us about pupils’ learning.

The final report from the Commission on Assessment without Levels notes that ‘many systems require summative tracking data to be entered every few weeks, or less’ and warns that ‘recording summative data more frequently than three times a year is not likely to provide useful information. Over-frequent summative testing and recording is also likely to take time away from formative assessments which offer the potential not just to measure pupils’ learning, but to increase it.’ Sadly, this vital piece of advice is being ignored by some of those who scrutinise school performance, and their demands should not go unchallenged. Are we attempting to measure the immeasurable, and is this obsession actually a danger to children’s learning?


What our systems and approaches need is a spring clean, and that’s the subject for next time.
