Tracking in the Early Years Foundation Stage is a mess. We’re not talking about statutory assessment here – the Reception Baseline and the Foundation Stage Profile – but about day-to-day data collection, especially across the reception year, for the purpose of monitoring pupil progress. I use the phrase ‘day-to-day’ deliberately because, let’s admit it, all too often there’s more data collection happening in early years than in all the other primary year groups. In some schools it pretty much is a daily process. But now, with the publication of the new foundation stage profile and development matters guidance, we have a chance to evaluate the nature and purpose of the data that’s collected, and rationalise it. Perhaps, as Julian Grenier suggests in his recent blog post, this is early years’ ‘assessment without levels’ moment.
In most primary schools, data collection in the early years falls broadly into three categories:
- Recording assessments against numerous development matters statements
- Recording overall assessments against each of the early learning goals (ELGs)
- Collecting qualitative information including photos and commentary for learning journals
This blog will deal with the first two (although no doubt there is a lot of scope to reduce workload in that third area. How many photos?).
The first point relates to tick lists of statements derived from the development matters guidance. It is common for teachers to record assessments against the statements for each month band, stating whether a pupil is at ‘age related expectations’ or not. This can be an immensely time-consuming process. I recall one school having a system that required the reception teacher to record 45,600 assessments when the number of pupils, statements, ELGs, and ‘data drops’ was taken into account. No one is going to diligently do that much box ticking. They are more likely to block fill the grids or not complete them at all, in which case you have to ask yourself: what’s the point? It’s unlikely the process will tell anyone anything useful, but sometimes things have been done in a particular way for so long it can be hard to see past it.
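To see how quickly these totals mount up, here is the arithmetic. The school’s actual counts weren’t given, so the figures below are hypothetical ones chosen to reproduce the 45,600 total:

```python
# Hypothetical figures only: the real pupil, statement, and data-drop
# counts were not stated, but plausible values multiply out to 45,600.
pupils = 30        # one reception class
statements = 380   # development matters statements plus ELG judgements
data_drops = 4     # assessment points across the year

total_assessments = pupils * statements * data_drops
print(total_assessments)  # 45600
```

Even halving any one of those figures still leaves tens of thousands of boxes to tick, which is the point: the workload scales multiplicatively.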
Will this change with the new development matters guidance? Not if schools are determined to carry on or are ‘recommended’ to do so by external agencies. Interestingly, the new guidance does not contain statements at the level of individual ELGs, only for the areas of learning. Sadly, and inevitably, schools are already trying to map new statements to the ELGs or come up with their own lists, for the purposes of tracking. They are supposed to be examples, not hard criteria. And these statements, whether from the new or old development matters guidance, were never designed to be used as a checklist for tracking. We all know this but here we are. It’s time for schools to question their procedures and consider the workload involved. As always with data gathering, ask: if teachers didn’t do this, would learning suffer?
The next issue involves making overall assessments against early learning goals, and this is where it can get really confusing. Early years teachers may want to record whether or not a pupil is typical for their age against a particular ELG, and will therefore take the pupil’s date of birth – their age in months – into account when making that assessment. Are they summer or autumn born? Is their development typical for a child of their age? Problems arise when tracking towards the final ELGs, the assessment of which is not age-adjusted. Senior leaders in a primary school are likely to want to know how many pupils are on track to meet specific ELGs and reach a ‘good level of development’. This may conflict with the ‘age-related’ approach that shows summer born pupils, who may well be ’emerging’ at the end of the year, are ‘at expectations’ for their age. Conversely, if we don’t take age into account then we miss vital information on these young children for whom a year’s difference in age could be 25% of their total lifespan. As with age-standardised scores, making age-related judgements cuts through the normal differences accounted for by age and reveals the true issues that lie beneath. The problem is that, as with other forms of statutory assessment, the EYFSP – an undeniably high-stakes assessment – is not adjusted for age. Perhaps schools need both?
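The ‘age in months’ point is worth making concrete. A minimal sketch, with made-up birth dates, of how large the gap within a single reception class can be:

```python
from datetime import date

def age_in_months(dob: date, on: date) -> int:
    """Whole months between date of birth and the assessment date."""
    months = (on.year - dob.year) * 12 + (on.month - dob.month)
    if on.day < dob.day:  # current month not yet complete
        months -= 1
    return months

# Hypothetical pupils assessed on the same day at the end of reception
assessment_day = date(2021, 6, 15)
autumn_born = age_in_months(date(2015, 9, 10), assessment_day)  # 69 months
summer_born = age_in_months(date(2016, 8, 20), assessment_day)  # 57 months
print(autumn_born - summer_born)  # 12 months apart in the same class
```

Twelve months out of fifty-seven is roughly a fifth of the younger child’s life, which is why an unadjusted ‘on track for the ELGs’ judgement and an age-related ‘typical development’ judgement can point in opposite directions for the same pupil.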
What then should schools record when making judgements and tracking pupil progress across early years? Currently, most schools are using a system of development matters age bands subdivided into three increments, much like levels were split into sublevels. And as with levels, these increments are commonly used to measure pupil progress, which requires each to have a point score. Indeed, there are still reception teachers who will confide that they are set performance management targets for ‘all pupils to make at least 4 points of progress’. Sometimes this is based on their overall assessment of each pupil’s progress towards the early learning goals each term, and sometimes it’s based on how many statements they’ve ticked but – as with any situation where a person’s performance management is based on the data provided by the person being performance managed – the risk to the integrity of what is already subjective data is clear.
You can have accurate teacher assessment or you can use it for performance management. It’s your choice.
Back to what data schools currently collect, here’s a real example:
| Assessment | Points |
| --- | --- |
| ELG emerging | 21 |
| ELG expected | 22 |
| ELG exceeded | 23 |
This is what happens when progress measures rule: numbers outweigh logic. The overlap between bands has been ignored – isn’t 30-50s higher than 40-60e, not lower? – and the scoring system runs through ELG assessments and into year 1. It’s a completely made-up scale. It’s a mix of age-related judgements, statutory assessments, and quasi-levels. It’s meaningless. And yet it’s sadly very common.
What could schools do instead? First, have a very honest conversation about whether ticking off long lists of statements is an efficient use of a teacher’s time but don’t be surprised if there is resistance against attempts to remove or even reduce it. As we learnt from the removal of levels and APP, some habits are hard to break.
Next, scrap systems such as those illustrated above and stop trying to measure progress. Instead, in the reception year, consider taking a simple, binary approach that indicates whether or not pupils are on track to meet early learning goals. This will provide senior leaders with a running total of those on track to reach a ‘good level of development’ and can be justified in the light of the new early years foundation stage profile, which does away with the ‘exceeding’ grade. Alternatively, maintain a three-tier system that records whether pupils are likely to be emerging in the ELGs, or will meet or exceed them. Either way, it is preferable and easier to understand than the Frankenstein systems currently in use.
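The binary approach really is this simple. A sketch with illustrative pupil names and goal labels (not the statutory list of ELGs):

```python
# One on-track flag per pupil per goal; no point scores, no sublevels.
# Names and goals are illustrative only.
on_track = {
    "pupil_a": {"communication": True,  "literacy": True,  "maths": True},
    "pupil_b": {"communication": True,  "literacy": False, "maths": True},
    "pupil_c": {"communication": False, "literacy": False, "maths": False},
}

# The running total leaders want: pupils on track in every goal recorded
gld_on_track = sum(all(goals.values()) for goals in on_track.values())
print(f"{gld_on_track} of {len(on_track)} pupils on track for a GLD")
```

Note there is nothing here to subtract or average: the system answers “how many are on track?” and deliberately refuses to answer “how many points of progress?”.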
The problem of course is that ELGs are the end point of early years, and assessments with those in mind do not take into account whether or not a pupil’s development is typical for their age. Grouping assessments by, say, term of birth will quickly reveal whether the ‘exceeding’ pupils are all autumn born and the ’emerging’ (or ‘not on track’) pupils are all summer born, but it’s not ideal and unlikely to be a popular approach. And it wouldn’t be appropriate for pre-reception anyway. If early years teachers are to record data they will probably want it to be age-related. Perhaps schools should only consider tracking towards ELGs later in the reception year, from early in the spring term onwards. Prior to that, and in earlier years, any data should focus on ‘typicality’ rather than ‘on trackness’, but remember that the one does not translate into the other. Data in early years, especially in the reception year with its statutory profile assessment, may be in tension, brought about by competing purposes: what senior leaders want to see versus what teachers think appropriate to record.
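Grouping by term of birth takes seconds if the on-track flag is already recorded. A sketch with made-up records:

```python
from collections import defaultdict

# Illustrative records only: (term of birth, on track for the ELGs?)
pupils = [
    ("autumn", True), ("autumn", True), ("spring", True),
    ("spring", False), ("summer", False), ("summer", False),
]

by_term = defaultdict(lambda: [0, 0])  # term -> [on track, total]
for term, on_track in pupils:
    by_term[term][1] += 1
    if on_track:
        by_term[term][0] += 1

for term in ("autumn", "spring", "summer"):
    on, total = by_term[term]
    print(f"{term}-born: {on}/{total} on track")
```

If the output looks like this invented example – autumn-born pupils all on track, summer-born pupils none – that pattern is telling you about birth dates, not teaching.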
It will be interesting to see what happens this year with regard to tracking in early years. We are told that development matters should not be used as a checklist and that the month bands should not be subdivided and used for measuring progress. And yet that is what many, if not most, primary schools are doing. Perhaps the new guidance with its broader approach will force a change in data collection, but already schools are subdividing the age bands in response to the age-old question: “but how will we measure the progress?”
Surely, if we are going to collect data in the early years, we just need something that tells us if pupils are typical or not for their age and, at some point, are on track to meet early learning goals.
How complicated does it need to be?
Many thanks to Ruth Swailes for all the help and advice that has shaped my thinking on this. Your patience and insight during 3 hours of phone calls is very much appreciated.