Attack of the Clones: are data people trying to replace levels with levels?

A couple of days ago the opening salvos of a war between good and evil were fired across the vast expanses of the Edutwitterverse. From their distant quantitative system of darkness, the number crunching legions of the Evil Empire of Analysis were positioning their Data Death Star in orbit around the peaceful and progressive Moon of Awol (that’s assessment without levels, in case you didn’t know); and much was at stake. 
Well, OK, there was a minor skirmish involving some words and some huffing, and some good points were made. Mainly, I have to confess, by Michael Tidd (light side), and not so much by me (dark side). Michael (follow him on twitter @MichaelT1979 – he knows stuff) has already convincingly blogged his side of the argument here. I also hurriedly wrote a piece detailing my thoughts but managed to lose the whole thing after 2.5 hours of effort. Typical. Try again!
So what’s the big issue?
Well, to put it simply, the question is this: do we still need a numerical system for tracking now that levels have gone?
Which caused one person to ask: is this ‘a case of the data bods driving the agenda’?
Whilst someone else worried that ‘it’s beginning to sound a lot like levels’, which I’m fairly certain was a Christmas hit by Michael Buble.
They have a point. So, before I go any further I’d like to state the following:
1) I’m no fan of levels. They are too broad, sublevels are meaningless, and they have resulted in the most dangerous/pointless progress measure ever devised.
2) I don’t believe a numerical system is required for reporting to parents and pupils. As a parent I am far more interested in what my daughter can do, and what she needs more support with, than some arbitrary number. 
3) I understand that assessment and tracking are different things.
So, do we still need a numerical system for tracking purposes? Well, I think we do. I fully support objective-based assessment models – they make perfect sense – but I also believe that conversion to a standardised numerical system will allow for more refined measures of progress, particularly at cohort or group level, and over periods longer than a year. To reiterate, these do not need to be used in the classroom or reported to parents; they would simply provide the numbers for analysis. They would be kept under the bonnet, fuelling the tracking system’s engine; and this is the approach that most tracking systems have adopted. It remains to be seen, of course, how well these work in practice and whether schools start reporting these figures to parents and pupils. I hope not.
Ofsted and RAISE
So, this is where I have to make a confession: Ofsted concerns me, and many of my opinions about tracking and analysis relate to inspection, which is rather depressing, I know, but I think it's necessary. No, I don't think we should build tracking systems solely to satisfy Ofsted, but I think it's foolhardy not to consider them. Having been involved in a number of difficult inspections in the past year, I know that data presentation (particularly fine analyses of progress) can often make the difference between RI and Good, which again is depressing, but it's a reality. If you want an Ofsted-eye view of school data, just look at RAISE. And if you want to counter the arguments stemming from RAISE, it pays to be able to present data in a similar format, in a way that an Ofsted inspector will find familiar. Let's face it: inspectors aren't going to find many things familiar this year.
The measure that concerns me most is VA – a measure of long-term progress, comparing actual against expected outcomes. Without resorting to any particular metric, we can address the proposed new floor measure (85% of pupils reaching the expected standard by the end of KS2) by showing the percentage of the cohort/group that are at or above the school's defined expectations linked to the new national curriculum. Mind you, to digress for a bit, I have a couple of issues here, too. Being at or above the expected level is not necessarily the same as being on track: the pupil may have made no progress or gone backwards. Also, if the school only defines the expected level at the end of the year, will this mean that all pupils are below expectations until the end of the year is reached, like a switch being turned on? Where will this leave the school? Would it not make sense to have a moving definition of expected to allow for meaningful analysis at any point in the year? Just a thought.
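To make that concrete, here's the sort of calculation I have in mind, sketched in a few lines of Python (the pupil records, status labels and threshold check are purely illustrative, not any particular tracking system's method):

    # Rough sketch: what proportion of a cohort is at or above the school's
    # defined expectations, set against the proposed 85% floor standard.
    # The pupil records and status labels are invented for illustration.
    pupils = [
        {"name": "A", "status": "above"},
        {"name": "B", "status": "at"},
        {"name": "C", "status": "below"},
        {"name": "D", "status": "at"},
    ]

    at_or_above = sum(1 for p in pupils if p["status"] in ("at", "above"))
    percentage = 100 * at_or_above / len(pupils)

    FLOOR = 85  # the proposed floor standard discussed above
    verdict = "above" if percentage >= FLOOR else "below"
    print(f"{percentage:.0f}% at or above expectations ({verdict} the {FLOOR}% floor)")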
Back to the issue of measuring progress: under various proposed assessment models, we can easily analyse in-year progress by counting the steps pupils make, and we can also measure progress by monitoring the shifts in percentages of pupils in a cohort that are below, at or above expectations. But long-term, VA-style progress measures are trickier. If no numerical system exists, how does the school effectively measure progress to counter any negative VA data in RAISE? I'm really struggling with this, and I suspect that many if not most headteachers would like their assessment system underpinned by a numerical scale, which will allow progress to be quantified. We know that a floor standard, measuring progress from the beginning of EYFS to the end of KS2, will be implemented and will be of huge relevance to schools, the majority of which will (initially at least) fall below the 85% expected standard threshold mentioned above. I'm assuming that schools will want to emulate this VA measure in their tracking in some way, by quantifying progress from EYFS to the latest assessment point, and perhaps projecting that progress forward to make predictions for the end of the key stage.
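For what it's worth, here's roughly what I mean by a VA-style measure sitting under the bonnet: compare each pupil's latest score against an expected outcome derived from their baseline, then average the differences across the cohort. The scores and the 'expected outcome' rule below are invented for illustration; this is not the actual VA methodology used in RAISE.

    # Rough sketch of a VA-style progress measure, assuming a numerical scale
    # exists under the bonnet. The scores and the expectation rule are invented
    # for illustration; this is not the actual VA methodology used in RAISE.
    def expected_outcome(baseline):
        """Hypothetical rule: every pupil is expected to add 4.0 to their baseline score."""
        return baseline + 4.0

    pupils = [
        {"name": "A", "baseline": 1.5, "latest": 5.8},
        {"name": "B", "baseline": 1.5, "latest": 5.2},
        {"name": "C", "baseline": 2.0, "latest": 5.9},
    ]

    # Pupil-level value added: actual outcome minus expected outcome
    for p in pupils:
        print(p["name"], round(p["latest"] - expected_outcome(p["baseline"]), 1))

    # Cohort-level VA: the average of the pupil-level differences
    cohort_va = sum(p["latest"] - expected_outcome(p["baseline"]) for p in pupils) / len(pupils)
    print("cohort VA:", round(cohort_va, 1))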

Another confession: I made the assumption that these assessment models rely on sequential attainment of objectives. If this were the case then a decimalised, curriculum-year-based model would be useful and neat. For example, categorising a pupil as a 4.5 because they are working within the year 4 curriculum and have achieved 50% of its objectives. Simple. And it would of course allow meaningful comparison between pupils within a cohort, and even between schools. However, as was pointed out to me, this is not how pupils learn, and it doesn't tell us which 50% they've achieved (it's not necessarily the first 50%). This was what we were debating yesterday when the 'data bods driving the agenda' accusation was fired at us. The author of that comment has a good point.
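And just to show what I had in mind (and where it falls down), here's that decimalised idea in a couple of lines of Python (the numbers are invented):

    # Rough sketch of the decimalised, curriculum-year-based score described above:
    # curriculum year plus the proportion of that year's objectives achieved.
    def decimalised_score(year, achieved, total):
        """e.g. working within year 4 with 10 of 20 objectives achieved -> 4.5"""
        return year + achieved / total

    print(decimalised_score(4, 10, 20))  # 4.5
    # The catch: 4.5 tells you how many objectives have been achieved, not which ones.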
However, in my defence – and I'm sure it's the same for most other data people – I don't want to drive the agenda. I spend most of my time in schools, working with headteachers, senior leaders, teachers and governors, and I'm constantly learning. I change my mind pretty much every time I look at twitter. My opinion is like Scottish weather: if you don't like it, just wait 20 minutes. I simply want to ensure that schools have the best tools to do their job and to defend themselves against Ofsted. That's it. I'm not interested in unwieldy, burdensome, time-consuming systems; a data system should simplify processes, save time and improve efficiency. It should be a servant, not a master. And yes, its primary function is to inform teaching and learning.
So, to summarise a rather rambling blog, I'm excited about the removal of levels and see it as an opportunity to innovate. As a parent I am more interested in knowing what my daughter can and can't do than in her being assigned a meaningless level. I just think that tracking works best when data is converted to a standardised numerical system. This numerical scale should be used for strategic analysis, to help senior leaders compare current school performance against that outlined in RAISE. I don't think that new numerical systems should replace levels and be used for reporting purposes. Any such systems must be kept guarded within the mainframe of the Data Death Star at all times.

And we’ll leave those cute little Awols alone.

Promise!
Data Vader
Level 5 (Sublevel C)
Data Death Star


2 thoughts on “Attack of the Clones: are data people trying to replace levels with levels?”

  1. Anonymous
    on August 28, 2014 at 2:50 pm

    Thanks for this blog, James, which is really clear despite the loss of the first attempt.

    I suspect, as is so often the case, that we agree far more than we disagree. As I've stated elsewhere, my angst is with the situation that means that school leaders aren't leading this enough.
    You have summarised a system which I would be quite happy with – a clear objective-led system for classroom use; a numerical data system at the overview level for tracking group and cohort progress. My concern is always that the latter will overrule the former. That's actually an issue caused by school leaders again, not the data experts.

    Could a percentage work for that, or a score of 4.5 for a child mid-way through Y4? Certainly if it is contributing to a group or cohort measure. What matters is that we don't then have teachers telling students that to progress they need to get to a 4.8!

    On reflection, maybe it's not the data bods we need worry about? It's those school leaders again!

  2. James Pembroke
    on August 28, 2014 at 7:53 pm

    I think you're right. I think school leaders need numbers as descriptors almost like a comfort blanket – the comfort of the familiar! I must have visited 120+ primary schools in the past academic year and I don't think I've been to one where they thought the removal of levels was a good idea. I guess it is inevitable that, given a numerical system to play with, many will resort to using it for labelling purposes, and those labels will find their way into reports. That doesn't mean it's not a good idea; it just requires users to be better educated about reporting.

    I think that a numerical system will have to be decimalised/percentage-based to allow for ease of progress tracking. Annual 3- and 9-point scales are going to end up with an additional linear scale underneath. This is just the same as sublevels and point scores, and I know of one tracking system that has taken this approach. I think it's lazy but it will be popular (for a while) because it's familiar (they've even mapped the new values to old point scores and sublevels – I kid you not). The problem with a percentage system, though, is still this issue of non-sequential learning. That's not going to change, so we're going to have to be more savvy with the design of a metric. Watch this space.

