Before I get started answering this question, I want to make one thing clear: I’m not keen on the word ‘tracking’ in the context of school data management. I think it’s slightly sinister and misleading. Tracking is what hunters do; and what the military does with missiles. I think we need a better name. How about ‘Assessment Database’?
No, I’m not keen on the phrase ‘Tracking System’, but I do think they are essential. And I do think that commercial, online systems have big advantages over home-grown varieties. I have seen many schools go down the route of setting up Excel tracking sheets with great enthusiasm initially: the blank slate, the flexibility, the money saved. But all too often, usually within two years, it’s not so much fun anymore. It has become the responsibility of one person who constantly fights to keep it up to date, keep it secure, and keep it working, whilst others make copies on their own laptops, overwrite the formulae, and put data in the wrong place. Plus it doesn’t speak to the MIS, so keeping pupil lists up to date is an endless manual process. And then there’s the reporting. As great as Excel is – I love Excel! – it doesn’t do a great job of ‘tracking’, and many schools resort to making separate tables into which they copy and paste data just to produce something for governors. Sometimes those tables are in Word. Sometimes they get their calculator out. Unless you have a dedicated data manager – and this is where secondary schools are in a stronger position – Excel can be a route to frustration and increased workload.
I do need to make clear that I work for a company – insighttracking.com – that provides a popular tracking system used in primary schools, so obviously I would advocate such a tool. But I advocate it, and work for them, because it’s a great system that does a really good job. It saves schools time and hassle, and schools really like it for that reason. I think such systems – plural, because other systems are available – are a vital component of a school’s wider assessment network, but we do need to define what they can and can’t – and should and shouldn’t – do. They provide a set of really useful and important functions but they also have limitations, and sometimes we expect too much of them. A big part of the problem is that the terms ‘assessment’ and ‘tracking’ have become confused, and are almost seen as synonymous. But a tracking system is not an assessment system – it does not assess, and if you have a system that automatically assigns a grade to a pupil on the basis of how many boxes have been ticked, then I urge you to reconsider your approach. It may be convenient but it is also wrong, and rarely does anyone agree with the result.
A tracking system is simply an assessment database. It stores a history of useful assessment data alongside a history of useful contextual data so we can learn something about a pupil’s journey through school. In order to do this job well – and this is its main job – the system needs to be capable of storing any data, in any format, for any subject, at any point in time, for any pupil, with the aim of building a detailed picture over time. That data could include:
- Contextual information: pupil groups, attendance, date joined, number of school moves, breaks in education, month of birth, special educational needs, free school meal history
- Standardised test scores
- Teacher assessments
- Baseline assessments
- Results of statutory assessments: EYFS, phonics, KS1, KS2, GCSEs
- Diagnostic assessments
- Reading scheme levels
- Comparative judgement scores
- Reading ages
- Targets, estimates (e.g. FFT), and predictions
- Teachers’ comments
- Types of provision
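To illustrate the point – and this is a hypothetical sketch, not any particular product’s schema – the ‘any data, any format, any point in time, any pupil’ requirement essentially describes a long, flexible list of records rather than a fixed grid of columns:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type: one flexible row per assessment event,
# with the value stored as text so any format fits (a score, a
# teacher-assessment code, a comment, a reading-scheme level...).
@dataclass
class AssessmentRecord:
    pupil_id: str
    subject: str       # e.g. "Reading", "Maths"
    assessed_on: date
    kind: str          # e.g. "standardised score", "teacher assessment"
    value: str

records = [
    AssessmentRecord("P001", "Reading", date(2023, 10, 1), "standardised score", "104"),
    AssessmentRecord("P001", "Writing", date(2023, 10, 1), "teacher assessment", "WTS"),
    AssessmentRecord("P001", "Reading", date(2024, 6, 1), "standardised score", "109"),
]

def history(pupil_id, subject):
    """One pupil's journey through school in one subject, in date order."""
    return sorted(
        (r for r in records if r.pupil_id == pupil_id and r.subject == subject),
        key=lambda r: r.assessed_on,
    )

for r in history("P001", "Reading"):
    print(r.assessed_on, r.kind, r.value)
```

The design choice worth noting is that nothing about the record type assumes a particular assessment scheme: new kinds of data can be added without restructuring anything, which is exactly the flexibility a tracking system needs.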
A tracking system is not an island; it is part of a wider assessment network. There are other systems forming part of that network that perform important roles, the data from which will feed into the tracking system. An integrated assessment system may therefore look something like this:
- MIS: Contextual information is pulled from here into the tracking system. If the school uses a system that is integrated with the MIS, then this is already taken care of, but check that the system is able to store all the data listed above.
- Standardised tests: these provide an external reference and give a good idea of where a pupil’s attainment sits in a national context. Results are mostly in the form of standardised and age-standardised scores, and attainment ages, but some tests also provide stanines and percentile ranks. Tests are available from the likes of GL, Renaissance, Hodder, NFER, and CEM. Schools need to decide how many tests they want to run per year, and whether they want paper or online versions; this will whittle down the list of options. Online tests can be adaptive – i.e. pupils are routed through according to their answers – which is quicker, although the multiple-choice format may be a drawback for some. Online tests also save time on marking and analysis. Paper tests are more like those at KS2 or GCSE, which many prefer, but teachers will usually have to mark the papers themselves. Schools should consider how many tests they need to run and in which subjects (usually restricted to reading, maths, grammar, and science). Are termly tests necessary? Is an annual test enough if other assessments are used at other points (see question bank)? Note: there is a big difference between standardised and scaled scores that schools need to be aware of – a standardised score compares a pupil to the national distribution for their age, with 100 representing the average, whereas a scaled score (as at KS2) converts a raw mark to a fixed scale on which 100 represents the expected standard.
- CAT (cognitive ability test): another form of standardised test but adds another – non-curriculum – element. Many schools report that comparing the results of a CAT to those of other curriculum-linked tests can be quite revealing. Provides age-standardised scores.
- Other curriculum tests: PiXL and White Rose Maths are popular tests that do not (currently) provide standardised scores.
- Comparative judgement: a quick and robust method for assessing writing that provides scaled scores, writing ages, and (in primary) an indication of expected standards and greater depth. Can take the pain out of moderation and is definitely worth a look.
- Question bank: this could be an internal system on a spreadsheet or other database but I’m particularly excited about Carousel (https://www.carousel-learning.com/), an online database of questions from which schools can devise their own tailored tests. Schools can upload their own questions and draw from those already there. Using such a tool could be a real time saver and could also take the place of some standardised tests, which are not so useful for formative purposes as they are not necessarily testing what has been taught.
- Question level analysis (QLA): a system that will analyse test answers at pupil, group, and strand level in order to reveal gaps in learning.
- Provision mapping: A tool predominantly dealing with the support provided to pupils with special educational needs, but one with wider application. Should enable the setting and monitoring of individual targets, and provide a facility to link types of provision to the pupils concerned in order to track the effectiveness of support, including the cost element. Often this is done in a spreadsheet; it is something that a tracking system could do more efficiently.
- Reporting to governors: bread and butter for a tracking system. Schools should be able to quickly run reports that present key data to governors including the results of statutory assessments as well as the current picture based on internal data. Ideally, the system should allow this view to be presented in a simple ‘on-a-page’ format without having to resort to exporting to Excel or worse, Word tables.
- Reporting to parents: again something tracking systems can take the pain out of. Store comments and data in one system and batch-export them to a school-designed template. Job done!
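On the note above about standardised versus scaled scores, it may help to see how a standardised score is typically constructed. This is an illustrative sketch with made-up figures – real tests use published norm tables for each age group rather than a simple formula – but the underlying principle is that the score expresses a pupil’s distance from the national average for their age, usually on a scale with a mean of 100 and a standard deviation of 15:

```python
# Illustrative only: real standardised tests use norm tables built
# from their standardisation sample, not a raw formula like this.
def standardised_score(raw_mark, age_group_mean, age_group_sd):
    z = (raw_mark - age_group_mean) / age_group_sd  # distance from the cohort average
    return round(100 + 15 * z)                      # mapped onto the mean-100, SD-15 scale

# A pupil scoring 32/50 when the national average for their age is 28 (SD 8)
# comes out above 100; a pupil on exactly the average comes out at 100.
print(standardised_score(32, 28, 8))
print(standardised_score(28, 28, 8))
```

A KS2 scaled score, by contrast, is not relative to the cohort in this way: raw marks are converted so that 100 always represents the expected standard for that test, which is why the two kinds of score cannot be compared directly.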
There are many different systems and functions listed above. Clearly a tracking system cannot perform all of them, but what it can do is store the total output in one place for ease of access, to build that all-important picture over time. We should stop viewing tracking systems as assessment tools. Ticking lists of learning objectives is not assessment; it’s more like an audit. Nor should we see tracking systems as devices whose primary function is to measure progress. That way lies levels, flight paths, and the misuse of data.
We need to see these systems as libraries for the storage and retrieval of summative assessment data. They should be easy to set up, maintain and use. They should be customisable so schools can store all their data in one place in whatever format they choose without compromising. They should make reporting to governors and parents a straightforward process. And, perhaps a factor that is often overlooked: they should be engaging. A system that is nice to use will be well used, meaning two things: data will be timely and accurate, and teachers will be better informed.
They are not assessment systems – that’s someone else’s job – but as part of a wider assessment network, they are indispensable.