Educational strategies and new interventions are being evaluated like never before as emphasis increasingly falls on finding out “what works” in teaching. Many of these studies are randomised controlled trials – meaning children have to be separated into different groups so that the impact of one specific programme or teaching method can be compared with the progress of peers who didn’t experience it.
In the past, much of this kind of research has been done by external academics and education researchers. But as demand for results increases, can teachers do this for themselves?
We have recently analysed the way that trials of two literacy programmes, funded by the Education Endowment Foundation (EEF) charity in England, were carried out. Both involved pupils who were below expected literacy levels as they moved from primary to secondary school.
One was a study of Accelerated Reader – a software-based programme to encourage reading – which found that the pupils on the programme made around three months’ extra progress in literacy compared to a randomised comparison group from the same schools. The other looked at Fresh Start – a phonics system for poor readers – and also found that the participating pupils made around three months’ extra progress.
This is good news, but not in itself that remarkable. Many planned interventions turn out not to work when tested, while some will be more promising. What is remarkable is that in both cases the intervention was delivered by the schools alone – and the evaluation was done at least partly by the schools themselves.
Several schools had independently applied to the EEF for funding to conduct one of these two interventions. Each application was deemed too small in scale by itself, but if the schools involved in each programme were to co-operate and bring in a few more schools then the scale would be sufficient for two aggregated trials. Each trial would assess the impact of its programme, and also help inform whether schools can run their own robust evaluations.
If they can, then the general quality of available evidence in education could improve. The cost of robust evaluations could be reduced, making them more feasible across a range of situations. And, perhaps most promisingly, a series of large, ongoing, almost automatic trials could be conducted nationally, similar to those espoused by doctor and writer Ben Goldacre for GP treatments.
Training the teachers
Because this was a new idea, the funders appointed Durham University as independent evaluators and guides for both trials. Our role was to advise the school research leads on the process of conducting research, randomisation and testing, and to aggregate the eventual results from all the schools involved. We provided workshops for the schools on the conduct of a trial, how to randomise and how to avoid bias. Schools were also trained in how to analyse and interpret the results.
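EEF trials typically report results as an effect size – a standardised difference between the intervention and comparison groups – which is then translated into the “months of progress” figures quoted above. As a minimal sketch of what schools were trained to compute, here is one common effect-size measure (Cohen’s d) applied to hypothetical post-test reading scores; the scores and function name are illustrative only, not data from these trials:

```python
import math
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) between two lists of scores."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.variance(treatment), statistics.variance(control)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical post-test reading scores, for illustration only.
treated = [52, 48, 55, 60, 47, 53, 58, 50]
comparison = [45, 50, 44, 49, 46, 51, 43, 48]
d = cohens_d(treated, comparison)
```

A positive d means the intervention group scored higher on average; conventions for converting an effect size into months of progress vary by outcome measure and age group.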
Schools were surprised to learn how many ways there are to accommodate randomised controlled trials within normal school life. One of the school research leads explained to us that, before the training, they had assumed the demands of a research trial would be a barrier to the school timetable and the organisation of classes. But they then learnt procedures and designs that would be helpful when introducing any new idea to the school.
Catch-up interventions often require individual or small-group work, so they are not always suitable for schools to implement across a full cohort of pupils at the same time.
Cue the waiting-list design, which schools had never thought of. If half of the pupils are randomly allocated to the intervention for the first term, and the other half receive the intervention in the second term using the resources now freed up from the first half, then their relative progress at the end of the first term provides an unbiased estimate of its impact. All the pupils get exactly the same treatment, for the same amount of time, but just at different periods in the year. It’s easy, ethical and similar to what schools often already do with resources.
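The allocation step behind a waiting-list design is simple enough to sketch in a few lines. The code below randomly splits a cohort in half, with one group receiving the intervention in the first term and the other in the second; the pupil identifiers are hypothetical (a real trial would use anonymised IDs), and the fixed seed is just to make the example reproducible:

```python
import random

def waiting_list_allocation(pupils, seed=42):
    """Randomly split pupils into two equal groups: one receives the
    intervention in term 1, the other waits and is treated in term 2."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = pupils[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "term_1": sorted(shuffled[:half]),   # intervention group, first term
        "term_2": sorted(shuffled[half:]),   # waiting-list group, second term
    }

# Hypothetical pupil identifiers, for illustration only.
pupils = [f"pupil_{i:02d}" for i in range(1, 21)]
groups = waiting_list_allocation(pupils)
```

Comparing the two groups’ test scores at the end of the first term – before the waiting-list group has been treated – gives the unbiased estimate of impact described above.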
Advantages of school-led trials
Of course there are important cautions, which we will discuss more carefully in the future. But schools are reasonably good at implementing new methods – and all appeared to follow the programmes well. Getting permission to innovate was easier than it would have been for an external agency. Schools were also generally good at monitoring attendance and progress. During school visits we observed that the teachers always had in-depth data on pupils’ performance. Teachers were using this to make decisions such as which level of intervention to introduce and when to move pupils on to the next level.
Their involvement meant that pupil drop-out was low in both trials. Because responsibility for the trials rested with the teachers, the developer of each programme had no direct involvement. This is an advantage: there was no external pressure on the schools to find the interventions beneficial, as often happens otherwise.
In addition, the training helped build the capacity of teachers in reading and critiquing research claims more widely. Teachers routinely make decisions on the basis of pupils’ performances. That’s partly why we need teachers to be research consumers, able to interpret results with appropriate levels of critical skill.
If conducting such research were seen as part of schools’ regular function, then the overall cost of research could go down. It may even be possible to create some nationwide ongoing trials, with all willing schools contributing to a growing online database.