
NAPLAN results don’t tell us the whole story

NAPLAN results do have their uses, but they are often overstated and they certainly don’t paint a picture of a child’s overall success at school. AAP

Results are out for NAPLAN 2014, and already the discussions have started around the meaning of those results. The program remains controversial, with academics and the public debating its impact on students and schools, and whether, in the end, the benefits outweigh the negatives. What are the purposes of NAPLAN, and how well can the tests fulfil those purposes?

The Australian Curriculum, Assessment and Reporting Authority (ACARA) develops and implements NAPLAN as part of the National Assessment Program. As outlined by ACARA, the NAPLAN program has two purposes. The first is to provide information that can be used to improve teaching and learning. The second is to increase the accountability of schools and teachers.

While these may be ACARA’s stated purposes, in practice NAPLAN results are used for many purposes by different groups and individuals. Key users include parents, teachers, principals, policymakers and the media. Sometimes these uses are reasonable; at other times they are much less so, as in one school I observed that used NAPLAN results to stream its students.

Inappropriate uses usually come from two very common misunderstandings about testing, as described by Harvard Professor Daniel Koretz:

That scores on a single test tell us all we need to know about student achievement, and that this information tells us all we need to know about school quality.

People using NAPLAN results need an understanding of the limitations of standardised tests and test scores. Some of the most important are:

  • Because of time limits, a test can usually sample only a small part of the knowledge or skills in any given area. Test designers make choices about what to include, so the balance of items may suit one student better than another, leading to a stronger result for the first student and a weaker result for the second;

  • Individual students vary in how they perform on a test on any given day due to different factors: how interested they are in doing the test, noise or distractions at the time of the test, or how well they slept;

  • Tests work well for measuring some knowledge and skills, but are not as good at measuring other things we may see as important goals of schooling, such as enthusiasm for learning or social skills.

Achievement in high-stakes testing is based on many factors, including how the child was feeling on that particular day. Flickr/Melanie Cook, CC BY

So in terms of what NAPLAN results can and can’t tell us, the most important message is that results should not be studied in isolation; they should be interpreted alongside a range of other information about students’ learning. When it comes to understanding how much an individual student has learnt, parents and teachers need to combine NAPLAN results with classroom observations, classwork and school-based assessments and reports.

For schools, NAPLAN results can help a school identify its strengths and areas for improvement. We know that principals can find the results useful for this. But when it comes to using NAPLAN results to decide how effective one school is compared to another, we are on shakier ground, as so many other factors can contribute to results.

As an example, two schools may have students from similar social backgrounds. But if the first school markets itself as highly academic and attracts more academically inclined students, while the other welcomes all comers, the first may well have better NAPLAN results. This says nothing about the quality of teaching in either school.

For those in government departments deciding on policy for schools, the message is again that, while NAPLAN results can provide a snapshot of what students achieve on the test on a given day, they need to be used together with other information about schools. Because so many factors influence student results, causal claims such as “we put policy X in place and NAPLAN results improved, so the policy must be effective” cannot be justified.

It is worrying that NAPLAN results are sometimes treated as a completely reliable way to judge students, schools and systems. This highlights the need for greater understanding among parents, schools, the media and policymakers of the strengths, but also the very real limitations, of this type of reporting.
