The federal Labor government’s proposal to expand the National Assessment Program – Literacy and Numeracy (NAPLAN) franchise and include science literacy is not a surprising move.
Once national testing regimes start, they tend to spread and intensify.
The government continues to give strong indications that it sees testing as the best means of improving student achievement. This political obsession was most recently enshrined in the Education Bill 2012, in which the Prime Minister formalised her aim for Australia to be in the top five schooling systems in the world by 2025, as measured by international tests.
But these tests, along with NAPLAN, are problematic. It’s assumed that they can measure what it means to be a quality school, a quality teacher, a quality education and what we should aspire to as a nation.
If quality can be determined in approximately 120 minutes of tests taken every three years by a sample of students (as is the case with the international PISA tests), then it is a neat trick.
It is not that standardised tests have no use or place in our education system, or that what they generate cannot be informative. But we must always be alert to how the use of that data affects students and their learning.
The idea that the NAPLAN tests are logical, objective and merely assess what schools should be teaching is persuasive, but the way the data is being used as an ad hoc measure of quality brings to mind Campbell’s Law:
The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.
The question is not whether science literacy is important (it is). Nor is the issue whether schools and teaching should continue to improve achievement in science (they should). What we should be asking is whether the overemphasis on NAPLAN will distort the learning experience of students in Science and make improved achievement less likely.
If, as the research suggests, NAPLAN makes a broad curriculum focus, deep engagement, critical thinking and well-supported learning all less likely, then the disadvantages clearly outweigh the benefits.
Widening the tests to include science literacy, or any other subject area for that matter, does little to address these concerns and may not improve student achievement.
Standing still or going backwards
Since 2008, statistically significant improvement across Australia in NAPLAN has been seen only in Year 3 Reading, Year 5 Reading and Year 5 Numeracy. In other words, 17 of the 20 categories have shown no statistically significant improvement since NAPLAN began; most have stayed fairly static.

This is a troubling result given the time and cost associated with NAPLAN – it would appear NAPLAN is not the best mechanism to drive improvement in student achievement.
It may also be a poor mechanism for improving equity of achievement. For example, ACARA’s 2012 NAPLAN Summary Report showed that fewer Year 9 students met the minimum standards in Reading, Spelling, Numeracy and Writing in 2012 than in 2008 (ACARA, 2012).

Granted, some of these differences are small, and in the case of writing they probably reflect a change in the assessment, but it remains an issue of concern.
So why is this happening? Most likely because of the unintended classroom consequences of standardised tests like NAPLAN: such testing narrows the curriculum, fails to cater for student needs, increases anxiety and pushes teachers to teach to the test.
Education expert and statistician Margaret Wu argues that these unintended consequences are exacerbated by the MySchool website, which is in practice used as the measure of quality in schools. This ignores the fact that what is reported represents only a fraction of what schools do, and that the data has problems of reliability and validity when used for this purpose.
Two questions are worth asking when it comes to including science in NAPLAN.
First, given that ACARA assesses science literacy already on a “rolling 3 yearly basis” using a representative sample of Year 6 students, what extra information will we receive from yearly NAPLAN testing?
Second, what will be the likely effects of sitting science literacy tests every year?
If the argument for science literacy testing is that increased accountability in science will lead to improved student achievement, it is worth remembering that international research shows test-based accountability is unlikely to improve student achievement, and may even have a negative impact on the least advantaged students.
There may also be pragmatic reasons: testing science literacy may be intended to align NAPLAN with TIMSS and PISA, both of which assess science.
Another pragmatic reason might be the Australian research, noted above, showing that an excessive focus on literacy and numeracy driven by test pressures leads to less time being spent on untested subjects such as science.
Of course, the pressure to focus on literacy, numeracy and science literacy will further squeeze curriculum choice – bad news for subjects such as History, Languages, Drama, the Arts and Physical Education, to name a few.
At least until we can test them as well.
But with little evidence that NAPLAN drives improved student achievement in Australian schools, there must be better ways to assist teachers and students.