As the Gonski school funding debate heats up again, the political focus so far has been on the big billion-dollar figures. In this crowded debate about who’s right or wrong on the numbers, there’s little space left to look at what these extra resources aim to do, namely to address educational disadvantage.
Whether it’s because a student lives in a regional or remote area or has English as a second language, some students face barriers to reaching their academic potential. The government’s solution is to give extra funding to the schools with more disadvantaged students.
But how do we define educational disadvantage? And are the government’s new reforms and extra funding targeting disadvantage in the right way?
The Gonski plan
Under the government’s National Plan for School Improvement (NPSI), each primary school student attracts a base funding amount of A$9,271, and each secondary student A$12,193. On top of the base amount, six types of educational disadvantage will attract additional funding: (1) a school’s size and (2) its location, as well as how many students (3) are Aboriginal or Torres Strait Islander, (4) have a disability, (5) are from a low socioeconomic background or (6) have limited English proficiency.
These were all areas identified in the Gonski review, but some of the exact formulas are yet to be finalised (namely those for students with disabilities and students with limited English proficiency).
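In rough terms, the model works out to a base amount per student plus a loading for each applicable category. The sketch below illustrates that structure only: the base amounts are those quoted in the plan, but the loading rates are hypothetical placeholders, since several of the exact formulas had not been finalised.

```python
# Illustrative sketch of the NPSI base-plus-loadings structure.
# Base amounts are those quoted in the plan; the loading rates
# below are invented placeholders for illustration only.

BASE_PRIMARY = 9_271     # A$ per primary student (from the plan)
BASE_SECONDARY = 12_193  # A$ per secondary student (from the plan)

# Hypothetical loading rates: the fraction of the base amount
# added for each student-level disadvantage category.
LOADINGS = {
    "indigenous": 0.20,
    "disability": 0.15,
    "low_ses": 0.25,
    "limited_english": 0.10,
}

def student_funding(base, characteristics):
    """Base amount plus one loading per applicable characteristic.

    The two school-level loadings (size and location) are omitted
    here because they apply per school, not per student.
    """
    loading = sum(LOADINGS[c] for c in characteristics if c in LOADINGS)
    return base * (1 + loading)

# Example: a secondary student from a low socioeconomic background
# who also has limited English attracts the base plus two loadings.
amount = student_funding(BASE_SECONDARY, {"low_ses", "limited_english"})
```

The point of the structure, rather than the placeholder numbers, is that funding scales with how many disadvantaged students a school enrols.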
The basic rationale is the more students with these characteristics, the more difficult the job for teachers and schools. For some areas the reasons for extra resources are clear – it costs schools in a remote community, for example, more to teach because of greater infrastructure costs and higher teacher turnover.
But we also know that there’s a clear link between students’ backgrounds and educational outcomes. Take the maths, reading and science scores of a 2009 sample of 15-year-olds from different backgrounds, attending different schools. Indigenous students and those who speak a language other than English at home have significantly lower test scores, while students from relatively advantaged families do better.
Limitations of the plan
The reforms, if fully implemented, would represent a significant increase in the amount of resources available to the schools with the most disadvantaged students. But there are some problems here.
First, while they are well supported empirically as predictors of educational disadvantage, the six loading areas explain only a small proportion of the variation in school outcomes. Why weren’t other factors linked to educational disadvantage included?
The government could have considered including the level and quality of students’ early childhood education; social and behavioural difficulties; student mobility and turnover; parental involvement in the school (not just capacity to contribute); and the historic funding shortfalls of schools, to name just a few.
Second, there can be considerable variation within schools in the educational disadvantage their students face. Not all disadvantaged children attend a relatively disadvantaged school, and some children in even the most advantaged school environments are likely to struggle. Under the current plan, there is no guarantee that the neediest students within a school will receive the benefits of the additional funding.
The fact is, addressing disadvantage through schools is only part of the answer. After all, it could be that, no matter what is happening at school, the stability of a child’s home matters more for student outcomes.
This raises the question of whether the additional funding allocated as part of the NPSI is best targeted towards schools or whether at least some of it should go to the families themselves.
With a finite budget and an uncertain revenue base, the government’s goals might better be reached by additional spending on high quality early childhood education; cash incentives; greater income support; health interventions; remedial literacy outside of school hours; or a range of other potentially worthwhile programs.
But the truth is, in Australia, we really have very little quantitative evidence on what actually works to encourage disadvantaged children to successfully engage with formal schooling. This is partly because the relevant data is not always made available. But it is also because education interventions are not currently set up in such a way that a clear comparison can be made between a treatment group and an otherwise identical control group.
More research needed
Support for additional funding in education is reasonably easy to justify. But strong evidence to support how the additional funding should be allocated is more difficult to find.
This is both a limitation of the existing research, as well as how similar interventions have been structured in the past. Clearly we need to know more about what works for disadvantaged students and target policy and funding accordingly.
This article was co-authored with Tim Cameron, a research assistant at the ANU.