Objective analysis of good microdata on students can yield results that are highly relevant to educational policy. This should come as no surprise, since it is the case in most other disciplines.
To learn about the structure of the universe, we need powerful telescopes to collect detailed cosmological data; to learn about unemployment trends, we need people and companies to report detailed information about employment; and to learn about what is happening to individual students in our educational institutions, we need detailed student-level data from those institutions.
Yet educational institutions currently face strong incentives not to provide their data.
A case in point is the recent launch of the MySchools website, with which not all schools were happy. There is a natural fear of being “found out” to be worse than others, or worse than their “clients” expected, and therefore of losing market share in the short run.
What was needed to enable the MySchools launch was a third-party push mandating that schools’ NAPLAN scores become public knowledge.
While this may not have helped each individual school in the short run, in the long run it will help the sector and the students it serves by creating something more like a competitive market, with full information available to all market participants.
Free and full information provision allows students and their families to select schools based on criteria they care about, and thereby helps to harness the power of competition amongst schools to craft an educational system that meets the needs of the country.
MySchools represents a baby step by international standards. In the United States, student-level data from entire public school systems is used routinely by education economists to analyse what is happening on the ground.
Public school systems in New York City, North Carolina, Texas, and Florida have all made blinded student-level data available to researchers who perform objective academic research, serving both the school systems and the researchers’ own academic careers.
This is a win-win situation: the schools receive policy-relevant advice that then feeds through to better outcomes for students, while the academics involved fulfil their intellectual interests and their professional mandate to publish.
The states involved also earn a reputation for caring about the optimal design of their educational systems, which makes for a strong selling point.
Several overseas universities have also provided blinded student-level data to academic researchers. The University of Maryland, Berea College, Dartmouth College, and Williams College have all provided such data to economists doing educational research for the good of the institution and the progress of our understanding of educational phenomena.
These phenomena range from the persistence of disadvantage to the effects of peers, race, and gender on university performance. It is industry-standard practice for the institutions providing the data to be identified while the confidentiality of individual students’ records is maintained. Once again, this is a win-win situation for all parties.
The data that universities currently release to the Department of Education, Science and Training (DEST) are so highly aggregated that it is difficult to draw clear policy-relevant conclusions from them.
Yet Australian universities are sitting on vast quantities of student-level data that have significant potential to inform Australia’s higher education policy.
The project that I began with ARC funding in 2008 was the first initiative in the country to periodically extract and compile detailed student-level records, creating a multi-institutional panel data set suitable for objective, micro-founded academic research on important questions in the sector. But I hope that it is not the last such initiative.
My proposal is for a third party representing the government, such as DEST or the Department of Education, Employment and Workplace Relations (DEEWR), to sponsor an opt-in program under which universities make periodic extracts of their student-level data available for independent, objective analysis.
Naturally, this should occur only under strict confidentiality provisions, such as those applying to my project, where individual students are anonymised. To encourage universities to opt in, a further provision could require that any publicly released analysis include data from some minimum number of institutions, which would shield any one institution from being singled out.
Universities that choose to opt in to this scheme would reap several rewards. They would send a strong signal to the national and international communities that they are prepared to have the job they are doing with students monitored directly and objectively.
They would receive valuable information, including far more detailed benchmarking than is currently available, that they could use in positioning themselves in the market and setting their internal policies. Finally, through the academic papers published using the data, they would gain recognition in the international educational research community.
To my knowledge, no country in the world currently has such a government-supported, multi-institutional data-provision program in higher education. Excellent educational researchers would be drawn to Australia to conduct their research, and our higher-education sector would thereby reap even more direct rewards.
The direct public benefits of such a scheme should be clear: equipped with knowledge of what is going on in the higher-education sector, we could craft fully informed education policy from which the whole country would benefit.
I hope that the Australian higher education sector takes this opportunity to stand for what is right for the country – to encourage a rigorous look at student-level microdata to find out what is actually going on in higher education, so we can do something about it – and I congratulate UniSA and UTS for being first movers in this regard.
These institutions have led the sector in the move towards evidence-based policy-making in higher education. Let’s not stop.