At face value this might seem like an obvious way of improving literacy and numeracy outcomes. However, the challenges a model such as this would create are as complex and divisive as those that surround the testing regime itself.
Since 2008, NAPLAN tests have been undertaken by year 3, 5, 7 and 9 students across Australia. The tests are supposed to track the progress of students and improve student achievement in literacy and numeracy.
However, preliminary data released for the tests taken this year show no improvement in literacy and numeracy results since the testing began. This is despite seven years of data collection intended to drive improvement.
The number of students not participating in NAPLAN is increasing each year. Students with disabilities and learning difficulties are grossly over-represented in those not participating in the tests.
This means their results are not captured in the NAPLAN “snapshot” of achievement of Australian school students. Increasing numbers of students are being withdrawn on philosophical grounds or are just not going to school on the days of the testing; their data is also missing.
As a result, tying educational funding to NAPLAN data is fraught with danger. Funding cannot be allocated in an equitable manner to schools if there is “missing data”, especially if that data is from particular cohorts of students.
Validity of the data
NAPLAN captures data from a single point in time. It does not provide a comprehensive picture of a student’s ability in either literacy or numeracy, or across those areas that NAPLAN does not measure.
This raises questions about the validity of NAPLAN to provide an accurate representation of students’ capabilities in literacy and numeracy. As a result, allocating funding according to this data presents the very real risk that funds will not be provided to the schools and students that really need them.
In addition, questions have been raised about the capacity of NAPLAN, in its current format, to provide accurate data in the areas it does purport to measure. Is it testing what it says it’s testing?
Let’s take spelling as an example. NAPLAN requires students to proofread and edit texts, a skill that is very different from being able to spell words accurately during a dictation task. Can NAPLAN then be said to really be testing spelling?
This narrow approach to the areas being tested brings into question the validity of the data being collected. Is it really an accurate representation of students’ abilities in the tested areas? If not (or even if not completely), then tying funding to NAPLAN data again runs the risk of resources not being allocated to those students who really need them.
Penalised for success
A “Catch-22” situation arises with funding that is based on academic results. According to the proposed model, and somewhat bizarrely, schools that improve their NAPLAN results are at risk of receiving lower levels of funding in the future.
While this might sound like an equitable outcome (after all, if students are performing well in NAPLAN then schools do not need the funding), there are two major issues with this.
Firstly, the reality is that schools and teachers work incredibly hard to implement programs to improve student outcomes, and should not be punished through a reduction in funding when they do. Reduced funding will lead to reduced support, which could in turn lead to a fall in achievement levels. Schools are at risk of being placed onto a resourcing rollercoaster, with funding levels going up and down each year.
Schools that offer strong support, and are thus highly sought after, may find themselves being punished for being too good. The scenario could be: results improve, funding is reduced, support declines, results fall, funding is restored, and the cycle begins again.
This cycle could bring with it a serious paradox. Rather than schools and teachers “teaching to the test” as a result of the pressure of NAPLAN, schools may aim to lower or stagnate their NAPLAN outcomes so they can gain or keep funding.
Secondly, NAPLAN does not provide the story behind the data. It is collected only once a year and cohorts of students are tested only once every two years, meaning that data collected is inexact and irregular.
A cohort of students may achieve high results one year, while the next cohort to be tested the following year struggles to reach the National Minimum Standards for literacy and numeracy. If schools lose funding based on the outcomes of one cohort of students, they may not be provided with the resources needed to support and grow the students who were not in the year levels being tested.
Education funding must consider a multitude of factors, including academic and social achievements of students across all learning areas, location of the school, and the needs of the wider community in which the school operates. While NAPLAN data may provide one very small piece of the data puzzle used to allocate educational funding, it should never be used as the only determinant.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
Authors: The Conversation