
ALASI2019: Panel debate – the validity of using student evaluation surveys for performance-based funding at Australian universities

Date: Thursday, 28th November 2019
Time: 09:00 AM
Location: University of Wollongong

Leonie Payne¹, Kirsty Kitto¹, Michael Pracy¹, Jason Lodge², Abelardo Pardo³

Abstract

The Australian Federal Government announced in August 2019 that aspects of the Quality Indicators for Learning and Teaching (QILT) Student Experience and Graduate Outcomes surveys will form two of the four key metrics for performance-based funding of Australian universities from 2020. Given the lack of consensus on the validity and appropriate use of student evaluations of teaching, it is time to explore the ramifications of this decision. And who better than the Learning Analytics community to do so? We propose a plenary panel debate on the provocation "Student evaluations of teaching are the worst form of evaluation, except for all of the others".

Keywords

Performance-based funding, QILT, student evaluation surveys, teaching quality, higher education

1 Email: {Leonie.E.Payne,Kirsty.Kitto,Michael.Pracy}@student.uts.edu.au, Connected Intelligence Centre, University of Technology Sydney. PO Box 123 Broadway NSW 2007 Australia

2 Email: jason.lodge@uq.edu.au, University of Queensland

3 Email: abelardo.pardo@unisa.edu.au, University of South Australia

1. Panel Debate Background – Performance Based Funding

The Performance-Based Funding for the Commonwealth Grant Scheme: Report for the Minister for Education was released in August 2019. This report outlines the proposed measures for performance-based university funding to be implemented in 2020, including the QILT (Quality Indicators for Learning and Teaching) Student Experience and Graduate Outcomes surveys. The metrics to be included are student satisfaction with teaching quality (Student Experience Survey) and graduate employment rates (Graduate Outcomes Survey) for domestic bachelor students. The stated aims of the scheme include creating more "accountability" for public investment in higher education priorities and providing financial incentives to encourage improved university performance, guided by the key principles of "fitness-for-purpose, fairness, robustness and feasibility" (Commonwealth of Australia, 2019).

Given the implications that these decisions will have for university funding, and the diverse and conflicting perspectives on the validity of student surveys as a form of teaching quality evaluation, we propose a panel debate for ALASI 2019. The topic will be: "Student evaluations of teaching are the worst form of evaluation, except for all of the others". We envisage that the debate will provide a highly interactive, entertaining, and potentially controversial event that will help advance discussion of this important topic, which will have a high impact upon the Australian university sector.

1.1 Quality Indicators for Learning and Teaching (QILT)

The Quality Indicators for Learning and Teaching (QILT) are a suite of annually published surveys that allow comparison of Australian higher education institutions and study areas on measures of student experience (QILT, 2018). They are an example of Student Evaluations of Teaching (SET) (Marsh, 2007), and give prospective students the opportunity to compare universities on surveyed student experience and graduate employment outcomes (QILT, 2015).

1.2 Arguments for and against the Validity of Student Evaluations of Teaching

The Marsh (2007) review discusses a wide range of research demonstrating that Student Evaluations of Teaching have validity as a measure of teaching performance. For example, there is a well-established relationship between student ratings and learning, with SETs having good internal consistency and stability (Abrami, 2001). In addition, Spooren, Brockx, and Mortelmans (2013) found that SETs also correlate with teachers' self-evaluations, alumni ratings, and evaluations by trained observers. Aleamoni (1999) dispels the myth that student ratings are merely a "popularity contest", finding that students rated educators on their preparation and organisation, stimulation of interest, motivation, answering of questions, and courteous treatment of students.

In contrast, a wide body of research questions the validity and appropriate application of student surveys for evaluating teaching quality. For example, Johnson (2000) cautions against the use of student evaluation questionnaires as a bureaucratic tool driven by market ideologies, arguing that while student evaluations may be useful as formative diagnostics, they are not appropriate tools for summative judgements about employment decisions and tenure. Similarly, a study by Boysen et al. (2014) demonstrates that administrators and teaching faculty interpret teaching evaluations as possessing higher levels of precision than the underlying statistical methodologies warrant. Shevlin et al. (2000) argue that a 'halo' effect limits the ability of SETs to measure the multi-faceted, multi-dimensional nature of teaching effectiveness, and Macfadyen et al. (2015) identify bias in which students respond to SETs at all, arguing that respondents do not reflect the individual and course characteristics of the total student population. Thus, the cases for and against SETs are extensive and conflicting, which sets the scene for a lively forum.
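As an illustrative aside, the "internal consistency" referred to above is usually quantified with a reliability coefficient such as Cronbach's alpha, computed over a students-by-items matrix of survey responses. The short Python sketch below shows one common way to compute it; the ratings matrix and the choice of four teaching-quality items are hypothetical, not drawn from QILT data.

    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for an (n_students, n_items) matrix of ratings."""
        ratings = np.asarray(ratings, dtype=float)
        n_items = ratings.shape[1]
        item_variances = ratings.var(axis=0, ddof=1)      # variance of each survey item
        total_variance = ratings.sum(axis=1).var(ddof=1)  # variance of students' total scores
        return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical example: five students rate a unit on four items (1-5 scale).
    ratings = [
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
    ]
    print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")

A value above roughly 0.7 is conventionally read as acceptable internal consistency, although, as the debate topic suggests, a reliable instrument is not automatically a valid one.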

2. Panel Format

This panel would take the format of a plenary session debate, with six speakers allocated to one of two teams arguing for and against the proposition. The team captains will be Jason Lodge and Kirsty Kitto, with the remaining participants to be determined once ALASI attendees are known. We will endeavour to bring a QILT member and the Vice-Chancellor of the University of Wollongong (who chaired the performance-based funding review) onto the panel as team participants. Abelardo Pardo will play the role of MC.

3. Organiser Credentials

  • Leonie Payne is a PhD student at the Connected Intelligence Centre, UTS, where she is working on a thesis that aims to bring rigour to the evaluation of quality in higher education by accounting for bias in student response rates.
  • Kirsty Kitto is a Senior Lecturer at the Connected Intelligence Centre, UTS, where she leads a number of learning analytics (LA) projects. She was formerly seconded to the QUT Quality and Evaluation unit, where she worked on analysing four years of SET data to derive performance metrics for teaching.
  • Michael Pracy works as a data scientist at the Connected Intelligence Centre, UTS. His background is in astrophysics, where he performed extensive work on controlling for bias in data obtained from astrophysical phenomena.
  • Jason Lodge is an Associate Professor at the University of Queensland where he concentrates on the application of the learning sciences to higher education. Specifically, he is interested in the cognitive and emotional factors that influence learning and behaviour and how research findings from the learning sciences can be better used to enhance design for learning, teaching practice and education policy.
  • Abelardo Pardo is Professor and Dean Academic in the Division of Information Technology, Engineering and the Environment at the University of South Australia. His research interests include the design and deployment of technology to increase understanding of, and improve, digital learning experiences. He is the current president of SoLAR.

References

Abrami, P. (2001). Improving judgments about teaching effectiveness using teacher rating forms. New Directions for Institutional Research, (109), 59–87.

Aleamoni, L. (1999). Student rating myths versus research facts from 1924 to 1998. Journal of Personnel Evaluation in Education, 13(2), 153–166.

Boysen, G. A., Kelly, T. J., Raesly, H. N., & Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641–656.

Commonwealth of Australia (2019). Performance-Based Funding for the Commonwealth Grant Scheme, Report for the Minister for Education – June 2019, viewed 9th August 2019, https://www.education.gov.au/performance-based-funding-commonwealth-grant-scheme

Johnson, R. (2000). The authority of the student evaluation questionnaire. Teaching in Higher Education, 5(4), 419–434.

Macfadyen, L., Dawson, S., Prest, S., & Gasevic, D. (2015). Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations. Assessment & Evaluation in Higher Education, 1–19.

Marsh, H. (2007). Students' evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319–383). Springer.

QILT (2015). Department of Education and Training: QILT website launched. M2 Presswire, 17 September 2015.

QILT (2018). Quality Indicators for Learning and Teaching, viewed 9th August 2019, www.qilt.edu.au

Shevlin, M., Banyard, P., Davies, M., & Griffiths, M. (2000). The validity of student evaluation of teaching in higher education: Love me, love my lectures? Assessment & Evaluation in Higher Education, 25(4), 397–405.

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598–642.