QSSET Background

Teaching Assessment at Queen's - QUFA Voices

The following article appeared in QUFA Voices:

Student teaching evaluations and the uses to which they are put are, to put it mildly, controversial matters at universities. At Queen’s the use of a survey of students (USAT) for the evaluation of teaching is authorized by Article 29 of the Collective Agreement; USAT has been in use at least since QUFA certified in 1995. The parties to the Collective Agreement, QUFA and Queen’s, have long recognized that USAT has many shortcomings. The 2005-2008 Collective Agreement provided for a review of USAT; a joint committee was struck, and an elaborate new system of teaching evaluation was proposed. The recommendations of that committee were ultimately rejected by QUFA because they included a requirement for annual peer review of teaching as well as a new survey. Our Members were concerned (rightly, in my view) about the additional work and possible biases that peer review of all teaching would entail, but the committee asserted that the survey could not be used without the peer-review piece. So USAT lived on.

In the 2015-2019 Collective Agreement the Parties agreed once more to review teaching evaluation, and a joint committee, the Teaching Assessment Committee (TAC), was established to consider possible models, questions, and methods of administration. TAC finished its work in late 2017 with recommendations that Queen’s pursue online administration of the survey and with a design principle for a new survey. A new committee was then established by the JCAA, the Teaching Assessment Implementation Committee (TAIC), co-chaired by me for QUFA and John Pierce (English) for the University. The TAIC has developed the new survey and guidelines for its use, has run a usability test for online administration, and is preparing to pilot the new survey in November of this year. The pilot will use sample classes of graduate and undergraduate teaching across disciplines and teaching modalities (e.g., lecture, seminar, clinical, and practical; classroom and online). The instructors for the sample classes must be tenured or continuing faculty who are not immediately seeking promotion.

The first task TAIC undertook was to derive principles from the Collective Agreement to guide its work. Article 29.1.3 establishes the indicators of teaching effectiveness that evaluators of teaching are supposed to consider. Some of these, such as “accessibility to students,” are clearly matters that students can reliably speak to; others, such as “familiarity with recent developments in the field,” are much less so. Moreover, Article 29 describes the USAT as just one element to be considered in the evaluation of teaching, the two other specified items being a teaching dossier and a survey devised by the Member. Thus, Article 29 makes clear that the USAT is not equivalent to the evaluation of teaching. Only Heads and Deans are evaluators of teaching for Collective Agreement purposes. Because students cannot speak to aspects of teaching that the CA requires evaluators of teaching to consider, student surveys cannot be used as a proxy for teaching evaluation, although this is now common practice at Queen’s. USATs are surveys of student experience, not direct and sufficient evidence of teaching effectiveness, and they can only be considered in relation to other materials such as those provided in teaching dossiers.

The survey the TAIC developed is designed to allow Heads and Deans to parse the elements that contribute to students’ experience of teaching and thereby to ensure that Members are evaluated only on their teaching practice and not on conditions beyond their control. There are four sections: “Student,” “Instructor,” “Course,” and “Infrastructure.” Only responses to “Instructor” can be used in evaluating the Member; the other sections furnish context for the evaluator to use in interpreting student responses. “Student” asks students to reflect on their preparation for study at the level of the course and their commitment to the course; “Course” asks students to comment on materials, marking, and workload, matters that may or may not be handled by the instructor; “Infrastructure” asks about the room, the IT, and scheduling. Students are asked to place their answer on a scale from 1 to 7, with NA as an option. (Because the questions are provisional pending the experience of the pilot, I am not providing them here.) Each section also allows students to amplify or explain their responses with written comments. Responses will be presented as distributions, not as averages. Because students are not evaluators of teaching for CA purposes and can only report their experiences, the new survey will be called the Queen’s Survey of Student Experience of Teaching (QSSET).

As the TAIC was finishing the survey design this past summer, an arbitration award was issued by Arbitrator William Kaplan in a dispute between Ryerson University and the Ryerson Faculty Association (RFA). This award, which has received a lot of press coverage, is an interest arbitration, which means that it settled matters that were outstanding in bargaining between Ryerson and the RFA. Its core finding is, however, broad in implication: “Insofar as assessing teaching effectiveness is concerned – especially in the context of tenure and promotion – SETs [Student Evaluations of Teaching] are imperfect at best and downright biased and unreliable at worst.” The award specifically pertains to practices at Ryerson, but several of the impugned practices are also found at Queen’s. Chief among these is the use of student surveys as proxies for evaluation rather than as evidence of student experience, which has some, albeit limited, bearing on teaching effectiveness. Another is the use of averages rather than distributions of response values, and the comparison of those averages across courses at different levels and in different formats. Of this practice Kaplan observed: “the use of averages is fundamentally and irreparably flawed.” The TAIC was happy to see that most of the problems Kaplan identified had already been addressed in its proposals. The arbitration is short and an interesting read.
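To see concretely why averages mislead, here is a minimal sketch in Python, with invented response tallies purely for illustration (none of these numbers come from the TAIC or the Kaplan award), of two courses that share an identical mean rating on a 1-to-7 scale while their distributions tell very different stories:

```python
from collections import Counter
from statistics import mean

# Invented tallies on a 1-7 scale, purely for illustration.
course_a = [4] * 40              # every student answers "4"
course_b = [1] * 20 + [7] * 20   # students split between the extremes

for name, responses in [("Course A", course_a), ("Course B", course_b)]:
    dist = sorted(Counter(responses).items())
    print(f"{name}: mean = {mean(responses)}, distribution = {dist}")

# Both courses have a mean of 4, but the distributions reveal uniform,
# moderate ratings in A versus sharply polarized ratings in B. That
# difference is exactly the information an average erases.
```

This is the kind of pattern that reporting distributions, as the QSSET proposes, makes visible to an evaluator.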

A couple of other matters deserve comment. A well-documented problem with student evaluations of teaching is gender bias. While the questions in the proposed survey are designed to cultivate objectivity, it remains to be seen whether they can reduce this bias, which arises from student perceptions rather than from the survey questions themselves. The TAIC will propose measures to monitor responses for gender and other forms of bias. Another problem, amply documented at Queen’s, is the use by students of the valuable written-comments areas to make racist, sexist, or otherwise inappropriate comments. The proposed survey carries a warning to students that such comments will cause the survey to be discarded. Implementing such a procedure requires that inappropriate comments be recognized in advance of tabulation, or that results be re-tabulated at the Member’s request. To do so, and to allow for analysis of correlations in responses to different parts of the survey, something the TAIC also intends to recommend, the survey needs to be administered electronically. Moreover, the technology that supports the current paper administration of USAT will be obsolete in two years. For these reasons TAIC recommends electronic administration of the survey.

However, TAIC is also aware of the substantial evidence that online administration of such surveys makes response rates drop, sometimes precipitously, further compromising their already questionable validity. The TAIC therefore determined that for the pilot, all the conditions of paper USAT administration be replicated: that is, the survey is to be completed in class during time set aside for the purpose and only at that time. TAIC has already conducted a usability test to confirm that the survey displays properly across combinations of devices and browsers. The pilot will include courses with comparators using the paper USAT so that the effect of electronic administration on response rates can be assessed.
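As a rough illustration of the correlation analysis mentioned above, here is a minimal Python sketch with invented per-respondent scores (the numbers and the particular section pairing are assumptions for the example, not TAIC material), showing the kind of cross-section comparison that linked electronic responses make possible:

```python
from statistics import correlation  # available in Python 3.10+

# Invented per-respondent scores on the 1-7 scale, purely for illustration:
# each respondent's self-reported preparation ("Student" section) paired
# with that same respondent's rating of the instructor ("Instructor" section).
student_prep     = [2, 3, 3, 4, 5, 5, 6, 7]
instructor_score = [3, 2, 4, 4, 5, 6, 6, 7]

# Paper forms tabulated in aggregate lose the per-respondent linkage;
# electronic administration preserves it, which is what makes a
# correlation like this computable at all.
r = correlation(student_prep, instructor_score)
print(f"Pearson r between sections: {r:.2f}")
```

An evaluator could use such a relationship, for example, to check whether low instructor ratings track students’ own reported lack of preparation rather than anything in the instructor’s practice.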

Teaching evaluation reform is a conceptually and technically challenging task, and many people have contributed to the process initiated in 2016. Student surveys of their experience of teaching are inherently problematic, but, used properly, they are of some value to instructors and administrators. I am optimistic that the process we are engaged in is moving Queen’s toward better practices.

Recommendations from the Teaching Assessment Implementation Committee to the JCAA on the USAT - May 2019

Executive Summary

This document presents the recommendations of the Teaching Assessment Implementation Committee (TAIC) to the JCAA, following an extensive review of the USAT, a careful consideration of current practices in the evaluation of teaching and learning, and a reading of these matters in the context of Appendix E and Article 29 of the 2015-2019 Collective Agreement. The recommendations propose a change in the focus and design of the current USAT and a change in the mode of delivery from paper-based to electronic format. It should be noted that since the survey's first development in 1994 as QUEST and the later change to USAT in the early 2000s, its questions and format have not changed. A review in 2007 included a number of recommendations for change, but these were not adopted. Thus, the recommendations in this document are part of the first serious review of the teaching survey in over a decade.

The transformation in the committee's overall approach appears in its first recommendation: that the name of the USAT be changed to the Queen's Survey of Student Experience of Teaching (QSSET). The shift from the "Assessment of Teaching" in the former survey to a measure of the "Student Experience of Teaching" reframes the survey as an attempt to measure the student's participation in the course, the experience each student has of the instructor, the role of the course materials in shaping the student's academic experience, and the contexts of course timing, rooms, and technological supports, all of which shape and potentially affect student learning. In this reconceptualization of the survey, the TAIC has done much more than simply revise the questions used. Instead, TAIC proposes a teaching and learning survey that is driven by clearly stated purposes, that ranges across distinct areas of pedagogical experience with separate sections on the Student, the Instructor, the Course, and Infrastructure, and that is contextualized by documents outlining the best use of the survey by students, instructors, and Heads, Deans, and RTP committees.

To read the full report, including recommendations, download it here:

TAIC Recommendations (PDF, 1.1 MB)

Teaching Assessment Implementation Committee (TAIC) Report to JCAA - September 2018

Download the full report here:

TAIC Report to JCAA (PDF, 102 KB)