CANSSI Townhall on Student Evaluations of Teaching
May 11, 2022 @ 9:00 am - 11:00 am
Philip B. Stark, Distinguished Professor of Statistics at the University of California (UC), Berkeley, has had an enormous impact on the way student evaluations of teaching are—and aren’t—used to measure teaching effectiveness. On Wednesday, May 11, 2022, from 9:00 to 11:00 a.m. PST, Dr. Stark will be the guest speaker at an online “CANSSI Townhall on Student Evaluations of Teaching,” moderated by CANSSI director Don Estep.
Shared resources from the event
- Presentation slides (Philip B. Stark)
- Notes on Student Evaluations of Teaching (Philip B. Stark)
- Peer Review of Course Instruction (Berkeley Center for Teaching & Learning)
- An Evaluation of Course Evaluations (ScienceOpen.com)
- Gendered Language in Teacher Reviews (benschmidt.org)
- Report of the OCUFA Student Questionnaires on Courses and Teaching Working Group (Ontario Confederation of University Faculty Associations)
Student evaluations of teaching (SET) are widely used in academic personnel decisions as a measure of teaching effectiveness. The way SET are used is statistically unsound, but worse, SET are biased and unreliable. Observational evidence shows that student ratings vary with instructor gender, ethnicity, and attractiveness; with course rigour, mathematical content, and format; and with students' grade expectations. Experiments show that the majority of student responses to some objective questions are demonstrably false; one recent randomized experiment shows that giving students cookies increases SET scores. Randomized experiments also show that SET are negatively associated with objective measures of teaching effectiveness and are biased against female instructors, by an amount large enough that a more effective female instructor can receive lower SET than a less effective male instructor. Gender bias also affects how students rate objective aspects of teaching. It is not possible to adjust for the bias, because it depends on many factors, including course topic and student gender.

Students are uniquely situated to observe some aspects of teaching, and students' opinions matter. But for the purposes of evaluating and improving teaching quality, SET are biased, unreliable, and subject to strategic manipulation. Reliance on SET for employment decisions disadvantages protected groups and may violate federal law. For some administrators, risk mitigation may be a more persuasive argument than appeals to equity for ending reliance on SET in employment decisions: union arbitration and civil litigation over institutional use of SET are on the rise. Several major universities in the U.S. and Canada have already de-emphasized, substantially reworked, or abandoned reliance on SET for personnel decisions.
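The ranking-inversion claim above can be illustrated with a toy simulation. All numbers below (effectiveness levels, bias size, rating noise) are hypothetical, chosen only to show the mechanism; they are not drawn from the studies discussed at the event:

```python
import random

random.seed(0)

def simulate_set(true_effectiveness, bias, n_students=5000):
    """Mean SET score for one instructor: each student gives a 1-5
    rating centred on the instructor's true effectiveness, shifted
    by an additive bias term (e.g. a gender penalty) plus noise."""
    ratings = []
    for _ in range(n_students):
        r = true_effectiveness + bias + random.gauss(0, 0.7)
        ratings.append(min(5, max(1, round(r))))  # clip to the 1-5 scale
    return sum(ratings) / len(ratings)

# Hypothetical scenario: instructor A is more effective (4.2 vs 3.9)
# but her ratings carry a -0.5 bias; instructor B's carry none.
score_a = simulate_set(4.2, bias=-0.5)
score_b = simulate_set(3.9, bias=0.0)

print(f"A (more effective): {score_a:.2f}  B (less effective): {score_b:.2f}")
```

Under these assumed numbers the more effective instructor ends up with the lower mean score, which is the inversion the abstract describes; because the bias size varies with course topic, student mix, and other factors, no single additive correction could undo it in practice.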
About the speaker
Philip B. Stark is Distinguished Professor of Statistics at the University of California, Berkeley. Dr. Stark works on inference, inverse problems, multiplicity, nonparametrics, optimization, restricted parameters, sampling with applications including astrophysics, cosmology, ecology, elections, geophysics, health, legislation, litigation, marketing, physics, public policy, risk assessment and control, and uncertainty quantification. He has been awarded a number of honours, including Fellow of the Institute of Physics and Fellow of the American Statistical Association. Dr. Stark has served as Department Chair and Associate Dean of the Division of Mathematical and Physical Sciences at UC Berkeley.