When it comes to completing the University’s teacher evaluation forms at semester’s end, I’m admittedly one tough cookie.
I even follow the ironclad requirement that the cookie-cutter forms be completed in pencil — mostly because pencil makes for more beautiful stick-men drawings and sketches in the forms’ comments sections.
Like I said: I’m one tough cookie.
All students are familiar with the forms’ redundant “The instructor … ” statements, which we’re asked to evaluate numerically from one to five. “The instructor was adequately prepared for class,” the forms prompt, for instance.
It’s a valid prompt, as are most of them, but ultimately, the most appropriate of this type of statement never makes it onto the evaluation: “This evaluation is a misuse of students’ time and the University’s resources.”
“5. Strongly agree.”
The University’s evaluations have two distinct aims. The first is to provide instructors with feedback from their students as a matter of customer service. “How can we make your buying experience better, students?” the University asks.
Of course, this aim assumes there’s a correlation between instructor assessments — positive or negative — and instructor behavior. In other words, that instructors respond to students’ evaluations.
As it turns out, it’s a bad assumption to make.
Sports administration senior Jay Ruhlman said several of his teachers have admitted to not reading the evaluations at all.
That’s not the general rule, though.
Math 1550 instructor Irina Holmes, for example, not only read her students’ evaluations but attached several of their comments to her classes’ syllabi.
“By far the best math teacher I have had. She was very helpful,” reads one such comment.
Either way, the objective portions of the evaluations, which are analyzed and reported by the Office of Assessment & Evaluation, aren’t provided to instructors until the following semester, according to Emily Elliot, director of Undergraduate Study for the Department of Psychology.
In other words, aside from students’ comments — which sometimes include such constructive criticism as “I wanna date you” and remarks about teachers’ packages, Elliot said — instructors aren’t given actionable data until they’ve nearly completed another semester. Often, instructors can’t adequately act on that data anyway because they’ve already moved on to different classes and subjects.
What’s more, the evaluations and resultant data — which would be priceless to students during scheduling — are kept confidential by the University, leaving students to consult unregulated third-party aggregators like RateMyProfessors.com and UniversityTools.com.
“It would help students if [the University] would publish the evaluation’s results,” Ruhlman said.
To boot, even though each department has its own evaluation form, the questions themselves take a one-size-fits-all approach to wildly different classes — a blanket treatment that’s similarly baffling.
“More [class] specific questions would help,” said Josh Dewitt, philosophy and religious studies sophomore.
At any rate, the University is failing students with these assessments, but it’s also failing instructors, for whom such evaluations are effectively employment reviews.
While it varies by department, according to Elliot, the objective data are typically compiled in an instructor’s annual report, which determines raises, promotions and tenure.
There’s a lot at stake for our instructors, in other words. Too much, given the indifference with which students complete these evaluations.
It’s reckless, for example, that instructors administer them in the last fifteen minutes of the final class, when a student’s criticism is as guiltlessly easy as a hastily bubbled Scantron.
Jared Duckworth, kinesiology sophomore, said he doesn’t take the evaluations seriously.
“I got other stuff to do,” he said.
More importantly, the assessments’ objective data are tainted, according to a 2007 Ohio State University study. Instructors who grade easily, the study asserts, are more likely to receive positive evaluations from students — hence the “expected grade” question on our particular forms, which seeks to correct for the phenomenon.
In essence, instructors who are easy — and not necessarily the best instructors — are prioritized for advancement and accolades, and the University suffers as a result.
Startling, too, is that the OSU study found students tend to rate both female and foreign instructors poorly. That said, I’m sure sexism and xenophobia sway few on this campus.
I, for one, will be completing this semester’s assessments in pen.
Phil Sweeney is a 25-year-old English senior from New Orleans. Follow him on Twitter @TDR_PhilSweeney.
Contact Phil Sweeney at [email protected]
The Philibuster: End-of-semester evaluations fail to improve classes, teachers
January 17, 2012