Last week, I commented on Grand Rapids Public Schools’ new attendance policy and Michigan’s tenure reform bill. To summarize, while applauding GR Public’s new policy as effectively incentivizing students to show up to class and take their studies more seriously, I was skeptical about MI’s new bill, which ties teacher evaluations to student performance. In their article “Can Teacher Evaluation Improve Teaching” in the most recent issue of EducationNext, Eric S. Taylor and John H. Tyler share the results of their study of the unique teacher evaluation system of Cincinnati Public Schools.
“The results of our study,” they write,
provide evidence that subjective evaluation can improve employee performance, even after the evaluation period ends. This is particularly encouraging for the education sector. In recent years, the consensus among policymakers and researchers has been that after the first few years on the job, teacher performance, at least as measured by student test-score growth, cannot be improved. In contrast, we demonstrate that, at least in this setting, experienced teachers provided with unusually detailed information on their performance improved substantially.
The “subjective evaluation” that they are referring to is Cincinnati Public’s Teacher Evaluation System (TES) “in which teachers’ performance in and out of the classroom is assessed through classroom observations and a review of work products.” They continue,
During the yearlong TES process, teachers are typically observed in the classroom and scored four times: three times by an assigned peer evaluator—a high-performing, experienced teacher who previously taught in a different school in the district—and once by the principal or another school administrator. Teachers are informed of the week during which the first observation will occur, with all other observations unannounced. Owing mostly to cost, tenured teachers are typically evaluated only once every five years.
The evaluation measures dozens of specific skills and practices covering classroom management, instruction, content knowledge, and planning, among other topics. Evaluators use a scoring rubric based on Charlotte Danielson’s Enhancing Professional Practice: A Framework for Teaching, which describes performance of each skill and practice at four levels: “Distinguished,” “Proficient,” “Basic,” and “Unsatisfactory.”
Taylor and Tyler found that the performance of teachers’ students improved significantly during the year of their evaluations and improved even more the year after. The good news for teachers is that terminating poor performers does not seem to be the only option. As Taylor and Tyler note, “To date, the discussion has focused primarily on evaluation systems as sorting mechanisms, a means to identify the lowest-performing teachers for selective termination.” Their research calls the appropriateness of this focus into question. Teachers—even ones with years of experience—can, in fact, improve.
What I find promising about this method of teacher evaluation is that it focuses on more than just student performance (though ultimately its success still seems to be measured in those terms). Teachers are evaluated on an individual and personal level. This, to me, is far more dignifying and far more effective. If their performance is truly lacking, teachers can discover specifically where it falls short and take concrete steps toward improvement.
Furthermore, I cautioned in my previous post that MI’s new bill may lead to grade inflation rather than improved education quality, since it unintentionally incentivizes teachers to pad their students’ grades to avoid the possibility of being penalized. I see this as a problem because good teachers sometimes need to give poor grades to poor students and should not have to worry about their jobs in the process. However, if local districts take the initiative and implement a program like Cincinnati Public’s TES, they will be able not only to get a better sense of who the problem teachers really are but also to put an effective plan in place for improving those teachers’ performance—proving, in keeping with the principle of subsidiarity, that they have no need of assistance from the state.
There is only one catch, but unfortunately it is a big one. Taylor and Tyler ask,
But are these benefits worth the costs? The direct expenditures for the TES program are substantial, which is not surprising given its atypically intensive approach. From 2004–05 to 2009–10, the Cincinnati district budget directly allocated between $1.8 and $2.1 million per year to the TES program, or about $7,500 per teacher evaluated. More than 90 percent of this cost is associated with evaluator salaries.
A second, potentially larger “cost” of the program is the departure from the classroom of the experienced and presumably highly effective teachers selected to be peer evaluators. The students who would otherwise have been taught by the peer evaluators will likely be taught by less-effective, less-experienced teachers; in those classrooms, the students’ achievement gains will be smaller on average. (The peer evaluator may in practice be replaced by an equally effective or more effective teacher, but that teacher must herself be replaced in the classroom she left.)
Unfortunately, this price, both financial and otherwise, may be too high for many MI schools. This would be a great opportunity for an entrepreneurial solution. If someone could develop a cost-effective way to evaluate teachers as accurately as Cincinnati Public’s TES does, it would fill a great need in our public schools and, by improving education quality on the whole, do a great service to the common good.