Local View: Tread carefully in assessing teachers’ value.
I didn’t expect to find this article in an online publication from Lincoln, Nebraska, but that is exactly where I found it. It seems that even in the heart of deep-red America, the first shoots of a growing movement are emerging, pushing back against value-added measurement (VAM) of teachers.
The picture below is obviously not from Lincoln, Nebraska; I added it for effect. But you get the sense that people are beginning to awaken to the fact that VAM is not good for teachers or for students.
The state Board of Education and the Legislature’s Education Committee are considering different models for teacher evaluation. Several well-researched and potentially productive models of evaluation exist, but Nebraska should avoid the model used in several other states: “value-added” modeling based on large-scale test scores.
Several states use value-added analysis as the major component of teacher evaluation. This technique attempts to measure how much “value” a teacher adds to each student’s achievement each year, as measured by statewide test scores.
Not only is this a depressingly narrow sense of the “value” of a teacher; the statistical technique of value-added analysis is also widely criticized by educational researchers and educators. Evidence from current value-added efforts in other states points toward the conclusion that this statistical technique is “still new, unsophisticated and often unreliable” (Washington Post, The Answer Sheet, Feb. 24).
One of the major problems with the value-added model is its dependence on results from statewide student achievement tests: these tests were never designed to be used to evaluate teachers. The measurement error associated with most value-added programs in other states is frighteningly high. Value-added evaluations also tend to report lower “scores” for teachers who work with the highest- and lowest-achieving students. These students’ scores tend to change less over a year, resulting in inaccurately low “scores” for their teachers.
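The measurement-error problem described above is easy to demonstrate with a toy simulation (all numbers here are invented for illustration, not drawn from any real evaluation system): give a group of identical teachers, each adding exactly the same true “value,” a class of students whose test scores carry ordinary measurement noise, and the resulting value-added estimates spread widely anyway.

```python
import random
import statistics

random.seed(0)

def estimated_value_added(true_effect, n_students=25, noise_sd=15.0):
    """Toy value-added estimate for one teacher: the mean pre-to-post
    gain on a noisy test, where every student's true gain is exactly
    the teacher's effect. Class size and noise level are hypothetical."""
    gains = []
    for _ in range(n_students):
        ability = random.gauss(500, 50)              # student's true level
        pre = ability + random.gauss(0, noise_sd)    # noisy fall test
        post = ability + true_effect + random.gauss(0, noise_sd)  # noisy spring test
        gains.append(post - pre)
    return statistics.mean(gains)

# 200 teachers, every one with the identical true effect of +5 points.
estimates = [estimated_value_added(5.0) for _ in range(200)]
print("mean estimate:", round(statistics.mean(estimates), 1))
print("spread (std dev):", round(statistics.stdev(estimates), 1))
```

Even though the teachers are identical by construction, the spread of their estimated “scores” is comparable in size to the true effect itself, so rankings built on a single year of such scores are largely noise.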
Another damaging aspect of value-added programs is related to their use by the media. Newspapers in New York and Los Angeles published value-added teacher ratings along with teacher names. Teachers in New York have been persecuted by the press seeking responses to scores that don’t measure what they claim to, and one teacher in Los Angeles committed suicide after months of media attention.
Nebraska has some of the best schools in the country (as evidenced by national test data, student success after high school, parent satisfaction and involvement with our schools). Parents in Nebraska know that the best way to learn about the quality of their child’s education is to visit schools and talk with teachers. These personal experiences tell us much more about the teaching and learning in our schools than single scores on statewide tests can ever hope to.
If the state needs to design a new system to evaluate teachers, we should avoid these contentious, error-filled, punitive, value-added evaluation systems. Nebraska teachers and students deserve better.