If you are a teacher in California, the survey mentioned in this HuffingtonPost article is not welcome news. No, not because we are bad teachers, but because value-added performance assessments, in the school districts where they are used, are not accurate measures.
While teachers and most education reformers remain highly uncomfortable with having test-based teacher evaluations aired in public, one survey suggests parents take a different view.
In a new SurveyUSA news poll that asked 500 California adults about “releasing performance ratings of California public school teachers,” 65 percent responded that they supported the idea. Sixty-six percent indicated they believed releasing teacher data would improve their performance, while 32 percent said they thought it would discourage teachers from working in California.
“The results are shocking,” said Cynthia Brown, who oversees education policy at the Center for American Progress. “I was surprised at how overwhelming the support was for releasing the ratings and the notion that it would improve teacher performance.”
But Paul Bruno, a teacher in Oakland, Calif., said the results were expected. “If you ask people the same questions surrounding their own professions, the results would be different,” he said, adding that seeing his own ratings appear in a newspaper “would add a lot of anxiety” to his job.
California is no stranger to the controversy surrounding the issue: In 2010, a report by the Los Angeles Times published the names of more than 6,000 teachers tied to their value-added ratings. But researchers at the University of Colorado found that more than a third of Los Angeles Unified teachers would have had different scores if a slightly different formula had been used to calculate those ratings.
A similar situation occurred in New York. In February, a lengthy battle waged by the local teachers’ union culminated in the release of more than 12,000 individual New York City teacher ratings, amid a slew of controversy. In question: the ratings’ use of value-added analysis, which estimates a teacher’s effectiveness at improving student performance on standardized tests. A statistical model forecasts each student’s score from past test scores; the forecast figure is compared to the student’s actual score, and the difference is considered the “value added,” or subtracted, by the teacher.
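To make the mechanics concrete, here is a minimal sketch of that forecast-versus-actual calculation. The data and the model (a simple regression on prior-year scores) are invented for illustration; real value-added models used by districts are far more elaborate.

```python
# Toy value-added calculation (illustrative only; not any district's
# actual model). A regression predicts each student's score from the
# prior year's score; a teacher's "value added" is the average gap
# between actual and predicted scores across their students.

# Hypothetical students: (prior_year_score, actual_score, teacher)
students = [
    (620, 650, "A"), (580, 600, "A"), (700, 690, "A"),
    (610, 605, "B"), (590, 570, "B"), (660, 655, "B"),
]

# Fit a least-squares line: predicted = a + b * prior_score
n = len(students)
xs = [s[0] for s in students]
ys = [s[1] for s in students]
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# Average residual (actual minus forecast) per teacher
value_added = {}
for prior, actual, teacher in students:
    predicted = a + b * prior
    value_added.setdefault(teacher, []).append(actual - predicted)
for teacher, gaps in sorted(value_added.items()):
    print(teacher, round(sum(gaps) / len(gaps), 1))
```

With these made-up numbers, teacher A's students beat their forecasts on average and teacher B's fall short, so A gets a positive rating and B a negative one of equal size.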
In both New York and LA, some saw the release as a step forward in using student data and improving transparency and accountability by giving parents access to information on teacher effectiveness. The court ruling that granted public access to the ratings states, “the reports concern information of a type that is of compelling interest to the public, namely, the proficiency of public employees in the performance of their job duties.”
But to others, the move was misguided, and signaled an over-reliance on incomplete or inaccurate data that publicly shames or praises educators, whether deserving or not. Value-added models generally don’t control for demographic factors like poverty, race, or English-learner or special education status, which some say are crucial to evaluating teachers. Some believe that ratings will undermine overall education reform by negatively affecting teacher morale and teacher recruitment, as well as by reinforcing the false notion that testing is everything. New York’s ratings, which were developed as a means for internal assessment, were also based on small amounts of data and have large margins of error.
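A toy example can show why the missing demographic controls matter. The numbers below are invented: suppose score growth averages 10 points for low-poverty students and 4 for high-poverty students for reasons unrelated to teaching. A naive value-added measure then penalizes whoever teaches the high-poverty class, while a demographically adjusted one does not.

```python
# Invented numbers illustrating bias from an omitted demographic control.
# Each student is (score_growth, high_poverty); both teachers are assumed
# equally effective relative to their own students' circumstances.
teacher_a = [(10, False), (11, False), (9, False)]  # low-poverty class
teacher_b = [(4, True), (5, True), (3, True)]       # high-poverty class

def naive_va(students, overall_mean):
    # Value added = average growth minus the district-wide average,
    # with no demographic adjustment.
    return sum(g for g, _ in students) / len(students) - overall_mean

def adjusted_va(students, group_means):
    # Compare each student to the average for their demographic group.
    gaps = [g - group_means[hp] for g, hp in students]
    return sum(gaps) / len(gaps)

all_students = teacher_a + teacher_b
overall = sum(g for g, _ in all_students) / len(all_students)  # 7.0
groups = {False: 10.0, True: 4.0}  # assumed group averages

print(naive_va(teacher_a, overall), naive_va(teacher_b, overall))      # 3.0 -3.0
print(adjusted_va(teacher_a, groups), adjusted_va(teacher_b, groups))  # 0.0 0.0
```

Under the naive measure, teacher B looks three points below average purely because of class composition; once growth is compared within demographic groups, both teachers rate identically.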
Even the most aggressive education reformers have been less than enthusiastic about publicly airing value-added data.
Wendy Kopp, the founder of Teach for America, an organization well-known for relying heavily on data to grade teachers, was hesitant in a recent Wall Street Journal op-ed. “We should make individual teacher ratings available to school principals to inform their work recruiting and developing teaching faculties, but releasing them publicly undermines the trust they need to build strong, collaborative teams,” she wrote.
Bill Gates, whose philanthropic foundation has pumped millions of dollars into developing data-based teacher evaluations, called publishing value-added data in newspapers “a capricious exercise in public shaming” in a recent New York Times op-ed.
But perhaps the survey results suggest that the nature of this controversy hasn’t penetrated public opinion. “The lust for statistical-seeming information is really strong,” said Alexander Russo, a long-time congressional education staffer who now blogs on education. “These numbers, given out publicly or not, will not in and of themselves do anything.”