Testing experts have been warning school reformers for years that teacher evaluations based on value-added modeling (VAM) are neither reliable nor valid.
This from yet another story about problems with value-added modeling, or VAM. The story details the case of a New York teacher who is suing the state because evaluation results there, based on VAM, have labeled her ineffective. This despite:
The lawsuit shows that Lederman’s students traditionally perform much higher on math and English Language Arts standardized tests than average fourth-grade classes in the state. In 2012-13, 68.75 percent of her students met or exceeded state standards in both English and math. She was labeled “effective” that year. In 2013-14, her students’ test results were very similar but she was rated “ineffective.”
This is likely due to the “predicted score” the VAM model assigned to Lederman. Models like this take a teacher’s roster of students and use their past test performance to predict a score for the current academic year. If the students meet that predicted score, the teacher is labeled effective. If they fall short, the teacher is labeled ineffective, because relative to the prediction the students showed “negative growth,” even in cases like Lederman’s, where a solid majority of the teacher’s students scored at or above state standards. In other words, the students may have grown during the year, even meeting or exceeding state standards (getting a full year or more of observable learning under their belts), and the teacher still receives a negative score because the students didn’t grow as much as a formula in a statistical black box says they should have.
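To make the mechanism concrete, here is a deliberately simplified sketch of how a predicted-score rating can flag a high-performing teacher as “ineffective.” This is a toy illustration under my own assumptions (a flat expected-growth number, a simple average gap), not New York’s actual model, which is a far more complex and opaque statistical formula:

```python
# Toy sketch of a predicted-score VAM rating.
# Assumption: the model predicts each student's score as
# last year's score plus a fixed expected-growth increment.
# This is illustrative only, not any state's real formula.

def vam_rating(prior_scores, current_scores, expected_growth=5.0):
    """Label a teacher by comparing actual scores to predicted scores."""
    predicted = [s + expected_growth for s in prior_scores]
    # Average gap between what students actually scored and
    # what the model said they "should" have scored.
    gap = sum(c - p for c, p in zip(current_scores, predicted)) / len(prior_scores)
    return "effective" if gap >= 0 else "ineffective"

# Students well above a proficiency cutoff of, say, 65...
prior = [80, 85, 78, 90]
current = [82, 87, 80, 92]  # ...who each grew by 2 points.

# The model expected 5 points of growth, so despite every student
# scoring far above the standard, the teacher is rated ineffective.
print(vam_rating(prior, current))  # -> ineffective
```

The point of the sketch: the rating never asks whether students met the standard, only whether they beat the model’s prediction, which is exactly how a teacher with consistently proficient students can come out labeled ineffective.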
You come in every day. You do your job. You do it well. You do it so well, in fact, that your students are consistently at or above state standards. And you are labeled “ineffective.” That’s the “promise” of value-added modeling.
More on value-added modeling:
For more on education politics and policy, follow @TheAndySpears