A Texas court has ruled against the use of value-added modeling in teacher evaluation, saying the system used violates due process.
Texas now joins New York on the list of states where courts have found that value-added systems used for teacher evaluation simply don’t stand up to scrutiny.
Here’s more on the Texas case:
Judge Stephen Smith of the Texas Southern District Court agreed with HISD teacher and union plaintiffs that the district’s use of the system to make employment decisions violates the plaintiffs’ rights to due process. The decision is likely to affect the outcome of similar suits filed in other states, including New Mexico.
Houston’s EVAAS is one of several existing evaluation systems known as VAMs, or Value-Added Models. As statistical models go, VAM is a sound system for its original purpose: agriculture. So, if you are interested in increasing plant growth or seek to optimize levels of dairy production, get yourself a VAM. As a measure of teacher effectiveness in the classroom, however, VAMs have been troubled from the start.
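To see what these models are actually doing under the hood, here is a minimal sketch of the core value-added idea using made-up numbers. Real systems like EVAAS use far more elaborate mixed-effects models with many covariates; this toy version only illustrates the basic logic: a teacher’s “value added” is the average gap between students’ actual test scores and the scores predicted from their prior performance. All names and data below are hypothetical.

```python
import numpy as np

# Hypothetical prior-year and current-year test scores for six students.
prior   = np.array([60.0, 70.0, 80.0, 65.0, 75.0, 85.0])
current = np.array([65.0, 72.0, 84.0, 62.0, 80.0, 88.0])
teacher = np.array(["A", "A", "A", "B", "B", "B"])  # who taught each student

# Fit a simple linear prediction: current ~ slope * prior + intercept.
slope, intercept = np.polyfit(prior, current, 1)
predicted = slope * prior + intercept
residual = current - predicted  # actual growth minus expected growth

# A teacher's value-added estimate is the mean residual of their students.
vam = {t: residual[teacher == t].mean() for t in ("A", "B")}
print(vam)  # Teacher A lands above expectation, Teacher B below
```

Even this toy version hints at the fragility critics point to: the estimates depend entirely on how well prior scores predict current scores, so changing the test (or the model specification) changes every teacher’s rating.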
Here’s some pretty good analysis as to why VAM simply doesn’t work for teacher evaluation:
A second major assumption is that high-stakes tests are one of the best measurable inputs to plug into the model to evaluate teacher effectiveness. Aside from the many flaws they have on their own, these high-stakes measures were never designed with the purpose of teacher evaluation in mind. Nonetheless, some individuals are willing to assume that high-stakes test scores are unquestionably accurate, that they have inherent value, and that they can be indiscriminately applied to a myriad of purposes. Moreover, mesmerized by the complexity of the VAM formulas into which these test scores are included, many assume the VAM outcomes must represent a precise and indisputable measurement. None of these presumptions is true.
While states like New York, Texas, Oklahoma, and Hawaii are ditching VAM in teacher evaluation, Tennessee is doubling down. Yes, Tennessee is moving to a new test to assess student performance. Yes, that test has had a rather bumpy launch. Yes, moving to a new test creates even more difficulty in VAM models:
If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best. In the first year, such a comparison is invalid and unreliable. As more years of data become available, it may be possible to make some correlation between past TCAP results and TNReady scores.
In spite of all of this, not a single lawmaker voted against the Tennessee Department of Education’s plan to use this year’s student test results as a portion of teacher evaluation via TVAAS — Tennessee’s version of the EVAAS system that a Texas court ruled violated due process.
This should come as little surprise, since Tennessee is the birthplace of value-added modeling in education. Of course, that doesn’t change the fact that VAM actually provides little meaningful differentiation among teachers.
Perhaps lawmakers in Tennessee and other states still using VAM for teacher evaluation will soon realize that the summary of this article on the Texas case is exactly right:
The court confirmed what experts have been saying for years, that for education, VAM is a sham. Thus, any teacher or school ratings or employment decisions made based on VAM outcomes are invalid and unreliable.
For more on education politics and policy, follow @TheAndySpears