VAM: Correct Diagnosis, Wrong Prescription

In his recent article in Education Post, Arthur Levine laments the limits of current Value-Added Modeling (VAM) while proposing a move to a “new” VAM or VAM 2.0. To his credit, he correctly points out some significant limitations of VAM in determining teacher effectiveness, but the solutions he proposes move in the wrong direction.

Here are the four limitations Levine identifies:

1. Few states test students, particularly those in middle or high school, in the academic subjects we need data for. Despite the growing focus on STEM instruction in the United States, student achievement data is still not available in many science-related subjects, so it is not possible to assess the performance of teachers in many of the courses we expect high school students to take.

2. In small classes—generally fewer than seven students, depending on the VAM method—obtaining meaningful scores has been impossible. This is a serious problem in rural and some urban schools, where enrollments in many classes are low.

3. To produce statistically meaningful results, VAM requires relatively large numbers of teachers teaching the same tested subjects, grades, schools, and districts. Unfortunately, that isn't the reality many teachers, particularly those in high-need schools, experience.

4. The contribution of educators who replace the original teacher of record—which happens often, particularly in high-need schools—is difficult to determine.

In many of these schools, the current teacher of record is the second or third teacher in that classroom in a given academic year. Moreover, educators do not work in isolation: they benefit from collaborating with colleagues and they contribute to the broader school environment, factors the current VAM model does not account for. This is one more obstacle to obtaining meaningful data on teacher effectiveness.

Levine proposes: "In order to determine teacher effectiveness in the years ahead, we need to supplement VAM scores with other measures of student growth, further develop state data systems on student achievement, and create more advanced and sensitive 2.0 versions of VAM assessment."

That means more assessment — more tests — in more subjects. And a continued reliance on an unreliable VAM system.

A focus on student learning would mean a shift toward portfolios and project-based assessments. These may be more time-consuming, but they are also more indicative of student learning and can be used to reflect teacher performance. And however enamored of VAM Levine may be, value-added modeling provides only limited information: even with more assessments, it struggles to meaningfully differentiate among teachers.

Levine is correct that current VAM models are seriously limited, but his solutions move in the direction of continuing and even increasing reliance on VAM as an indicator of teacher effectiveness. That’s neither student-friendly nor statistically sound. Rather, we should focus on indicators of learning that move beyond the test and require tangible demonstrations of knowledge acquired and concepts mastered.

For more on education politics and policy, follow @TheAndySpears


 
