The Problem with VAM Scores

John Spencer Assessment, Education Policy


Before the district kill-and-drill benchmark test, a student says to me, "You look stressed."

"I'm fine," I lie, offering a grin that looks more like a grimace. 

"I normally blow off the test, but my mom says you could lose your job if our scores are bad, so I'll do my best."

"How do you know about that?" I ask.

"My mom works in the cafeteria at Heatherbrae and she told me our scores decide if you're a good teacher."

"I'm not worried," I lie. But the truth is I am. I'm terrified. I have tried my hardest not to teach to the test and I know that my students are not prepared. It's a gamble I've won before, but somehow this feels different. 

Why VAM Scores Fail

In theory, value-added measures make sense. We need to know which teachers are good and which teachers are lousy. So, what better way to look at teacher effectiveness than seeing the value that they add to student learning? Instead of relying on the subjective observations of principals, VAM promises an objective criterion for accountability.

Unfortunately, it doesn't work. For one, it assumes that learning and achievement are the same thing. They're not. A multiple-choice test is one of the worst ways to assess student learning. In addition, VAM scores are based on a growth model that doesn't account for changes in class make-up in a transient population. I have had thirteen students leave and nine students join since the first quarter. I don't teach the same class that I taught in the first quarter.

But I'm less concerned with the flaws of VAM as an evaluation tool than I am with the bad policies it creates. See, our district has to follow state policies enacted as a result of federal teacher evaluation requirements under Race to the Top. As a result, the district needs time to look at student growth and calculate the VAM scores before deciding if a teacher will receive a contract.

So, now we have a pre-test the first week of school (so much for team building) and a post-test at the close of the third quarter. Here are some of the results:

  • A compacted curriculum, where we rush to cover and review standards through the entire fourth quarter without ever questioning depth of mastery. Here's to two days to learn linear equations! 
  • An increase in test preparation over project-based learning. 
  • Impatient teachers. I mention that, because I snapped at kids who weren't learning at breakneck speed when, at other times, I would have sat with them in small groups and helped them master the content through scaffolding and critical thinking.
  • Low staff and student morale. We had the highest number of teacher absences and the highest number of referrals in the four weeks leading up to the post-test. 
  • A decrease in administrators' leadership in conducting evaluations and setting the tone for curriculum decisions. 

Under No Child Left Behind, we used test scores to judge schools. That didn't work, so now we have Race to the Top, where we use test scores to judge schools and teachers. Explain to me again how this is a step forward.

 

John Spencer

Phoenix, Arizona

In my sophomore year of college, I began tutoring a fifth-grader in a Title One, inner city Phoenix school. What began as a weekly endeavor of teaching fractions and editing essays grew into an awareness of the power of education to transform lives. My involvement in a non-profit propelled a passion for learning as an act of empowerment.
