Evaluating Value-Added Teacher Evaluations

New York City recently joined the Los Angeles Unified School District in making value-added teacher ratings open to the public, despite significant evidence that value-added scores are riddled with errors and inconsistencies. The scores were subsequently published in major newspapers in each city, mislabeling teachers publicly and underscoring that determining the "value" of a teacher must involve more than "value-added" scores.

"Evaluating Teacher Evaluations" is a recent brief published in Phi Delta Kappan by Linda Darling-Hammond, Audrey Amrien-Beardsley, Edward Haertel and Jesse Rothstein. The piece is a great tool for understanding value-added rating models and how they fail to account for the vast number of factors that influence a student's test scores from one year to the next. Since value-added models can't control for factors like class size, home and community challenges, summer learning loss (which disproportionately affects low-income students), then there is no way they can provide an accurate picture of how effective a teacher is in raising student test scores. 

The brief also provides examples of school districts that have successfully employed more well-rounded teacher evaluations, like the Peer Assistance and Review program used in Montgomery County Public Schools, which pairs novice or struggling teachers with mentor teachers for assistance and peer review.

You can download the full brief on teacher evaluations here.

Also worth checking out is Matthew DiCarlo's statistical breakdown of NYC's teacher ratings, which reveals huge error margins surrounding the scores; you can read it on the Albert Shanker Institute blog here.