If All The Teachers of Honors Courses Are “Effective,” What’s Up With These A.P. Test Scores?

Recently, the results of two new studies prompted me to delve deeper into the complex question of how our teachers are evaluated in New York. Together, the studies show that despite states’ efforts to make evaluations tougher, principals continue to rate nearly all teachers as “effective,” yet when principals are asked their opinions of teachers in confidence, with no stakes attached, they are much more likely to give harsh ratings.

This concerns me because, ever since Governor Andrew Cuomo adopted a moratorium on test-based teacher evaluations through the 2019-20 school year, teacher ratings have been based primarily on principals’ evaluations.

The New York teachers union, an arm of the American Federation of Teachers, strongly opposed Cuomo’s initial proposal to increase the weight of standardized test scores to 50 percent of a teacher’s evaluation, stating that assuming a direct correlation among test scores, the effort of teachers, and success of children ignores all of the other factors that go into learning. Fifty percent may place too much weight on test scores that are aligned with course content, but isn’t zero percent — the result of Cuomo’s moratorium — too little?

Our educators still get annual “growth” scores from Albany based on results of state tests given during the moratorium, but the scores will not be used to decide which teachers and principals will be assigned improvement plans or fired. This means that roughly 60 percent of teachers’ evaluations are based on observations and 40 percent on local tests, depending on what is negotiated with local unions. Student learning objectives — locally negotiated plans that outline how much students should learn over an academic year and how to measure that growth — also factor into the equation.

The landmark 2009 report The Widget Effect found that less than one percent of teachers were being rated as unsatisfactory. Since then, many states have worked to put more rigorous evaluation systems in place, including incorporating student test scores. But according to these recent studies, little has changed, which I personally find very hard to believe.

On Long Island, a 2015 Newsday analysis offers an explanation for this conundrum. The analysis found that the portion of teacher evaluations that local districts solely control is heavily weighted in most Long Island school districts toward ensuring that teachers score high enough to get an overall “effective” rating. This assessment, as well as my personal experience from my daughter’s 10th grade year, has raised many questions for me about how accurately our teachers are being evaluated. For example, my daughter and a number of her friends took an Advanced Placement World History course last year, and none of them — all of whom are National Honor Society students — received the end-of-year test score needed for college credit. I find this highly disturbing, and it raises red flags about the effectiveness of her teachers. With these types of testing results — a direct reflection of instructional effectiveness — removed from the evaluation equation during my daughter’s high school years, will she receive the quality of education necessary for her college years?

The way our teachers are evaluated is a subject mired in controversy and one that deserves the utmost attention by our lawmakers and bargaining units.

So what is the winning formula? What should parents be pushing for now?

Although the answers are still evolving, I believe they lie in striking a balance between testing accountability, student performance, and evaluations. A process that supports continuous improvement for teachers, principals, and our students is vital to shaping our future generations and their contribution to our nation’s workforce.


What do you think?