We want good teachers. We don’t want bad teachers. But how do we reach that goal? That’s the topic that Thursday’s editorial addresses:
Browsing the S.C. Department of Education’s proposed “Educator Evaluation and Support Guidelines,” the model intended to overhaul the state’s system for measuring teacher success, two conclusions present themselves.
First, it quickly becomes obvious that there are a ludicrous number of acronyms involved in professional education. Anybody willing to read the state agency’s report would be forgiven for quickly becoming lost amid the discussion of SIG-Enhanced ADEPT models, which differ from the SIG-PADEPP and which are only one component of the SAFE-T evaluation system, itself perhaps being replaced in part by TOPS, although that will likely depend on input from the SCDE and EESC. Readers almost need a master’s degree in education simply to decode the suggestions.
Second, and more to the point, it’s a good thing that this is a system still in the works and we’re a ways away from any statewide implementation.
Evaluating teachers based on how well their students learn has always been a great idea. There’s no question that good teachers should be rewarded for their superior performance and bad teachers should be weeded out. Our children deserve no less. The problem has always been – and continues to be – how to transform that grand goal into specific measures that can be put to use in the real world.
This latest attempt is better than some in that it uses a more dynamic measurement. Instead of relying on an arbitrary test score that we’d like students to reach, it measures the difference in scores from the beginning of the year to the end. This is an important change, capturing teachers’ ability to improve learning over time rather than their ability to get students – whose abilities will vary widely from class to class and school to school – to reach a preset score. Other components include professional standards and a measure of the school’s progress as a whole.
But the model continues to suffer from the same issue that has plagued all such ideas thus far: It’s incredibly difficult, if not impossible, to account for all of the variables that determine whether a student learns. Granted, it does do a better job than previous versions. Jay Ragley, Education Department spokesman, pointed out last week that the system compares “a student to their peers across the state with similar backgrounds.”
In other words, the performance of students of the same race will be compared with others of the same race across the state. Students who receive free lunch will be compared with others in the same economic situation. And so on, for any characteristic that the department has sufficient statistics on. In this way, leaders hope, they will get a better snapshot of how teachers perform compared with others teaching similar students.
We cringe, however, at the possibilities this idea has for creating different classes of students. If students are compared mainly with peers of similar backgrounds, rather than the achievement of all students, it’s not hard to imagine attitudes creeping in that say “he’s doing pretty good – for a black student” or “wow, she’s smart – for a kid from the boondocks.” Our children should aspire to be the best students they can be, leading not only their peer groups but leading the entire world into the future.
It’s also simply not possible to measure all of the variables that determine students’ performance. You can have the best teacher in the world, but real-world events are and will always be out of that teacher’s control. A student might go home at night to an abusive parent. Another may have trouble concentrating because he never gets enough to eat. Another could have parents in the midst of a nasty divorce, filling the home with yelling and negativity. Still another might be grieving the death of a parent or sibling.
Local teachers confront these situations and many more each day, doing their best to teach growing classes in the face of incredible odds. And while statistical models are becoming better and better, they still cannot capture every situation or every obstacle that students and teachers face.
The good news is that the state is still years away from rolling out a final teacher evaluation system. The current evaluation guidelines are now being “beta tested” in 22 schools across the state. After that will be a pilot project in selected school districts. Based on those results, the State Board of Education will have to sign off on the plan, followed by the General Assembly. Changes and tweaks can be made at any point along the way.
“Things are going to change and things are subject to change,” said Ragley. He said Horry County teachers likely have at least another two to three years before they’re subject to any new guidelines, which could end up differing substantially from those now being tested.
It will be interesting to see the results later this year of the early tests, as well as the reaction of teachers who took part. Perhaps these guidelines really are the best method available for measuring teacher performance. Perhaps teachers will embrace the model as a better way to evaluate success in the classroom.
But don’t count on it. More likely, teachers will continue to cry foul, and in truth it’s hard to blame them.