ESSA refers to educator pay in many places, both directly and indirectly. The law mentions performance-based pay but does not require any specific type of it. The provisions that touch on pay are covered by savings clauses that protect existing rights established through collective bargaining and state law.
Depending on local circumstances, affiliates may wish to use the law to implement a new pay system, such as a professional growth salary schedule or additional pay for educators who staff hard-to-staff schools. A local may also use the law’s protections to maintain salary systems it has previously developed.
The resources below can assist affiliates in negotiating or advocating for compensation systems based on professional growth and advancement, and in countering arguments that pay based on student test scores is effective.
Problems with Pay for Performance
Research is clear about the problems with tying educator pay to student test scores and cautions against using value-added models to determine that pay.
In April 2014, the American Statistical Association (ASA), in its Statement on Using Value-Added Models (VAM) for Educational Assessment, issued a stern word of caution for school districts considering the use of VAM scores in high-stakes employment decisions: while “a VAM score may provide teachers and administrators with information on their students’ performance and identify areas where improvement is needed, it does not provide information on how to improve the teaching.”
Much of this is because the scores fail to account for factors outside the educators’ control, such as class size and assignment to teach high-needs students. Other factors unrelated to educators also contribute to variation in test scores, including student and family background, poverty, and other outside influences. Using VAM scores for high-stakes decisions on tenure, salary, and termination can lead to “unintended consequences that reduce quality.”
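To illustrate why these omitted factors matter, consider the general form a value-added model often takes (a simplified sketch for illustration only, not the specification any particular district uses):

$$y_{it} = \lambda\, y_{i,t-1} + X_{it}\beta + \theta_{j(i,t)} + \varepsilon_{it}$$

Here $y_{it}$ is student $i$’s test score in year $t$, $y_{i,t-1}$ is the prior-year score, $X_{it}$ holds whatever student characteristics the model happens to measure, $\theta_{j(i,t)}$ is the estimated “effect” of the student’s teacher, and $\varepsilon_{it}$ is everything else. Any influence on scores left out of $X_{it}$, such as class size, poverty, or family circumstances, gets absorbed into the estimated teacher effect or the error term, which is why a “teacher effect” can reflect conditions the educator does not control.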
The ASA suggests that the scores instead be used to analyze the overall system, such as how policies and educator development can be improved. It also states that, because of the complexity of VAMs, “high-level statistical expertise is needed to develop the models and interpret their results.”
The American Educational Research Association (AERA) issued a similar word of caution in its June 2015 AERA Statement on Use of Value-Added Models (VAM) for the Evaluation of Educators and Educator Preparation Programs. The statement warns school districts that have decided to use VAM scores to evaluate their staff and preparation programs of the “considerable risks of misclassification and misinterpretation in the use of VAM.”
AERA notes that student assessments and staff evaluations must meet a number of requirements for accuracy, reliability, and validity. It stresses that to “properly aggregate student assessment results for any purpose, especially those related to drawing inferences about teacher, school leader, or educator program effectiveness,” a high bar must be set for statistical methods and testing.
There is strong consensus among researchers that poor data, or the misuse of data, will harm students and educators. AERA encourages examination of the substantial body of research on other effective methods and models, such as educator observation and peer assistance and review, which “provide formative and summative assessments of teaching and honor teachers’ due process rights.”
For more information, contact the NEA Collective Bargaining & Member Advocacy Department at [email protected].