The term “evidence-based” appears throughout the new Every Student Succeeds Act (ESSA), 61 times in all. Seven different competitive funding programs will require education providers to adopt evidence-based strategies, programs, and interventions to improve schools, teacher quality, and student achievement. The new law reflects a continued commitment among policymakers to the idea that research evidence has an important role to play in supporting educational improvement efforts. To succeed, local leaders will need support to make the vision of evidence-based policy and practice a reality.
The promise of evidence-based strategies in ESSA
ESSA puts new demands on local decision makers to identify evidence-based strategies, implement them, and in some cases design them. Local education providers are expected to have expertise in evidence-based strategies when states determine evidence is “reasonably available.” In some sections of the new law, providers are called on to follow “best practices” in both implementing and designing evidence-based programs.
The law defines four categories of evidence that can help local decision makers understand what counts as an evidence-based program. These tiers are based on the strength of the evidence available for a program:
The top tier (“strong evidence”) comprises strategies and interventions for which there is evidence of a positive and statistically significant effect on student outcomes from at least “one well-designed and well-implemented experimental study,” that is, one that uses random assignment to estimate the causal impact of the program.
The second tier (“moderate evidence”) comprises strategies and interventions for which there is evidence of statistically significant, positive outcomes from at least one well-implemented quasi-experimental study.
The third tier (“promising evidence”) comprises strategies and interventions for which there is correlational evidence of positive effects, once potential sources of selection bias are accounted for statistically.
The fourth tier comprises programs that have a “research-based rationale,” that is, a body of evidence from research and evaluation suggesting that the strategy or intervention is likely to improve student outcomes. For this tier, the law also expects “ongoing efforts to examine the effects” of the strategy or intervention.
These categories are likely to be helpful to local decision makers, and the requirements to develop and use evidence-based programs are likely to increase local decision makers’ “acquisition effort,” their initiative in looking for research evidence related to the educational problems they are trying to solve. Some studies show that acquisition effort is a significant correlate of research use.
Anticipating barriers to local decision makers’ use of research
By themselves, though, the law’s new provisions are unlikely to overcome some key barriers to research use.
For one, access to relevant research is likely to remain an issue. Many educational leaders say that research relevant to their local problems is not accessible. In our own study of research use, we are finding that many leaders do not have access to libraries where they can search for relevant research. This makes expanding the breadth and number of intervention reports and practice guides available through the What Works Clearinghouse all the more important under the new law.
A second issue is that research nearly always lags behind innovations in policy and practice. As a result, the likely impact of new programs is unknown, even when the rationale for them seems plausible and grounded in research. The law’s provisions for new Grants for Education Innovation and Research will be key to developing evidence for new policies and programs.
Third, we know that sustained interaction with colleagues and with researchers about the significance of findings is an important condition for research use. Such interactions help local leaders understand whether a statistically significant finding is meaningful for their school or district and what the limitations of a particular study might be. The law does not include provisions or inducements for these kinds of interactions to occur, however.
The law is also silent on how local education providers can develop the know-how to implement evidence-based programs. The average effect size of a program estimated from a well-designed study hides a lot of variability in outcomes across sites. Variability may be due to differences in implementation, or programs may need to be adapted to better meet the needs of local students. Ongoing research—a characteristic of evidence-based programs identified in the law—is needed to build local leaders’ capacity to implement evidence-based programs. Implementation research that seeks both to explain and to address variability in outcomes can help.
In sum, the new Every Student Succeeds Act doubles down on the importance of evidence for improving policy and practice while placing much greater responsibility on local decision makers for knowing about, using, and even developing evidence. To succeed, local leaders will need substantial support to make evidence-based policy and practice the reality policymakers hope it will become.