Fullan (1983) posited that implementation is “critical for both the planning and the evaluation of new models and programs” (p. 224). An explicit understanding of the implementation process helps curriculum developers “be more precise about the operational components of their programmes”, gather “clearer, more useful information about what is happening in practice, and consequently about how to improve it”, and “draw a direct causal link between the model, its quality of implementation, and its outcomes” (pp. 224-225). Only by examining implementation can curriculum developers confirm or disconfirm that a reform has been successful. With this knowledge of the implementation process, developers can pinpoint the specific aspects of the implementation that need to be improved or addressed. This information is especially pertinent when academic outcomes are not achieved even though a reform has taken place: the implementation evidence can then be drawn upon to explain whether the reform was inadequate to meet the needs of teachers and learners, or whether it was simply not implemented properly (Fullan, 1983). Such specific implementation data can help curriculum developers communicate the value and progress of the reform to schools and the larger educational community, and can also help explain differences in outcomes across schools that are enacting the same reform programme.
As Fullan (1983) argued decades ago, the importance of ascertaining implementation must not be underestimated when evaluating programmes: improved or poor academic outcomes cannot be attributed to a reform if the reform was not implemented properly. It is the implementation data, rather than the academic outcomes alone, that can help curriculum developers determine whether a reform has been successful. Durlak and Dupre (2008) identify eight aspects of implementation that are attended to in implementation assessment; of the eight, Dusenbury, Brannigan, Falco, and Hansen (2003) have singled out fidelity as the aspect that informs researchers of the reason(s) behind an innovation’s success or failure. Fidelity is defined as the extent to which an innovation corresponds with the programme as intended by its developers (Durlak & Dupre, 2008). While research has found that high levels of fidelity lead to positive programme outcomes (Berkel, Mauricio, Schoenfelder, & Sandler, 2011), it has also been found that practitioners tend to adapt an innovative programme to suit their local needs (Elliott & Mihalic, 2004). Because fidelity is so significant in the study of implementation, it will be the workshop’s primary focus, specifically in three areas: 1) teachers’ concerns and how such concerns may affect the fidelity of their implementation; 2) teachers’ enactment of practices; and 3) teachers’ use of curriculum materials. The workshop will introduce participants to tools for ascertaining each of these three areas.