
Implementing RTI

Scaling Up Response to Intervention: The Influence of Policy and Research and the Role of Program Evaluation

By Jose M. Castillo & George M. Batsche

District implementation of the response to intervention (RTI) model has occurred at a surprising rate. Data from the Response to Intervention Adoption Survey (Spectrum K12 School Solutions, 2011) indicate that 94% of schools reported implementing some level of RTI in 2011 (up from 72% in 2009), 24% reported “full implementation” (up from 12% in 2009), and 44% reported that they were in the process of district-wide implementation (up from 28% in 2009). Sixty-six percent of schools reported using RTI as part of the process for determining eligibility for special education (up from 41% in 2010). In a study of nine selected states, Harr-Robins, Shambaugh, and Parrish (2009) reported that all nine states described RTI as a framework to guide the school improvement process for all students. Furthermore, the report indicated that, at the state level, general education had taken charge of RTI or held joint responsibility with the special education division in seven of the nine states studied. These state-level data are consistent with recent district-level data indicating that RTI is led by general education or by a unified effort of general and special education in 81% of districts nationwide (Spectrum K12 School Solutions, 2011). Thus, RTI appears to have been adopted rapidly and to be implemented with all students.

Rapid implementation of RTI is occurring alongside continued controversy over the model and its intended uses. Federal special education statute (Individuals with Disabilities Education Improvement Act [IDEIA], 2004) and regulations (IDEIA Regulations, 2006) include language allowing school districts to examine student response to scientifically based interventions when determining eligibility for specific learning disability (SLD) programs. Much of the debate among scholars focuses on the validity of using RTI in the process of eligibility determination rather than on disagreement with the literature that supports the use of RTI components to improve student outcomes (e.g., Batsche, Kavale, & Kovaleski, 2006). Some critics of RTI have focused on the challenge of implementing the model on a large scale with fidelity (e.g., Batsche et al., 2006; Berkeley, Bender, Peaster, & Saunders, 2009; Gerber, 2005); however, the criticisms offered are often discussed in the context of using RTI for eligibility decisions rather than for improving student outcomes. In the authors' view, the controversy over RTI emanating from discussions of the model's utility for making decisions about eligibility for SLD programs often detracts from the idea that RTI is a method to improve instruction, beginning with core or universal instruction. All students stand to benefit from the implementation of RTI when the primary focus is increasing the effectiveness of instruction school-wide.

Despite the continued controversy, the momentum of RTI adoption continues to grow (Spectrum K12 School Solutions, 2011). While it could be argued that this rate of adoption is influenced primarily by current statutory and regulatory mandates (e.g., IDEIA, 2004; IDEIA Regulations, 2006) as well as by proposals to include RTI components in federal general education legislation (e.g., the Literacy Education for All, Results for the Nation [LEARN] Act, H.R. 4037, 2009; and A Blueprint for Reform: The Reauthorization of the Elementary and Secondary Education Act, U.S. Department of Education, 2010), a logical question to ask is why these dramatic changes involving RTI are occurring in federal policy. In reality, RTI is simply a term that represents the four steps of the problem-solving process (Bergan & Kratochwill, 1990), of which the final step, evaluation, is really RTI. A broad base of research supports the use of the four problem-solving steps (problem identification, problem analysis, instruction/intervention, and evaluation) to significantly improve the impact of academic and behavior instruction and intervention on student outcomes. It was, in fact, this research base, articulated in testimony at regional meetings held by the President's Commission on Excellence in Special Education (2002), that led to the inclusion of RTI in subsequent legislation.

The literature on problem-solving models ranges broadly in terms of the research questions asked, methodology employed, models examined, and conclusions drawn. Several studies have focused on the effectiveness of each step of the problem-solving process and its overall impact on student performance. Research has supported the effectiveness of data-based problem identification (VanDerHeyden, Witt, & Naquin, 2003), the importance of the problem analysis step (Lentz, Allen, & Ehrhardt, 1996; Taylor & Miller, 1997), the characteristics of effective interventions and methods to increase intervention fidelity (Witt, VanDerHeyden, & Gilbertson, 2004; Noell & Gansle, 2006), and the importance of using data to evaluate instruction/intervention response (Fuchs & Fuchs, 1986). In addition, research on the implementation of the RTI process indicates that student outcomes are related to both the use and the integrity of the process (Noell, 2008; Burns, Appleton, & Stehouwer, 2005; Telzrow, McNamara, & Hollinger, 2000).

Although significant research support exists for the components of RTI, questions remain about the loss of fidelity and the inconsistency in implementation that occur when these components are scaled up (i.e., when implementation of RTI is expanded with fidelity across classrooms, grade levels, schools, districts, and states). Some research has addressed this issue (e.g., Burns, Appleton, & Stehouwer, 2005), but little research on the relationship between the process of implementation and outcomes has been published.

The program evaluation model for the Florida Statewide Problem-Solving/RTI (PS/RTI) initiative was developed to examine the factors that influence fidelity during the scale-up process. The model was designed to evaluate the relationships among teacher variables (e.g., beliefs), response to professional development (e.g., skill development), implementation levels of PS/RTI, and student outcomes. The term PS/RTI is used throughout the remainder of this article because of the emphasis Florida's model places on a data-based problem-solving process to (a) identify target skills, (b) analyze potential reasons for skill and/or performance deficits, (c) develop instruction/intervention plans, and (d) evaluate progress toward proficiency on the target skills (see Castillo, Hines, Batsche, & Curtis, 2011, for a more detailed description of the four-step problem-solving process adopted by the Florida PS/RTI Project described below). Reviewing data on these relationships should make the process of scaling up both more effective and more efficient.

Statewide PS/RTI Implementation: A Model for Formative Program Evaluation of Scale-Up Efforts

The complexity of school systems requires that any innovation, regardless of how much evidence exists supporting its use, be implemented by following systems change principles (Curtis, Castillo, & Cohen, 2008; Hall & Hord, 2010). Because engaging in systemic change to facilitate adoption of an innovation does not guarantee that implementation will occur with fidelity, program evaluation models that take into account the complexity of transforming practices in the schools and provide formative data that can be used to inform scale-up efforts are needed. Florida's PS/RTI Project, a collaborative effort between the Florida Department of Education and the University of South Florida, has developed a comprehensive program evaluation model to help facilitate efforts in the State of Florida to implement PS/RTI practices.

The Project adopted an input–process–output (I–P–O) evaluation model (e.g., Bushnell, 1990) as the framework for examining PS/RTI scale-up efforts. I–P–O models allow educators conducting program evaluation to assess interacting components of the educational system that relate to the outcomes of an initiative. In the context of evaluating PS/RTI implementation, inputs are the resources and characteristics (e.g., student population) of schools, districts, and the state. Processes are the actions that educators take to facilitate implementation (e.g., professional development provided, implementation of specific problem-solving steps). Finally, outcomes are the targets that implementation of PS/RTI practices is intended to impact (e.g., student achievement). Additional variables also are evaluated (e.g., school goals, contextual factors such as school climate, and external factors that exert pressure on schools such as state mandates and funding) to provide a complete analysis of the environment in which educators attempt to implement PS/RTI practices.
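To make the framework concrete, the sketch below organizes one school's evaluation variables into separate slots for inputs, processes, outcomes, and contextual factors. It is a minimal illustration only; the field names and values are hypothetical and are not Project code or data.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class SchoolEvaluationRecord:
    """Hypothetical container for the I-P-O variables described above (illustrative only)."""
    school_id: str
    inputs: Dict[str, Any] = field(default_factory=dict)     # resources and characteristics (e.g., student population)
    processes: Dict[str, Any] = field(default_factory=dict)  # actions taken to facilitate implementation
    outcomes: Dict[str, Any] = field(default_factory=dict)   # targets the initiative is intended to impact
    context: Dict[str, Any] = field(default_factory=dict)    # school goals, climate, external pressures


# Example record using the kinds of variables named in the article (values are made up)
record = SchoolEvaluationRecord(
    school_id="demo_site_01",
    inputs={"enrollment": 850, "title_i": True},
    processes={"pd_days_delivered": 6, "problem_solving_steps_implemented": 4},
    outcomes={"pct_proficient_reading": 62},
    context={"state_mandate": "IDEIA 2004", "climate_survey_mean": 3.4},
)
print(record.processes["pd_days_delivered"])
```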

Data are collected in demonstration sites that are representative (e.g., geographically, demographically) of Florida school districts to evaluate key stakeholder buy-in and skills, implementation of the model, and student and systemic outcomes, among other variables identified through the I–P–O framework. The data collected in these domains are driven by evaluation questions developed from the research on PS/RTI implementation and systems change. Instrumentation and procedures for data collection have been developed to answer the evaluation questions. The data collected are used continuously to make adjustments to implementation and to evaluate the impact of the changes. A thorough explanation of the Project's program evaluation model is beyond the scope of this article; however, Table 1 contains examples of evaluation questions asked, research supporting development of the questions, and data collection methods used. Additional information on the program evaluation model developed by the Project, assessment tools used, and preliminary findings can be found at http://floridarti.usf.edu/resources/program_evaluation/index.html.

Table 1. Examples of Evaluation Questions and Data Collection Methods Derived From Research on PS/RTI Implementation and Systems Change
Evaluation Question | Supporting Research | Data Collection Methods
To what extent do educators possess beliefs consistent with PS/RTI practices? | Educators' beliefs impact the practices they are willing to adopt (Fang, 1996) | Beliefs Survey administered to educators 1–2 times per year
To what extent does ongoing professional development result in educators developing the skills to implement PS/RTI practices? | The majority of teachers implement new practices when an effective professional development model is used (Joyce & Showers, 2002) | Perceptions of RTI Skills Survey administered to educators 1–2 times per year; Direct Skill Assessments administered to school teams following Project-delivered trainings
To what extent are the critical components of a PS/RTI model being implemented with fidelity? | Research on the steps of problem-solving models (e.g., Bergan & Kratochwill, 1990) and assessment of treatment integrity (Noell & Gansle, 2006) | Permanent product reviews from, and direct observations of, data meetings evaluating core, supplemental, and intensive instructional strategies
What is the relationship between implementing the critical components of a PS/RTI model and student outcomes? | Studies demonstrating improved student outcomes following implementation of PS/RTI practices (e.g., Burns et al., 2005) | Implementation fidelity assessment methods referenced above and student outcome assessment data (e.g., reading scores on Florida's statewide reading assessment)

Examples of Formative Program Evaluation Data Used to Guide Practice

To illustrate how formative program evaluation can inform ongoing implementation of PS/RTI, the authors provide three examples below. The first example relates to validating that professional development is providing educators with the knowledge and skills necessary to implement PS/RTI practices. For implementation of PS/RTI to be successful, educators must possess the skills required to implement with fidelity. Measurement of the extent to which educators learned the data-based decision-making skills taught during trainings is essential to addressing implementation issues. One way to engage in this activity is by administering skill assessments at the end of each training session and/or module provided to school-based teams. Skill assessments can be developed that correspond with the scope and sequence of training content (i.e., the skills assessed at any given training session measure the skills taught that day). Educators' mastery of the steps of problem solving applied across the three tiers of the RTI model can be assessed and used to guide future professional development. Skills with which educators demonstrate difficulty can then be retaught, while those that are mastered can be reinforced. For example, data collected to examine skill development during the first year of training delivered to demonstration sites indicated that educators, on average, showed mastery of the four steps of problem solving applied to Tier 1 issues (the primary focus of training content during the first year). Educators receiving training from Project staff earned 85%, 93%, 76%, and 88% of the points possible on assessments of their problem identification, problem analysis, intervention development and implementation, and program evaluation skills, respectively.
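As a minimal illustration of the percent-of-points-possible summaries reported above, such figures can be computed directly from raw skill assessment results. The scores and step names below are hypothetical and are not the Project's assessment data.

```python
# Hypothetical skill-assessment results: (points earned, points possible) per educator, by problem-solving step
skill_scores = {
    "problem_identification": [(8, 10), (9, 10), (17, 20)],
    "problem_analysis": [(14, 15), (13, 15), (15, 15)],
    "intervention_development_and_implementation": [(6, 10), (8, 10), (9, 10)],
    "program_evaluation": [(9, 10), (8, 10), (18, 20)],
}

# Percent of points possible earned, aggregated across educators for each step
for step, results in skill_scores.items():
    earned = sum(e for e, _ in results)
    possible = sum(p for _, p in results)
    print(f"{step}: {100 * earned / possible:.0f}% of points possible")
```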

Demonstrating mastery of data-based decision-making skills on a series of case studies does not guarantee implementation of PS/RTI in schools. Therefore, anyone evaluating the impact of PS/RTI must examine the extent to which the model is implemented with fidelity. Noell and Gansle (2006) discuss three methods for evaluating intervention fidelity: self-reports, permanent product reviews, and direct observations. Self-report methods typically involve asking the educators responsible for implementation to respond to surveys, checklists, or other tools designed to assess fidelity. Although educator self-report is typically the most efficient means of gathering data, it also tends to result in inflated estimates of fidelity. Conversely, direct observations of an intervention's implementation tend to yield the most reliable data but are resource intensive (e.g., personnel allocation and time). Finally, permanent product reviews, which typically involve examining documents, charts, worksheets, or other products resulting from implementation of the intervention for evidence of its critical steps, provide a balance between data reliability and the resources needed to collect the data. Although Noell and Gansle focus primarily on evaluating the fidelity of interventions implemented in the classroom setting, the principles and practices they discuss are applicable to evaluating implementation of multitiered services and problem solving. What follows is an example of data derived from a permanent product review protocol examining problem-solving implementation focused on Tier 1 and 2 instruction (see Figure 1).

The data represented in Figure 1 were derived from the Tier 1 and 2 Critical Components Checklist. The checklist contains items that assess the extent to which each of the four steps of problem solving is evident in products from meetings focused on Tier 1 and/or Tier 2 student progress (e.g., meetings examining school-wide data, grade-level data, and/or small-group intervention data). A standard rubric is used to rate evidence of a component's presence on a 3-point scale (0 = Absent; 1 = Partially Present; 2 = Present). District-based personnel who received training, technical assistance, and support from Project staff completed the checklist in pilot and comparison schools three times a year to correspond with common universal screening windows across the state. See Castillo et al. (2010) for additional information on the checklist, including intended uses, recommended administration procedures, and technical adequacy.

Figure 1. Implementation levels: Tier 1 and 2 critical components (pilot versus comparison schools).

Figure 1 includes data on the extent of change in implementation levels demonstrated in pilot versus comparison schools during the first 2 years of training and support provided by the Project. Each item measured by the checklist is reflected on the x-axis. The change in average rating provided by district-based personnel is reflected on the y-axis. Although direct causal statements cannot be made, visual analysis of the data suggests that pilot schools that received systematic training and support demonstrated greater increases in implementation fidelity of the problem-solving steps across the 2-year period than the comparison schools.
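A minimal sketch of how such ratings might be summarized follows; the ratings, item names, and window labels are hypothetical (see Castillo et al., 2010, for the actual checklist and procedures). Each item is rated 0–2, ratings are averaged within a group of schools at each administration window, and the change in average rating across windows is the quantity reflected on the y-axis of Figure 1.

```python
from statistics import mean

# Hypothetical checklist ratings: group -> administration window -> item -> ratings
# (0 = Absent, 1 = Partially Present, 2 = Present)
ratings = {
    "pilot": {
        "year_1_fall": {"problem_identification": [1, 0, 1], "problem_analysis": [0, 1, 0]},
        "year_2_spring": {"problem_identification": [2, 2, 1], "problem_analysis": [1, 2, 1]},
    },
    "comparison": {
        "year_1_fall": {"problem_identification": [1, 1, 0], "problem_analysis": [0, 0, 1]},
        "year_2_spring": {"problem_identification": [1, 1, 1], "problem_analysis": [1, 0, 1]},
    },
}

# Change in average item rating from the first to the last window, by group
for group, windows in ratings.items():
    for item in windows["year_1_fall"]:
        change = mean(windows["year_2_spring"][item]) - mean(windows["year_1_fall"][item])
        print(f"{group} / {item}: change in average rating = {change:+.2f}")
```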

Research on the implementation of innovations suggests that schools vary in the extent to which they adopt practices with fidelity (e.g., Sarason, 1990). Consistent with this assertion, Project staff noted variable implementation fidelity levels across the methods used to evaluate implementation of PS/RTI. Thus, relating the extent of implementation of the PS/RTI model in schools to the outcomes achieved is necessary when evaluating outcomes associated with the model's adoption. Although additional data and analyses are needed before more definitive conclusions can be reached, preliminary data demonstrating changes in implementation (see Figure 1) and outcomes from pre- to post-PS/RTI implementation appear promising. See Table 2 for data demonstrating outcomes associated with pilot and comparison schools.

Table 2. Changes in the Percent of Students Scoring in the Proficient Range on Florida’s Statewide Reading Assessment From 2006–2007 to 2008–2009: Pilot Versus Comparison Schools
Direction of Change | Pilot Schools (%) | Comparison Schools (%)
Increased | 65 | 48
Decreased | 22 | 41
No Change | 13 | 11
Note. The values provided represent the percent of schools that demonstrated increases, decreases, or no changes in students scoring at the proficient level.
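The percentages in Table 2 reflect a simple direction-of-change classification of each school's proficiency rate from the pre- to post-implementation year. The sketch below illustrates the tally with hypothetical school-level values, not the actual assessment results.

```python
# Hypothetical percent-proficient values for each school: (2006-2007, 2008-2009)
schools = {
    "school_a": (55.0, 61.0),
    "school_b": (48.0, 44.0),
    "school_c": (62.0, 62.0),
    "school_d": (70.0, 73.0),
}

# Tally the direction of change for each school, then convert counts to percentages of schools
counts = {"Increased": 0, "Decreased": 0, "No Change": 0}
for pre, post in schools.values():
    if post > pre:
        counts["Increased"] += 1
    elif post < pre:
        counts["Decreased"] += 1
    else:
        counts["No Change"] += 1

for direction, n in counts.items():
    print(f"{direction}: {100 * n / len(schools):.0f}% of schools")
```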

School Psychologists and Program Evaluation of PS/RTI

Research exists on the outcomes of large-scale implementation of school reform efforts. However, little information is available to inform the implementation of an effective and efficient PS/RTI model scaled to district and state levels. The decision to develop the formative program evaluation model described above was based on the prediction that ongoing data collection would inform necessary changes in the implementation process and increase the probability that implementation would be successful. The length of time necessary to achieve consensus in a district, and the factors that influence that consensus, inform the timeline for implementation. The amount of drift that occurs between direct training and implementation at the building level informs the amount of technical assistance and coaching necessary to ensure implementation with fidelity. The relationship between the level of effective core instruction and students' demonstrated need for supplemental and intensive instruction informs personnel deployment decisions, master schedules, and instructional planning. The list goes on, but these are the types of issues frequently encountered when implementing PS/RTI at a district level.

School psychologists can play a critical role in helping schools and districts support PS/RTI implementation. Research suggests that teachers often lack the knowledge and skills to evaluate progress toward established goals (McLeod, 2005). The use of facilitators or data coaches has been proposed as one mechanism for supporting educators as they evaluate practices (e.g., Center for Collaborative Education, 2004; Godber, 2008). Furthermore, Curtis et al. (2008), Godber (2008), and others suggest that school psychologists are a potential resource for providing support to schools and districts implementing and evaluating new practices. However, data suggest that, rather than fulfilling this role, many school psychologists continue to spend the majority of their time engaged in traditional individual student, special education-related activities (Castillo, Curtis, Chappel, & Cunningham, 2011).

The rapid implementation of PS/RTI occurring across the nation provides school psychologists with an opportunity to redefine their roles. It has been suggested that the field of school psychology would benefit from actively engaging in data-based decision making to address issues important to educational stakeholders and from further developing and promoting the links between program evaluation and the role of school psychologists (e.g., Godber, 2008). Systematic, ongoing program evaluation of PS/RTI implementation is needed to inform efforts to effectively scale up adoption of the model. School psychologists have the potential to support the process of implementing PS/RTI by engaging in program evaluation and disseminating findings to educational leaders, policy makers, and other key stakeholders facilitating the change initiative. The question that remains is whether school psychologists will embrace this opportunity to expand the services they deliver to support the instructional goals of educators, or whether they will continue in their traditional roles.

References

Batsche, G., Kavale, K. A., & Kovaleski, J. F. (2006). Competing views: A dialogue on response to intervention. Assessment for Effective Intervention, 32(1), 6–19.

Berkeley, S., Bender, W. N., Peaster, L., & Saunders, L. (2009). Implementation of response to intervention: A snapshot of progress. Journal of Learning Disabilities, 42(1), 85–95.

Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York, NY: Plenum.

Burns, M., Appleton, J. J., & Stehouwer, J. D. (2005). Meta-analytic review of responsiveness-to-intervention research: Examining field-based and research-implemented models. Journal of Psychoeducational Assessment, 23, 381–394.

Bushnell, D. S. (1990). Input, process, output: A model for evaluating training. Training and Development Journal, 44(3), 41–43.

Center for Collaborative Education. (2004). The challenge of coaching: Providing cohesion among multiple reform agendas. Boston, MA: Author.

Castillo, J. M., Batsche, G. M., Curtis, M. J., Stockslager, K., March, A., & Minch, D. (2010). Problem solving/response to intervention evaluation tool technical assistance manual. Tampa, FL: University of South Florida, Florida Problem Solving/Response to Intervention Project.

Castillo, J. M., Curtis, M. J., Chappel, A., & Cunningham, J. (2011). School psychology 2010: Results of the national membership study. Paper presented at the annual National Association of School Psychologists convention, San Francisco, CA.

Castillo, J. M., Hines, C. M., Batsche, G. M., & Curtis, M. J. (2011). The Florida Problem Solving/ Response to Intervention Project: Year 3 evaluation report. Tampa, FL: University of South Florida, Florida Problem Solving/Response to Intervention Project.

Curtis, M. J., Castillo, J. M., & Cohen, R. C. (2008). Best practices in system-level change. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 887– 902). Bethesda, MD: National Association of School Psychologists.

Fang, Z. (1996). A review of research on teacher beliefs and practices. Educational Research, 38(1), 47–65.

Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199–208.

Gerber, M. M. (2005). Teachers are still the test: Limitations of response to instruction strategies for identifying children with learning disabilities. Journal of Learning Disabilities, 38(6), 516–524.

Godber, Y. (2008). Best practices in program evaluation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 2193–2206). Bethesda, MD: National Association of School Psychologists.

Hall, G. E., & Hord, S. M. (2010). Implementing change: Patterns, principles and potholes (3rd ed.). Boston, MA: Allyn & Bacon.

Harr-Robins, J. J., Shambaugh, L. S., & Parrish, T. (2009). The status of state-level response to intervention policies and procedures in the West Region states and five other states (Issues & Answers Report, REL 2009–No. 077). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory West. Retrieved from http://ies.ed.gov/ncee/edlabs

Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108-446, 20 U.S.C. § 1400 et seq. (2004).

Individuals With Disabilities Education Act Regulations, 34 C.F.R. Part 300 (2006).

Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Lentz, F. E., Jr., Allen, S. J., & Ehrhardt, K. E. (1996). Conceptual elements of strong interventions in school settings. School Psychology Quarterly, 11, 118–136.

Literacy Education for All, Results for the Nation (LEARN) Act of 2009, H.R. 4037, 111th Congress, 1st Session (2009).

McLeod, S. (2005). Data-driven teachers. Retrieved August 4, 2010, from http://www.microsoft.com/education/ThoughtLeadersDDDM.mspx

Noell, G. H. (2008). Research examining the relationships among consultation process, treatment integrity, and outcomes. In W. P. Erchul & S. M. Sheridan (Eds.), Handbook of research in school consultation: Empirical foundations for the field (pp. 323–342). Mahwah, NJ: Erlbaum.

Noell, G. H., & Gansle, K. A. (2006). Assuring the form has substance: Treatment plan implementation as the foundation of assessing response to intervention. Assessment for Effective Intervention, 32, 32–39.

President's Commission on Excellence in Special Education. (2002). A new era: Revitalizing special education for children and their families (U.S. Department of Education Contract No. ED-02-PO-0791). Washington, DC: U.S. Department of Education.

Sarason, S. B. (1990). The predictable failure of school reform. San Francisco, CA: Jossey-Bass.

Spectrum K12 School Solutions. (2011). Response to intervention adoption survey 2011. Retrieved from http://www.spectrumk12.com//uploads/file/RTI%20Report%202011%20FINAL.pdf

Taylor, J., & Miller, M. (1997). Treatment integrity and functional assessment. School Psychology Quarterly, 12, 4–22.

Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443–461.

United States Department of Education, Office of Planning, Evaluation and Policy Development. (2010). A blueprint for reform: The reauthorization of the Elementary and Secondary Education Act. Washington, DC: Author.

VanDerHeyden, A. M., Witt, J. C., & Naquin, G. (2003). Development and validation of a process for screening referrals to special education. School Psychology Review, 32, 204–227.

Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363–383.


Jose M. Castillo, PhD, is an assistant professor in the school psychology program at the University of South Florida. George M. Batsche, EdD, is a professor and coordinator of the school psychology programs (EdS/PhD) at the University of South Florida. He is the codirector of the Institute for School Reform and the Florida Statewide Problem-Solving/RTI Project.