NASP Communiqué, Vol. 34, #3
November 2005

Exploring RTI

Conceptual Confusion Within Response-to-Intervention Vernacular: Clarifying Meaningful Differences

By Theodore J. Christ, Matthew K. Burns, & James E. Ysseldyke

The Individuals with Disabilities Education Improvement Act of 2004 states that a local education agency “may use a process that determines if the child responds to scientific, research-based intervention as a part of the evaluation procedures” (Pub. L. No. 108-446 § 614 [b][6][A]; § 614 [b][2 & 3]). This process is commonly referred to as Response to Intervention (RTI), and it represents a substantive departure from the diagnostic and resource allocation practices that have prevailed in the 30 years since the discrepancy model was first operationalized in federal regulations. The traditional discrepancy model approach to learning disability (LD) diagnosis has drawn widespread criticism (Aaron, 1997; Fletcher et al., 1998) for many reasons, including inconsistent implementation (Haight, Patriarca, & Burns, 2001; Scruggs & Mastropieri, 2002), failure to differentiate low achievement from LD (Fletcher et al., 1998), and a lack of treatment validity (Aaron, 1997). As a result, the Office of Special Education Programs convened an LD Summit in 2001 to examine LD diagnostic procedures. RTI was presented there as an alternative to the discrepancy model (Gresham, 2001) and was later endorsed by the President’s Commission on Excellence in Special Education (2002) and by several professional organizations (Fuchs, Mock, Morgan, & Young, 2003).

Although the explicit inclusion of RTI in federal special education law is recent, RTI has existed in the field for many years (Fuchs et al., 2003). Existing RTI models implemented on a large scale demonstrated strong and positive effects (Burns, Appleton, & Stehouwer, in press), but important inconsistencies among them have been noted (Burns & Ysseldyke, in press). A review of the literature and many professional discussions reveal meaningful subtleties in the RTI vernacular that have the potential to confuse professionals and negatively affect implementation. The purpose of this paper is to propose clarifications to the language related to four separate issues: a) RTI-problem solving vs. RTI-standard protocol; b) response vs. resistance to intervention; c) response vs. responsiveness to intervention; and d) response to instruction vs. intervention.

RTI – Problem-Solving vs. RTI – Standard Protocol

Problem-solving (PS) is a general term that describes any set of activities designed to “eliminate the difference between ‘what is’ and ‘what should be’ with respect to student development” (Deno, 2002, p. 38). In contrast, RTI refers to any set of activities designed to evaluate the effect of instruction, or intervention, on student achievement; it is an approach to evaluating a student’s response within an ecological context of instruction and/or intervention. We propose that RTI and PS represent distinct processes that may converge but are not synonymous. PS is a systematic process designed to change student outcomes. RTI is a systematic process to determine whether change has occurred and under what conditions.

Fuchs et al. (2003) described two groups of RTI advocates: early interventionists who advocate for standardized and validated treatment protocols (Standard Protocol; RTI-SP) and behaviorally oriented school psychologists who see RTI as synonymous with problem solving (RTI-PS). While the conceptual distinction between the two approaches is sound, the language is confusing because the contrast implies that RTI-SP is somehow distinct from, or inconsistent with, problem solving. On the contrary, both RTI-SP and RTI-PS can fit within a problem-solving framework. The fundamental difference between them is the level of individualization and the depth of problem analysis that occur prior to the selection, design, and implementation of an intervention. Because the difference between the two RTI approaches is not related to their potential applications within a problem-solving framework, it is confusing to label only one of them as PS.

In an RTI-SP application, a standard set of empirically supported instructional approaches is implemented to prevent and remediate academic problems. Such approaches might include partnered reading activities, direct instruction of phonological or phonics skills, or reinforcement of skills through computer programs (Case, Speece, & Molloy, 2003). A key feature of RTI-SP is that standard instruction/intervention protocols are used with minimal analysis of the deficit skill (e.g., Peer-Assisted Learning Strategies; Fuchs, Fuchs, Mathes, & Simmons, 1997; McMaster, Fuchs, Fuchs, & Compton, 2005). In contrast, RTI-PS is a more flexible process that emphasizes individualized interventions derived from the analysis of instructional/environmental conditions and skill deficits (Tilly, Reschly, & Grimes, 1999). RTI-PS is guided by a systematic analysis of instructional variables designed to isolate target skill/sub-skill deficits and shape targeted interventions (Barnett, Daly, Jones, & Lentz, 2004). Procedural examples of problem analysis include the functional assessment of academic skills (Daly et al., 1996; Daly et al., 1999; Daly et al., 1997) and Curriculum-Based Evaluation (Heartland AEA 11, 2000; Howell & Nolet, 2000; Upah & Tilly, 2002).

Neither RTI-SP nor RTI-PS is more consistent than the other with the spirit of problem solving as described by Deno (2002); both RTI applications are designed to reach the same goal, the reduction or elimination of an academic problem. The difference is the analysis of instructional/environmental conditions associated with RTI-PS (Barnett et al., 2004). The behavioral consultation literature refers to this process, identifying the environmental conditions directly related to the referral problem in order to design and implement interventions, as problem analysis (Kratochwill & Bergan, 1990; Tilly, 2002). Thus, instead of perpetuating a false dichotomy between the two RTI applications, RTI-PS should be replaced with RTI-problem analysis (RTI-PA). The distinction between the RTI procedures persists, but the alternate language is less likely to spur confusion.

While RTI-PA and RTI-SP are distinct, these procedures can be combined as part of a larger problem-solving system, or what some researchers have termed a progressive intervention approach (O'Shaughnessy et al., 2003). A progressive intervention approach typically includes primary (Phase 1), secondary (Phase 2), and tertiary (Phase 3) levels of instruction and/or intervention. Each successive level is associated with more intensive treatment and the allocation of additional resources to solve the problem. That is, RTI-SP is more likely to be used during the initial intervention phases to prevent and/or remediate less severe problems before they develop into disabling conditions. RTI-PA applications can then be reserved for the more persistent and atypical problems, which would typically be those not resolved by standard interventions. This conception is consistent with the problem solving and resource allocation models presented by Tilly (2002).
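To make the escalation logic concrete, the following minimal Python sketch shows how a tiered model might route students: adequate response keeps a student at the current level, while inadequate response escalates from standard-protocol intervention to individualized problem analysis. The growth metric and the benchmark threshold are hypothetical illustrations, not values prescribed by any RTI model.

from dataclasses import dataclass

@dataclass
class StudentRecord:
    name: str
    weekly_growth: float  # e.g., words read correctly gained per week (hypothetical metric)
    tier: int = 1         # 1 = core instruction; 2 = standard protocol (RTI-SP); 3 = problem analysis (RTI-PA)

def next_tier(student: StudentRecord, benchmark_growth: float = 1.5) -> int:
    """Escalate only when response to the current level of service is inadequate."""
    if student.weekly_growth >= benchmark_growth:
        return student.tier  # adequate response: remain at the current level
    if student.tier == 1:
        return 2             # apply a standard, empirically supported protocol first
    return 3                 # reserve individualized problem analysis for persistent problems

In this sketch, a student gaining 0.8 words per week under core instruction would move to a standard protocol; only if growth remained below benchmark there would the more resource-intensive problem analysis begin.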

Response vs. Resistance

The “R” in RTI may represent either response/responsiveness or resistance/non-responsiveness to intervention (although there may be slight distinctions, resistance and non-responsiveness are used interchangeably here). The distinction between the response and resistance paradigms is important because the purpose, procedures, and conclusions of each process are distinct. The resistance model is diagnostically focused, whereas the response model is intervention focused.

Gresham (2001) stated that “a resistance-to-intervention approach to eligibility determination identifies students as having a learning disability if their academic performance in relevant areas does not change in response to a validated intervention implemented with integrity” (p. 4). Within a resistance-to-intervention model, a student is diagnosed and/or deemed eligible for services based on his/her lack of response to specific interventions. As part of the diagnostic process, empirically supported interventions are selected and implemented to determine whether an individual is resistant to “effective” interventions. When resistance is the focus, the emphasis is on determining whether there is some within-child deficit, deficiency, disorder, or disability that impedes achievement and warrants a diagnosis of disability (Mann, 1979). Services are conferred based on an evaluation process designed to determine the conditions from which a child does not benefit. The process is most likely to terminate with diagnosis when the evaluation team determines that some specified set of procedures does not work.

Individuals who respond are deemed ineligible for special education-related resources, and individuals who fail to respond are deemed resistant and considered in need of special education resources. The paradox of the resistance model is that it can fail both responders and non-responders. School resources are often too limited to provide ongoing services to students who respond to an intervention and are therefore deemed ineligible for special education-related resources: once effective instructional conditions are identified for responders, they are often placed back into general education without sufficient support for ongoing effective instruction. Moreover, students who do not respond are deemed eligible based on what does not work instead of what does work. In effect, the assessment fails to inform special education service delivery.

Gresham (2001) described a response-to-intervention model that “comes from the applied behavior analysis (ABA) camp, which offers a functional rather than a structural explanation for children’s academic difficulties – that is, understanding academic failure attempts to relate academic performance to environmental events” (p. 7). Although Gresham did not label the distinction between the two models as response versus resistance/non-responsiveness to intervention, he offered definitions that help clarify it. Within a response-to-intervention model, a student is diagnosed and/or deemed eligible to access special education-related resources based on his/her response to intervention, which is used to determine his/her instructional needs. As part of the response-to-intervention process, empirically supported interventions are selected and implemented to determine what set of instructional conditions most benefits the student. When response is the focus, the emphasis is on determining what set of conditions the student needs to benefit from instruction. The process is designed to first identify the set of conditions that benefit the child and then determine whether services should be provided using general education or special education resources. Thus, the response model is more a resource allocation method than a diagnostic tool, because diagnosis is secondary to the primary determination of what benefits the student.

Unlike resistance models, which place diagnosis before instructional decisions, response models place instructional decisions before diagnosis. This does not imply that response models are inconsistent with diagnostically oriented eligibility decisions. On the contrary, response models are premised on the recognition that the identification of effective treatments precedes and supersedes categorical labels. The process of assessment and evaluation does not terminate until the conditions for response are established. Subsequently, the magnitude of resources necessary to meet the individual’s needs is used to guide diagnostic decisions and inform ongoing treatment.
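The contrast in decision order can be sketched as two small routines, offered purely as an illustration of the paradigms described above; the student model, the representation of interventions, and the resource threshold are all hypothetical.

from typing import Callable, List

# A student is modeled as a dict of performance data; an intervention is a
# function returning True when the student responds to it (both hypothetical).
Student = dict
Intervention = Callable[[Student], bool]

def resistance_model(student: Student, validated: List[Intervention]) -> str:
    """Diagnosis-first logic: the process terminates with a label once
    validated interventions implemented with integrity fail to produce change."""
    for intervention in validated:
        if intervention(student):
            return "ineligible"  # responder: typically returned to general education
    return "eligible (LD)"       # resistant: eligibility rests on what did not work

def response_model(student: Student, candidates: List[Intervention]) -> str:
    """Instruction-first logic: assessment continues until effective conditions
    are found; the resources those conditions demand then guide diagnosis."""
    for intensity, intervention in enumerate(candidates, start=1):
        if intervention(student):
            # Hypothetical threshold: conditions beyond this intensity exceed
            # what general education resources can sustain.
            return "special education resources" if intensity > 2 else "general education resources"
    return "continue problem analysis"  # no terminal label; the search for response continues

Note that resistance_model returns a diagnostic label, whereas response_model returns a resource allocation; that difference in output, not the interventions themselves, is the shift the two paradigms represent.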

Response vs. Responsiveness

The “R” in RTI may also represent either response or responsiveness to intervention. A review of the professional literature suggests that when “responsiveness” was used in place of “response,” the authors were referring to a within-child phenomenon, much like the distinction between response and resistance. For example, in a recent article, Fuchs, Fuchs, and Compton (2004) proposed that “students are identified as LD when their response to generally effective instruction (i.e., instruction to which most students respond) is dramatically inferior to that of their peers…If a child is nonresponsive to instruction that benefits a majority of students…it suggests that disability is responsible and that specialized intervention is necessary” (p. 217).

This distinction between response and responsiveness might seem subtle, but its implications are significant. Research literature that used the term “response,” as opposed to “responsiveness,” referred to a process of inductive hypothesis testing applied to low-inference phenomena (Barnett et al., 2004). Thus, a response-to-intervention approach emphasizes ecological manipulations that promote achievement without unwarranted attention to within-child causation. This is more consistent with the principles of behavior analysis and the experimental discipline of school psychology described by Reschly and Ysseldyke (2002). In contrast, a responsiveness-to-intervention approach emphasizes discovering whether there is a within-child cause, which is more consistent with the hypothetical deductions and high-level inferences associated with a less experimental, correlational discipline (Reschly & Ysseldyke, 2002).

Instruction vs. Intervention

Within the literature, the “I” in RTI usually stands for one of two terms: instruction or intervention. Speece, Case, and Molloy (2003) described RTI procedures in which an individual’s response was evaluated in the context of general education reading instruction, which is akin to evaluating student performance against standards/benchmarks of expected performance following instruction. This evaluation of a child’s response to typical instruction can be used to guide resource allocation decisions and determine which individuals should be considered for more intensive instructional procedures. Response to instruction is the referent when procedures are designed to evaluate a student’s response to typical instruction and/or slightly modified, more intensive instruction. These procedures typically correspond with the primary (Phase 1) and secondary (Phase 2) levels of a multi-tiered model.

In contrast, Barnett et al. (2004) described RTI procedures where an individual’s response was evaluated in the context of a highly modified and intensive set of instructional conditions. When typical instructional procedures are highly modified, then the services comprise an intervention, and the referent is response to intervention. Thus, “response-to-instruction” refers to response to core instruction or universal programming, whereas “response-to-intervention” refers to a student’s response to a substantially modified set of instructional procedures that are distinct from universal programming (Salvia & Ysseldyke, in press).

The distinction between response to instruction and response to intervention rests on both the intervention and the evaluation activities. While it is difficult to define the parameters of typical, modified, and highly modified instruction, the more fundamental difference is the frequency and use of assessment data to evaluate response (Salvia & Ysseldyke, in press). With an RTI approach, assessment may occur on a continuous, periodic (e.g., 3-10 times per year), or annual (once per year) schedule. When evaluating response to instruction, the purpose of assessment is to evaluate general program effectiveness for the group and to identify individuals who need or would benefit from more intensive instruction. As the intensity of instruction increases, so should the density of the assessment schedule. Thus, a response-to-intervention approach requires both intensive, substantially modified instruction and intensive assessment and evaluation to monitor, evaluate, and modify interventions as necessary to ensure their effect.

Core instruction is for all students; enhanced instruction is for some students; and intensive instruction (i.e., intervention) is for only a few students (Salvia & Ysseldyke, in press). In a multi-tiered framework, assessment and evaluation activities become more frequent, or formative, with the progression from the primary to the secondary and tertiary levels of instruction and intervention. At the tertiary level, frequent and direct measurements of student response are used to guide the ongoing development and evaluation of intervention activities in response to the individual student (Barnett et al., 2004).
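The coupling of instructional intensity and assessment density can be summarized in a small sketch. The specific frequencies below are hypothetical illustrations; only the direction of the relationship (denser assessment with greater intensity, with periodic assessment in the 3-10 range) comes from the text above.

# Hypothetical mapping of tier to assessment schedule; the frequencies are
# illustrative, not prescribed values.
MONITORING_SCHEDULE = {
    1: ("core instruction, all students", 3),         # periodic benchmarking
    2: ("enhanced instruction, some students", 10),   # upper end of the periodic range
    3: ("intensive intervention, few students", 36),  # roughly weekly, near-continuous monitoring
}

def assessments_per_year(tier: int) -> int:
    """Return the illustrative assessment count for a tier (1, 2, or 3)."""
    _, frequency = MONITORING_SCHEDULE[tier]
    return frequency

At the tertiary level, this near-continuous schedule is what makes the formative loop possible: each data point can modify the intervention rather than merely summarize it.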

Conclusions

LD was first operationalized in federal regulations in 1977, but federal funding for research to examine the LD diagnostic process did not occur until the early 1980s (Burns & Ysseldyke, in press). Thus, the practical application of the LD model was not adequately examined until after it was implemented. As a result, inconsistencies in practice were common, and the diagnostic process was heavily criticized for them. RTI is at risk of sharing a similar fate if there is not a concerted effort to establish shared language and to improve the dissemination of procedures for implementing existing models (Ysseldyke, 2005). The goal of the current article is to propose language that will reduce the potential for confusion as research is conducted and policy decisions are made.

In the proposed language, we advocate the use of RTI-PA (i.e., RTI-problem analysis) in place of RTI-PS (i.e., RTI-problem solving). We argue that the term “response” is more closely aligned with intervention-linked assessment and evaluation, and that terms such as “resistance” and “responsiveness” are more diagnostically oriented. We believe it is important that the context and services be evaluated in reference to the child’s response, rather than the child evaluated in reference to the context and services. Regardless of individual beliefs, the appropriate language should be used to promote a clear understanding of the paradigm in use. Finally, we find that the distinction between response to “instruction” and response to “intervention” reflects the intensity and typicality of the instruction and evaluation activities. Taken together, these proposed definitions establish a foundation to communicate more clearly and use language more consistently.

References

Aaron, P. G. (1997). The impending demise of the discrepancy formula. Review of Educational Research, 67, 461-502.

Barnett, D. W., Daly, E. J., Jones, K. M., & Lentz, F. E. (2004). Response to intervention: Empirically based special service decisions from single-case designs of increasing and decreasing intensity. The Journal of Special Education, 38, 66-79.

Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York: Plenum Press.

Burns, M. K., Appleton, J. J., & Stehouwer, J. D. (in press). Meta-analysis of response-to-intervention research: Examining field-based and research-implemented models. Journal of Psychoeducational Assessment.

Burns, M. K., & Ysseldyke, J. E. (in press). Questions about responsiveness-to-intervention implementation: Seeking answers from existing models. California School Psychologist.

Case, L. P., Speece, D. L., & Molloy, D. E. (2003). The validity of a response-to-instruction paradigm to identify reading disabilities: A longitudinal analysis of individual differences and contextual factors. School Psychology Review, 32, 557-582.

Conte, K. L., & Hintze, J. M. (2000). The effects of performance feedback and goal setting on oral reading fluency within CBM. Diagnostique, 25, 85-98.

Daly, E. J., Lentz, F. E., & Boyer, J. (1996). The instructional hierarchy: A conceptual model for understanding the effective components of reading interventions. School Psychology Quarterly, 11, 369-386.

Daly, E. J., Martens, B. K., Hamler, K. R., Dool, E. J., & Eckert, T. L. (1999). A brief experimental analysis for identifying instructional components needed to improve oral reading fluency. Journal of Applied Behavior Analysis, 32, 83-94.

Daly, E. J., Witt, J. C., Martens, B. K., & Dool, E. J. (1997). A model for conducting a functional analysis of academic performance problems. School Psychology Review, 26, 554-574.

Deno, S. L. (2002). Problem solving as best practices. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 37-56). Bethesda, MD: National Association of School Psychologists.

Fletcher, J. M., Francis, D. J., Shaywitz, S. E., Lyon, G. R., Foorman, B. R., Stuebing, K. K., & Shaywitz, B. A. (1998). Intelligence testing and the discrepancy model for children with learning disabilities. Learning Disabilities Research & Practice, 13, 186-203.

Fuchs, D., Fuchs, L. S., & Compton, D. L. (2004). Identifying reading disabilities by responsiveness-to-instruction: Specifying measures and criteria. Learning Disability Quarterly, 27, 216-227.

Fuchs, D., Fuchs, L. S., Mathes, P. G., & Simmons, D. C. (1997). Peer-assisted learning strategies: Making classrooms more responsive to diversity. American Educational Research Journal, 34, 174-206.

Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157-171.

Gresham, F. M. (2001). Responsiveness to intervention: An alternative to the identification of learning disabilities. Paper presented at the 2001 Learning Disabilities Summit: Building a Foundation for the Future. Retrieved March 8, 2002, from http://www.air.org/ldsummit/download

Gresham, F.M. (2002). Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In R. Bradley, L. Danielson, & D. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 467-519). Mahwah, NJ: Lawrence Erlbaum.

Haight, S. L., Patriarca, L. A., & Burns, M. K. (2001). A statewide analysis of the eligibility criteria and procedures for determining learning disabilities. Learning Disabilities: A Multidisciplinary Journal, 11(2), 39-46.

Heartland AEA 11. (2000). Program manual for special education. Johnston, IA: Author.

Howell, K. W., & Nolet, V. (2000). Curriculum based evaluation: Teaching and decision making (3rd ed.). Belmont, CA: Wadsworth.

Ikeda, M. J., Tilly, D. W., Stumme, J., Volmer, L., & Allison, R. (1996). Agency-wide implementation of problem-solving consultation: Foundations, current implementation, and future directions. School Psychology Quarterly, 11, 228-243.

Kratochwill, T. R., & Bergan, J. R. (1990). Behavioral consultation in applied settings: An individual guide. New York: Plenum Press.

Mann, L. (1979). On the trail of process. New York: Grune & Stratton.

McComas, J. J., Wacker, D. P., Cooper, L. J., Asmus, J. M., Richman, D., & Stoner, B. (1996). Brief experimental analysis of stimulus prompts for accurate responding on academic tasks in an outpatient clinic. Journal of Applied Behavior Analysis, 29, 397-401.

McMaster, K. L., Fuchs, D., Fuchs, L. S., & Compton, D. L. (2005). Responding to nonresponders: An experimental field trial of identification and intervention methods. Exceptional Children, 71, 445-463.

O'Shaughnessy, T. E., Lane, K. L., Gresham, F. M., & Beebe-Frankenberger, M. E. (2003). Children placed at risk for learning and behavioral difficulties: Implementing a school-wide system of early identification and intervention. Remedial & Special Education, 24, 27-35.

President’s Commission on Excellence in Special Education (2002). A new era: Revitalizing special education for children and their families. Washington, DC: US Department of Education.

Reschly, D. J., & Ysseldyke, J. E. (2002). Paradigm shift: The past is not the future. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 3-21). Bethesda, MD: National Association of School Psychologists.

Salvia, J., & Ysseldyke, J. E. (in press). Assessment in special and inclusive education (10th ed.). Boston: Houghton Mifflin.

Scruggs, T. E., & Mastropieri, M. A. (2002). On babies and bathwater: Addressing the problems of identification of learning disabilities. Learning Disability Quarterly, 25, 155-169.

Speece, D. L., Case, L. P., & Molloy, D. E. (2003). Responsiveness to general education instruction as the first gate to learning disabilities identification. Learning Disabilities Research & Practice, 18, 147-156.

Tilly, W. D., III. (2002). Best practices in school psychology as a problem-solving enterprise. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 21-36). Bethesda, MD: National Association of School Psychologists.

Tilly, W. D., III, Reschly, D. J., & Grimes, J. (1999). Disability determination in problem solving systems: Conceptual foundations and critical components. In D. J. Reschly, W. D. Tilly, & J. P. Grimes (Eds.), Special education in transition: Functional assessment and noncategorical programming (pp. 221-251). Longmont, CO: Sopris West.

Upah, K. R. F., & Tilly, W. D., III. (2002). Best practices in designing, implementing, and evaluating quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 483-502). Bethesda, MD: National Association of School Psychologists.

Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18, 137-146.

Ysseldyke, J. E. (2005). Assessment and decision making for students with learning disabilities: What if this is as good as it gets? Learning Disability Quarterly, 28, 125-128.

© 2005, National Association of School Psychologists. Theodore Christ, PhD, Matthew Burns, PhD, and James E. Ysseldyke, PhD, are on the faculty of the School Psychology Training Program at the University of Minnesota.