
NASP Communiqué, Vol. 34, #8
June 2006

Evaluating Intervention Outcomes

Evaluating Evidence-Based Practice in Response-to-Intervention Systems

By Martin J. Ikeda, Alecia Rahn-Blakeslee, Bradley C. Niebling, Randy Allison, NCSP, & James Stumme

Assistant Ed. Note: In this issue (and occasionally in future issues), the Research Committee deviates from its typical two-column format of research reviews and outcome evaluation protocols to focus on a specific research paradigm. —Steve Landau

Motivational speakers often declare that the Chinese word "wei-ji" is made up of two characters: danger and opportunity.1 The No Child Left Behind Act of 2001 (Pub. L. No. 107-110, 115 Stat. 1425) and the 2004 reauthorization of the Individuals with Disabilities Education Act (20 U.S.C. Sect. 1400 et seq.) present school psychologists with situations they can view either as danger or as opportunity.

A primary point of debate, particularly as related to IDEA, is Response to Intervention (RTI; Fuchs, Mock, Morgan, & Young, 2003; Tilly, Rahn-Blakeslee, Grimes, Gruba, Allison, Stumme, et al., 2005). Some believe that RTI is under-researched (Fuchs et al., 2003; Hale, Naglieri, Kaufman, & Kavale, 2004) and an example of policy preceding practice. Others contend that, in the absence of evidence of the effectiveness and efficiency of current practices for identifying learning disabilities (Bradley, Danielson, & Hallahan, 2002; President's Commission on Excellence in Special Education, 2002), large-scale implementation of alternative practices for the betterment of students is not only acceptable but defensible (Danielson, Doolittle, & Bradley, 2005; Tilly et al., 2005).

Regardless of one's perspective, policies around the implementation of RTI are likely to surface within state and local education agencies as soon as the federal regulations are finalized. With this change in policy, research and practice needs will evolve as well. University-based professionals will have the task of engaging in more experimental research on effective assessment and intervention, particularly within an RTI framework. Field-based professionals will have the task of demonstrating that RTI practices result in meaningful outcomes for children and families. Perhaps as never before in education and psychology, RTI presents an opportunity for these disciplines to move beyond the rhetoric of bridging science and practice, espoused since 1949 (Raimy, 1950), and truly integrate the two.

An important aspect of bridging the science-and-practice gap is to identify core questions and issues generated from both research and applied practice. Our goal is to discuss some of these questions and issues around evaluating the impact of educational and psychological practices as they occur within an RTI framework, principally the role of school psychologists in RTI systems. Of particular interest in this discussion are: (a) the role of the school psychologist as a scientist-practitioner, (b) applying a scientist-practitioner framework to RTI practices, and (c) promoting socially relevant outcomes.

School Psychologists as Scientist-Practitioners

Understanding that school psychologists serve as scientist-practitioners is critical, because the translation of RTI principles to school-based applications will change school psychological practice. Since the 1950s, the scientist-practitioner model has been promoted and adhered to by psychology training programs (Raimy, 1950); however, both psychology and school psychology have yet to fully integrate science and practice (Stoner & Green, 1992).

As summarized by Stoner and Green (1992), a well-functioning scientist-practitioner model would result in: (a) psychological services provided by professionals with research orientations and skills, (b) experimental research informing professional practice, and (c) psychologists integrating research and practice to impact important social issues.

Perhaps by embracing our different roles in the process of improving school psychological services, we can all improve our contributions to those services. One important difference in understanding and implementing RTI may be that university-based school psychologists could engage in more ongoing conversations and work with school-based practitioners to inform future work and dissemination. At the same time, school-based practitioners could spend more time better articulating the important concerns and issues that they face in applied settings, and collaborating with university-based professionals to solve those problems. In other words, instead of professionals in both settings talking about the need to bridge the research-to-practice gap, we could all spend more time building that bridge and meeting in the middle.

In the next section, we outline and discuss the features of an RTI system and how each of these features plays a role in impacting social outcomes. There are two areas of focus. First, all efforts should have a positive impact on socially relevant outcomes for all students. Second, the scientist-practitioner framework has great potential for evaluating the efficacy and effectiveness of school psychological services in general, and of RTI systems in particular.

Features of an RTI System

In order to evaluate practices within an RTI system, as well as the overall impact of an RTI system on student outcomes, it is important to have a common understanding of the core components of an RTI system. We propose and have used an RTI model that addresses four essential questions. First, what screening measures can be used to judge the pervasiveness of problems across students? Second, what diagnostic measures can identify which problems are exhibited by which students? Third, what research-based practices can be applied to solve the problem? Fourth, how can student progress be measured to effect appropriate changes to programming?

Screening Decisions

Figure 1. Screening data illustrating a widespread, class-wide problem.

Screening involves the collection of assessment information for all students in order to make judgments about skills and performance relative to peers or expectations. At a systems level, screening answers the question, "Is it a group problem or an individual problem?" The data in Figure 1 illustrate a scenario in which problems are widespread and class-wide interventions may be warranted. The data in Figure 2 illustrate a scenario in which more individualized interventions may be warranted.

Figure 2. Screening data illustrating an individual problem.

While such data and graphs can be helpful, this type of information is not always available to researchers or practitioners. Working as a scientist-practitioner, the school psychologist typically investigates whether a teacher's report of a student problem is more like the situation in Figure 1 or more like the situation in Figure 2. However, unless an RTI system is in place, this investigation often occurs without the benefit of the type of data presented in Figures 1 and 2. Research has taught us that students referred by teachers do tend to have problems (VanDerHeyden, Witt, & Naquin, 2003). As scientist-practitioners in any setting, we strive to collect this type of information and to continue developing better ways of summarizing, analyzing, and using these data to benefit all students.

In Figures 1 and 2, the vertical bars represent individual student performance, and a horizontal line marks the proficiency criterion on the y-axis. Students whose performance is adequate score at or above the proficient line. Using widely available spreadsheet and graphing tools, school psychologists can easily create graphs such as those depicted in Figures 1 and 2; a sketch of one way to do so appears below. Graphically summarizing screening data can assist the scientist-practitioner in evaluating not only the performance of individual students, but also the services delivered to all students, either before or after instructional changes are made.
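
A minimal sketch of one way to build such a graph in Python with matplotlib; the names, scores, and proficiency criterion below are hypothetical, not the data from the figures:

```python
# Sketch of a screening graph like Figures 1 and 2.
# Student names, scores, and the criterion are hypothetical.
import matplotlib.pyplot as plt

students = ["John", "Jenni", "Mark", "Sara", "Alex"]
words_correct = [38, 42, 55, 71, 90]   # e.g., words read correctly per minute
proficient = 68                        # criterion drawn as a horizontal line

fig, ax = plt.subplots()
ax.bar(students, words_correct)
ax.axhline(proficient, linestyle="--", label="Proficient")
ax.set_ylabel("Words read correctly per minute")
ax.set_title("Class-wide screening data")
ax.legend()
plt.show()
```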

Because so many children in the classroom depicted in Figure 1 are not proficient, it would be inefficient to rely on teacher referral of individual students as a means of identifying students needing supplemental resources. It would also be inappropriate to expect special education to absorb such large numbers of what look to be curricular casualties. Instead, the school psychologist and school administration need to discuss what enhancements can be made to the core curriculum to improve achievement for all learners. In Figure 2, if the teacher expresses concern about "Mark," the school psychologist would follow up to understand why such concerns were not raised about "John" or "Jenni."

Measures that have proven adequate for such large-scale screening share similar characteristics: (a) reliabilities of .80 or higher (Salvia & Ysseldyke, 1997), (b) efficient administration and scoring (short time frames), (c) sufficient parallel forms to allow repeated administration, and (d) links to the standards and benchmarks of a given school system. For example, the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good, Gruba, & Kaminski, 2002) and Curriculum-Based Measurement (CBM; Shinn, 1989, 1995) have demonstrated utility as screening measures. In the area of behavior, office referrals (Sugai & Horner, 2002; March & Horner, 2002) have demonstrated utility for determining the overall health of the behavioral system. For areas like math, writing, and science, the screening technology is not as well developed, although our experience working with schools suggests that results from district-wide assessments, although given only once per year, can be used in screening.

If one accepts the argument that effective core instruction in all academic areas and in behavior is the first step in implementing and evaluating an RTI model, then screening data become the first set of outcome data used to evaluate the efficacy and effectiveness of a core program. The question of interest is typically, "Is our current curriculum and instruction resulting in high levels of learning for at least 80% of our students?" If the answer is no, instructional enhancement to core programming is needed. When the answer is yes, the school psychologist and others in the educational system can make more defensible evaluative decisions not only about the overall functioning of services delivered within an RTI system, but also about the need for more individualized resource allocation.
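
As a concrete sketch of that 80% check, with hypothetical scores and criterion:

```python
# Hypothetical check of the "at least 80% proficient" question from screening data.
scores = [38, 42, 55, 71, 90, 66, 73, 81, 59, 77]  # hypothetical screening scores
proficient = 68

pct = sum(s >= proficient for s in scores) / len(scores) * 100
if pct >= 80:
    print(f"{pct:.0f}% proficient: core program on track; examine individual needs")
else:
    print(f"{pct:.0f}% proficient: enhance core instruction first")
```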

In our experience in school systems in which core instruction is overlooked, there are three sources of danger. First, too many students are identified as having disabilities. Special education is used to mask general education problems, and problems like overrepresentation can occur. Second, services tend to be fragmented: general education does one thing, special education does another thing, talented and gifted does another thing, and Title I does another thing. Fragmented resources create confusion and do not promote student achievement. Third, when characteristics of children are regarded as the cause of the problem (e.g., poverty, lack of parental support), teachers are not motivated to change instruction.

These dangers pose a real threat to the outcomes for all students in such a system. Without adequate screening data, the system will typically rely on teacher referral to access supports for student improvement. Without good screening data, these systems not only lack adequate information to ensure appropriate instructional decision making, but also lack adequate measures to evaluate the impact of any changes in practice that might occur. Therefore, screening data collected for all students in the system are necessary for that system to function adequately.

Diagnostic Decision Making

After screening, psychologists working as scientist-practitioners need to be effective diagnosticians. Scholarly writing from scientist-practitioners in university-based settings is most prevalent in the areas of reading and behavior (Fuchs & Vaughn, 2006; Sugai & Horner, 2002), although the same logic can be applied to math, science, writing, and other areas as well.

In diagnosing problems after initial screening, a framework using five "big ideas" in reading is helpful: phonemic awareness, phonics, fluency, vocabulary, and comprehension (National Institute of Child Health and Human Development, 2000). By gathering data on all students in all relevant "big idea" areas, the school psychologist and others in the school can start asking questions such as, "Which students are how far below criterion in which important learning areas?" By asking such questions, the school team can begin to align the research base on effective instruction with the learning needs demonstrated by students. If students are highly deficient in phonemic awareness, for example, the school team will search for evidence-based practices with large effect sizes for phonemic awareness. A sketch of such a diagnostic sort appears below.
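
A minimal sketch of that diagnostic sort; the criteria, scores, and student names are hypothetical:

```python
# Hypothetical diagnostic sort: which students fall how far below criterion
# in which "big idea" area. Cut points and scores are illustrative only.
criteria = {"phonemic awareness": 35, "phonics": 50, "fluency": 68}

scores = {
    "John":  {"phonemic awareness": 20, "phonics": 44, "fluency": 38},
    "Jenni": {"phonemic awareness": 33, "phonics": 52, "fluency": 42},
    "Mark":  {"phonemic awareness": 40, "phonics": 55, "fluency": 55},
}

for area, cut in criteria.items():
    below = {name: cut - s[area] for name, s in scores.items() if s[area] < cut}
    print(area, "->", below)   # subset of students and the size of their deficits
```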

Research-Based Practices

Figure 3. CBM progress monitoring data showing four consecutive data points below the goal line.

Teachers use what they believe to be their most effective teaching tools (Carnine, 1992). However, this perception and effort do not always translate into the use of research-based practices in the classroom. School psychologists, with their knowledge base in assessment and research (Stoner & Green, 1992), are vital supports to school staff in evaluating research and in judging effective versus ineffective practice.

School psychologists provide leadership to schools by helping differentiate the professional practice literature (e.g., the Communiqué) from research publications (e.g., School Psychology Review). School psychologists also help school personnel understand the difference between research-based practices (e.g., practices supported by a body of studies with similar effects on student achievement) and someone's opinion of what the professional literature suggests about a topic (e.g., a publication that is devoid of data, or that is merely a comprehensive literature review of a topic).

Once school psychologists have helped instructional staff target students for specific instruction, and have identified the research-backed strategy for implementation, it is important to recognize that (a) teachers need to implement the instruction in a manner that is similar to how it was validated in research, and (b) student progress as a result of differentiated instruction should be monitored to evaluate the impact of instruction on student learning.

It is in the area of research-based practices that university-based professionals and school-based professionals can come together in a mutually beneficial way to promote better practices in research and school applications. It is imperative that information collected from practice informs future research, and that we continue to disseminate information from research in a manner that has high utility for school-based practitioners.

Monitoring Implementation

An important aspect of effective RTI practice involves evaluating the impact these practices have on student learning. In addition, we must determine the degree to which research-based practices are implemented as designed. Instructional leaders from within the district can facilitate this support by deciding the method best suited for implementation monitoring in local settings. These methods include (a) walk-throughs or other structured observations, (b) implementation checklists, and (c) portfolio samples. These methods, along with others, have been proposed in the professional literature as reasonable strategies for monitoring implementation (Downey, Steffy, English, Frase, & Poston, 2004). Specific to consultation on problems within an RTI framework, Noell and colleagues (2005) reported that performance feedback to teachers throughout the intervention resulted in high levels of treatment integrity. Clearly, because RTI systems rely on research-based interventions implemented with high fidelity for making educational decisions, school psychologists should play a central role in helping schools monitor the implementation of their practices. A sketch of how an implementation checklist might be summarized appears below.
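
A minimal sketch of summarizing an implementation checklist; the checklist items are hypothetical, and integrity is reported as the percentage of intervention steps observed as implemented:

```python
# Hypothetical implementation checklist from a walk-through or observation.
checklist = {
    "materials at student's instructional level": True,
    "modeling before guided practice": True,
    "immediate corrective feedback": False,
    "progress data recorded": True,
}

integrity = 100 * sum(checklist.values()) / len(checklist)
print(f"Treatment integrity: {integrity:.0f}% of steps implemented")
```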

Monitoring Student Progress

Given their background in assessment, school psychologists should provide leadership to schools implementing RTI in the area of progress monitoring. This support can take multiple forms. For example, for groups of students, the school psychologist can assist school systems by teaching someone at the school how to put achievement information into tables, so that teachers can monitor the effects of instruction. A table need not be complicated. Columns that remain constant include the student's name, the desired performance level, and the student's benchmark performance over time. Inferences and actions taken based on the data can also be put into the table. For example, "…move to supplemental group 3X/week" might be an action logged within a class-wide progress monitoring table, as might "…transition back to core instruction."
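
A minimal sketch of such a table using pandas; the students, targets, and benchmark scores are hypothetical (the two quoted actions come from the example above):

```python
# A simple class-wide progress-monitoring table of the kind described above.
import pandas as pd

table = pd.DataFrame({
    "student": ["John", "Jenni", "Mark"],
    "target":  [68, 68, 68],    # desired performance level
    "fall":    [38, 42, 55],    # benchmark performance over time
    "winter":  [45, 58, 70],
    "action":  ["move to supplemental group 3X/week",
                "continue supplemental instruction",
                "transition back to core instruction"],
})
print(table.to_string(index=False))
```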

The scientist-practitioner can also analyze data displayed in tables and graphs to interpret effects of instructional planning for individual students for whom more specialized interventions are implemented. Individualized instruction can be provided as: (a) part of the intensive services provided through general education resource alignment in RTI, (b) part of the special education entitlement process to answer the question, “are the instructional resources needed to solve the problem specialized enough to warrant protections under IDEA?”, or (c) the specialized instruction of an already entitled student.

In the professional literature, evaluating individual student progress to determine whether instruction is having the desired effect is called formative assessment or formative evaluation (Black & Wiliam, 1998; Deno, 1985; Fuchs & Fuchs, 1986). The important characteristics of formative evaluation are: (a) data depicted on a graph, (b) use of equal-interval scales, (c) performance plotted against an ambitious goal line, and (d) instructional changes made by following data decision rules. Under No Child Left Behind, one way to define an ambitious goal line is to use grade-appropriate expectations. District CBM norms can be used to help define reasonable fluency rates for fall, winter, or spring. Fuchs (2002) provides a framework for writing ambitious goals using CBM; a sketch of one simple goal-line calculation appears below.
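
As a sketch of one common way to compute a goal line, assuming a hypothetical baseline, weekly growth rate, and monitoring period (district CBM norms or Fuchs's [2002] framework would supply the real values):

```python
# One common goal-line form: baseline + (weekly growth rate x weeks elapsed).
# Baseline, growth rate, and timeline here are hypothetical.
baseline = 40          # words correct per minute at intervention start
weekly_growth = 1.5    # ambitious growth rate, e.g., from district CBM norms
weeks = 18             # length of the monitoring period

goal = baseline + weekly_growth * weeks
goal_line = [baseline + weekly_growth * w for w in range(weeks + 1)]  # for plotting
print(f"End-of-period goal: {goal:.0f} words correct per minute")
```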

The data on formative evaluation are clear: for students whose teachers monitor progress and modify instruction based on data, achievement rises by an average effect size of .7 (Black & Wiliam, 1998; Fuchs & Fuchs, 1986). An effect of .7 would raise math achievement among U.S. students from "average" to within the top five in the world (Black & Wiliam, 1998). When combined with reinforcement of goal attainment, formative evaluation yields effect sizes of over one standard deviation, enough to raise achievement from the 16th to the 50th percentile (Fuchs & Fuchs, 1986).
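
The percentile arithmetic can be checked directly against the normal curve; a brief sketch using scipy:

```python
# A student at the 16th percentile who gains 1.0 standard deviation
# lands at roughly the 50th percentile; a 0.7 SD gain lands near the 38th.
from scipy.stats import norm

start = norm.ppf(0.16)          # z-score at the 16th percentile (about -1.0)
print(norm.cdf(start + 1.0))    # ~0.50 after a 1.0 SD gain
print(norm.cdf(start + 0.7))    # ~0.38 after a 0.7 SD gain
```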

Figure 3 is an example of monitoring student performance using CBM math probes. In Figure 3, four consecutive data points fall below the goal line. In this scenario, the decision would likely be, "make an instructional change" by (a) altering the difficulty level of the material being presented, (b) increasing engagement with materials in which the child can be successful, or (c) changing the reinforcer for fluent performance. This process is at the heart of what RTI is all about: using data to make instructional decisions about how best to meet the needs of students. When teachers use data decision rules to change instruction, student achievement increases (Fuchs, Fuchs, & Hamlett, 1989). A minimal sketch of such a decision rule follows.
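
A minimal sketch of the four-consecutive-points decision rule, with hypothetical data:

```python
# If `run` consecutive data points fall below the goal line,
# signal that an instructional change is warranted. Data are hypothetical.
def needs_change(scores, goal_line, run=4):
    """Return True if `run` consecutive scores fall below the goal line."""
    below = 0
    for score, goal in zip(scores, goal_line):
        below = below + 1 if score < goal else 0
        if below >= run:
            return True
    return False

scores    = [40, 43, 42, 44, 45, 46]
goal_line = [40, 42, 44, 46, 48, 50]
print(needs_change(scores, goal_line))  # True: the last four points fall below goal
```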

Another Examination of Social Validity

Recently, Meyers and Sylvester (2006) discussed examining the effects of qualitative research from a social validity perspective. They purported that effective practitioners would investigate the goal acceptability, treatment acceptability, and goal outcomes of their work. Social validity is a concept developed in the behavior-analytic literature (Wolf, 1978); it challenges scientist-practitioners to evaluate the social significance of an intervention's goals, to examine the social acceptability of the procedures used in treatment, and to examine the social impact of those procedures.

The scientist-practitioner in an RTI model strives for three outcomes. First, the goal of treatment is to allow every child a floor of opportunity to access the important life function of learning. Second, the methods used to promote skill acquisition (a) reside within the system, (b) reside within the hands of caring educators, and (c) focus on strategies that directly target skill deficits. Third, the social impact of the procedures is that all students receive instructional changes when they are not progressing toward goals identified as important by the state or district.

Conclusions

An effective RTI system has four components: (a) efficient, direct measures of student performance to screen the magnitude of problems, (b) diagnostic measures that identify areas in need of further academic skills instruction, (c) research-supported strategies implemented with integrity, and (d) continual assessment of student performance against ambitious standards.

In an RTI system, school psychologists need instructional and behavioral consultation skills. Data management, setting up effective instruction based on skill deficits rather than on diagnosis, and making decisions about when instructional changes need to occur become the cornerstone skills upon which school systems rely. Relevant questions asked at all levels of the system target not only resource allocation, but also instructional effect. By blending the best of science into practice, school psychologists become vital partners in ensuring that children of America have a floor of opportunity to access life, liberty, and happiness. For us, RTI represents not danger, but rather, opportunity.

References

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy, & Practice, 5, 7-74.

Bradley, R., Danielson, L., & Hallahan, D. (Eds.). (2002). Identification of learning disabilities: Research to practice. Mahwah, NJ: Erlbaum. Available: www.air.org/ldsummit

Carnine, D. (1992). Expanding the notion of teachers' rights: Access to tools that work. Journal of Applied Behavior Analysis, 25, 13-19.

Danielson, L., Doolittle, J., & Bradley, R. (2005). Past accomplishments and future challenges. Learning Disability Quarterly, 28(2), 137-139.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Downey, C. J., Steffy, B. E., English, F. W., Frase, L. E., & Poston, W. K., Jr. (2004). The three-minute classroom walk-through: Changing school supervisory practice one teacher at a time. Thousand Oaks, CA: Corwin Press.

Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157-171.

Fuchs, L. S. (2002). Best practices in defining student goals and outcomes. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology IV (pp. 553-563). Bethesda, MD: National Association of School Psychologists.

Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199-208.

Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989). Effects of alternative goal structures within curriculum-based measurement. Exceptional Children, 57, 443-452.

Fuchs, L. S., & Vaughn, S. R. (2006). Response-to-intervention as a framework for the identification of learning disabilities. NASP Communiqué, 34(1), 4-6.

Good, R. H., III, Gruba, J., & Kaminski, R. A. (2002). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology IV (pp. 699-720). Bethesda, MD: National Association of School Psychologists.

Hale, J. B., Naglieri, J. A., Kaufman, A. S., & Kavale, K. A. (2004). Specific learning disability classification in the new Individuals with Disabilities Education Improvement Act: The danger of good ideas. The School Psychologist, 58(1), 6-13, 29.

Individuals with Disabilities Education Improvement Act of 2004, 20 U.S.C. Sect. 1400 et seq.

March, R. E., & Horner, R. H. (2002). Feasibility and contributions of functional behavioral assessment in schools. Journal of Emotional & Behavioral Disorders, 10, 158-170.

McKee, W. T., Witt, J. C., Elliott, S. N., Pardue, M., & Judycki, A. (1987). Practice informing research: A survey of research dissemination and knowledge utilization. School Psychology Review, 16, 338-347.

Meyers, A. B., & Sylvester, B. A. (2006, February). The role of qualitative research methods in evidence-based practice. NASP Communiqué, 34 (5).

National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., et al. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies.  School Psychology Review, 34, 87-106.

President’s Commission on Excellence in Special Education (2002). A new era: Revitalizing special education for children and their families. Available: http://www.ed.gov/inits/commissionsboards

Raimy, V. C. (Ed.). (1950). Training in clinical psychology (Boulder Conference). New York: Prentice-Hall.

Salvia, J., & Ysseldyke, J. E. (1997). Assessment. Boston: Houghton Mifflin.

Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: Guilford.

Shinn, M. R. (1995). Curriculum-based measurement and its use in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology III (pp. 547-568). Bethesda, MD: National Association of School Psychologists.

Stoner, G., & Green, S. K. (1992). Reconsidering the scientist-practitioner model for school psychology practice. School Psychology Review, 21, 155-166.

Sugai, G., & Horner, R. H. (2002). Introduction to the special series on Positive Behavior Support in Schools. Journal of Emotional & Behavioral Disorders, 10, 130-135.

Tilly, W. D. III, Rahn-Blakeslee, A., Grimes, J., Gruba, J., Allison, R., Stumme, J., et al. (2005, November). It's not about us, folks: It's about the kids. NASP Communiqué, 34(3).

VanDerHeyden, A. M., Witt, J. C., & Naquin, G. (2003). Development and validation of a process for screening referrals to special education. School Psychology Review, 32, 204-233.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.

Ysseldyke, J. E., Vanderwood, M. L., & Shriner, J. (1997). Changes over the past decade in special education referral to placement probability: An incredibly reliable practice. Diagnostique, 23, 193-201.

Footnote

1 Roughly translated, wei-ji means "precarious moment," more in line with "crisis" than any kind of paradox (see: www.straightdope.com/columns/001103.html).


© 2006, National Association of School Psychologists. Martin J. Ikeda, PhD, is Coordinator of Special Projects; Alecia Rahn-Blakeslee, PhD, is a Research/Evaluation Practitioner; Bradley C. Niebling, PhD, is a School Psychologist and a Curriculum/Standards Alignment Specialist; Randy Allison, NCSP, is Coordinator of System Supports for Educational Results; and James Stumme, EdD, is Associate Administrator/Director of Special Education at Heartland Area Education Agency 11, Johnston, IA. Each is involved in systems-level efforts to improve students' educational outcomes using efficient practices.

Figure 2 footnote: The authors would like to thank Joe Witt for sharing the format depicted in Figures 1 and 2.