
NASP Communiqué, Vol. 34, #7
May 2006

Implementing RTI

Assessment Practices and Response to Intervention

By John L. Hosp, NCSP

One of the primary job functions of school psychologists, of course, is assessment. As pointed out by Allison and Upah (February 2006 Communiqué), when implementing RTI, many school psychologists may worry about potential changes to their job role and a devaluing of their skills. In practice, most school psychologists have found that RTI actually makes greater use of their skill sets than their previous roles did. Because of its reliance on data to make decisions, RTI can enhance the need for a school psychologist and her/his skills in a school. However, some changes in practice may be necessary, and it is important for the school psychologist to be aware of these in order to help others navigate them.

Assessment Versus Evaluation

Often these terms are used interchangeably, but it can be helpful in navigating the implementation of RTI to think of how these terms are differentiated. If you think of assessment as the process of collecting information, it becomes easier to convey to teachers the need for standardization, reliability, validity, and using different assessments for different purposes. This leads to thinking about evaluation as the process of using information to make decisions (i.e., information collected through assessment). We often get caught up in the process of conducting an assessment because we had to or someone told us to do so. If we think about evaluation, it starts a dialogue about why we are conducting assessments. Teachers have a lot of different things to do every day. Having a reason to do something (or to not do it) can be very reinforcing as their time is valuable and at a premium. This can just be the starting point—other team members might begin to consider the purpose of their activities and find time for new activities by eliminating some of the old.

Making Decisions

Because making decisions is a key part of evaluation, a school psychologist can guide others to look ahead to outcomes (what would you like to see happen?) and to what needs to happen in order to get there. Thus, in addition to the assessment/evaluation skill sets, a school psychologist’s consultation training is also critical. Working with others to develop observable, measurable outcomes as well as planning for the steps of implementation to get there is crucial. Within RTI, it is important to be thinking of the desired outcomes. The rule of thumb is that educational decisions should be about meeting educational goals. One of our primary tasks is to provide high quality instruction to our students (for academic, behavioral, social, vocational, transitional goals, etc.). This means that assessment data should be used to make decisions that lead directly to instruction.

Direct Measures

In order for our assessment data to be used to guide instruction, we have to measure things that are important to developing, evaluating, or modifying instruction. As much as possible, we want measures that directly assess the skills we are interested in (sometimes called “low inference” measures). If we are interested in a student’s ability to read words fluently in connected text, we should select a measure that requires the student to read connected text—not one that has the student skip, put puzzles together, or copy line drawings. We want to use measures that require the least amount of inference possible. Directly observing a student perform the task of interest is at the lowest level of inference in our assessment.

Educationally Relevant/Not Relevant

Just because a measure is direct, though, does not mean that it is relevant to the decision we are trying to make. Generally there are three questions that must be answered affirmatively when deciding if information is relevant:

1. Does this information align with the purpose for which I am conducting this assessment?

This takes us back to the use of direct measures. Make sure the assessment data have been validated for the purposes for which you need them.

2. Is this information about an alterable variable (or related to something alterable)?

If it is something we cannot control, or that does not affect our instruction, we should not spend time assessing it. We can control academic and behavioral performance; therefore these skills might be relevant to instructional decision making. Although we do not have control over a student’s visual acuity, there are accommodations we can make that are important (preferential seating, enlarged print). However, most things that we do not control do not help instructional planning (e.g., knowing how many people live in the student’s home).

3. Does this information link directly to instruction or interventions? Again, it is important to discuss validation of assessment measures.

Known/Unknown

After we have determined what is relevant or not relevant, we also need to determine whether we can obtain that information or not. Information that is relevant must be known. If we do not already have it available, we need to plan how to collect it (i.e., via assessment). Data that are not educationally relevant do not need to be collected. Including educationally irrelevant information in our decision making can mask otherwise valuable solutions or distract us from solving problems and working toward goals. As new information is gathered, it is sometimes useful to reconsider whether other pieces of information are relevant or not. Occasionally, new information makes us consider other information in a whole new light.

The RIOT/ICEL Matrix

When thinking about assessment and evaluation, it is important to remember (and help others understand) that there are different ways of collecting the information needed to make decisions—tests are not the sole method of assessment. A handy rubric that is often used is RIOT—Review, Interview, Observe, Test. (See Figure 1.)

  • Review: The first step in conducting an assessment should be to review prior records or any other type of permanent product that might be relevant.
  • Interview: Anyone with knowledge of the student and his skills should be interviewed. This might include teachers, administrators, parents, or the student herself. Multiple perspectives and input are crucial to decision making.
  • Observe: Sometimes we need to actually see what is occurring in a classroom or other setting. Whether to use structured or informal approaches should depend on what type of information we are looking for (i.e., relevant yet unknown).
  • Test: This is what most people think of when we talk to them about assessment. There’s good reason—sometimes it is important to administer tests to students because it is the best way to get certain types of information.

Using the methodologies of RIOT is usually common sense for most school psychologists. In education, however, we often focus all of our assessment efforts on the student and his or her characteristics. Yet there are many other things that might impact a student’s performance and are still alterable by educators. These other sources are sometimes called domains and are represented by the acronym ICEL—Instruction, Curriculum, Environment, Learner (see Figure 1).

  • Instruction: This is what we usually think of as teaching. How content is presented to students can vary in many different ways: type of materials, grouping, opportunities to respond, etc.
  • Curriculum: This is the content that is actually taught. Scope and sequence would be included here as well as pacing within and between topics.
  • Environment: This means the classroom environment—things such as physical arrangement of the room, where the student sits and next to whom, lighting, noise, etc.
  • Learner: Obviously the student himself. It is important to put the student and his performance in the broader context of the instruction, curriculum, and environment before we determine why a student is performing as he is or how to address difficulties.

Figure 1

The RIOT/ICEL Matrix

                   R (Review)            I (Interview)            O (Observe)            T (Test)
I (Instruction)    Review Instruction    Interview Instruction    Observe Instruction    Test Instruction
C (Curriculum)     Review Curriculum     Interview Curriculum     Observe Curriculum     Test Curriculum
E (Environment)    Review Environment    Interview Environment    Observe Environment    Test Environment
L (Learner)        Review Learner        Interview Learner        Observe Learner        Test Learner

Figure courtesy of Heartland Area Education Agency 11, Johnston, Iowa.
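For readers who think in terms of data structures, the matrix above can be sketched as a simple planning checklist. The following Python sketch is purely illustrative (the `plan` helper and the specific cells chosen are hypothetical, not part of the article); it shows how each method-by-domain cell could be tracked as a team decides which assessments to conduct.

```python
# Illustrative sketch of the RIOT/ICEL matrix as a planning checklist.
# The helper function and example cells are hypothetical, not from the article.

RIOT = ["Review", "Interview", "Observe", "Test"]               # assessment methods
ICEL = ["Instruction", "Curriculum", "Environment", "Learner"]  # assessment domains

# Each cell pairs one method with one domain; start with nothing planned.
matrix = {(method, domain): False for method in RIOT for domain in ICEL}

def plan(method: str, domain: str) -> None:
    """Mark a method-by-domain cell as a planned assessment activity."""
    matrix[(method, domain)] = True

# Example: the team identifies two pieces of relevant-yet-unknown information.
plan("Observe", "Environment")   # e.g., observe classroom seating and noise
plan("Interview", "Learner")     # e.g., interview the student directly

planned = [cell for cell, chosen in matrix.items() if chosen]
print(len(matrix), len(planned))  # 16 cells total, 2 planned
```

A structure like this makes it easy to see at a glance which combinations of method and domain have been considered and which have been overlooked.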

Saturation

Considering the purposes for assessment and evaluation, what information is relevant, what is known or unknown, and planning assessment through the RIOT/ICEL matrix sounds like an awful lot to do. In actuality, the time required will vary from student to student and problem to problem. Saturation is the point at which a person or team feels that there is enough information to make an informed decision. There is no sure-fire way to identify when you have enough, but it is important to make our jobs as efficient as possible. Selecting assessment methods that are the most reliable and provide for the most valid interpretations is important to consider. Also, if there are two ways to get the same information and one takes half the time, the faster procedure may make more sense when time is at a premium, even if it is somewhat less reliable. Using two different methods that together take less time than a single longer procedure is an even better use of time. This is another area where the training of a school psychologist becomes valuable to others in schools—helping with time management decisions. It is important to make sure that we balance the effort we put into tasks with the benefits of decision making and the desired outcomes.

A Medical Analogy

While I am usually reluctant to use medical analogies for educational issues, I think this is actually a case where it might be useful. This is how a process of assessment and evaluation in RTI could look—very similar to how doctors diagnose and treat many illnesses:

A few years ago, I saw my doctor for a routine checkup. She started off measuring my vital signs--weight, blood pressure, temperature (akin to screening assessment in RTI). Two of the three (weight, blood pressure) indicated risk factors, predicting future difficulty if not addressed. At this point, she ordered some slightly more complex tests such as a cholesterol test--a diagnostic assessment. At the same time, she recommended that I improve my diet and start to exercise regularly--a Tier I intervention, something generally effective for all people and many problems. Now I had to buy a home blood pressure machine and measure my BP twice a day--progress monitoring or formative evaluation. In addition, I was scheduled for a follow-up test of my cholesterol, etc.--a sort of post-test of my Tier I intervention (summative evaluation). At this time, my physician also outlined potential Tier II and III interventions which were scientifically based on my symptoms. Tier II would be cholesterol-lowering drugs and possibly BP drugs if my elevated BP didn't respond to the change in diet and exercise. Tier III would be the most intensive intervention--we would only move to that if I exhibited a severe need (possibly the analog to a disability). I assumed "severe need" included a heart attack or stroke. There would be additional tests to determine these risks (a comprehensive evaluation?). At every stage, she collected data and used the data to guide decisions about which treatment to use. She selected these treatments because they had been validated to address my specific problems.

The diagnostic tests suggested additional problems which she thought might require extreme (Tier III) interventions--she was going to skip Tier II if the data indicated a severe need. Fortunately, we were monitoring my progress and over time, the additional tests showed that the "severe needs" responded to the Tier I intervention. If we hadn't monitored my progress, I would probably be missing an internal organ right now (which I don't believe happens in education), but more relevantly, resources would have been wasted on an unnecessary intervention—resources that could have been better used elsewhere.

So What Does This All Mean?

Assessment and evaluation in RTI often require that we think differently about what we do as well as how and why we do it. Does it require different skills than those we normally use? Sometimes, but certainly skills that should be well established in any school psychologist’s repertoire. One of the most valuable contributions school psychologists can offer schools is our training using data to make decisions and to judge the adequacy of the data we use. School psychologists are in a prime position to serve as a resource to other educators to navigate changes in what, how, and why we evaluate students.

Resources

For more information about assessment and evaluation in RTI see:

Howell, K., Hosp, J., & Hosp, M. (in press). Curriculum-based evaluation: Linking assessment and instruction. Belmont, CA: Wadsworth.

Jimerson, S., Burns, M., & VanDerheyden, A. (in press). The handbook of Response to Intervention: The science and practice of assessment and intervention. New York: Springer Publishing.

National Association of School Psychologists (2006). Assessment alternatives under IDEA 2004 (CD Rom Toolkit). Bethesda, MD: Author.

Salvia, J., Ysseldyke, J., & Bolt, S. (2007). Assessment (10th ed.). Boston: Houghton Mifflin. (Particularly see Chapter 30, Assessing Response to Intervention)

© 2006, National Association of School Psychologists. John L. Hosp, PhD, is on the faculty of the University of Utah. This article was invited by Contributing Editor W. David Tilly, III.