Communiqué

2018 Convention News

Presenters in Focus: Updating Your Toolkit for Monitoring Progress of Younger Struggling Readers

By Nathan Clemens

Volume 46 Issue 3, pp. 24-25

A data-driven approach to monitoring student progress remains a staple of school psychologists' practice. However, progress monitoring varies considerably with a student's age and development, and young, emerging readers require a particularly nuanced approach. In this Presenters in Focus Q&A, convention presenter Nathan Clemens discusses best-practice strategies for progress monitoring among young readers (grades K–2). He will explore these issues in more depth during his Field-Based Skills Session, Updating Your Toolkit for Monitoring Progress of Younger Struggling Readers, at the 2018 national convention in Chicago.

What are some unique aspects of monitoring reading progress for younger students compared to older students?

Unlike in the middle and upper elementary grades, where fluency in reading connected text is a fairly stable index of overall reading achievement, in the early grades a number of subskills are relevant as students' reading skills develop. This is reflected in the range of progress monitoring measures available for early reading, which include measures of phonological and phonemic awareness, letter names and sounds, decoding and word reading, and reading connected text, as well as computer-based measures that can assess a range of skills.

The array of measures can be overwhelming. When deciding on a progress monitoring measure, consider the primary goal of the instruction or intervention. Across the fall and winter of kindergarten, measures of letter-sound fluency tend to be strong indicators of students' early literacy acquisition, owing to the focus on letter sounds in kindergarten reading instruction and their importance for word reading. Measures of word reading fluency are good indices of response to instruction during the second half of kindergarten, because most kindergarten programs focus on decoding and word reading across this period. Measures of word reading fluency and of reading connected text are preferred for first and second grades, especially for students who struggle with decoding and word recognition.

Did You Know?

Founded in 1868, the Lincoln Park Zoo is among the oldest zoos in North America and still one of the few with no admission fee. The first animal purchased for the zoo was a bear cub, which later became the inspiration for the name of the Chicago Cubs, 2016 World Series Champions.

Describe some common errors and pitfalls school teams experience when interpreting progress monitoring data.

Practitioners tend to struggle most with using progress monitoring data to make timely and appropriate instructional decisions. They may regularly collect progress monitoring data and enter it into a Web-based reporting system, yet neglect to actually look at students' graphs. Goal setting is critical for determining whether a student's progress is consistent with expectations, yet practitioners often struggle to set appropriate goals or do not set them at all. They sometimes pay too much attention to individual outlying data points (e.g., a score that is much lower or higher than most of the surrounding data points) and waste time speculating on why the outlier occurred. Most important, they may fail to recognize when progress monitoring data indicate the need for an adjustment to instruction or to a goal. And when school teams do recognize the need for an adjustment, they may opt for wholesale program changes when simpler, less intrusive adjustments would have been preferable.

Progress monitoring should, first and foremost, be a tool that helps teachers provide instruction that meets their students’ unique needs. If progress monitoring data are not helping teachers provide better instruction, then school teams need to (a) figure out how to make the data more useful, or (b) use some other form of assessment that will inform timely, valid, and practical decisions on how to maximize student learning.

How can school psychologists be responsive to cultural and linguistic diversity when engaging in progress monitoring, particularly for young English learners?

Keep both short-term and long-term goals for the student in mind, and ask whether and how progress monitoring data will inform the student's instruction. If reading in English is a short- or long-term goal, carefully consider the measures used to monitor English reading skills. Understand that some English learners may learn phonics rules fairly quickly, which enables them to read words accurately; however, underdeveloped knowledge of word meanings will constrain their ability to understand what they read. Monitoring progress in vocabulary or reading comprehension can therefore provide a fuller picture of reading development for culturally and linguistically diverse learners.

Identify some key components to look for when selecting progress monitoring tools for students in early grades.

Regardless of the number of measures that publishers offer, consider those that align with the primary goal of your intervention. Rather than using multiple subskill measures simultaneously (which takes too much time and makes interpretation difficult), look for a single measure that requires the integration of several subskills and reflects year-end expectations for your intervention. For example, decoding is a primary goal of early reading instruction and requires the successful coordination of letter–sound knowledge and phonemic awareness; thus, a decodable-word fluency (word-list) measure should provide a good index of student progress. Use measures that have demonstrated adequate reliability and validity for the intended grade level. The tools charts at www.intensiveintervention.org can help with these decisions, but keep in mind that the ratings are based on the information that measure developers chose to submit.

Do you ever advise school psychologists to develop their own progress monitoring tools?

Only if there are no available options. There was a time when all progress monitoring measures were user developed. Today, plenty of tools have undergone a rigorous development process and have evidence of technical adequacy, including, for the early grades, a number of measures that cover the major domains of early reading development. So, by and large, it is advisable to use established tools unless no other options exist.

For example, if a situation calls for monitoring progress in a student's first language and (a) you have no measures in that language, and (b) you have a staff member who is skilled in that language, it may be acceptable to create a set of progress monitoring tools drawing on that staff member's expertise. Another situation is when a student has a specific goal in the Individualized Education Program (IEP) and no available measures would be appropriate for monitoring progress toward that goal. If you must create a progress monitoring tool, start by building a pool of items that reflects an appropriate range of difficulty. Then randomly sample from that pool to create a set of probes so that item difficulty is randomly distributed across the probe set. Use existing measures to guide how you construct the measure and its administration and scoring procedures. Always interpret data from user-constructed measures with caution, and, as with any assessment, consider multiple sources of information (in addition to progress monitoring data) when making educational decisions.
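
As a rough illustration of the probe-construction steps above (not part of the interview), the sketch below builds alternate-form probes by randomly sampling from an item pool. The item pool, function name, and probe sizes are hypothetical; real items would be written by staff with expertise in the target skill or language.

```python
import random

# Hypothetical item pool for a decodable-word fluency measure: each entry is a
# word tagged with a rough difficulty level (1 = easiest). Words and levels are
# illustrative assumptions only.
ITEM_POOL = [
    ("sat", 1), ("map", 1), ("fin", 1), ("cub", 1),
    ("chip", 2), ("shed", 2), ("lock", 2), ("rash", 2),
    ("plant", 3), ("stamp", 3), ("grind", 3), ("swift", 3),
]

def build_probes(item_pool, n_probes=20, items_per_probe=10, seed=1):
    """Create alternate-form probes by randomly sampling from the item pool
    so that item difficulty is spread roughly evenly across the probe set."""
    rng = random.Random(seed)
    probes = []
    for _ in range(n_probes):
        # Sample without replacement within a probe; items may still repeat
        # across probes, which is acceptable for alternate forms.
        probe = rng.sample(item_pool, k=items_per_probe)
        probes.append([word for word, _level in probe])
    return probes

if __name__ == "__main__":
    for i, probe in enumerate(build_probes(ITEM_POOL, n_probes=3), start=1):
        print(f"Probe {i}: {', '.join(probe)}")
```

Random sampling from a single pool is what keeps the alternate forms roughly comparable in difficulty, which is the property the interview emphasizes.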

How often should teams review progress monitoring data and consider instructional adjustments? Does this differ for younger struggling readers when compared to older struggling readers?

Two things matter when determining whether an intervention should continue or an instructional adjustment is needed: (a) the amount of intervention time and the number of sessions that have transpired, and (b) the number of data points you have collected. For younger struggling readers, I usually recommend evaluating progress more frequently than for older readers, because reading growth tends to be more rapid for younger students, so we should expect more growth in shorter spans of time. However, you must still allow enough time for the intervention to have a meaningful effect, and this depends on the nature of the intervention and its frequency. For a student receiving an intervention of at least moderate intensity (i.e., four or more sessions per week), I suggest evaluating progress every 4–6 weeks. With a less frequent, less intensive intervention, allow a little more time to pass before making a decision. Monitor progress at least once per week, and any time you evaluate progress monitoring data to make a decision, make sure you have enough data points to be confident that the student's trend is reliable and interpretable. Compared with the research base for older students, there is little evidence on how many data points are needed to make a decision for younger students, but a preliminary rule of thumb is at least six data points. In short, consider reviewing progress every 4 weeks, provided that (a) you would expect meaningful growth in that time frame and (b) you have at least six new data points at the time of the review.
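
As a minimal sketch (not part of the interview), the rule of thumb at the end of this answer could be expressed as a simple check before a data review. The function name, thresholds, and scores are illustrative assumptions and should be adjusted for less intensive interventions.

```python
def ready_for_review(new_scores, weeks_of_intervention,
                     min_weeks=4, min_points=6):
    """Return True when enough intervention time has elapsed and enough new
    data points exist to judge the student's trend.

    Defaults follow the rule of thumb above (at least 4 weeks of moderately
    intensive intervention and at least six new data points); they are
    assumptions, not fixed standards."""
    return (weeks_of_intervention >= min_weeks
            and len(new_scores) >= min_points)

# Example: weekly word-reading fluency scores collected since the last review
scores = [18, 21, 20, 24, 26, 27]
print(ready_for_review(scores, weeks_of_intervention=5))       # True
print(ready_for_review(scores[:4], weeks_of_intervention=3))   # False
```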

© 2017, National Association of School Psychologists
November 2017, Volume 46, Number 3