Considerations for Academic Assessments and Interventions Upon the Return to School
COVID-19 has caused the closure of nearly all schools in the United States, affecting more than 55 million students. Efforts to continue education for children via remote instruction have been highly variable, ranging from daily contact via the web with the student’s regular teacher(s) to no contact at all. In fact, Blume and Kohli (2020) reported in the Los Angeles Times that one third of high school students in L.A. Unified had not checked in daily online with their teachers since schools closed, and a smaller number (15,000) had never checked in at all.
The onset of the COVID-19 pandemic raised immediate worries about students, including their access to a safe and supervised environment comparable to what they would have in school, access to food programs, access to routine and compensatory special education services for students with disabilities, and the provision of general instruction toward the grade-level objectives necessary for continued success in school.
Schools are working now to determine when and how students and staff may return to school safely. One of the challenges that schools must address is the significant disruption to the learning process. Because students’ experiences during remote learning were highly variable, and because most schools were closed for 8–10 weeks of the typical 36-week school year, schools will need to assume that children have lost about 25% of the prior grade level’s instruction. Compounding the problem of lost instruction will be missing assessment data. Children are routinely screened for important milestones in reading, math, and writing, and they participate in year-end accountability assessments that quantify the degree to which schools are providing instruction sufficient to help most children attain proficiency. Because of the timing of the closures, spring screening and year-end accountability assessment data will not be available.
These converging events—loss of instruction and an absence of data—create a perfect storm for school psychologists, who are responsible for helping schools meet the needs of diverse learners, including identifying students who need special education and determining their eligibility. NASP has developed a series of resources and webinars that offer actionable advice for coping with missing academic data, identifying children who need instructional supports, and using the resulting data to inform referral and eligibility decisions. These are available in the NASP COVID-19 Resource Center at www.nasponline.org/COVID-19. Importantly, many students will return to school with increased social–emotional and mental health needs associated with the crisis, which will complicate school functioning in many ways. It is imperative that schools attend to the mental wellness of students at the school-wide, classroom, and individual levels as intentionally as they attend to academic interventions and supports. Resources regarding students’ mental health are also available in the NASP COVID-19 Resource Center.
New Screening Procedures Will Be Required
Schools—and school psychologists—will be eager to collect fall screening data to make decisions as quickly as possible upon a return to face-to-face learning. However, fall screening must proceed differently than it has in the past.
There will be a higher prevalence of academic risk in nearly all schools. Children will arrive at the next grade level having received only about a 75% dose of the prior year’s academic instruction. Screening procedures must account for this elevated base rate of risk.
The figure below shows the posttest probabilities of academic failure across varying levels of risk. The greater the prevalence of risk (moving right on the x-axis), the less accurate the screening will be for ruling students out as not needing academic intervention, which is the purpose of academic screening. Negative posttest probability is the probability of academic failure given that a student has passed the academic screening. At 50% risk, for example, 10% of students who pass a screening with .90 sensitivity and .90 specificity will nonetheless experience academic failure. As prevalence increases, negative posttest probability climbs. Once negative posttest probability exceeds 10% (VanDerHeyden, 2013), or exceeds your local base rate of risk, which you can estimate from the prior year’s proficiency rates on the year-end test, the screening is not useful for ruling students out as needing more intensive academic intervention than is currently provided in their general education environment. The key message here is that single-point-in-time screenings will not be sufficient for determining academic risk in the fall.
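The arithmetic behind this claim can be sketched in a few lines. The sketch below assumes a screener with .90 sensitivity and .90 specificity, as in the example above; the function name and the particular base rates swept are illustrative choices, not taken from the source.

```python
# Negative posttest probability (NPP): the chance a student who PASSES the
# screening will nonetheless experience academic failure. This is a standard
# Bayes-style calculation; sensitivity/specificity values follow the example.

def negative_posttest_probability(prevalence, sensitivity=0.90, specificity=0.90):
    """P(academic failure | student passed the screening)."""
    false_negatives = prevalence * (1 - sensitivity)   # at risk, but passed
    true_negatives = (1 - prevalence) * specificity    # not at risk, passed
    return false_negatives / (false_negatives + true_negatives)

# As the base rate of risk rises, more of the students who pass the screening
# are actually at risk, so the screening loses its value for ruling students out.
for base_rate in (0.10, 0.30, 0.50, 0.70):
    npp = negative_posttest_probability(base_rate)
    print(f"base rate {base_rate:.0%}: NPP = {npp:.1%}")
```

At a 50% base rate this reproduces the 10% figure cited above, and it also shows why lowering the base rate (for example, through class-wide intervention, discussed next) restores the screening's usefulness for ruling students out.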
Use Class-Wide Intervention to Improve Decision Accuracy and Provide Learning Gains for Students
How can the school psychologist proceed in an environment in which academic screenings will not be useful to determine who is really at risk? Introduce instructional trials as rapidly as possible and measure students’ learning gains as the second screening gate. Class-wide intervention (e.g., PALS, class-wide peer tutoring, PRESS center reading, Spring Math class-wide intervention) lowers the base rate of risk to allow for academic screenings to function more accurately.
In a recent study, decision accuracy was examined for fall screening, winter screening, and response to class-wide intervention, using performance above the 20th percentile on the year-end mathematics test as the gold standard, for students in kindergarten and grades 1, 3, 5, and 7. Negative posttest probabilities were stronger (lower) when response to class-wide intervention was used as the screening criterion (VanDerHeyden, Broussard, & Burns, 2019).
Here is another way to view the effect of class-wide intervention as a screening gate. In this class, at the beginning of intervention, the score range is highly restricted, which makes it technically difficult, if not impossible, to distinguish which children are truly at risk. Introducing a daily 15-minute class-wide intervention widens the score range over weeks of intervention and makes apparent which students really require intensified instruction or a comprehensive eligibility evaluation.
The figures below, reprinted from VanDerHeyden (2013), show that the same screening is not useful before intervention, owing to a high base rate of risk, but becomes very useful following class-wide intervention for ruling students out as needing academic intervention.
Accuracy of the Mathematics Screener for Students Who Receive a Free or Reduced-Price Lunch
Illustration of the Use of Intervention to Reduce Overall Risk and Permit More Accurate Screening Decisions
Note. From “Universal Screening May Not Be for Everyone: Using a Threshold Model as a Smarter Way to Determine Risk,” by A. M. VanDerHeyden, 2013, School Psychology Review, 42, p. 410. (https://doi.org/10.1080/02796015.2013.12087462). Copyright 2013 by the National Association of School Psychologists. Reprinted with permission.
Relying on a Period of Waiting for General Education to Improve Base Rates Is Inefficient and Unlikely to Work
There will likely be a sense of urgency around completing pending evaluations, and perhaps even new evaluations. All evaluation teams are required to determine whether a student’s academic concerns result from a lack of instruction when considering specific learning disability (SLD) identification, regardless of the approach to eligibility determination that is used. Assessing the quality of instruction provided during the COVID-19 school closures is fraught with problems. Whether instruction at home was delivered by caregivers or through an internet connection with teachers, decision teams cannot presume that the quality of core instruction replicated what would have happened in school. Except in unusual cases, the quality of instruction likely cannot be ruled sufficient.
Instruction as a cause (the most likely cause) of poor performance can be ruled out only by delivering a dose of instruction and measuring the child’s response directly. There is no substitute for that step, and even if you choose a method other than response to intervention (RTI) to satisfy criteria 1 and 2, you must still satisfy criterion 4 to determine eligibility for SLD.
School psychologists may be tempted to institute waiting periods before recommending Tier 2 or 3 interventions as a means of avoiding overpopulating those intervention groups and depleting resources. Waiting periods have not been shown to lower risk over time. At best, waiting is a tactic whose results will be highly variable (i.e., dependent on the quality of core instruction and on teacher-initiated supplementation of core instruction); at worst, it is simply less efficient.
School psychologists should not enter a hands-off waiting period with schools. Rather, school psychologists should return to school equipped to help teachers boost their core instruction, given that children will likely be arriving with skill gaps. School psychologists can support teachers in delivering class-wide intervention and small groups to provide acquisition instruction for missing prerequisite skills and fluency-building intervention for skills that are foundational for subsequent learning at each grade level.
Decision teams can use the resulting performance data of students to determine who really needs a diagnostic assessment, individualized instruction, and potentially an eligibility evaluation. Controlling the dose of instruction allows this identification to occur in a more rapid and nimble fashion than would be possible otherwise. It is possible to make a decision about the need for more intensive academic intervention following only 4 weeks of well-implemented class-wide intervention.
Delivering High-Quality Class-Wide Intervention Requires Focus on Implementation
A recent survey study by Silva et al. (2020) examines actions taken in the name of multitiered systems of support (MTSS) and RTI. The survey replicates the findings of an earlier study (Burns, Peters, & Noell, 2008), showing that very particular barriers continue to interfere with the capacity of school psychologists to help schools use MTSS to improve achievement. School psychologists encounter the same barriers now as we did in 2008: we struggle to interpret the data we collect, to get interventions underway effectively, and to use implementation science to ensure high-quality implementation of academic interventions. In the Silva et al. (2020) study, only 7% of respondents reported examining intervention integrity when an intervention was not working as planned.
In a context of elevated base rates of academic risk, we must do better. When children return to school, hopefully this fall, there will be an opportunity for school psychologists to be highly useful instructional allies to teachers. We can use our rapport and trust with teachers to connect, support, and empower them to do what works. Implementing class-wide academic intervention will produce achievement gains for students and, as a wonderful side effect, will give us the best data upon which to base referral and eligibility decisions.
This series of resources and webinars will equip you to move forward with the right actions to screen, implement class-wide interventions in reading, writing, and math, and to use the resulting data for referral and eligibility decision making regarding SLD.
Blume, H., & Kohli, S. (2020, March 30). 15,000 L.A. high school students are AWOL online, 40,000 fail to check in daily amid coronavirus closures. Los Angeles Times. https://www.latimes.com/california/story/2020-03-30/coronavirus-los-angeles-schools-15000-high-school-students-absent
Burns, M. K., Peters, R., & Noell, G. H. (2008). Using performance feedback to enhance implementation fidelity of the problem-solving team process. Journal of School Psychology, 46, 537–550. https://doi.org/10.1016/j.jsp.2008.04.001
Silva, M. R., Collier-Meek, M. A., Codding, R. S., Kleinert, W. L., & Feinberg, A. (2020). Data collection and analysis in response-to-intervention: A survey of school psychologists. Contemporary School Psychology. Advance online publication. https://doi.org/10.1007/s40688-020-00280-2
VanDerHeyden, A. M. (2013). Universal screening may not be for everyone: Using a threshold model as a smarter way to determine risk. School Psychology Review, 42, 402–414. https://doi.org/10.1080/02796015.2013.12087462
VanDerHeyden, A. M., Broussard, C., & Burns, M. K. (2019). Classification agreement for gated screening in mathematics: Subskill mastery measurement and classwide intervention. Assessment for Effective Intervention. Advance online publication. https://doi.org/10.1177/1534508419882484
Contributor: Amanda VanDerHeyden
Please cite as:
National Association of School Psychologists. (2020). Considerations for academic assessments and interventions upon a return to school [handout]. Author.
© 2020, National Association of School Psychologists, 4340 East West Highway, Suite 402, Bethesda, MD 20814, 301-657-0270, www.nasponline.org