Communiqué

Research-Based Practice

Effect of Cognitive Processing Assessments and Interventions on Academic Outcomes: Can 200 Studies Be Wrong?

By Matthew K. Burns

Volume 44, Issue 5


The goal of school psychology is to improve outcomes for students and to enhance the capacity of schools to meet the needs of students (Ysseldyke et al., 2006), but how best to accomplish these goals is a matter of considerable debate. In the dawn of the field, school psychologists debated whether it was preferable to understand human behavior by discovering laws derived from groups or to investigate the unique history and properties of the individual person (Fagan & Wise, 2007). The debate evolved into Cronbach's (1957) delineation of correlational versus experimental scientific psychology, in which the former relied on relating individual differences to performance and the latter on carefully finding interventions that changed performance (Ysseldyke & Reschly, 2014). The correlational versus experimental debate has played out in research, training, and practice, and has renewed the conversation about what types of data are most useful to the intervention process (Batsche, Kavale, & Kovaleski, 2006).

The current national implementation of response-to-intervention frameworks has intensified the debate regarding underlying causes of student deficits and how to best assess and intervene for them. Several scholars have advocated for using measures of cognitive processing to analyze academic difficulties and design individualized interventions (e.g., Feifer, 2008; Fiorello, Hale, & Snyder, 2006; Floyd, Evans, & McGrew, 2003; Hale, Fiorello, Bertin, & Sherman, 2003; Hale, Fiorello, Kavanagh, Hoeppner, & Gaither, 2001). Feifer (2008) proposed using measures of underlying cognitive abilities for the purpose of selecting interventions and recommended several contemporary tests of intelligence, memory, and executive functioning to do so. There are also multiple resources available to school psychologists that describe interventions based on remediating underlying cognitive deficits. For example, there are books that list general reading interventions based on neuropsychology (Feifer & De Fina, 2007) and interventions for specific cognitive processes such as working memory (Dehn, 2008). Moreover, there were five mini-skills and documented sessions at the 2015 National Association of School Psychologists annual convention that provided free guidance on using data from cognitive measures to remediate reading difficulties, and multiple paid workshops at both the national and summer conferences with similar foci.

Cronbach and Snow (1977) famously concluded that they were unable to find a link between underlying cognitive processes and academic outcomes despite researching the topic for 20 years. Many considered that seminal report to be the end of the aptitude-by-treatment interaction approach to intervention. So, what has changed in the nearly 40 years since that now makes the use of cognitive measures for intervention more acceptable? Swanson (1987) suggested that the cognitive measures used in Cronbach's research were inferior to those used by more modern researchers. For example, the studies included in the 1977 report used the Illinois Test of Psycholinguistic Abilities (Kirk, McCarthy, & Kirk, 1967), which has long been discredited because of poor psychometric properties. Others have suggested that recent advances in theories of intelligence may result in different findings than in previous research (Vanderwood, McGrew, Flanagan, & Keith, 2002). However, the recommendations by Feifer (2008) were not accompanied by research data to evaluate them, and many of the recommendations were based on correlational evidence (Hale et al., 2003; Floyd et al., 2003; Vanderwood et al., 2002) or case studies (Fiorello et al., 2006).

Practitioners should be cautious about interpreting single studies because research has to build across a series of studies with consistent results in order to shape practice or policy. Kavale and Forness (2000) recommended that meta-analytic research be used in conversations about practice and policy because meta-analytic research can show “a pattern of research findings across a landscape of different circumstances” and provide “policy makers with an explicit and objective rendering about what the research says” (p. 318). Meta-analytic data are commonly used to create sources of evidence-based practices in human service fields, most notably medicine (Cochrane, 2014) and education (Hattie, 2009). Therefore, the role of cognitive measures in the intervention process should be examined with meta-analytic research that combines multiple studies into one estimate of effect.

Meta-analysis was proposed by Gene Glass (1976) as a way to synthesize a research literature to better understand its findings. He proposed the use of standardized mean differences, in which the results of a study are reported in standard deviation units that represent the difference between the treatment and control groups. Cohen (1988) proposed the now famous d statistic, which is the difference between the two group means (control and experimental) divided by the pooled standard deviation, and indicated that a d of 0.20 was a small effect, 0.50 was a moderate effect, and 0.80 was a large effect. Other metrics are also used, such as r and r², but all approaches can be converted to one another for common comparisons.
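For readers who want the arithmetic behind those metrics, the conventional formulas can be sketched as follows (the notation here is mine and is offered only as a reminder of the standard definitions, not as a reproduction of anything in the cited meta-analyses):

d = \frac{M_{\text{treatment}} - M_{\text{control}}}{SD_{\text{pooled}}}, \qquad d = \frac{2r}{\sqrt{1 - r^{2}}}, \qquad r = \frac{d}{\sqrt{d^{2} + 4}}

The second and third expressions are the standard conversions between a correlation (r) and a standardized mean difference (d), which is how results reported in different metrics can be placed on a common scale for comparison.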

If cognitive measures are useful to intervention planning, then experimental research should be able to demonstrate that cognitively focused interventions generate academic performance gains better than standard instructional practices that can be used in the absence of cognitive processing data (e.g., increasing corrective feedback, improving teacher clarity). Fortunately, there have been several recent meta-analyses regarding the role of cognitive measures in informing academic interventions. Burns et al. (in press) examined the effectiveness of using neuropsychological data to derive academic interventions; Kearns and Fuchs (2013) studied the effects of cognitively focused interventions on reading and math; Melby-Lervag and Hulme (2013) and Schwaighofer, Fischer, and Buhner (2015) studied the effects of working memory interventions on reading and mathematics; Scholin and Burns (2012) and Stuebing, Barth, Molfese, Weiss, and Fletcher (2009) correlated IQ with academic outcomes; and Stuebing et al. (2015) compared data from different cognitive measures to student response to intervention. The purpose of the current article is to review the effects of these seven meta-analyses in order to synthesize the research literature on using cognitive assessments and interventions to improve academic outcomes for students.

The Data

The results of the seven meta-analyses are listed in Table 1. Some of the meta-analyses used standardized mean differences such as d and some used correlations such as r. All of the effect sizes were converted to a standardized mean difference (d) in order to compare and combine the results across studies. There were 203 studies included in the seven meta-analyses. The effect sizes ranged from 0.07 to 0.58, with an overall average effect size of 0.26, which suggested a small effect. Three of the meta-analyses (Kearns & Fuchs, 2013; Melby-Lervag & Hulme, 2013; Schwaighofer et al., 2015) examined the effects of cognitively focused interventions on reading and mathematics, with two of them focusing on working memory training. Kearns and Fuchs (2013) found moderate effects for cognitively focused interventions (e.g., long-term memory, planning, processing speed, working memory, visual-spatial processing) when compared to no intervention at all, but only a small effect (0.26) when compared to academic interventions. Both meta-analyses of working memory training found consistently negligible effects (0.07 and 0.09 for mathematics, 0.13 and 0.15 for reading decoding, and 0.13 and 0.21 for reading comprehension). These 55 studies consistently showed that working memory training has little to no effect on reading or mathematics performance.

Table 1 Summary of Meta-Analyses Regarding Cognitive Processes and Academic Interventions

Study | Description | k | d
Burns et al. (in press) | Academic interventions from cognitive processing measures | 37 | 0.17
Kearns & Fuchs (2013)* | Compared to academic interventions | 34 | 0.26
Melby-Lervag & Hulme (2013) | Verbal ability (comprehension) | 8 | 0.13
Scholin & Burns (2012) | Predicting response to intervention for reading with IQ | 18 | 0.27
Stuebing et al. (2009) | Relationship between IQ and academic outcomes | 22 | 0.32
Stuebing et al. (2015) | Baseline characteristics and posttest | 54 | 0.30
Schwaighofer et al. (2015) | Verbal ability (comprehension) | 29 | 0.21
Total | | 203 | 0.27

* One effect was identified as an outlier by Kearns and Fuchs (2013) and was removed.

The remaining four meta-analyses examined the effects of using cognitive measures in the intervention process. Burns et al. (in press) studied the effects of determining reading and mathematics interventions from diagnostic measures of cognitive processing, which resulted in a small effect (0.17) from 37 studies. Three of the meta-analyses examined the relationship between student response to intervention and preintervention cognitive measures such as IQ (Scholin & Burns, 2012; Stuebing et al., 2009; Stuebing et al., 2015). The relationship between cognitive measures and student response to intervention in reading and mathematics resulted in an average effect size of 0.35 across 94 individual studies, which equals an r of .17 and suggests a negligible to weak relationship.
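As a rough check on that conversion, applying the standard d-to-r formula noted earlier:

r = \frac{0.35}{\sqrt{0.35^{2} + 4}} \approx \frac{0.35}{2.03} \approx .17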

Some reading this article might wonder about executive functioning. Although executive functioning was addressed in some of the meta-analyses included in this review, none of the studies examined the construct differentially from other cognitive constructs. Jacob and Parkinson (in press) reviewed 67 studies and concluded that (a) most of the research occurred in 2010 or later, (b) there was a correlation between executive functioning and academic skills, (c) the correlation with executive functioning was approximately equal for reading and mathematics, and (d) changing skills in executive functioning through various interventions did not lead to increased skills in reading and mathematics. The authors concluded that there was little to no evidence that executive functioning and academic skills were causally linked. The study is not included in the data for the current review because the article is not yet available; readers are encouraged to watch for its release.

Implications

Over 200 studies synthesized in seven meta-analyses found a negligible to small effect for cognitive assessments and interventions on reading and mathematics performance. There are a few clear implications of these data. First, working memory interventions do nothing to improve reading or mathematics skills. Although books and conference sessions profess that they do, there is not convincing evidence to support using working memory interventions for reading or mathematics. A second clear implication is that IQ and other measures of cognitive functioning might correlate well with reading and mathematics achievement, but they correlate poorly with student response to intervention. In other words, IQ tells us how much a student knows within the range of developmentally typical intelligence (i.e., the child does not have a cognitive disability), but IQ tells us very little about how well a student will respond to intervention. Practitioners should not use IQ to triage student need or to determine which interventions are appropriate for individual students.


What is the harm in using cognitive measures to inform intervention? The data from these 200 studies suggest that examining cognitive processing data does not improve intervention effectiveness, and doing so could distract attention from more effective interventions. Unfortunately, educators often have a fascination with that which is new, so much so that “brand-new mediocrity is thought more of than accustomed excellence” (Ellis, 2001, p. xi). School psychologists may turn from effective interventions such as explicit instruction (d = 0.84; Kavale & Forness, 2000), teaching reading comprehension strategies (d = 1.13; Kavale & Forness, 2000), and repeated reading (d = 0.83 for fluency and d = 0.67 for comprehension; Therrien, 2004) in favor of interventions that have strong intuitive appeal but little to no research evidence. School psychologists should help school personnel stay focused on what we know works.

What Should We Do Instead? Skill-By-Treatment Interaction

Although the average effects were small, that is not to say that all of the studies found small effects, only that the vast majority of them did. For example, Kearns and Fuchs (2013) found some moderate effects for cognitive interventions (e.g., training to improve processing speed or visual–spatial processing) when compared to no intervention at all, but the effects were much smaller when compared to reading and mathematics interventions. School psychologists should skeptically review individual studies about interventions because no study is perfect and systematic replication lends credibility to measured effects. Research syntheses of multiple studies offer practitioners a stronger basis for choosing which strategies are likely to work.

There is an entire literature about effective mathematics and reading interventions. Consider the large effects found for the interventions listed above, such as explicit instruction, teaching reading comprehension strategies, and repeated reading, and for other interventions such as incremental rehearsal (d = 1.71; Burns, Zaslofsky, Kanive, & Parker, 2012), vocabulary instruction with adolescent students (d = 1.62; Scammacca et al., 2007), strategy instruction for writing (d = 1.02; Graham, McKeown, Kiuhara, & Harris, 2012), and explicit instruction in solving mathematical word problems (d = 1.85; Zhang & Xin, 2012). The effect sizes for remediating academic deficits by intervening directly on the reading or mathematics deficit dwarf the effects of remediating assumed underlying cognitive deficits. It seems that what has always been true remains true still: The best way to teach a child how to read is to teach her or him how to read.

The low correlation between student response to intervention and cognitive measures can also be compared to correlations between student response data and direct measures of the skill. For example, Scholin and Burns (2012) found a low correlation between student response to intervention and IQ (r = -.11, which converts to d = -0.22), but the relationships between reading growth and baseline measures of reading fluency (r = .37, d = 0.80) and word attack (r = .36, d = 0.77) were much stronger. Swanson, Trainin, Necoechea, and Hammill (2003) found in their review of 35 studies that IQ correlated with real-word reading at about r = .35 (which converts to d = 0.75) and memory correlated with real-word reading at r = .31 (d = 0.65), but pseudoword reading, as a measure of word attack (r = .61, d = 1.54), spelling (r = .70, d = 1.96), and reading comprehension (r = .64, d = 1.67) correlated with real-word reading at much higher levels. Stuebing et al. (2015) found that rapid automatic naming (r = .34, d = 0.72) and working memory (r = .34, d = 0.72) were moderately correlated with change in reading scores, but baseline measures of reading comprehension (r = .60, d = 1.50) were much better predictors, and the authors questioned the unique variance that the cognitive measures provided beyond phonological awareness. Finally, the Burns et al. (in press) meta-analysis found a small effect (d = 0.17) for deriving interventions from cognitive measures, but the effects for measures of reading fluency (d = 0.43) and phonological awareness (d = 0.50) were moderate. It seems that, much like the intervention perspective, the best way to assess whether a student can read is to have her or him read.
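Each of the d values above follows from the same standard r-to-d conversion noted earlier; as one worked example, the reading fluency correlation from Scholin and Burns (2012) converts as

d = \frac{2(.37)}{\sqrt{1 - .37^{2}}} \approx \frac{.74}{.93} \approx 0.80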

Direct measures of the skill can provide useful information for designing interventions for reading and mathematics (Burns, Codding, Boice, & Lukito, 2010; Burns et al., in press), but cognitive measures seem to provide little information that is useful to the intervention process. Thus, school psychologists should consider a skill-by-treatment interaction, rather than an aptitude-by-treatment interaction, in which baseline measures of the skill are used to drive intervention. For example, a school psychologist could tell from a sample of oral reading the fluency and accuracy with which a student reads, which could suggest the need for repeated reading or decoding interventions. A sample of mathematics problems can suggest which skills interventionists should target.

Conclusion

This is not the first review to conclude that direct measures of the relevant skills provide more useful data for intervention design than do measures of cognitive processing, and that reading and mathematics interventions lead to stronger effects in reading and mathematics than do interventions that address underlying cognitive functions. Kavale and Forness (2000) compared interventions that fit into a category that they called SPECIAL education to those that they called special EDUCATION. The former focused on methods that were unique from general education (e.g., psycholinguistic training, modality instruction, perceptual–motor training) and resulted in a small mean effect size of d = 0.25. The latter included methods that adapted or modified instructional methods (e.g., reading comprehension instruction, behavior modification, direct instruction, peer tutoring, and early intervention) and resulted in a large mean effect of d = 0.91.

Kavale and Forness (2000) began their meta-analytic research in the era of learning styles. Previous efforts to identify interventions based on individual student differences were based largely on presumed preferred styles of learning, such as being a visual learner, an auditory learner, or a kinesthetic learner (Dunn & Dunn, 1978). Research has clearly shown that the learning-style heuristic to intervention is not effective, so much so that Kavale and Forness (2000) indicated that practitioners who still espouse its value are demonstrating examples of clinical beliefs overshadowing research data. Assessing students’ underlying cognitive abilities to determine appropriate academic interventions has likewise been clearly shown by multiple meta-analyses to be ineffective. Researchers will likely continue searching for an effective aptitude-by-treatment interaction because many have been trained in that tradition, it makes intuitive sense, and perhaps the research will help us better understand human learning even if it does not lead to practical applications. However, continued training or practice in which cognitive assessments and cognitively focused interventions are used to remediate reading and mathematics difficulties seems to be another example of clinical beliefs overshadowing research data.

School psychologists trained in an aptitude-by-treatment interaction tradition will hopefully see this article as a reason to reflect on their practice and to consider ways to more effectively support the children they serve. It is likely that many who read this article will passionately support the findings described above, that an equal number will passionately search for ways to discredit or dismiss the data, and that nothing written above will persuade either group. School psychologists should be the resident scientists in their schools, which involves consuming research, synthesizing research, and conducting applied research (Keith, 2008). Hopefully those who read this article will read the meta-analyses cited here, examine the studies included in the meta-analyses, and collect data to ensure that their practices are effective. Doing so will lead to improved outcomes for students and enhanced capacity of schools to meet the needs of students, and it will also ensure that their practices are scientific in nature and not the result of clinical beliefs overshadowing research data.


Matthew K. Burns, PhD, is the Associate Dean for Research with the College of Education and a Professor of School Psychology at the University of Missouri. He is also a contributing editor for Communiqué.