NASP Communiqué, Vol. 35, #1
Reviews/Evaluating Intervention Outcomes
Single-Subject Research in the Practice of School Psychology
By Michelle Marchant, Tyler Renshaw & Ellie Young, NCSP
Associate Ed. note: In
this issue (and occasionally in future issues), the Research Committee
deviates from its typical two-column format of research reviews and outcome
evaluation protocols to focus on a specific research paradigm.—Steve
The ratio of school psychologists to students
is continually improving, but far from ideal (Fagan & Wise, 2000).
Recent estimates range from a high of 1:1816 (Thomas, 1999) to a low of
1:1500 (Bramlett, Murphy, Wallingsford, & Hall, 2002). National surveys
have indicated that current practitioners spend 50% of their time on assessment,
with only 20% to 35% of their time immersed in intervention, consultation,
and problem-solving processes with student problem behavior (Bramlett et
al., 2002; Reschly & Wilson, 1995; Stinnett, Havey, & Oehler-Stinnett,
1994). The secondary beneficiaries of school psychologists’ services (i.e.,
parents and school personnel, for whom direct time and attention would be
most helpful) are difficult to support due to these time constraints. With new
policy mandates, school psychologists’ priorities are shifting toward
positive learning and behavioral outcomes rather than lengthy assessment.
Teachers often seek a school psychologist’s expertise regarding
behavioral issues within the classroom: How can I improve the on-task behavior
of my students? What can be done to help a student who cannot or will not
follow classroom routines? Is it possible to motivate a student to work
independently? Parents, on the other hand, are likely to approach school
psychologists with a greater variety of behavioral concerns: What can I
do to help my son play less aggressively with his siblings? Why does my
daughter resist coming to school? How do I stop my child from engaging
in self-injurious behavior? Confronted with these questions, school psychologists
problem-solve by answering the following questions: How can this problem
be solved most efficiently? What intervention will best meet the needs
of the child and the adults involved with the child? How can I ensure the
change in the student’s behavior will be significant and lasting? Considering
practitioners’ previously mentioned time limitations, single-subject research
designs can be helpful in identifying strategies to aid teams in finding
and monitoring effective solutions.
Linking Needs With Research-Based Practices
The use of empirical research to design interventions, while
not a new emphasis in the field of school psychology, is becoming prominent
in the educational literature. In response to the federal No Child Left
Behind Act of 2001, the United States Department of Education Institute
of Education Sciences designed a user-friendly guide to assist educators
in identifying and implementing educational practices supported by research (U.S.
Department of Education, 2003). The intent of this guide is to reduce the common practice
of using strategies that are merely popular among practitioners or other
influential individuals within the educational arena, by providing tools
to those who need “to distinguish intervention supported by scientifically-rigorous
evidence from those which are not” (U.S. Department of Education, 2003, p. iii).
This effort encourages practitioners to improve the services they provide
to students by promoting carefully selected interventions that are supported
by rigorous evidence.
Various research methodologies help school psychologists
become “practitioner-researchers” who have acquired the skills to systematically
evaluate their own practice and share their results with others. One such
methodology is single-subject research, whose methods and designs are integrally
connected to behavior-analytic theory (Tawney & Gast, 1984). What follows
is a definition of single-subject research, along with its corresponding
characteristics, techniques, and advantages, as well as techniques to effectively
implement these principles within an educational setting.
Single-subject research emerged as a necessary product of Applied Behavior Analysis (ABA),
a discipline aimed at understanding and improving maladaptive human behavior.
Unlike other methodologies with similar intent, ABA accomplishes this by
targeting only observable behaviors of social significance (Cooper, Heron, & Heward,
1987). ABA does not seek a complete analysis of a
behavior—accounting for all possible contributions of a behavior’s cause—but
rather an applied analysis. As referred to in the discipline’s name, applied connotes
using isolated variables to obtain practical and meaningful change in behavior
(Bailey & Burch, 2002); analysis implies that these variables’
effects on behavior are reliable and replicable (Cooper et al., 1987).
In short, ABA requires more than an isolated incident
of demonstrated causation; it demands establishment of a functional (or
predictable) relationship between variable(s) and behavior. This relationship
is achieved only through replication in which one variable (or a package
of variables) is the only element (or elements) varying within experimental
conditions (Tawney & Gast, 1984). Thus, single-subject designs arose
as behavior analysts organized procedures to ensure verification of meaningful
behavior change that contributes to academic and social success.
The following synopsis of the field of applied
behavior analysis and its integration with research nicely summarizes
the ideals of ABA and single-subject research:
“The field of applied behavior analysis stresses the study of socially important
behavior that can be readily observed, and it uses research designs that
demonstrate functional control usually at the level of the individual
performer. The procedures developed by this field must be replicable,
and the extent of the resulting behavior change must have important practical
significance for the social community” (Bailey & Burch, 2002, p.
Characteristics and Techniques of Single-Subject Research
As presented above, the overarching purpose
of applied behavior analysis or single-subject research is to improve the
behavior of individuals (Cooper et al., 1987). To be successful, the practitioner-researcher
must adhere to designated characteristics associated with single-subject
designs. These characteristics include repeated measures, baseline performance,
intervention measures, and experimental control, each of which is described
in more detail below, along with its corresponding techniques.
Repeated measures. The first step in the research process is identifying the
problem behavior and its magnitude. This is done by collecting data daily,
weekly, or even more frequently, and then placing the data on a graph where
analysis can be easily done for all conditions including baseline and treatment. To
obtain valid data, the target behavior (dependent variable) must be defined
in operational terms and then measured with sensitive and reliable methods
(Alberto & Troutman, 2006; Tawney & Gast, 1984). In an example
of single-subject research, Marchant and Young (2001) targeted and measured
the compliant behavior of preschool-age children. A direct observation
system was used to measure the operationalized definition of compliance
that consisted of the child looking, saying “OK,” beginning the task within
five seconds, and checking back with the person who gave the instruction.
Specifically, event recording, making a notation to indicate if the behavior
did or did not occur, was the data collection system of choice because
the objective of this particular study was to increase a discrete behavior.
As the observers viewed the parent-child interaction, they
attended to each step of compliance separately rather than as a collective
behavior. To capture the behavior in detail, compliance was broken into
the four steps mentioned above and data were collected separately for each
step. If the child “looked” at the parent accurately, based on the designated
definition, a “+” was placed in a box on the data collection form. The
observers marked a “–” if eye contact did not occur or did not match the
definition. This same process transpired for the remaining three steps
of compliance. This provides an example of how single-subject research
describes behavior in discrete, finely grained components.
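As a concrete illustration of this kind of event recording, the tally logic described above can be sketched in a few lines of Python. The step names and session marks below are hypothetical, not data from the Marchant and Young (2001) study.

```python
# Hypothetical event-recording sketch for the four compliance steps
# (looking, saying "OK", starting within 5 s, checking back).
STEPS = ["looks", "says_ok", "starts_within_5s", "checks_back"]

def percent_compliance(session):
    """session maps each step to a list of '+'/'-' marks, one per instruction."""
    totals = {}
    for step in STEPS:
        marks = session[step]
        totals[step] = 100.0 * marks.count("+") / len(marks)
    return totals

# One invented observation session with four instructions.
session = {
    "looks":            ["+", "+", "-", "+"],
    "says_ok":          ["+", "-", "-", "+"],
    "starts_within_5s": ["+", "+", "-", "-"],
    "checks_back":      ["-", "-", "-", "+"],
}
print(percent_compliance(session))
```

Scoring each step separately, as the observers did, preserves detail that a single pass/fail compliance score would hide.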
For this study, data were, on average,
collected three times per week by independent observers in order to acquire
the necessary repeated measures on the participants’ behavior. The observations
occurred in the home when the parents were giving the child instructions
to complete household chores. Typically, these instructions were dispensed
at the same time for each session during late afternoon or evening. These
data collection efforts supplied the researchers with sufficient repeated
measures that could then be graphed and analyzed often in order to make
data-based decisions. Ultimately, it is the repeated measures that contribute
to making the study analytic.
Data were analyzed by first graphing the
repeated measures immediately following the data collection session so
that the data path was regularly updated. Next, it was critical that the
practitioner-researcher consistently looked at the visual picture of the
trend, level, and variability of the data and interpreted the updated outcomes
to make informed decisions for the next research step. More information
about these evaluation procedures will be discussed in the following section
on baseline performance.
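The trend, level, and variability mentioned above are usually judged visually, but the same quantities can be summarized numerically. The sketch below is a minimal, hypothetical illustration in Python; the sample data are invented.

```python
# Minimal summaries used in visual analysis of a graphed data path:
# level (mean), trend (least-squares slope), and variability (range).

def level(data):
    return sum(data) / len(data)

def trend(data):
    # Least-squares slope of responses against session number.
    n = len(data)
    mx = (n - 1) / 2
    my = level(data)
    num = sum((x - mx) * (y - my) for x, y in enumerate(data))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def variability(data):
    return max(data) - min(data)

baseline = [5, 8, 11, 15, 20]  # invented counts over five sessions
print(level(baseline), trend(baseline), variability(baseline))
```

A clearly ascending slope with a wide range, as here, would signal an unstable baseline for a behavior targeted for reduction.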
Baseline performance. Baseline is the phase
of the single-subject research design where the intervention is absent.
Data are repeatedly collected and then graphed on the student’s target
behavior in the pre-intervention conditions. This provides a visual description
of the child’s behavior before application of the treatment (independent
variable) begins. A key function of the baseline condition is for researchers
to obtain the necessary repeated measures that allow for an analysis between
the baseline and treatment conditions. Practitioners can use the single-subject
research design methodology to validate interventions by graphing a child’s
out-of-seat behavior, physical aggression, or sensory stimulation.
Prediction. Another function of baseline data collection
is prediction. “Prediction may be defined as the anticipated outcome of
a presently unknown or future measurement” (Johnston & Pennypacker,
1980, p. 120). Alberto and Troutman (2006) compared the baseline to a
pretest because it provides the basis from which the practitioner-researcher
can project the effect of the planned intervention. For example, if a child
has a high rate of physical aggression on the playground during the baseline
phase, and the pattern is consistent, it is reasonable to predict that
the aggression will continue if a suitable intervention is not put in place.
Therefore, without an intervention, the practitioner-researcher can predict
that the visual display of aggression will be highly similar from baseline into the intervention phase.
In order to effectively project into the intervention phase,
it is necessary for baseline data to be stable. Data that fall within a
narrow range of values are considered stable, whereas, those that fall
outside of this range are considered to show some degree of variability.
High rates of variability indicate that the practitioner-researcher should
investigate the possibility of confounding variables (uncontrolled environmental
events or conditions, such as a change in the recess schedule), a poorly
defined behavior, and/or complications with measurement procedures as possible
sources of the instability (Alberto & Troutman, 2006; Cooper
et al., 1987).
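One way to make a stability judgment explicit is an envelope rule. The sketch below assumes a common heuristic (at least 80% of baseline points falling within 20% of the baseline median); the specific thresholds are illustrative assumptions, not rules drawn from the texts cited above.

```python
# Hypothetical stability check: a baseline is treated as "stable" when
# at least `pct_points` of the points fall within +/-`band` of the median.

def is_stable(data, pct_points=0.80, band=0.20):
    ordered = sorted(data)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    lo, hi = median * (1 - band), median * (1 + band)
    within = sum(lo <= x <= hi for x in data)
    return within / n >= pct_points

print(is_stable([10, 11, 9, 10, 12]))  # narrow range of values
print(is_stable([2, 14, 5, 20, 9]))    # highly variable data
```

A failed check would prompt the practitioner-researcher to look for confounding variables, a poorly defined behavior, or measurement problems before introducing the intervention.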
Trend indicates the direction of the behavior and is another
critical factor associated with stability of the performance of the behavior.
Consider the following scenario that offers an example of the implications
of trend in single-subject research: A student’s talk-outs increased in
his math class. Specifically, the school psychologist and teacher captured
data indicating that over five days the trend of the talk-outs was steadily
ascending beginning at five per class for the first data point of baseline
and eventually arriving at 20 for the final data point of baseline. In
this scenario, the behavioral trend is moving in the appropriate direction
for a baseline condition because the school psychologist and teacher desire
to reduce the talk-outs during the treatment condition. If the talk-outs
were to reverse into a descending trend, that would suggest the baseline
is no longer stable and the intervention should not be applied until the
trend is reversed. If the intervention is implemented while the trend is
descending, it will be difficult to determine if the change in behavior
is a function of intervention or other unidentified factors. Cooper et
al. (1987) offered the following about a stable baseline: “Stable responding…enables
the behavior analyst to employ a powerful form of inductive reasoning sometimes
called baseline logic. Baseline logic…entails three elements: prediction,
verification, and replication” (p. 154). To this point, we have discussed
only one element, prediction. The second, verification, will be presented
in the following section with the single-subject design characteristic of intervention measures.
Intervention Measures—Experimental Control
Verification involves successfully
demonstrating that when the independent variable is imposed on the dependent
variable, the desired effect has occurred. For example, Marchant, Lindberg,
Young, Fisher, and Solano (2004) demonstrated that a treatment package
consisting of playground rules, supervision, positive reinforcement, and
self-management reduced the playground aggression of three students. In
this study, the desired outcome or reduction in playground aggression was
observed, which was the first step toward verifying the prediction that
the independent variable or intervention package produced a desirable outcome
(Cooper et al., 1987). However, this temporary confirmation is purely the
first step. Additional efforts must be made to solidify this assumption;
otherwise, one could contend that a confounding variable was the source
of changing the targeted behavior. Methods for solidifying this assumption
are presented in the next section.
Replication establishes experimental
control, an important component of single-subject research because it supports
the baseline logic critical to demonstrating a functional relationship
between the independent and dependent variables (Cooper et al., 1987).
This strengthens the argument that the treatment is the variable most likely
producing the desired change in the behavior. As previously discussed,
replication is demonstrated by repeated measures within one condition,
such as baseline or intervention. “Replication demonstrates the reliability
and generality of data. It reduces the scientist’s margin of error and
increases confidence that findings that withstand repeated test are real,
not accidental” (Tawney & Gast, 1984, pp. 95-96).
Experimental control, or replication, is best demonstrated
using rigorous research designs that facilitate control over possible confounding
variables (Alberto & Troutman, 2006). Options for rigorous, single-subject
research designs include reversal, changing criterion, multiple baseline,
and alternating treatment (Bailey & Burch, 2002; Cooper et al., 1987;
Tawney & Gast, 1984). These designs provide a systematic structure
for collecting and analyzing data so that the practitioner-researcher can
make confident statements about the relationship between the independent
and dependent variables (Alberto & Troutman, 2006). The following examples
describe how practitioner-researchers and researchers attempted to establish
experimental control and demonstrate a functional relationship between
independent and dependent variables.
Multiple baseline design. In the Marchant and
Young (2001) study, positive parenting skills were taught to the parents
of children identified with antisocial behavior problems. The parents then
used their training and skills to increase their children’s compliant behavior.
A multiple baseline design across participants allowed the researchers
to simultaneously investigate the replication of the intervention across
four children’s compliant behavior. In this design, a stable baseline was
achieved with the first child before the intervention was introduced, and
baseline data collection continued for the remaining participants. Once
the first participant was introduced to the intervention and stable data
were achieved, the second participant received the treatment after his
baseline data were stable. The same pattern continued with the third and
fourth participants. Results across the four parent-child
dyads suggested that the changes in compliant behavior were due to the impact of
the independent variable. This design also contributes to establishing
a functional relationship between the parents’ implementation of the positive
parenting skills (independent variable) and the child’s compliance (dependent
variable). This study is an example of how a multiple-baseline-across-participants
design allowed for verification and replication of the practitioner-researcher’s
prediction that parenting skills could increase the child’s compliance.
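The staggered logic of a multiple-baseline-across-participants design can be sketched as a simple scheduling rule: the next participant enters intervention only after the current participant’s data have stabilized. The stability rule, participant names, and data below are hypothetical assumptions for illustration.

```python
# Hedged sketch of staggered introduction in a multiple-baseline design.
# "Stable" here means the last three points fall within a small range,
# an illustrative rule, not one taken from the study described above.

def ready_to_advance(data, window=3, max_range=2):
    return (len(data) >= window and
            max(data[-window:]) - min(data[-window:]) <= max_range)

phases = {p: "baseline" for p in ["child_1", "child_2", "child_3", "child_4"]}

# Invented intervention-phase data (percent compliance) for the first child.
intervention_data = {"child_1": [60, 75, 80, 81, 82], "child_2": []}

phases["child_1"] = "intervention"
if ready_to_advance(intervention_data["child_1"]):
    phases["child_2"] = "intervention"  # stagger the introduction
print(phases)
```

Because each participant’s baseline keeps running while earlier participants receive treatment, behavior change that appears only when the intervention is introduced strengthens the case for a functional relationship.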
Reversal design. Just as a multiple
baseline design allows practitioner-researchers to establish experimental
control for multiple variables, the reversal design is used to investigate
the effectiveness of a single independent variable (Alberto & Troutman,
2006). Using the reversal design, Christensen, Young, and Marchant (2004)
investigated a peer-mediated self-management strategy in assisting a student
with behavior problems to develop both self-evaluation capabilities and
appropriate social skills. Simply stated, when using the reversal design,
an intervention is sequentially applied and then withdrawn with one participant. In
the Christensen et al. (2004) study, the peer-mediated self-management
intervention was applied and then withdrawn with one student repeatedly
while the practitioner-researcher, a behavior specialist in a local public
school, oversaw the details of the research efforts to ensure experimental
control. The results were favorable as the participant showed a significant
increase in his level of socially appropriate classroom behavior during
the treatment conditions when the self-management strategy was implemented.
A functional relationship was clearly demonstrated across the independent
(self-management) and dependent variables (externalizing behaviors).
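The logic of the reversal (ABAB) comparison can be sketched numerically: behavior should improve in each B phase and move back toward baseline when the intervention is withdrawn. The phase data below are invented, not results from Christensen et al. (2004).

```python
# Minimal ABAB (reversal) comparison using phase means. Data are
# hypothetical percentages of intervals with appropriate behavior.

def phase_means(phases):
    return {name: sum(d) / len(d) for name, d in phases.items()}

abab = {
    "A1": [40, 45, 42],  # baseline
    "B1": [70, 78, 82],  # self-management applied
    "A2": [50, 46, 44],  # withdrawal: behavior reverses toward baseline
    "B2": [80, 85, 88],  # reintroduction replicates the effect
}
means = phase_means(abab)
# The effect is replicated when each B phase exceeds the adjacent A phases.
replicated = means["B1"] > means["A1"] and means["B2"] > means["A2"]
print(means, replicated)
```

The reversal in A2 followed by recovery in B2 is what lets the design rule out many confounding explanations for the change.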
We have shared two of the four possible single-subject research designs that
permit practitioner-researchers to establish experimental control. These
examples suggest that practitioner-researchers can successfully demonstrate
experimental control by repeating an intervention several times and observing
its effect on a dependent variable (Alberto & Troutman, 2006). For
further information about each of these designs, it is recommended that
the reader access the following texts: Applied Behavior Analysis for
Teachers (Alberto & Troutman, 2006), Applied Behavior Analysis (Cooper
et al., 1987), and Research Methods in Applied Behavior Analysis (Bailey & Burch,
2002). These texts offer straightforward information about the field of
Applied Behavior Analysis and strategies for effectively developing single-subject research designs.
Advantages of Single-Subject Research Designs
As indicated in the introduction, school psychologists answer questions about
intervening with students’ problem behaviors. Experimental designs, such
as the single-subject designs presented above, are helpful structures for
use in school-based settings in drawing scientific conclusions about student
behavior. Both single-subject and group designs can facilitate scientific
options for determining evidence-based practices that will reliably evaluate
student behavior change. Though both designs facilitate credible results,
the type of conclusions derived from the results depends directly on the design selected.
Practicality. Group designs are formatted to evaluate
the effectiveness of an intervention on the behavior of a population of
students (e.g., school or class) or a representative sample of the desired
population. Their conclusions, based on statistical procedures, are concerned
with generalizing and inferring results to a larger group of interest.
The inherent problem with this design approach is that it is seldom practical
when seeking to improve individual or small group student behavior. Chances
are slim that there will ever be a class in which every student presents
with identical behavior problems. Comparatively, single-subject designs
facilitate improving student behavior by evaluating the effects of variables
on the specific behavior of a single student. In other words, “Single-subject
designs emphasize the clinical significance of an individual rather than
statistical significance among groups” (Alberto & Troutman, 2006, p.
Establishing scientific rigor. Furthermore,
single-subject research designs scientifically structure how questions
are asked and how data are collected and analyzed, so that meaningful behavior
change may be achieved, verified, and believable (Alberto & Troutman,
2006; Cooper et al., 1987). Unlike group designs, in which scientific rigor
is largely established before the study through proper sampling, assignment,
and statistical procedures, single-subject designs require that scientific
rigor be continually established and calibrated throughout the entire research
process. Scientific rigor is judged on three stringent criteria: (a)
demonstration of a functional relationship, (b) achievement of clinically
significant (or socially important) behavior change, and (c) approval of
social validity (Alberto & Troutman, 2006). Scientific rigor cannot
be assumed or loosely attributed; it must be established by verifying the
relationship between variables, and then it is watchfully maintained through
the study’s completion. A functional relationship can be inferred only
after a relationship has been successfully replicated. Clinically significant
change in behavior is evaluated throughout the study and as the study nears
completion. Social validity questions are considered pre-, during, and
post- study. If scientific rigor is ultimately concluded at the study’s
end, then consideration may be given to generalization of the results.
Generalization allows practitioner-researchers
and researchers alike to export their results so that they are useful to
others in the school community. Using repeated single-subject design studies,
the successful intervention may be tested with varying students and behaviors. Thus, though single-subject research seeks
first, and most importantly, for a meaningful change in behavior, it also
supports the dissemination of evidence-based interventions that have been
rigorously tested and proven to be generalizable under a range of circumstances.
Practitioners can adopt these procedures with confidence and assume they
will work due to the intensive process by which they were tried and found
effective (Alberto & Troutman, 2006).
Implications for School Psychologists
School psychologists who read and apply single-subject research benefit from learning
about carefully controlled intervention strategies that can be used to
facilitate behavioral change for students. Single-subject research contributes
to the practice of school psychology through identifying specific and detailed
interventions and variables that influence change; it helps practitioners
recognize the necessity of understanding the functional relationships between
behaviors and outcomes. In addition, this type of research has strong internal
validity so that there is little, and hopefully no, question about the
relationship between dependent and independent variables.
Because single-subject research is characterized by quite specific procedures and
designs, some procedures may be cumbersome and time-consuming for practitioners
to replicate. Most practitioners do not have access to the extensive resources
used for data collection and analysis in published studies. Furthermore,
some designs require an effective intervention to be withdrawn in order
to demonstrate that the intervention was effective, but doing so may not
please teachers or parents who have witnessed desirable behavioral change.
Another concern arises when applying single-subject research in new settings
because research environments, even field-based research settings, have
very controlled environments and contingencies. This level of control may
not be available in other settings. Furthermore, the target student described
in the research may be quite different from the target student in a field-based
setting. Reinforcement schedules or types of reinforcement may be distinct
for each student, thus changing underlying and important principles which
were imperative to the success of the studied intervention.
Even with these caveats, these designs
are important for school psychologists to use competently in their work.
Generalization of interventions can proceed readily if the researcher has carefully
explained and demonstrated the functional relationship between the studied
intervention and the outcome. If the intervention is tried in a new setting
and with careful attention to baseline data, treatment integrity, and ongoing
data collection and analysis, successful generalization can occur. Learning
new strategies for intervention and understanding the environmental variables
that sustain behavioral change are two of the fundamental contributions
of single-subject research for school psychology practice.
Alberto, P. A., & Troutman, A. C. (2006). Applied behavior analysis for teachers (7th ed.). Upper Saddle River, NJ: Pearson Education.
Bailey, J. S., & Burch, M. R. (2002). Research methods in applied behavior
analysis. Thousand Oaks,
CA: Sage Publications.
Bramlett, R. K., Murphy, J. J., Johnson, J., Wallingsford, L., & Hall, J. D.
(2002). Contemporary practices in school psychology: A national survey
of roles and referral problems. Psychology in the Schools, 39, 327-335.
Christensen, L., Young, K. R., & Marchant, M. (2004). The effects of a peer-mediated
positive behavior support program on socially appropriate classroom behavior. Education
and Treatment of Children, 27, 199-234.
Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Upper Saddle River, NJ: Pearson-Hall, Inc.
Fagan, T. K., & Wise, P. S. (2000). School psychology: Past, present, and
future (2nd ed.). Bethesda, MD: National Association
of School Psychologists.
Johnston, J. M., & Pennypacker, H. S. (1980). Strategies and tactics for human
behavioral research. Hillsdale, NJ: Lawrence Erlbaum.
Marchant, M., Lindberg, J., Young, K. R., Fisher, A. K., & Solano, B. (2004,
May). A treatment package for improving playground behavior among elementary
students. Poster presented at the 30th Annual
Convention for Applied Behavior Analysis, Boston, MA.
Marchant, M., & Young, K. R. (2001). The effects of a parent coach
on parents’ acquisition and implementation of parenting skills. Education and Treatment
of Children, 24(3), 351-373.
Reschly, D. J., & Wilson, M. S. (1995). School psychology practitioners and
faculty: 1986 to 1991 – 1992—trends in demographics, roles, satisfaction,
and system reform. School Psychology Review, 24, 62-80.
Stinnett, T. A., Havey, J. M., & Oehler-Stinnett, J. (1994). Current test usage
by practicing school psychologists: A national survey. Journal of Psychoeducational
Assessment, 12, 331-350.
Tawney, J. W., & Gast, D. L. (1984). Single-subject research in special education. Columbus, OH: Merrill.
Thomas, A. (1999). School psychology 2000. NASP Communiqué, 28, 28.
U.S. Department of Education.
(2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly
guide. Washington, DC: Author.
Michelle Marchant, PhD,
is an Assistant Professor in the Department of Counseling Psychology
and Special Education at Brigham Young University.
Tyler Renshaw is an undergraduate Psychology student at BYU who will
graduate in April 2007, with plans for graduate school. Ellie L. Young,
PhD, NCSP, is an Assistant Professor of School Psychology at BYU.