School Psychology Review
Generalizability and Dependability of Behavior Assessment Methods to Estimate Academic Engagement: A Comparison of Systematic Direct Observation and Direct Behavior Rating
Amy M. Briesch, Sandra M. Chafouleas, and T. Chris Riley-Tillman
Special Series: Behavioral Assessment Within Problem-Solving Models
Abstract. Although substantial attention has been directed toward building the psychometric evidence base for academic assessment methods (e.g., state mastery tests, curriculum-based measurement), comparable examination of behavior assessment methods has been limited, particularly with regard to the assessment purposes most desirable within problem-solving models. Therefore, an informed weighing of the psychometric benefits of two behavior assessment options, systematic direct observation and direct behavior rating, was conducted to better inform decisions regarding method selection, use, and interpretation related to the initial identification and monitoring of behavior concerns. Results of generalizability theory analyses revealed that both methods were equally sensitive to intra-individual differences in academic engagement; however, differences were noted with regard to the influences of both rater and time. That is, a large proportion of systematic direct observation rating variance was explained by changes in student behavior across days and rating occasions, whereas rater-related effects accounted for the greatest proportion of direct behavior rating variance. Limitations of the current study and recommendations for research and practice are discussed.
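The generalizability theory analyses summarized above partition observed score variance into components attributable to the student, the measurement facets (e.g., rater, occasion), and residual error. As a minimal illustration of that logic, the sketch below estimates variance components for a hypothetical fully crossed persons × occasions design using the ANOVA (expected mean squares) method, then computes a relative generalizability coefficient. All sample sizes, effect variances, and variable names are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_o = 20, 5  # hypothetical: 20 students rated on 5 occasions

# Simulate engagement scores as person effect + occasion effect + residual
# (all variance magnitudes below are assumed for illustration only).
person = rng.normal(0.0, 2.0, size=(n_p, 1))    # true person variance = 4.0
occasion = rng.normal(0.0, 0.5, size=(1, n_o))  # true occasion variance = 0.25
resid = rng.normal(0.0, 1.0, size=(n_p, n_o))   # true residual variance = 1.0
X = 50.0 + person + occasion + resid

grand = X.mean()

# Sums of squares for the two-way crossed design (no replication)
ss_p = n_o * ((X.mean(axis=1) - grand) ** 2).sum()
ss_o = n_p * ((X.mean(axis=0) - grand) ** 2).sum()
ss_total = ((X - grand) ** 2).sum()
ss_res = ss_total - ss_p - ss_o

# Mean squares
ms_p = ss_p / (n_p - 1)
ms_o = ss_o / (n_o - 1)
ms_res = ss_res / ((n_p - 1) * (n_o - 1))

# Variance components via expected mean squares (clamped at zero)
sigma2_res = ms_res
sigma2_p = max(0.0, (ms_p - ms_res) / n_o)
sigma2_o = max(0.0, (ms_o - ms_res) / n_p)

# Relative G coefficient: person variance over person-plus-relative-error
# variance, for a decision based on the mean of n_o occasions.
g = sigma2_p / (sigma2_p + sigma2_res / n_o)

print(f"sigma2_person={sigma2_p:.2f}, sigma2_occasion={sigma2_o:.2f}, "
      f"sigma2_residual={sigma2_res:.2f}, G={g:.2f}")
```

In this framing, a method dominated by occasion-related variance (as reported for systematic direct observation) would show a large occasion component, whereas a method dominated by rater effects (as reported for direct behavior rating) would require adding a rater facet to the design.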