Accountability for Specialists
By Frank L. Miller
My home state is in the process of developing a system to evaluate specialists: school psychologists, speech pathologists, guidance counselors, and others. Despite the fact that we are very much in demand, as evidenced by our crowded calendars and warm greetings when we arrive on site (“Boy am I glad to see you!”), we still have to demonstrate to the bean counters that we are worth what they are paying us. For years we were evaluated using teacher criteria: how we developed our “lesson plans” and how we managed our “classrooms.” Then they moved to measure planning and preparation, professional practice and delivery of service, professional consultation and collaboration, professional responsibilities, and last but certainly not least, student improvement. We could relate to all of these constructs, but student improvement? Was there a straight line between what we did every day with a student and improved test scores?
I never had any problem with the first four requirements: I have always been well prepared; I am well trained and have decades of experience in how I do what I do; I live for my opportunities to work with colleagues; and I communicate clearly and professionally with students, parents, teachers, and even administrators. The last component was never automatic; I had to stop and think about it every year. Who was I going to work with, and monitor, to demonstrate how my direct services had a positive impact on a student or students? One year, I followed a group of special education students who were involved in reading interventions. I provided follow-up tutoring, reinforcing lessons taught in class, and then quizzed them. The following year I targeted several “frequent fliers,” students with attendance and discipline issues whom I took under my wing. I met with them every week and talked about expectations and how they had met them since the previous session. They demonstrated a dramatic drop in referrals for attendance problems and discipline. Grades also improved.
And just when you think you have it made … they’re changing things yet again. Now we have to have real data, data subjected to statistical analysis such as effect size and PND (Percentage of Nonoverlapping Data points, for those of you who have conveniently forgotten most of what you learned in statistics). Next year, I will most likely have to select one or more students (my students), treat them, and then measure the impact of my efforts across six exemplars (three behavioral, three academic). And … not only will I be judged by their response to my interventions, to use a familiar term, but also by whether or not they met AYP. Personally, I’ve never met an AYP I didn’t like, but for many, if not most, of “my” students, AYP might just as well stand for “A Yearly Pilgrimage” or an attempt to make it through another year in school without feeling completely at a loss. Sorry to say, but many of “our kids” haven’t seen a year’s growth in the last five, and to expect them to miraculously overcome all their obstacles to learning as 2014 approaches, when all good children will supposedly be on grade level, is one of the … no, not one of, it is the most ridiculous concept I have run into over a long (36-year) career. So maybe it’s good that next year will be my last in public education, because I don’t expect to get any younger, I don’t expect to get any smarter, and I certainly don’t expect all my kids, those with or without disabilities, to suddenly meet all the expectations placed upon them by well-meaning but seriously misguided individuals. I’ll gladly sit on the sidelines and watch. Or maybe I will become even more outspoken, freed from the shackles of a paycheck.
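For readers who, like me, had to look it up: PND is simpler than it sounds. It is the percentage of intervention-phase data points that do not overlap with the baseline phase, that is, points that beat the single best baseline observation. A minimal sketch, with invented weekly numbers purely for illustration (nothing here comes from an actual student record):

```python
def pnd(baseline, treatment, increase_is_good=True):
    """Percentage of Nonoverlapping Data (PND) for single-case data.

    When the goal is to increase a behavior, PND is the share of
    treatment-phase points that exceed the highest baseline point;
    when the goal is to decrease it, the share that fall below the
    lowest baseline point.
    """
    if increase_is_good:
        threshold = max(baseline)
        nonoverlap = sum(1 for x in treatment if x > threshold)
    else:
        threshold = min(baseline)
        nonoverlap = sum(1 for x in treatment if x < threshold)
    return 100.0 * nonoverlap / len(treatment)


# Hypothetical example: weekly on-task percentages before and during
# an intervention (values invented for illustration).
baseline = [40, 45, 42, 50]
treatment = [55, 60, 48, 65, 70]
print(pnd(baseline, treatment))  # 80.0 — 4 of 5 treatment points beat the best baseline week
```

A PND near 90 or above is conventionally read as a highly effective intervention, 70–90 as effective, and below 50 as ineffective, which is exactly the kind of single-number verdict the new evaluation system seems to want.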
Frank L. Miller, NCSP, is a school psychologist at Central Elementary School in the Lake Forest School District in Felton, DE.