Evaluating a Student’s ‘Non-Responder’ Status: An RTI Checklist
When a school attempts to determine whether a particular general-education student has responded adequately to an academic RTI plan, it must conduct a kind of ‘intervention audit’, reviewing documentation of the full range of interventions attempted. The intervention-audit process is complex: before a school can decide whether a struggling student has truly failed to respond to intervention, it must first have confidence that each link in the chain of RTI general-education support was in place for the student and was implemented with quality.
Presented below are the most crucial links in the RTI chain. This listing summarizes important RTI elements that support intervention, assessment, and data analysis. A school must ensure that all of these elements are in place in the general-education setting before it can decide with confidence whether a particular student is a ‘non-responder’ to intervention. Schools can use this RTI ‘non-responder’ checklist both to evaluate whether general education has done all that it can to support a struggling student and to determine whether that student should be considered for possible special education services.
- All students receive high-quality core instruction in the area of academic concern. ‘High quality’ is defined as at least 80% of students in the classroom or grade level performing at or above gradewide academic screening benchmarks through classroom instructional support alone (Christ, 2008).
- Instructional programs or practices used in the intervention meet the district’s criteria of ‘evidence-based’.
- The Tier 2/3 intervention is selected because it logically addresses the area(s) of academic deficit for the target student (e.g., an intervention to address reading fluency is chosen for a student whose primary deficit is in reading fluency).
- If the supplemental intervention is group-based, all students enrolled in the Tier 2/3 intervention group have a shared intervention need that could reasonably be addressed through the group instruction provided.
- The student-teacher ratio in the group-based intervention provides adequate student support. NOTE: For Tier 2, group sizes should be capped at 7 students. Tier 3 interventions may be delivered in smaller groups (e.g., 3 students or fewer) or individually.
- The intervention provides contact time adequate to address the student academic deficit. NOTE: Tier 2 interventions should take place a minimum of 3-5 times per week in sessions of 30 minutes or more; Tier 3 interventions should take place daily in sessions of 30 minutes or more (Burns & Gibbons, 2008).
- Slope Cut-Off Option 1 (for use with external and local norm slopes): The student’s slope is divided by the comparison peer slope (derived from external or local norms). If the quotient falls below 1.0, the student’s rate of improvement is less than that of the comparison peer slope. A quotient greater than 1.0 indicates that the student’s rate of improvement exceeds that of the comparison peer slope. The school can set a fixed cut-off value (e.g., 0.75 or below) as a threshold for defining a student slope as discrepant from the comparison peer slope.
- Slope Cut-Off Option 2 (for use with local screening data only): To derive a slope cut-off value from local norms, the school uses data collected during its schoolwide academic screening. Because each student included in the screening will have three screening data points on a given measure (e.g., in oral reading fluency) by the end of the year, the school can use those successive data points to generate slopes for each student. Once slopes for each student have been calculated, the school can compute a mean and standard deviation for the entire collection of student slopes at a grade level. Any student found to have a slope that is at least one standard deviation below the mean slope would be considered to be ‘discrepant’ (Burns & Gibbons, 2008).
- Student Baseline Calculated. For each Tier 2/3 intervention being reviewed, the school calculates the student’s baseline level, or starting point, in the academic skill before starting the intervention (Witt, VanDerHeyden, & Gilbertson, 2004).
- Student Goal Calculated. For each Tier 2/3 intervention being reviewed, the school calculates a ‘predicted’ goal for student progress to be attained by the end of the intervention period: (1) The goal is based on acceptable norms for student growth (i.e., research-based growth norms, proprietary growth norms developed as part of a reputable commercial assessment product, or growth norms derived from the local student population). (2) The goal represents a realistic prediction of student growth that is sufficiently ambitious—assuming that the intervention is successful—eventually to close the gap between the student and grade-level peers.
- Regular Progress-Monitoring Conducted. Each Tier 2/3 intervention is tracked on a regular basis: Tier 2 interventions are monitored at least 1-2 times per month (Burns & Gibbons, 2008), while Tier 3 interventions are monitored at least 1-2 times per week (Burns & Gibbons, 2008; Howell, Hosp, & Kurns, 2008).
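The 80% ‘high-quality core instruction’ criterion above can be verified directly from schoolwide screening data. The sketch below is a hypothetical illustration: the score list and benchmark value are invented, and a school would substitute its own screening measure and gradewide benchmark.

```python
# Hypothetical check of the 'high-quality core instruction' criterion:
# at least 80% of students at or above the gradewide screening benchmark
# (Christ, 2008). All values below are invented for illustration.

def core_instruction_adequate(scores, benchmark, threshold=0.80):
    """Return True if the share of students at/above benchmark meets the threshold."""
    at_or_above = sum(1 for score in scores if score >= benchmark)
    return at_or_above / len(scores) >= threshold

# e.g., fall oral-reading-fluency screening scores for one classroom
classroom_scores = [95, 88, 72, 101, 64, 83, 90, 55, 78, 99]
benchmark = 70  # assumed gradewide screening benchmark

adequate = core_instruction_adequate(classroom_scores, benchmark)
```

With these invented scores, 8 of 10 students meet the benchmark, so the 80% criterion is just satisfied; raising the benchmark would flip the result.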
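The slope and goal computations described in the items above can likewise be sketched in code. In this hypothetical example, the student’s progress-monitoring scores, the comparison peer slope, the grade-level slopes, and the weekly growth norm are all invented; a school would substitute its own norms and curriculum-based measurement data.

```python
# Hypothetical sketch of the slope cut-off options and goal calculation.
# All data values, the peer slope, and the growth norm are invented.

def slope(weeks, scores):
    """Ordinary least-squares slope: score gain per week."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# Student's weekly progress-monitoring data (e.g., words correct per minute)
student_slope = slope([0, 1, 2, 3, 4, 5], [42, 44, 45, 47, 48, 50])

# --- Option 1: quotient of student slope over comparison peer slope ---
peer_slope = 2.0                  # assumed, from external or local norms
quotient = student_slope / peer_slope
discrepant_1 = quotient <= 0.75   # fixed cut-off value from the text

# --- Option 2: local-norm cut-off (mean grade-level slope minus one SD) ---
grade_slopes = [1.2, 1.8, 2.1, 1.5, 2.4, 1.9, 2.0, 1.1, 2.3, 1.7]
mean_slope = sum(grade_slopes) / len(grade_slopes)
sd_slope = (sum((s - mean_slope) ** 2 for s in grade_slopes)
            / len(grade_slopes)) ** 0.5
cutoff = mean_slope - sd_slope
discrepant_2 = student_slope < cutoff

# --- Goal: baseline plus growth norm times length of intervention ---
baseline = 42                # e.g., median of the baseline probes
weekly_growth_norm = 1.5     # assumed research-based growth norm
weeks_of_intervention = 8
goal = baseline + weekly_growth_norm * weeks_of_intervention  # 42 + 12 = 54
```

Under either option, the decision reduces to a single comparison; what matters for the audit is that the comparison slope (or local norm group) and the growth norm used for the goal are documented and defensible.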
How to Use This Checklist. When a struggling student on RTI intervention fails to make expected progress despite several attempts, educators must first have confidence that each link in the RTI chain is fully in place and of high quality for that student. Only then can it be concluded that general education has exhausted its RTI options, that this struggling student is a ‘non-responder’ to RTI, and that the student may require special education services. The checklist presented here represents a quick survey of the key components of RTI to be verified in any general-education setting before a student may be considered a 'non-responder' to intervention.
- Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools. New York: Routledge.
- Christ, T. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 159-176). Bethesda, MD: National Association of School Psychologists.
- Fuchs, L. (2003). Assessing intervention responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice, 18(3), 172-186.
- Gansle, K. A., & Noell, G. H. (2007). The fundamental role of intervention implementation in assessing response to intervention. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Response to intervention: The science and practice of assessment and intervention (pp. 244-251). New York: Springer Publishing.
- Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York: Guilford Press.
- Howell, K. W., Hosp, J. L., & Kurns, S. (2008). Best practices in curriculum-based evaluation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 349-362). Bethesda, MD: National Association of School Psychologists.
- Roach, A. T., & Elliott, S. N. (2008). Best practices in facilitating and evaluating intervention integrity. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 195-208). Bethesda, MD: National Association of School Psychologists.
- Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363-383.