Evaluating a Student’s ‘Non-Responder’ Status: An RTI Checklist

When a school attempts to determine whether a particular general-education student has responded adequately to an academic RTI plan, it must conduct a kind of ‘intervention audit’: a review of the documentation for the full range of interventions attempted. The intervention-audit process is complex. Before a school can decide whether a struggling student has truly failed to respond to intervention, it must first have confidence that each link in the chain of RTI general-education support was in fact in place for the student and was implemented with quality.
Presented below are the most crucial links in the RTI chain. This listing summarizes the important RTI elements that support intervention, assessment, and data analysis. A school must ensure that all of these elements are in place in the general-education setting before it can decide with confidence whether a particular student is a ‘non-responder’ to intervention. Schools can use this RTI ‘non-responder’ checklist both to evaluate whether general education has yet done all that it can to support a struggling student and to determine whether that student should be considered for possible special education services.

Tier 1: Classroom Interventions. The classroom teacher is the ‘first responder’ for students with academic delays. Classroom efforts to instruct and individually support the student should be documented.

All students receive high-quality core instruction in the area of academic concern. ‘High quality’ is defined as at least 80% of students in the classroom or grade level performing at or above gradewide academic screening benchmarks through classroom instructional support alone (Christ, 2008).
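As an illustration only, the 80% criterion can be checked against classroom screening data along these lines. The scores and the 70-wcpm benchmark below are hypothetical placeholders, not recommended values:

```python
# Hypothetical screening scores (words correct per minute) for one classroom.
# The benchmark of 70 wcpm is an illustrative assumption; schools would use
# the benchmark attached to their chosen screening measure.
scores = [82, 65, 91, 74, 70, 88, 55, 79, 93, 68]
benchmark = 70

pct_at_or_above = 100 * sum(s >= benchmark for s in scores) / len(scores)
core_instruction_ok = pct_at_or_above >= 80  # the 80% criterion (Christ, 2008)

print(f"{pct_at_or_above:.0f}% at/above benchmark -> core OK: {core_instruction_ok}")
```

With these sample scores, only 70% of the class meets the benchmark, signaling that core instruction itself, not just individual students, may need strengthening.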

The classroom teacher also gives additional individualized academic support to any ‘red flag’ student who continues to struggle in core instruction. Classroom strategies used to support the student are evidence-based and recorded in a written plan. The student's academic baseline and goals are calculated, and sufficient progress-monitoring data are collected to measure the impact of the plan. The teacher allocates an adequate amount of time (e.g., 4-8 instructional weeks) to determine whether this classroom intervention is effective.
Tiers 2 & 3: Supplemental Interventions. Interventions at Tiers 2 and 3 supplement core instruction and specifically target a struggling student’s academic deficits. The school has established decision rules for the minimum number of supplemental intervention trials to be attempted with a student, as well as the minimum length that each Tier 2 or 3 intervention trial should last (e.g., 6-8 instructional weeks).
Tier 2/3 intervention plans are documented in writing. Each supplemental intervention is constructed according to these quality indicators:
  1. Instructional programs or practices used in the intervention meet the district’s criteria of ‘evidence-based’.
  2. The Tier 2/3 intervention is selected because it logically addresses the area(s) of academic deficit for the target student (e.g., an intervention targeting reading fluency is chosen for a student whose primary deficit is in reading fluency).
  3. If the supplemental intervention is group-based, all students enrolled in the Tier 2/3 intervention group have a shared intervention need that could reasonably be addressed through the group instruction provided.
  4. The student-teacher ratio in the group-based intervention provides adequate student support. NOTE: For Tier 2, group sizes should be capped at 7 students. Tier 3 interventions may be delivered in smaller groups (e.g., 3 students or fewer) or individually.
  5. The intervention provides contact time adequate to address the student academic deficit. NOTE: Tier 2 interventions should take place a minimum of 3-5 times per week in sessions of 30 minutes or more; Tier 3 interventions should take place daily in sessions of 30 minutes or more (Burns & Gibbons, 2008).
A final, and crucial, expectation for any Tier 2/3 intervention is that ‘treatment integrity’ data be collected to verify that the intervention is carried out as designed (Gansle & Noell, 2007; Roach & Elliott, 2008). Relevant intervention-integrity data include information about frequency and length of intervention sessions and ratings by the interventionist or an independent observer about whether all steps of the intervention were conducted correctly.
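As a rough illustration (not a standardized protocol), treatment-integrity data of this kind reduce to simple percentages: the share of intervention steps implemented correctly and the share of planned sessions actually delivered. All step names and session counts below are hypothetical:

```python
# Hypothetical observer ratings of whether each intervention step was
# conducted correctly during a session. The step names are illustrative.
steps_observed = {
    "followed intervention script": True,
    "modeled target skill": True,
    "provided guided practice": False,
    "gave corrective feedback": True,
    "logged session length": True,
}
integrity_pct = 100 * sum(steps_observed.values()) / len(steps_observed)

# Dosage: sessions actually held vs. sessions planned (hypothetical counts).
sessions_held, sessions_planned = 13, 15
dosage_pct = 100 * sessions_held / sessions_planned

print(f"step integrity: {integrity_pct:.0f}%, dosage: {dosage_pct:.1f}%")
```

A school might require integrity and dosage to stay above a locally agreed threshold before treating weak progress-monitoring data as evidence of student non-response rather than of incomplete implementation.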
Schoolwide Academic Screenings. The school selects efficient measures to be used to screen all students at a grade level in targeted academic areas. First, the school has selected appropriate grade-level screening measures for the academic skill area(s) in which the target student struggles (Hosp, Hosp & Howell, 2007). These screening measure(s) (1) have ‘technical adequacy’ as grade-level screeners—and have been researched and shown to predict future student success in the academic skill(s) targeted, (2) are general enough to give useful information for at least a full school year of the developing academic skill (e.g., General Outcome Measure or Skill-Based Mastery Measure), and (3) include research norms, proprietary norms developed as part of a reputable commercial assessment product, or benchmarks to guide the school in evaluating the risk level of each student screened.
Second, all students at each grade level are administered the relevant academic screening measures at least three times per school year. The results are compiled to provide local norms of academic performance.
Guidelines Established for Determining Student ‘Non-Response’ to Intervention as a Dual Discrepancy. The school has developed definitions for ‘severely discrepant’ academic performance and student growth according to a 'dual discrepancy' model (Fuchs, 2003).
Defining ‘Discrepant’ Academic Performance. Using local norms, research norms, proprietary norms developed as part of a reputable commercial assessment product, or benchmarks, the school sets a ‘cut-point’ below which a student’s academic performance is defined as ‘severely discrepant’ from that of peers in the enrolled grade.
For example, a school conducts a winter screening in Oral Reading Fluency for 3rd grade and finds, based on local norms, that 10 percent of students at that grade read 40 or fewer words correctly per minute (wcpm). The school therefore sets 40 wcpm as the winter screening cut-point for reading fluency at 3rd grade, defining any student whose performance falls at or below that level as ‘severely discrepant’ in the skill.
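The cut-point logic in this example can be sketched in a few lines of Python. The score list is a hypothetical stand-in for a full grade-level data set, and the at-or-below convention is one reasonable choice a school might adopt:

```python
# Hypothetical winter ORF screening scores (wcpm) for a 3rd-grade cohort;
# a real data set would include every student screened at the grade level.
orf_scores = sorted([35, 38, 40, 44, 47, 52, 58, 63, 71, 85])

# Cut-point: the score at (roughly) the 10th percentile of the local norms.
idx = max(0, round(0.10 * len(orf_scores)) - 1)
cut_point = orf_scores[idx]

def severely_discrepant(score, cut_point):
    """One reasonable convention: flag performance at or below the cut-point."""
    return score <= cut_point

print(cut_point, severely_discrepant(33, cut_point))
```

With these ten sample scores the 10th-percentile cut-point lands at 35 wcpm; a student reading 33 wcpm would be flagged, while a student reading 50 wcpm would not.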
Defining ‘Discrepant’ Slope. The school has selected a formula for determining when a student’s rate of improvement (slope) is severely discrepant from that of peers. Here are two options for generating slope cut-off values:
  • Slope Cut-Off Option 1 (for use with external and local norm slopes): The student’s slope is divided by the comparison peer slope (derived from external or local norms). If the quotient falls below 1.0, the student’s rate of improvement is less than that of the comparison peer slope. A quotient greater than 1.0 indicates that the student’s rate of improvement exceeds that of the comparison peer slope. The school can set a fixed cut-off value (e.g., 0.75 or below) as a threshold for defining a student slope as discrepant from the comparison peer slope.
  • Slope Cut-Off Option 2 (for use with local screening data only): To derive a slope cut-off value from local norms, the school uses data collected during its schoolwide academic screenings. Because each student included in the screening will have three screening data points on a given measure (e.g., oral reading fluency) by the end of the year, the school can use those successive data points to generate a slope for each student. Once slopes have been calculated, the school can compute a mean and standard deviation for the entire collection of student slopes at a grade level. Any student whose slope is at least one standard deviation below the mean slope would be considered ‘discrepant’ (Burns & Gibbons, 2008).
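Both slope cut-off options can be sketched with simple arithmetic. All slopes, norms, and screening scores below are hypothetical, and the 0.75 quotient cut-off is the illustrative value named above, not a fixed standard:

```python
import statistics

# --- Option 1: compare the student's slope to an external/local norm slope.
student_slope, peer_slope = 0.6, 1.0     # hypothetical wcpm gained per week
quotient = student_slope / peer_slope
option1_discrepant = quotient <= 0.75    # illustrative fixed cut-off

# --- Option 2: derive a cut-off from the distribution of local slopes.
def slope(points):
    """Least-squares slope across equally spaced screenings (fall/winter/spring)."""
    n = len(points)
    mean_x = (n - 1) / 2
    mean_y = sum(points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(points))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

screenings = [          # hypothetical fall/winter/spring ORF scores, 5 students
    [40, 55, 70],
    [35, 50, 62],
    [50, 58, 66],
    [30, 34, 38],
    [45, 60, 78],
]
slopes = [slope(s) for s in screenings]
cutoff = statistics.mean(slopes) - statistics.stdev(slopes)  # mean minus 1 SD
option2_discrepant = [s <= cutoff for s in slopes]

print(quotient, option1_discrepant)
print([round(s, 1) for s in slopes], round(cutoff, 1), option2_discrepant)
```

Under Option 2 with this toy cohort, only the fourth student (slope of 4 wcpm/week against a cut-off near 6) would be flagged as discrepant in growth.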
Progress of Tier 2/3 Interventions Monitored. Student baseline level and goals are calculated for each intervention, and a sufficient number of data points are collected during progress-monitoring to judge accurately whether the intervention has been successful.
  • Student Baseline Calculated. For each Tier 2/3 intervention being reviewed, the school calculates the student’s baseline level, or starting point, in the academic skill before starting the intervention (Witt, VanDerHeyden, & Gilbertson, 2004).
  • Student Goal Calculated. For each Tier 2/3 intervention being reviewed, the school calculates a ‘predicted’ goal for student progress to be attained by the end of the intervention period: (1) The goal is based on acceptable norms for student growth (i.e., research-based growth norms, proprietary growth norms developed as part of a reputable commercial assessment product, or growth norms derived from the local student population). (2) The goal represents a realistic prediction of student growth that is ambitious enough, assuming the intervention is successful, eventually to close the gap between the student and grade-level peers.
  • Regular Progress-Monitoring Conducted. Each Tier 2/3 intervention is tracked on a regular basis: Tier 2 interventions are monitored at least 1-2 times per month (Burns & Gibbons, 2008), while Tier 3 interventions are monitored at least 1-2 times per week (Burns & Gibbons, 2008; Howell, Hosp, & Kurns, 2008).
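The baseline and goal calculations described in the bullets above can be sketched in a few lines. The probe scores, the 1.2 wcpm/week growth norm, and the 8-week intervention length are all hypothetical placeholders:

```python
import statistics

# Baseline: median of three pre-intervention ORF probes (hypothetical scores).
baseline = statistics.median([42, 45, 40])

# Goal: baseline plus expected growth over the intervention period. The
# weekly growth norm here is an illustrative assumption; a school would
# substitute research, proprietary, or locally derived growth norms.
weekly_growth_norm = 1.2   # wcpm gained per week
intervention_weeks = 8

goal = baseline + weekly_growth_norm * intervention_weeks
print(f"baseline={baseline} wcpm, goal={goal:.1f} wcpm")
```

A student whose end-of-intervention progress-monitoring data meet or exceed the computed goal would be judged a responder on this measure; persistent data points well below the goal trend line would feed the dual-discrepancy decision described earlier.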
How to Use This Checklist. When a struggling student receiving RTI interventions fails to make expected progress despite several attempts, educators must first have confidence that each link in the RTI chain was fully in place and of high quality for that student. Only then can it be concluded that general education has exhausted its RTI options, that the struggling student is a ‘non-responder’ to RTI, and that the student may require special education services. The checklist presented here is a quick survey of the key components of RTI to be verified in any general-education setting before a student may be considered a ‘non-responder’ to intervention.


References
  • Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools. New York: Routledge.
  • Christ, T. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 159-176). Bethesda, MD: National Association of School Psychologists.
  • Fuchs, L. (2003). Assessing intervention responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice, 18(3), 172-186.
  • Gansle, K. A., & Noell, G. H. (2007). The fundamental role of intervention implementation in assessing response to intervention. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Response to intervention: The science and practice of assessment and intervention (pp. 244-251). New York: Springer Publishing.
  • Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York: Guilford Press.
  • Howell, K. W., Hosp, J. L., & Kurns, S. (2008). Best practices in curriculum-based evaluation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 349-362). Bethesda, MD: National Association of School Psychologists.
  • Roach, A. T., & Elliott, S. N. (2008). Best practices in facilitating and evaluating intervention integrity. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 195-208). Bethesda, MD: National Association of School Psychologists.
  • Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363-383.