Intervention Integrity Part 2: Using Multiple Measures to Track the Quality With Which Interventions Are Carried Out
As schools implement academic and behavioral interventions, they strive to carry out those interventions with consistency and quality in classrooms that are fluid, fast-evolving instructional environments.
On the one hand, teachers must be prepared to improvise moment by moment in response to the changing demands of the classroom: for example, reordering their lesson plans on the fly to maintain student engagement, spending unanticipated extra time answering student questions, or interceding to address sudden behavior problems. On the other hand, it is a basic expectation that specific RTI interventions will be carefully planned and carried out as designed.
So how can a school ensure that interventions are implemented with consistency even in the midst of busy and rapidly shifting instructional settings? The answer is for the school to find efficient ways to track ‘intervention integrity’. After all, if the school lacks basic information about whether an intervention was done right, it cannot have confidence in the outcome of that intervention. And uncertainty about the quality with which the intervention is conducted will prevent the school from distinguishing truly ‘non-responding’ students from cases in which the intervention did not work simply because it was done incorrectly or inconsistently.
There are three general sources of data that can provide direct or indirect information about intervention integrity: (1) work products and records generated during the intervention, (2) teacher self-reports and self-ratings, and (3) direct structured observation of the intervention as it is being carried out. Each of these approaches has potential strengths and drawbacks.
- Work products and records generated during the intervention. Often student work samples and other records generated naturally as part of the intervention can be collected to give some indication of intervention integrity (Gansle & Noell, 2007). If student work samples are generated during an intervention, for example, the teacher can collect these work samples and record on them the date, start time, and end time of the intervention session. Additionally, the teacher can maintain a simple intervention contact log to document basic information for each intervention session, including the names of students attending the session (if a group intervention); date; and start time and end time of the intervention session.
An advantage of using work products and other records generated as a natural part of the intervention is that they are easy to collect. However, such work products and records typically yield only limited information on intervention integrity, such as whether interventions occurred with the expected frequency or whether each intervention session lasted the appropriate length of time. (The Intervention Contact Log (see attachment at the bottom of this page) is an example of a documentation tool that would track frequency, length of session, and group size for group interventions, although the form can be adapted as well for use with individual students.)
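For schools that prefer to keep such a contact log electronically, the sketch below shows one way a single log entry might be represented. The field names, class name, and example values are illustrative assumptions only; the printable Intervention Contact Log attachment defines its own layout.

```python
from dataclasses import dataclass, field
from datetime import date, time

# Hypothetical digital version of one contact-log entry. Field names are
# illustrative; the printable Intervention Contact Log defines its own layout.
@dataclass
class ContactLogEntry:
    session_date: date
    start_time: time
    end_time: time
    students: list[str] = field(default_factory=list)  # attendees, if a group intervention

    def session_minutes(self) -> int:
        """Length of the intervention session in whole minutes."""
        start = self.start_time.hour * 60 + self.start_time.minute
        end = self.end_time.hour * 60 + self.end_time.minute
        return end - start

# Example: a 20-minute small-group session.
entry = ContactLogEntry(
    session_date=date(2011, 10, 3),
    start_time=time(9, 10),
    end_time=time(9, 30),
    students=["Student A", "Student B", "Student C"],
)
print(entry.session_minutes())  # 20
```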
- Teacher self-reports and self-ratings. As another source of data, the teacher or other educators responsible for the intervention can periodically complete formal or informal self-ratings to provide information about whether the intervention is being carried out with integrity. Teacher self-ratings can be done in a variety of ways. For example, the instructor may be asked at the end of each intervention session to complete a brief rating scale (e.g., 0 = intervention did not occur; 4 = intervention was carried out completely and correctly). Or the teacher may periodically (e.g., weekly) be emailed an intervention integrity self-rating to complete.
One advantage of teacher self-ratings is that they are easy to complete, a definite advantage in classrooms where time is a very limited resource. A second advantage of self-ratings, as with any form of self-monitoring of behavior, is that they may prompt teachers to higher levels of intervention compliance (e.g., Kazdin, 1989). A limitation of teacher self-reports and self-ratings, though, is that they tend to be biased in a positive direction (Gansle & Noell, 2007), possibly resulting in an overly optimistic estimate of intervention integrity. (For an example of a self-rating format, view the Intervention Contact Log (see attachment at the bottom of this page), which incorporates the self-rating into a daily log sheet.)
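As a rough illustration, end-of-session self-ratings on the 0-4 scale described above could be tallied as in the sketch below; the sample ratings and summary choices are invented for the example.

```python
# Illustrative only: summarize a teacher's end-of-session self-ratings on the
# 0-4 integrity scale described above (0 = intervention did not occur;
# 4 = intervention was carried out completely and correctly).
ratings = [4, 3, 4, 0, 4, 2]  # one rating per intervention session

average_rating = sum(ratings) / len(ratings)
sessions_missed = ratings.count(0)

print(f"Average self-rating: {average_rating:.1f} of 4")
print(f"Sessions rated as not occurring: {sessions_missed}")
```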
- Direct observation of the intervention steps. The most direct way to measure the integrity of any intervention is through observation. First, the intervention is divided into a series of discrete steps to create an observation checklist. An observer would then visit the classroom with checklist in hand to watch the intervention being implemented and to note whether each step of the intervention is completed correctly (Roach & Elliott, 2008).
The direct observation of intervention integrity yields a single figure: ‘percentage of intervention steps correctly completed’. To compute this figure, the observer (1) adds up the number of intervention steps correctly carried out during the observation, (2) divides that sum by the total number of steps in the intervention, and (3) multiplies the quotient by 100 to calculate the percentage of steps in the intervention that were done in an acceptable manner. For example, a teacher conducts a 5-step reading fluency intervention with a student. The observer notes that 4 of the 5 steps were done correctly and that one was omitted. The observer divides the number of correctly completed steps (4) by the total number of possible steps (5) to get a quotient of .80. The observer then multiplies the quotient by 100 (.80 X 100), resulting in an intervention integrity figure of 80 percent.
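The same arithmetic, expressed as a short sketch (the function name is ours, chosen for illustration):

```python
def integrity_percentage(steps_correct: int, total_steps: int) -> float:
    """Percentage of intervention steps completed correctly:
    (correct steps / total steps) x 100."""
    return (steps_correct / total_steps) * 100

# Worked example from the text: 4 of 5 steps of a reading fluency
# intervention were completed correctly.
print(integrity_percentage(4, 5))  # 80.0
```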
The advantage of directly observing the steps of an intervention is that it gives objective, first-hand information about the degree to which that intervention has been carried out with integrity. However, this approach does have several drawbacks. The first possible hurdle is one of trust: Teachers and other intervention staff may believe that the observer who documents the quality of interventions will use the information to evaluate global job performance rather than simply to give feedback about the quality of a single intervention (Wright, 2007).
A second drawback of direct observations tied to an intervention checklist is that this assessment approach typically assigns equal weight to all intervention steps, when in fact some steps may be relatively unimportant while others are critical to the success of the intervention (Gansle & Noell, 2007). Schools can improve the ability of intervention-integrity checklists to weight the relative importance of intervention elements by constructing interventions more precisely at the design stage. When first developing a step-by-step intervention script, schools should review the research base to determine which of the steps comprising a particular intervention are essential and which could be considered optional or open to interpretation by the interventionist. The teacher would then have a clear understanding of which intervention steps are ‘negotiable’ and which are ‘non-negotiable’ (Hawkins, Morrison, Musti-Rao, & Hawkins, 2008). Of course, the intervention integrity checklist would also distinguish between the critical and non-critical intervention elements. (The Intervention Script Builder (see attachment at the bottom of this page) can be used to create an intervention integrity checklist by dividing an intervention into its constituent steps and identifying specific steps as ‘negotiable’ or ‘non-negotiable’.)
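One way a school might summarize results from a checklist that flags each step as ‘negotiable’ or ‘non-negotiable’ is sketched below. The step labels, data layout, and reporting choices are assumptions for illustration; they are not part of the Intervention Script Builder itself.

```python
# Hypothetical observation record: each checklist step is marked as
# non-negotiable (critical) or negotiable, plus whether the observer saw it
# completed correctly. Step labels are invented for illustration only.
checklist = [
    {"step": "Step 1", "non_negotiable": True,  "completed": True},
    {"step": "Step 2", "non_negotiable": True,  "completed": True},
    {"step": "Step 3", "non_negotiable": False, "completed": False},
    {"step": "Step 4", "non_negotiable": True,  "completed": True},
    {"step": "Step 5", "non_negotiable": False, "completed": True},
]

def percent_completed(items):
    """Percentage of the listed steps completed correctly."""
    return 100 * sum(item["completed"] for item in items) / len(items)

critical = [item for item in checklist if item["non_negotiable"]]
optional = [item for item in checklist if not item["non_negotiable"]]

print(f"All steps:            {percent_completed(checklist):.0f}%")
print(f"Non-negotiable steps: {percent_completed(critical):.0f}%")
print(f"Negotiable steps:     {percent_completed(optional):.0f}%")
```

Reporting the critical steps separately keeps a completed optional step from inflating, or a skipped optional step from masking, the integrity of the essential elements.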
Attachments
- Intervention Contact Log
- Intervention Script Builder
References
- Gansle, K. A., & Noell, G. H. (2007). The fundamental role of intervention implementation in assessing response to intervention. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Response to intervention: The science and practice of assessment and intervention (pp. 244-251). New York: Springer Publishing.
- Hawkins, R. O., Morrison, J. Q., Musti-Rao, S., & Hawkins, J. A. (2008). Treatment integrity for academic interventions in real-world settings. School Psychology Forum, 2(3), 1-15.
- Kazdin, A. E. (1989). Behavior modification in applied settings (4th ed.). Pacific Grove, CA: Brooks/Cole.
- Montague, M. (1992). The effects of cognitive and metacognitive strategy instruction on the mathematical problem solving of middle school students with learning disabilities. Journal of Learning Disabilities, 25, 230-248.
- Rhymer, K. N., Skinner, C. H., Jackson, S., McNeill, S., Smith, T., & Jackson, B. (2002). The 1-minute explicit timing intervention: The influence of mathematics problem difficulty. Journal of Instructional Psychology, 29(4), 305-311.
- Roach, A. T., & Elliott, S. N. (2008). Best practices in facilitating and evaluating intervention integrity. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 195-208). Bethesda, MD: National Association of School Psychologists.
- Wright, J. (2007). The RTI toolkit: A practical guide for schools. Port Chester, NY: National Professional Resources, Inc.