
A Preliminary Study Connecting School Improvement and MTSS With Student Outcomes

Published online by Cambridge University Press:  28 November 2023

Hank S. Bohanon (Loyola University Chicago, USA), Meng-Jia Wu (Loyola University Chicago, USA), Ali Kushki (Purdue University, USA), and Cheyne LeVesseur (Michigan’s MTSS Technical Assistance Center, USA)

Corresponding author: Hank Bohanon; Email: hbohano@luc.edu

Abstract

Schools have an increased focus on implementing schoolwide initiatives (e.g., multi-tiered systems of support; MTSS) to address risk factors related to dropping out. These interventions can involve multiple domains, including academic, behavioural, and social and emotional supports. Although researchers suggest that schoolwide interventions are effective, school staff may need help implementing various content (e.g., academic, behaviour) domains into a cohesive plan. This preliminary study focused on nine schools in the Midwestern United States that implemented schoolwide interventions as part of a statewide technical assistance approach. The research included using survey and extant data for all students to determine the connections between schoolwide interventions, school improvement, and student outcomes. Schools in this study that were higher on both school improvement and MTSS implementation had, in general, better student outcomes associated with predictors of dropping out of school. These findings indicate that school improvement and MTSS may be mutually beneficial enterprises that help school staff address factors related to dropping out.

Type: Original Article

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.

Copyright: © The Author(s), 2023. Published by Cambridge University Press on behalf of Australian Association of Special Education

Dropping out of school has been a concern for educators internationally. According to UNESCO (2021), 24 million students are at risk of dropping out globally. School risk factors related to dropout include students’ ability to make academic progress over time, teacher absenteeism, and the quality of the education provided (Sabates et al., 2010). Preventive school climate factors that can decrease the likelihood of dropout include school safety, the physical environment, teaching and learning, and interpersonal relationships (Thapa et al., 2012). One strategy for implementing these preventive factors is multi-tiered systems of support (MTSS).

MTSS approaches involve a three-tiered continuum of support (Durrance, 2023; Estrapala et al., 2021). These interventions include schoolwide (Tier 1), group level (Tier 2), and individualised (Tier 3) support (Gamm et al., 2012). Researchers suggest that staff who effectively implement Tier 1 strategies are more likely to implement more intensive Tier 2 and 3 interventions for their students (Eber et al., 2009). MTSS approaches can take the form of schoolwide positive behaviour interventions and supports (SWPBIS), response to intervention, social and emotional learning (SEL), and school-based mental health. Schools that have implemented comprehensive MTSS have successfully addressed risk factors related to dropout for all students (Bradshaw et al., 2010; Burke, 2015; Gage et al., 2018) and students with disabilities (Choi et al., 2020a). Even with access to MTSS, many schools struggle to integrate MTSS-related strategies (e.g., academic, behavioural) into their settings (Choi et al., 2022; Jarl et al., 2017; Wei & Johnson, 2020).

How School Improvement Supports Schoolwide Interventions

School improvement is a mechanism for organising schoolwide interventions, such as MTSS. It is a systematic approach to changing instruction and the school environment that involves standardisation of instruction, monitoring progress, administrative support, distributive and collective leadership, providing sufficient resources, and district-level support (Dolph, 2017; Sleegers et al., 2014). Improvement processes can be incremental, context specific, and challenging to implement (Donaldson & Weiner, 2017). However, schools with higher levels of schoolwide capacity, such as those implementing MTSS, can better integrate external reforms into their current structures than schools with lower capacity (Sleegers et al., 2014). Schoolwide capacity for improvement involves leadership practices, organisational conditions, teacher motivation, and teacher learning (Sleegers et al., 2014). Although addressing schoolwide capacity may increase the effectiveness of implementation, there are still challenges to effective school improvement.

Using MTSS to facilitate school improvement may support factors that lead to improved student outcomes. These factors include high expectations, purposeful actions, meaningful relationships (Kaniuka & Vickers, 2010), and systems components (e.g., administrative support, implementation teams) for interventions. For instance, the effectiveness of school improvement may be mediated by the functioning of the school improvement team (Benoliel, 2021). The MTSS process provides systems, structures, and strategies that support healthy problem-solving, which could help improve team functioning (Goodman & Bohanon, 2018). Also, developing school improvement plans based on problem identification, a component of MTSS, may lead to more effective outcomes (Mintrop, 2020). There is emerging research about integrating MTSS and school improvement for all students (Bohanon et al., 2021; Freeman et al., 2015) and for students with disabilities (Choi et al., 2019, 2020b; Sailor et al., 2017). However, few researchers have examined the interaction between MTSS and school improvement in secondary schools.

Theoretical Framework: Activity Theory

The underlying theory for this study is activity theory, which involves activity systems components and their internal relationships (Engeström, 1999). This theory involves a nested triangle model that includes societal and contextual factors. Activity systems consist of a subject (i.e., an individual or a subgroup), object (i.e., the orientation of activity), outcomes, community (i.e., participants with a shared goal), division of labour (i.e., distribution of roles), and rules (i.e., norms and conventions; Engeström, 1999). The current model’s premise is that multiple activity systems are integrated by a shared focus on a goal or object (Engeström, 2008). For example, schoolwide approaches for behaviour, academic, and SEL support focus on using explicit instruction for teaching skills (Bohanon & Wu, 2011). If the school improvement goals include behaviour, academic, and social outcomes, schoolwide teams may more easily integrate separate MTSS approaches into a joint plan.

Purpose of the Study

Our goal for this preliminary study was to improve our understanding of the connections between school improvement and MTSS implementation. Further, we hoped to learn how commitment to each process could improve student outcomes. We hypothesised that school improvement and MTSS would benefit from simultaneous implementation. This research is a response to calls to address how MTSS-related initiatives can improve student outcomes, including factors connected with dropping out of school (Horner et al., 2017). The following questions guided this research:

  • RQ1: What were the data patterns for schools above and below the median on measures related to implementing a schoolwide intervention, school improvement, and school improvement outcomes as assessed by statewide report card data?

  • RQ2: What were the changes in data patterns for schools above and below the median over time on measures related to schoolwide intervention implementation, school improvement, and school-level student outcomes (e.g., academic, graduation, dropout)?

  • RQ3: What were the patterns between implementing schoolwide interventions and school-level student outcomes?

  • RQ4: What data patterns emerged in office discipline referrals for individual students related to school improvement and schoolwide interventions?

Methods

This preliminary study occurred over 3 years, from the fall of 2014 to the spring of 2017. The institutional review board at Loyola University Chicago approved this study (IRB# 1396). The state education agency and its technical assistance team provided access to student-level and MTSS fidelity extant data. School-level data were publicly available on the state education agency’s website. Consent was only required for the school improvement measure designed for this research. The following section provides information regarding the participants, independent variables, dependent variables, and data analysis.

Participants

The participants for this study were staff from nine purposively sampled general education high schools. These schools implemented an integrated academic and behavioural schoolwide intervention in a Midwestern state in the United States. A statewide technical assistance team provided support for the implementation of MTSS. Technical assistance focused on providing effective academic and behavioural core instruction, using data for decision-making, developing effective team structures, and developing a continuum of student support. Participation in technical assistance included attending statewide MTSS training and submitting data related to the project (e.g., implementation fidelity data). These nine high schools represented all general education high schools implementing MTSS as a part of this statewide project at the time of data collection.

We mailed surveys (described in the Variables section) to the nine schools for this study to measure levels of school improvement. Table 1 includes demographic information for the sample schools with participants who did (n = 5, 56% response rate) and did not (n = 4) respond to the survey. In addition to the nine schools in the sample, we conducted a comparative analysis with 20 additional randomly selected high schools across the state. This analysis aimed to determine if schools in the study were similar to secondary schools throughout the state. Results from a permutation test indicated that the study schools were not significantly different from randomly selected schools in the state based on student demographic variables (e.g., number of students, socio-economic status, dropout rate).
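As a rough illustration of the comparison described in the preceding paragraph, the sketch below runs a simple two-sample permutation test on one demographic variable. The dropout-rate values and the choice of test statistic (absolute difference in group means) are hypothetical assumptions for illustration only; they are not the study’s data or its exact procedure.

import numpy as np

rng = np.random.default_rng(0)

def permutation_test(group_a, group_b, n_perm=10_000):
    """Two-sided permutation test on the absolute difference in group means."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly relabel which schools belong to which group
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        extreme += diff >= observed
    return (extreme + 1) / (n_perm + 1)  # permutation p-value

# Hypothetical dropout rates (%) for 9 study schools and 20 comparison schools.
study_schools = [3.1, 4.0, 2.5, 5.2, 3.8, 4.4, 2.9, 3.3, 4.1]
comparison_schools = rng.normal(3.7, 0.8, size=20).round(1).tolist()
print(permutation_test(study_schools, comparison_schools))
# A large p-value would be consistent with the two groups being similar on this variable.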

Table 1. Demographics for Schools With and Without Responders for the TIERS

Note. TIERS = Tiered Inventory of Effective Resources in Schools.

Variables

Fidelity measures for schoolwide supports

Two measures assessed the fidelity of implementation for schoolwide efforts: the Benchmarks of Quality (BoQ; Year 1 of the study, 2014–2015; Kincaid et al., 2010) and the SWPBIS Tiered Fidelity Inventory (TFI; Year 2 of the study, 2015–2016; Algozzine et al., 2014). The state technical assistance provider switched between the two instruments during the study. Both tools are self-assessments of MTSS that emphasise SWPBIS. The systems-level components (e.g., administrative support, leadership team development) for both instruments are similar to measures of MTSS for other domains (Bohanon & Wu, 2011; e.g., academic support, SEL). Internal schoolwide teams completed the BoQ and TFI with guidance from an external coach familiar with the instruments and MTSS.

Benchmarks of Quality

The BoQ included 53 items related to implementing schoolwide behaviour support at Tier 1 (e.g., faculty commitment, procedures for dealing with discipline). Schoolwide leadership teams scored each item on a Likert scale, ranging from 0 to a maximum of 3 points per item. Each item included a unique description for scoring purposes. Researchers have found the BoQ to be valid and reliable for measuring the implementation of SWPBIS, with an overall internal consistency of α = .96 (Cohen et al., 2007). The BoQ overall implementation score for this study’s sample was 38%, with a range of 20% to 59%.

Tiered Fidelity Inventory

The TFI also measures implementation fidelity of MTSS related to behaviour. The TFI includes 45 items related to implementing SWPBIS across three tiers (e.g., team composition, discipline policies). The items on the TFI are scored on a Likert scale, with values ranging from 0 to 2 points. The overall internal consistency of the TFI is α = .96. The TFI also includes subscales for all three tiers of schoolwide support. The Tier 1, 2, and 3 sections had alpha scores of .87, .96, and .98, respectively (McIntosh et al., 2017). Researchers identified moderate convergent validity between scores on the BoQ and the TFI (Mercer et al., 2017). The overall average TFI total score for the study schools was 20%, with a range of 7% to 29%. The Tier 1 average score across schools was 56%, ranging from 6% to 89%. For Tier 2, the average score was 9%, ranging from 0% to 63%. Finally, the Tier 3 school average was 5%, ranging from 0% to 50%. For analysis purposes, the BoQ total score and the TFI Tier 1 total score yielded the best comparisons due to the similarity of items in each instrument. Both BoQ and TFI data were collected during the second semester of each school year.

Tiered Inventory of Effective Resources in Schools

Based on our literature review, we could not find existing tools measuring elements of school improvement related to MTSS. Therefore, we designed the Tiered Inventory of Effective Resources in Schools (TIERS) for this study. We collected the TIERS data at the end of Year 2 and the beginning of Year 3 (2016–2017). The goal of the instrument was to measure components of school improvement implementation and MTSS. Most of the 25 items on the TIERS use nominal or ordinal Likert-type response options. The prompts include constructs from both school improvement and MTSS. A copy of the TIERS is included as supplementary material for this article.

We gathered validity evidence based on internal structure, that is, the degree to which the relationships among test items and components conformed to the constructs under study (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014, p. 13). Specifically, we examined construct validity by assessing the items’ content validity (Sireci & Padilla, 2014). We addressed content validity using two methods. First, we designed the TIERS items from a literature review on school improvement. Second, experts in schoolwide support and psychometrics reviewed the TIERS. Expert reviewers judged that the survey’s content addressed school improvement systems and data factors related to MTSS. The expert reviewers also judged the scaling appropriate for measuring the survey constructs (Adams & Lawrence, 2018; Forman & Crystal, 2015). We established the initial criterion-referenced validity (Kim & Shin, 2022) in a previous study (Bohanon et al., 2021). A Kendall rank-order coefficient test (W; Kendall, 1938; Puth et al., 2015) yielded a statistically significant correlation (W = 1.00, p < .025) between the scores on a statewide measure of school improvement and the TIERS. A higher score on the TIERS correlated with higher scores on aggregate school improvement outcomes (e.g., attendance rates, graduation rates, performance on standardised assessments).
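To make the concordance statistic reported above concrete, the following sketch computes Kendall’s W for two rankings of the same schools. The rankings are hypothetical and chosen so that the two measures agree perfectly (W = 1.0); they are not the study’s data, and the function assumes no tied ranks.

import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance (W) for an m x n array of ranks
    (m measures ranking the same n subjects), assuming no tied ranks."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                      # R_i for each subject
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # squared deviations of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: two measures (e.g., a statewide improvement index and the
# TIERS) each ranking the same five schools from 1 (lowest) to 5 (highest).
measure_a = np.array([1, 2, 3, 4, 5])
measure_b = np.array([1, 2, 3, 4, 5])   # identical ordering, so W = 1.0
print(kendalls_w(np.vstack([measure_a, measure_b])))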

Based on the TIERS data, 100% of the responders (n = 34) across the five schools that provided data reported having a school improvement plan and leadership team. Most (71%) reported having 6–10 leadership team members. Forty-four percent said it was usually or always true they reviewed data related to their school improvement plan three times per year. Additionally, 53% indicated using data to progress monitor all interventions for students in their schools. Although the schools appeared to have a plan and leadership teams in place, they implemented improvement data practices to a lesser degree.

Outcome data

School-level data

The state provided the school-level data through a public data warehouse. There has been a call for using publicly available existing data due to their cost effectiveness and ability to access data that would not be otherwise available (Watkins & Johnson, 2023). In the case of this study, we accessed schoolwide student data we could not have collected on our own. We measured school improvement outcomes using school-level scorecard data (RQs 1–2). These datasets included students from all demographics, including those with and without disabilities. The data were available for the second year of MTSS implementation (2015–2016). These data include variables typically associated with early warning systems (Carl et al., 2013). The state provided each school’s raw score and total points based on the scorecard dataset. The scorecard data included a composite of (a) graduation rates; (b) educator evaluations; (c) compliance factors (e.g., submitting a school improvement plan); (d) student proficiency on standardised assessments; (e) the percentage of students who participate in standardised assessments; and (f) attendance rates.

Data were also publicly available at the school level for graduation and dropout rates for Year 1 of MTSS implementation (2014–2015) and the prior year (2013–2014; RQ3). Standardised test scores for college and career readiness for maths and all subject areas were available for Years 1 (2014–2015) and 2 (2015–2016) of MTSS implementation (RQ2) for analysis with school improvement (i.e., TIERS). Only standardised maths scores were available for analysis with schoolwide interventions (i.e., BoQ) for Years 1 and 2 of MTSS implementation. Although student outcome data were collected some time ago, these same variables are still collected to evaluate student progress and school improvement. Therefore, these data appear to be relevant to current educational issues.

Student-level data

We used student-level office discipline referrals (ODRs) to study patterns with MTSS and school improvement (RQ4). Both students with and without disabilities received ODRs in schools in this state. However, the state only required schools to collect ODR data for students with disabilities. Due to this limitation, the analysis for RQ4 focused on students with disabilities. These data were not publicly available, and we were granted access after formally applying to the state department of education. Although not all students with disabilities receive ODRs, researchers have shown that these students are more likely to receive disciplinary action (Green et al., 2019). However, with proper contextual support, these students may be less likely to receive punitive responses (Hurwitz et al., 2021). Therefore, we considered analysis of this subgroup to be a valuable addition to understanding the role of school improvement and MTSS-related interventions. Data for Years 1 (2014–2015) and 2 (2015–2016) of MTSS implementation were available. Based on previous research, ODR data are typically organised by the percentage of students with zero to one (i.e., Tier 1), two to five (i.e., Tier 2), and six or more (i.e., Tier 3) referrals (PBISApps, 2022). These cut points provide a valid method for identifying the support students need related to externalised problem behaviour (McIntosh et al., 2009). Seven schools (n = 7) reported ODR data for 2014–2015, and eight schools (n = 8) reported ODR data for 2015–2016. The data included a summary of the total number of referrals for each student with a disability by school.

Analysis

We used descriptive statistics for RQ1 and RQ2. We first analysed the TIERS, BoQ, TFI, and school improvement data for all five schools based on total scores on each variable. For RQ1 and RQ2, we analysed the data based on schools whose TIERS scores were above and below the median score. We used average scores and growth between years where the outcome variables were available. For RQ1, we also used the Kendall W to determine if there was a relationship between the rank order of the data on schoolwide interventions as measured by the BoQ and TFI, school improvement as measured by the TIERS, and the school improvement scorecard data.

We applied the Spearman rank-order correlation coefficient (ρ) to analyse RQ3. The focus of RQ3 was the relationship between intervention and student outcomes at the school level. For the analysis, we focused on the BoQ measured in 2015 and its association with the change in three student outcomes: maths (between 2015 and 2016), graduation rate (between 2014 and 2015), and dropout rate (between 2014 and 2015). Although data interpretation guidelines exist, the cut points are arbitrary and inconsistent across recommendations (Schober et al., 2018). Therefore, readers should use caution when interpreting descriptors associated with cut points for correlation coefficients. The Kendall W and the Spearman ρ yield scores ranging from −1 to +1. The higher the absolute value, the stronger the association. A positive result indicates that a higher value on one variable is associated with a higher value on another, whereas a negative result indicates that a higher value on one variable is associated with a lower value on the other (Puth et al., 2015). For the Spearman ρ, general guidance is that 0 to ± 0.20 = negligible, ± 0.21 to ± 0.40 = weak, ± 0.41 to ± 0.60 = moderate, ± 0.61 to ± 0.80 = strong, and ± 0.81 to ± 1.00 = very strong (Prion & Haerling, 2014). For the Kendall W, recommendations are 0 to ± 0.05 = negligible, ± 0.06 to ± 0.25 = weak, ± 0.26 to ± 0.48 = moderate, ± 0.49 to ± 0.70 = strong, and ± 0.71 to ± 1.00 = very strong (Schober et al., 2018). The Kendall W and the Spearman ρ, both nonparametric statistics, were appropriate because of the study’s small sample size. Further, these statistics do not require assumptions of normality of the data (Puth et al., 2015; Siegel & Castellan, 1988).
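As a minimal illustration of the rank-order correlation used for RQ3, the sketch below computes Spearman’s ρ for two school-level variables. The BoQ scores and dropout-rate changes are hypothetical placeholders, not the study’s data.

from scipy.stats import spearmanr

# Hypothetical school-level values for nine schools: BoQ total scores (%) and the
# change in dropout rate (percentage points). Not the study's data.
boq_scores = [20, 25, 30, 33, 38, 42, 48, 55, 59]
dropout_delta = [1.0, 0.5, 0.2, -0.1, -0.4, 0.3, -0.8, -1.2, -0.9]

rho, p_value = spearmanr(boq_scores, dropout_delta)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A negative rho would indicate that higher BoQ scores tend to go with larger
# reductions in the dropout rate.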

The ODR data analysis (RQ4) was descriptive and visual. We determined the percentage of students with disabilities with ODRs for each school with data. Specifically, we determined the percentage of students who had zero to one ODRs (who responded to Tier 1 strategies), two to five ODRs (who required Tier 2 supports), and six or more ODRs (who required Tier 3 supports). The comparison included the schools above and below the median score on the BoQ (n = 6), the TFI (n = 8), and the TIERS (n = 4). Caution should be used when interpreting these ODR results. Districts were only required to report discipline data for students with disabilities, and only if the behavioural incident resulted in a suspension of 10 days or more. These data likely underrepresent incidents of schoolwide discipline, even for students with individualised education plans.
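The descriptive binning described above can be illustrated with a short sketch that assigns each student’s ODR total to the conventional MTSS bands and reports the percentage in each band. The ODR counts below are hypothetical, not the study’s data.

from collections import Counter

def odr_band(count: int) -> str:
    """Map a student's yearly ODR total to the conventional MTSS bands."""
    if count <= 1:
        return "0-1 (Tier 1)"
    if count <= 5:
        return "2-5 (Tier 2)"
    return "6+ (Tier 3)"

# Hypothetical per-student ODR totals for one school.
odr_counts = [0, 0, 1, 0, 2, 0, 3, 0, 1, 6, 0, 0, 1, 4, 0, 0, 0, 2, 0, 1]

bands = Counter(odr_band(c) for c in odr_counts)
total = len(odr_counts)
for band in ("0-1 (Tier 1)", "2-5 (Tier 2)", "6+ (Tier 3)"):
    print(f"{band}: {100 * bands[band] / total:.0f}% of students")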

Results

We present the results of this study based on the research questions.

Patterns Between School Improvement and Schoolwide Intervention (RQ1)

The analysis for RQ1 involved comparing school improvement, as measured by the TIERS, and schoolwide interventions, as measured by the BoQ and TFI. Table 2 provides information on the five schools that responded to the TIERS survey. Although the TIERS data were collected later than the BoQ and TFI, the table illustrates a possible trajectory connecting school improvement and school-level outcomes. The table includes the total points schools could earn across the TIERS, BoQ, TFI, and the school improvement scorecard data. We present the data in order from the highest to the lowest score on the TIERS. As illustrated in the table, there did appear to be a pattern between the scores on the TIERS and the school improvement scorecard data, with higher TIERS scores occurring for schools with higher school improvement scores. Table 3 includes data that compare the scores above and below the median score on the TIERS. The average scores for the BoQ (36%) and the TFI at Tier 1 (70%) were higher for schools above the median (n = 2) on the TIERS than for the schools below the median (n = 2, BoQ = 29%, TFI = 42%). Additionally, the schools with scores above the median on the BoQ and the TFI for Tier 1 also had higher school improvement data scores.

Table 2. Comparison of the School Improvement Score, Benchmarks of Quality, Tiered Fidelity Inventory, and TIERS

Note. TIERS = Tiered Inventory of Effective Resources in Schools; BoQ = Benchmarks of Quality; TFI = Tiered Fidelity Inventory. Data are presented from the highest to lowest score on the TIERS.

Table 3. Comparing MTSS Implementation Fidelity, School Improvement, and School Improvement Outcomes Based on the Top Two and Bottom Two Scores on the TIERS

Note. MTSS = multi-tiered systems of support; TIERS = Tiered Inventory of Effective Resources in Schools; BoQ = Benchmarks of Quality; TFI = Tiered Fidelity Inventory. Data are presented based on the schools above and below the median score on the TIERS (n = 5 schools).

There were mixed results in the relationships between school improvement, schoolwide interventions, and school improvement outcomes. The relationship between the BoQ and the TIERS was not statistically significant (W = .36, p < .18). Likewise, the relationship between the TFI Tier 1 scores and the TIERS was not significant (W = .00, p < 1.00). Additionally, the relationship between the BoQ and the TFI Tier 1 scores was not statistically significant (W = .36, p < .18). However, the BoQ and TFI Tier 1 were each statistically significantly related to the school improvement scorecard data (W = 1.00, p < .025).

Patterns Among School Improvement, Schoolwide Intervention, and School-Level Outcomes (RQ2)

Research Question 2 involved school improvement, schoolwide interventions, and available school-level outcomes. Table 3 includes information comparing the scores for schools above and below the median on the TIERS. In this sample, the schools above the median on the TIERS (n = 2, M = 80.5%) were also two percentage points higher in school improvement points earned than schools below the median score.

Table 4 includes data on school improvement (as measured by the TIERS) and graduation and dropout rates in terms of change over time. As shown in Table 4, schools with lower levels of school improvement, as measured by the TIERS, observed decreases in graduation rates. In comparison, those with higher levels had comparatively improved graduation rates. Schools above the median on the TIERS also had a greater reduction in dropout rates (M = −35.9%) than schools below the median (M = 86.8%) across years.

Table 4. Analysis of School-Level MTSS Fidelity Data and Graduation Data for Schools Above and Below the Median Score on the TIERS

Note. MTSS = multi-tiered systems of support; TIERS = Tiered Inventory of Effective Resources in Schools. Data are presented based on the schools above and below the median score on the TIERS (n = 5 schools).

Table 5 provides an illustration of the data for school improvement and the percentage of students who were college and career ready, as measured by statewide standardised assessments. Data were only available for maths and a summative score for all subject areas. The schools below the median on the TIERS started with higher scores for maths and all subjects. However, the schools with TIERS scores above the median demonstrated more growth across the 2 years for maths (M = 12.4%) and all academic subjects (M = 52.3%) than those below the median.

Table 5. Analysis of Readiness for Maths and All Subjects for Schools Above and Below the Median Score on the TIERS

Note. TIERS = Tiered Inventory of Effective Resources in Schools. Data are presented based on the schools above and below the median score on the TIERS (n = 5 schools).

Relationship Among Schoolwide Interventions and Student Outcomes (RQ3)

This analysis includes all nine general education high schools from the sample (n = 9). In some instances, the total score on the BoQ seemed to be related to student outcomes. The relationship between BoQ and the following year’s maths change was positive yet negligible (rs = .07). The relationship between BoQ and graduation rate (rs = .35) was weak. The relationship between BoQ and the dropout rate (rs = −.50) was moderate.

ODR Data Patterns for Schoolwide Interventions and School Improvement (RQ4)

We present the ODR data analysis for students with disabilities through comparisons with the TIERS, BoQ, and TFI at Tier 1. Figure 1 includes a comparison of the schools above and below the median on the TIERS and the percentage of students with zero to one, two to five, and six or more ODRs from the 2015–2016 school year. We chose this year for the ODR comparison because it was closest to the time of TIERS data collection. The average number of students with disabilities at each school was M = 89 (n = 444). Across all schools with TIERS and ODR data (n = 5), an average of 90% (SD = 6.87%) of the students with disabilities had zero to one ODRs, 9% (SD = 7.12%) had two to five ODRs, and 1% (SD = 1.42%) had six or more ODRs. However, schools above the median on the TIERS (n = 2) had more students with one or fewer ODRs and fewer students with two to five ODRs than the two schools below the median score. The schools above the median on the TIERS also had a lower percentage of students with six or more ODRs.

Figure 1. Percentage of Students With Disabilities With ODRs, TIERS Comparison.

Note. The figure includes a comparison of the schools with office discipline referral (ODR) data from 2015 to 2016 above the Tiered Inventory of Effective Resources in Schools (TIERS) median (n = 2) and below the TIERS median (n = 2). Each portion of the graph represents the percentage of students with ODRs based on a specific frequency range (e.g., 0–1, 2–5, 6 or more). Results were rounded to the nearest whole number.

Seven schools had BoQ and ODR data (n = 7) during the 2014–2015 school year. The average number of students per school was M = 101, totalling 708 students with disabilities with ODR data. When comparing the BoQ (see Figure 2) and ODRs, these schools (n = 7) had an average of 94% (SD = 2.72%) of the students with disabilities with one or fewer ODRs, 6% (SD = 2.62%) with two to five ODRs, and 0.11% (SD = 0.3%) with six or more ODRs. Figure 2 provides a comparison of the schools above and below the median on the BoQ in terms of the percentage of students with disabilities with one or fewer, two to five, and six or more ODRs. Schools above the median on the BoQ (n = 3) had more students with one or fewer ODRs and fewer students with two to five ODRs than schools below the median score (n = 3). The schools above and below the median on the BoQ had approximately the same number of students with six or more ODRs.

Figure 2. Percentage of Students With Disabilities With ODRs, BoQ Comparison.

Note. The figure includes a comparison of schools with office discipline referral (ODR) data from 2014 to 2015 above the Benchmarks of Quality (BoQ) median (n = 3) and below the BoQ median (n = 3). Each portion of the graph represents the percentage of students with ODRs based on a specific frequency range (e.g., 0–1, 2–5, 6 or more). Results were rounded to the nearest whole number.

Eight schools from the sample had TFI Tier 1 and ODR data (n = 8) for the 2015–2016 school year. The average number of students with disabilities at each school was M = 96 (n = 764). When comparing the TFI at Tier 1 with ODRs (see Figure 3), all schools with data (n = 8) had an average of 92% (SD = 6.58%) of the students with disabilities with one or fewer ODRs, 7% (SD = 6.54%) with two to five ODRs, and .52% (SD = 1.13%) with six or more ODRs. Due to tied scores, we calculated a median for the TFI at Tier 1 based on the total range of the scores. Schools above the median on the TFI Tier 1 (n = 4) had more students with zero to one ODRs and fewer students with two to five ODRs than schools below the median score (n = 4).

Figure 3. Percentage of Students With Disabilities With ODRs, TFI Tier 1 Comparison.

Note. The figure includes a comparison of schools with office discipline referral (ODR) data from 2015 to 2016 above the Tiered Fidelity Inventory (TFI) Tier 1 median (n = 4) and below the TFI Tier 1 median (n = 4). Each portion of the graph represents the percentage of students with ODRs based on a specific frequency range (e.g., 0–1, 2–5, 6 or more). Results were rounded to the nearest whole number.

Discussion

This preliminary study’s purpose was to better understand the relationships between school improvement, schoolwide interventions, and student outcomes. The underlying theory behind connecting school improvement and MTSS was activity theory (Engeström, 1999). At its core, activity theory is concerned with understanding how individuals interact with their environment, including other people, artefacts, and institutions. The framework assumes that human activity is mediated by tools, which can be physical, social, or symbolic. These tools shape our interactions with the environment and influence how we perceive, think, and act.

In the context of school improvement and MTSS, activity theory was used to understand how these systems are designed, implemented, and evaluated. By analysing the activity system, including the roles of each stakeholder, the tools they use, and the institutional contexts in which they operate, we better understand the challenges and opportunities for improving student outcomes. In implementing MTSS, activity theory can help us understand how teachers and administrators use different tools, such as progress monitoring assessments or behaviour interventions, to support student learning and behaviour. It can also help us to identify the institutional barriers that may prevent effective implementation, such as a lack of resources or professional development opportunities.

Specifically, activity theory is helpful in interpreting the study’s results. School improvement and MTSS both have activity systems that could be mutually beneficial (Engeström, 2008). School improvement planning involves developing shared goals and distributing roles and responsibilities (Mintrop, 2020; Sleegers et al., 2014). Both MTSS and school improvement focus on creating systems to support procedures (Benoliel, 2021; Gamm et al., 2012). Further, MTSS includes methods for providing a continuum of support. Therefore, activity theory provides a useful lens with which to identify each system’s similarities, differences, and mutual benefits for improving student outcomes.

In this study, there appeared to be a modest positive relationship between school improvement efforts and the implementation of schoolwide interventions. Additionally, two of the datasets used in this study, the TIERS and school improvement scorecard data, often showed similar patterns (i.e., high or low) within schools. For instance, schools with higher levels of school improvement observed a greater reduction in dropout rates and comparatively improved graduation rates for all students. Reductions in ODRs for students with disabilities and growth in all students’ college and career readiness were also greater for schools with higher TIERS scores. These observations may be partially due to the connections between school improvement (e.g., shared goals, division of labour) and schoolwide MTSS interventions (e.g., shared norms and conventions; Benoliel, 2021; Engeström, 2008; Goodman & Bohanon, 2018).

Similar to other research (Sleegers et al., 2014), the schools in this study with higher levels of schoolwide capacities, as measured by the TFI and BoQ, appeared to be better able to implement school improvement strategies, as measured by the TIERS. This outcome may be partly due to the schools’ focus on leadership capacity, problem-solving, team functioning, and other organisational conditions through their work on MTSS (Goodman & Bohanon, 2018; Lane et al., 2013). None of the schools in this study fully implemented MTSS based on the BoQ and TFI data. However, their capacity to implement school improvement, as measured by the TIERS, may have improved their ability to integrate MTSS into their setting.

The systems development that is a part of MTSS may also have improved the staff’s capacity to address school improvement priorities in their settings (Sleegers et al., 2014). For example, the focus on healthy team functioning included in MTSS may have supported the school leadership teams’ ability to implement improvement strategies (Benoliel, 2021). Schools in the study with higher levels of MTSS implementation demonstrated a higher degree of improvement in outcomes for all students (e.g., maths, dropout). This result may have been a function of the teams’ ability to use MTSS problem-solving strategies (e.g., reviewing screening data, problem identification) within their school improvement approaches (Goodman & Bohanon, 2018; Mintrop, 2020).

Additionally, the schools above the median on the BoQ and TFI reflected the expected proportion of students with disabilities with ODRs based on models of MTSS (i.e., 80% with zero to one ODRs, 5%–15% with two to five ODRs, and 5% or fewer with six or more ODRs). Interestingly, there appeared to be a higher proportion of students at the universal or targeted level, such as those with five or fewer ODRs, in schools above the median on the BoQ, TFI, and TIERS. These findings are similar to those of other studies on decreases in ODRs related to MTSS approaches (Bradshaw et al., 2010; Gage et al., 2018).

This preliminary study extends the connection between MTSS and reductions in ODRs to include an association with school improvement, as measured through the TIERS. Additionally, the results involving ODRs relate to existing research supporting the view that schoolwide efforts and school improvement can improve outcomes for students with disabilities (Choi et al., 2019, 2020b; Sailor et al., 2006). Students with disabilities already receiving disciplinary action (Green et al., 2019) are less likely to receive punitive responses once they receive specialised services and support (Hurwitz et al., 2021). The school staff’s ability to support students with disabilities and behavioural issues may be related to the effectiveness of their schoolwide environment (Eber et al., 2009). Although the results of this study may be promising, readers should consider the study outcomes in light of its limitations.

Limitations

Given the small sample size, it was difficult to quantitatively establish the validity of the TIERS for measuring MTSS and school improvement. This preliminary research was a case study based on a sample of all nine general education high schools implementing MTSS in one state. As was the case in the state where this study occurred, far fewer high schools than primary schools implement MTSS, which made it challenging to increase the sample size. However, these results may provide insights that could guide future researchers studying the connections between MTSS and school improvement. Readers should consider how these findings might transfer to their setting, rather than generalising to all high school settings. Further, the nonparametric statistics used in this study do not assume normality and do not require assumptions about the underlying distributions, making them appropriate for applications with smaller datasets (Puth et al., 2015; Siegel & Castellan, 1988). There also were limitations in data availability from the state board of education. To address this concern, we only used outcome data where we could consistently compare data across school years. Additional studies should include datasets across more academic subjects and perhaps include SEL measures.

Although we addressed the TIERS’ content validity through qualitative efforts (e.g., grounding items in the research literature, expert review), researchers of future studies should use a larger sample to determine the TIERS’ psychometric properties. We also attempted to address the TIERS’ construct validity. However, future studies should address its overall reliability and validity. For example, researchers can use cognitive pretesting (Lenzner et al., 2016) to understand how respondents perceived the TIERS’ items. Although we attempted to reduce respondent fatigue by keeping the tool shorter, further research is needed to pinpoint any underlying problems with the instrument from the participants’ perspective. Although the research does not completely establish criterion validity, it is a small step toward understanding the criterion-referenced validity of the instrument. Researchers of future studies should draw upon larger samples to continue establishing criterion-referenced validity for the TIERS. Due to these limitations, the reader should use caution when considering these results. Future studies with larger sample sizes could also better describe the magnitude of outcomes included in this study. These studies should also include more sites that have reached full MTSS implementation. Researchers also should consider interview and observational data to identify factors related to MTSS and school improvement.

In most cases, implementation was below the full implementation threshold for MTSS. Only one school met the implementation fidelity benchmark on the TFI Tier 1 scale. This lack of fidelity of implementation limits the study’s findings. Although not impossible, implementing schoolwide interventions at full capacity may be more difficult to achieve quickly in secondary than in primary schools (Durrance, 2023; Estrapala et al., 2021). Contextual barriers include organisational structures, school size, and student development (Estrapala et al., 2021). Implementation in secondary settings requires more time and resources to reach full fidelity (Durrance, 2023). At the time of the study, the schools in this sample had 2 years of documented MTSS implementation. Secondary schools may take over 2 years to fully implement MTSS (Durrance, 2023). Again, this sample represented all general education high schools implementing MTSS supported by the state. Therefore, subtle differences in data may provide insights into the early stages of MTSS implementation in secondary schools. In terms of ODRs, future studies should include data for all students, including those without disabilities. Despite ODRs only being available for students with disabilities, we believe interpreting these data tells part of the story of a subgroup keenly impacted by disciplinary actions (Green et al., 2019).

Conclusion

Preventing school dropout may include addressing school improvement and schoolwide factors. In this study, schools with higher levels of school improvement generally had higher levels of MTSS implementation. Overall, schools in this preliminary study that were higher on both school improvement and MTSS implementation had better outcomes associated with predictors of dropping out of school. School improvement and MTSS may be mutually beneficial enterprises that help school staff address factors related to dropping out. MTSS may help develop systems capacity and response arrays to better implement school improvement. School improvement may provide a common structure to help MTSS approaches (e.g., behaviour, academic, social and emotional) integrate into one cohesive system. Because the BoQ and TFI did not measure the academic form of MTSS, some of the improvement outcomes related to academics and dropout may have been due to improvements in core instruction, which were not measured. Both measures of MTSS assessed whether the system supports were in place to implement schoolwide interventions effectively. More research is needed to better understand how school improvement and MTSS approaches can support improved outcomes for students who are most at risk of dropping out of school, including students with disabilities. We hope this preliminary study contributes to a better understanding of the relationship between these two schoolwide approaches.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/jsi.2023.15

Acknowledgements

The authors thank Karen Berlin, coordinator, Region 4 Training and Technical Assistance Center at George Mason University, and Dr Rachel Freeman, director of state initiatives, Institute on Community Integration at the University of Minnesota, for their feedback on this article. The authors also thank Dr Anna Harms and Dr Steve Goodman from Michigan’s MTSS Technical Assistance Center for supporting this project.

Funding

This research was partially sponsored by an internal research stimulation grant from Loyola University Chicago.

Footnotes

This manuscript was accepted under the Editorship of Michael Arthur-Kelly.

References

Adams, K. A., & Lawrence, E. K. (2018). Research methods, statistics, and applications (2nd ed.). SAGE Publications.
Algozzine, B., Barrett, S., Eber, L., George, H., Horner, R., Lewis, T., Putnam, B., Swain-Bradway, J., McIntosh, K., & Sugai, G. (2014). SWPBIS Tiered Fidelity Inventory. OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports. https://www.pbis.org
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing (Rev. ed.). American Educational Research Association.
Benoliel, P. (2021). A team-based perspective for school improvement: The mediating role of school management teams. Journal of Research on Educational Effectiveness, 14(2), 442–470. https://doi.org/10.1080/19345747.2020.1849481
Bohanon, H., & Wu, M.-J. (2011). Can prevention programs work together? An example of school-based mental health with prevention initiatives. Advances in School Mental Health Promotion, 4(4), 35–46. https://doi.org/10.1080/1754730X.2011.9715641
Bohanon, H. S., Wu, M.-J., Kushki, A., LeVesseur, C., Harms, A., Vera, E., Carlson-Sanei, J., & Shriberg, D. (2021). The role of school improvement planning in the implementation of MTSS in secondary schools. Preventing School Failure, 65(3), 230–242. https://doi.org/10.1080/1045988X.2021.1908215
Bradshaw, C. P., Mitchell, M. M., & Leaf, P. J. (2010). Examining the effects of schoolwide positive behavioral interventions and supports on student outcomes: Results from a randomized controlled effectiveness trial in elementary schools. Journal of Positive Behavior Interventions, 12(3), 133–148. https://doi.org/10.1177/1098300709334798
Burke, A. (2015). Early identification of high school graduation outcomes in Oregon Leadership Network schools (REL 2015–079). U.S. Department of Education; Institute of Education Sciences; National Center for Education Evaluation and Regional Assistance; Regional Educational Laboratory Northwest.
Carl, B., Richardson, J. T., Cheng, E., Kim, H., & Meyer, R. H. (2013). Theory and application of early warning systems for high school and beyond. Journal of Education for Students Placed at Risk (JESPAR), 18(1), 29–49. https://doi.org/10.1080/10824669.2013.745374
Choi, J. H., McCart, A. B., Hicks, T. A., & Sailor, W. (2019). An analysis of mediating effects of school leadership on MTSS implementation. The Journal of Special Education, 53(1), 15–27. https://doi.org/10.1177/0022466918804815
Choi, J. H., McCart, A. B., Miller, D. H., & Sailor, W. (2022). Issues in statewide scale up of a multi-tiered system of support. Journal of School Leadership, 32(5), 514–536. https://doi.org/10.1177/10526846211067650
Choi, J. H., McCart, A. B., & Sailor, W. (2020a). Reshaping educational systems to realize the promise of inclusive education. FIRE: Forum for International Research in Education, 6(1), 8–23. https://doi.org/10.32865/fire202061179
Choi, J. H., McCart, A. B., & Sailor, W. (2020b). Achievement of students with IEPs and associated relationships with an inclusive MTSS framework. The Journal of Special Education, 54(3), 157–168. https://doi.org/10.1177/0022466919897408
Cohen, R., Kincaid, D., & Childs, K. E. (2007). Measuring school-wide positive behavior support implementation: Development and validation of the Benchmarks of Quality. Journal of Positive Behavior Interventions, 9(4), 203–213. https://doi.org/10.1177/10983007070090040301
Dolph, D. (2017). Challenges and opportunities for school improvement: Recommendations for urban school principles. Education and Urban Society, 49(4), 363–387. https://doi.org/10.1177/0013124516659110
Donaldson, M. L., & Weiner, J. (2017). The science of improvement: Responding to internal and external challenges in a complex school environment. Journal of Cases in Educational Leadership, 20(3), 65–75. https://doi.org/10.1177/1555458917705412
Durrance, S. (2023). Implementing MTSS in secondary schools: Challenges and strategies. Comprehensive Center Network. https://region6cc.uncg.edu/wp-content/uploads/2022/06/ImplementingMTSSinSecondarySchools_2022_RC6_003.pdf
Eber, L., Phillips, D., Upreti, G., Hyde, K., Lewandowski, H., & Rose, J. (2009). Illinois Positive Behavioral Interventions & Supports (PBIS) network 2008-09 progress report. Illinois Positive Behavioral Interventions & Supports.
Engeström, Y. (1999). Activity theory and individual and social transformation. In Engeström, Y., Miettinen, R., & Punamäki, R.-L. (Eds.), Perspectives on activity theory (pp. 19–38). Cambridge University Press. https://doi.org/10.1017/CBO9780511812774.003
Engeström, Y. (2008). From teams to knots: Activity-theoretical studies of collaboration and learning at work. Cambridge University Press. https://doi.org/10.1017/CBO9780511619847
Estrapala, S., Rila, A., & Bruhn, A. L. (2021). A systematic review of Tier 1 PBIS implementation in high schools. Journal of Positive Behavior Interventions, 23(4), 288–302. https://doi.org/10.1177/1098300720929684
Forman, S. G., & Crystal, C. D. (2015). Systems consultation for multitiered systems of supports (MTSS): Implementation issues. Journal of Educational and Psychological Consultation, 25(2–3), 276–285. https://doi.org/10.1080/10474412.2014.963226
Freeman, R., Miller, D., & Newcomer, L. (2015). Integration of academic and behavioral MTSS at the district level using implementation science. Learning Disabilities: A Contemporary Journal, 13(1), 59–72.
Gage, N. A., Whitford, D. K., & Katsiyannis, A. (2018). A review of schoolwide positive behavior interventions and supports as a framework for reducing disciplinary exclusions. The Journal of Special Education, 52(3), 142–151. https://doi.org/10.1177/0022466918767847
Gamm, S., Elliott, J., Halbert, J. W., Price-Baugh, R., Hall, R., Walston, D., Uro, G., & Casserly, M. (2012). Common Core State Standards and diverse urban students: Using multi-tiered systems of support. Council of the Great City Schools. https://files.eric.ed.gov/fulltext/ED537476.pdf
Goodman, S., & Bohanon, H. (2018). A framework for supporting all students: One-size-fits-all no longer works in schools. American School Board Journal, February, 14. https://ecommons.luc.edu/education_facpubs/116/
Green, A. L., Cohen, D. R., & Stormont, M. (2019). Addressing and preventing disproportionality in exclusionary discipline practices for students of color with disabilities. Intervention in School and Clinic, 54(4), 241–245. https://doi.org/10.1177/1053451218782437
Horner, R. H., Sugai, G., & Fixsen, D. L. (2017). Implementing effective educational practices at scales of social importance. Clinical Child and Family Psychology Review, 20(1), 25–35. https://doi.org/10.1007/s10567-017-0224-7
Hurwitz, S., Cohen, E. D., & Perry, B. L. (2021). Special education is associated with reduced odds of school discipline among students with disabilities. Educational Researcher, 50(2), 86–96. https://doi.org/10.3102/0013189X20982589
Jarl, M., Andersson, K., & Blossing, U. (2017). Success or failure? Presenting a case selection strategy for studies of school improvement. Education Inquiry, 8(1), 17–32. https://doi.org/10.1080/20004508.2016.1275177
Kaniuka, T. S., & Vickers, M. (2010). Lessons learned: How early college high schools offer a pathway for high school reform. NASSP Bulletin, 94(3), 165–183. https://doi.org/10.1177/0192636510384982
Kendall, M. G. (1938). A new measure of rank correlation. Biometrika, 30(1–2), 81–93. https://doi.org/10.1093/biomet/30.1-2.81
Kim, J., & Shin, S. (2022). Development of the Nursing Practice Readiness Scale for new graduate nurses: A methodological study. Nurse Education in Practice, 59, Article 103298. https://doi.org/10.1016/j.nepr.2022.103298
Kincaid, D., Childs, K., & George, H. (2010). School-Wide Benchmarks of Quality (Revised). University of South Florida.
Lane, K. L., Menzies, H. M., Ennis, R. P., & Bezdek, J. (2013). School-wide systems to promote positive behaviors and facilitate instruction. Journal of Curriculum and Instruction, 7(1), 6–31. https://doi.org/10.3776/joci.2013.v7n1p6-31
Lenzner, T., Neuert, C., & Otto, W. (2016). Cognitive pretesting: GESIS survey guidelines. GESIS – Leibniz Institute for the Social Sciences. https://doi.org/10.15465/gesis-sg_en_010
McIntosh, K., Campbell, A. L., Carter, D. R., & Zumbo, B. D. (2009). Concurrent validity of office discipline referrals and cut points used in schoolwide positive behavior support. Behavioral Disorders, 34(2), 100–113. https://doi.org/10.1177/019874290903400204
McIntosh, K., Massar, M. M., Algozzine, R. F., George, H. P., Horner, R. H., Lewis, T. J., & Swain-Bradway, J. (2017). Technical adequacy of the SWPBIS Tiered Fidelity Inventory. Journal of Positive Behavior Interventions, 19(1), 3–13. https://doi.org/10.1177/1098300716637193
Mercer, S. H., McIntosh, K., & Hoselton, R. (2017). Comparability of fidelity measures for assessing Tier 1 schoolwide positive behavioral interventions and supports. Journal of Positive Behavior Interventions, 19(4), 195–204. https://doi.org/10.1177/1098300717693384
Mintrop, R. (2020). Design-based school improvement: A practical guide for education leaders. Harvard Education Press.
PBISApps. (2022). SWIS summary: 2021–22 academic year. Educational and Community Supports, University of Oregon. https://www.pbisapps.org/resource/swis-data-summary-2021-22-school-year
Prion, S., & Haerling, K. A. (2014). Making sense of methods and measurement: Spearman-rho ranked-order correlation coefficient. Clinical Simulation in Nursing, 10(10), 535–536. https://doi.org/10.1016/j.ecns.2014.07.005
Puth, M.-T., Neuhäuser, M., & Ruxton, G. D. (2015). Effective use of Spearman’s and Kendall’s correlation coefficients for association between two measured traits. Animal Behaviour, 102, 77–84. https://doi.org/10.1016/j.anbehav.2015.01.010
Sabates, R., Akyeampong, K., Westbrook, J., & Hunt, F. (2010). School drop out: Patterns, causes, changes and policies (Report No. 2011/ED/EFA/MRT/PI/08). UNESCO.
Sailor, W., Satter, A., Woods, K., McLeskey, J., & Waldron, N. (2017). School improvement through inclusive education. Oxford University Press. https://doi.org/10.1093/obo/9780199756810-0191
Sailor, W., Zuna, N., Choi, J.-H., Thomas, J., McCart, A., & Roger, B. (2006). Anchoring schoolwide positive behavior support in structural school reform. Research and Practice for Persons with Severe Disabilities, 31(1), 18–30. https://doi.org/10.2511/rpsd.31.1.18
Schober, P., Boer, C., & Schwarte, L. A. (2018). Correlation coefficients: Appropriate use and interpretation. Anesthesia & Analgesia, 126(5), 1763–1768. https://doi.org/10.1213/ANE.0000000000002864
Siegel, S., & Castellan, N. J. Jr. (1988). Nonparametric statistics for the behavioral sciences (2nd ed.). McGraw-Hill.
Sireci, S., & Padilla, J.-L. (2014). Validating assessments: Introduction to the special section. Psicothema, 26(1), 9799.Google Scholar
Sleegers, P. J. C., Thoonen, E. E. J., Oort, F. J., & Peetsma, T. T. D. (2014). Changing classroom practices: The role of school-wide capacity for sustainable improvement. Journal of Educational Administration, 52(5), 617652. https://doi.org/10.1108/JEA-11-2013-0126 CrossRefGoogle Scholar
Thapa, A., Cohen, J., Higgins-D’Alessandro, A., & Guffey, S. (2012). School climate research summary: August 2012 (School Climate Brief No. 3). National School Climate Center.Google Scholar
UNESCO. (2021, March 19). One year into COVID-19 education disruption: Where do we stand? https://www.unesco.org/en/articles/one-year-covid-19-education-disruption-where-do-we-stand Google Scholar
Watkins, D. C., & Johnson, N. C. (2023). Advancing education research through mixed methods with existing data. In Tierney, R. J., Rizvi, F., & Ercikan, K. (Eds.), International encyclopedia of education (4th ed., pp. 636644). Elsevier. https://doi.org/10.1016/B978-0-12-818630-5.11064-4 CrossRefGoogle Scholar
Wei, T., & Johnson, E. (2020). How states and districts support evidence use in school improvement: Study snapshot (NCEE 2020-004). National Center for Education Evaluation and Regional Assistance. https://files.eric.ed.gov/fulltext/ED605885.pdf Google Scholar
Table 1. Demographics for Schools With and Without Responders for the TIERS

Table 2. Comparison of the School Improvement Score, Benchmarks of Quality, Tiered Fidelity Inventory, and TIERS

Table 3. Comparing MTSS Implementation Fidelity, School Improvement, and School Improvement Outcomes Based on the Top Two and Bottom Two Scores on the TIERS

Table 4. Analysis of School-Level MTSS Fidelity Data and Graduation Data for Schools Above and Below the Median Score on the TIERS

Table 5. Analysis of Readiness for Maths and All Subjects for Schools Above and Below the Median Score on the TIERS

Figure 1. Percentage of Students With Disabilities With ODRs, TIERS Comparison.
Note. The figure includes a comparison of the schools with office discipline referral (ODR) data from 2015 to 2016 above the Tiered Inventory of Effective Resources in Schools (TIERS) median (n = 2) and below the TIERS median (n = 2). Each portion of the graph represents the percentage of students with ODRs based on a specific frequency range (e.g., 0–1, 2–5, 6<). Results were rounded to the nearest whole number.

Figure 2. Percentage of Students With Disabilities With ODRs, BoQ Comparison.
Note. The figure includes a comparison of schools with office discipline referral (ODR) data from 2014 to 2015 above the Benchmarks of Quality (BoQ) median (n = 3) and below the BoQ median (n = 3). Each portion of the graph represents the percentage of students with ODRs based on a specific frequency range (e.g., 0–1, 2–5, 6<). Results were rounded to the nearest whole number.

Figure 3. Percentage of Students With Disabilities With ODRs, TFI Tier 1 Comparison.
Note. The figure includes a comparison of schools with office discipline referral (ODR) data from 2015 to 2016 above the Tiered Fidelity Inventory (TFI) Tier 1 median (n = 4) and below the TFI Tier 1 median (n = 4). Each portion of the graph represents the percentage of students with ODRs based on a specific frequency range (e.g., 0–1, 2–5, 6<). Results were rounded to the nearest whole number.
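For readers who wish to produce a similar summary from their own records, the following is a minimal sketch (not the analysis code used in this study) of the computation the figure notes describe: schools are split at the median fidelity score (TIERS, BoQ, or TFI Tier 1), each student's ODR count is bucketed into the frequency bands shown, and the percentage of students per band is computed within each group. The data frame, column names, and scores below are hypothetical.

```python
# Minimal sketch of the median-split ODR summary described in the figure notes.
# All data, column names, and scores here are hypothetical.
import pandas as pd

# Hypothetical student-level data: one row per student with a disability.
students = pd.DataFrame({
    "school": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "odr_count": [0, 3, 1, 7, 0, 2, 6, 1],
})

# Hypothetical school-level fidelity scores (e.g., TIERS totals).
fidelity = pd.Series({"A": 82, "B": 58, "C": 74, "D": 41}, name="tiers")

# Median split: schools at or above the median vs. below it.
median_score = fidelity.median()
students["group"] = students["school"].map(
    lambda s: "above median" if fidelity[s] >= median_score else "below median"
)

# Bucket ODR counts into the bands used in the figures: 0-1, 2-5, 6 or more.
students["odr_band"] = pd.cut(
    students["odr_count"],
    bins=[-1, 1, 5, float("inf")],
    labels=["0-1", "2-5", "6+"],
)

# Percentage of students in each band within each group, rounded to whole
# numbers as in the reported figures.
pct = (
    students.groupby(["group", "odr_band"], observed=False)
    .size()
    .groupby(level=0)
    .transform(lambda counts: 100 * counts / counts.sum())
    .round(0)
)
print(pct)
```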

Supplementary material: Bohanon et al. supplementary material (File, 29.7 KB).