Can we *repeatably* reduce education gaps?

Mike Preiner
6 min read · Dec 16, 2021


In a previous post, we looked at whether it is theoretically possible to fully close the education gaps in public schools using our current education “technology” (which includes software, interventions, etc.). Encouragingly, the answer was yes, with a few critical assumptions. In this post we’ll revisit that analysis with a set of new data that addresses many of the previous assumptions.

The tl;dr version: our results at three separate schools in 2021–22 reveal a clear, repeatable process for taking the most struggling students from the most disadvantaged backgrounds and significantly increasing their academic growth rates. If we can maintain their new growth rates, we'll be on track to get them back to (or above) grade level in math before they go to middle school. Interested in the details? Keep reading.

Results

Our previous posts have detailed many of the aspects of our math program (The Math Agency), so let’s jump in by looking at some key performance metrics for the first ten weeks of 2021–22. As the table below shows, we varied several aspects of our program across the schools. This variation helps us identify which components have the most impact on students, which we’ll discuss in detail later. For now, let’s focus on two metrics: historic growth and current growth.

We define the historic growth rate as a student's academic trajectory before they start our program. To estimate it, we simply take their math skill level when they enter our program and divide it by the number of years they have been in school. For example, a student beginning 4th grade who is at a 2nd grade math level would have a historic growth rate of 2/4 = 0.5 grade levels/year.

We calculate a student’s current growth rate in a conceptually similar way. For example, a student who advanced an entire grade level while in our program for half a school year would have a current growth rate of 1/0.5 = 2 grade levels/year.
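
To make the two definitions concrete, here is a minimal sketch of the arithmetic. The function names are my own, and the numbers are just the illustrative examples from the text, not actual student data.

```python
# Minimal sketch of the two growth metrics defined above.

def historic_growth_rate(entering_skill_level: float, years_in_school: float) -> float:
    """Grade levels gained per year before the student joined the program."""
    return entering_skill_level / years_in_school

def current_growth_rate(levels_gained: float, years_in_program: float) -> float:
    """Grade levels gained per year while in the program."""
    return levels_gained / years_in_program

# A student entering 4th grade (4 years of school) at a 2nd-grade math level:
print(historic_growth_rate(2, 4))   # 0.5 grade levels/year

# A student who gained a full grade level over half a school year:
print(current_growth_rate(1, 0.5))  # 2.0 grade levels/year
```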

The comparison between historic growth rates and current growth rates below shows a clear pattern: students in our program have dramatically increased their academic growth rates, by 1.4x–2.2x depending on the school. Note that our student:coach ratios were significantly higher than in some of our previous work, where they were typically 1:1 or 2:1. This is an important step toward making our program more cost-efficient.

Comparison of program metrics between our three pilot schools. The # Family Emails Sent column counts how many personalized updates were sent to each family, while Family Email Open % is the percentage of those emails that were opened. Competitions were typically one week long, and points were based on how much each student practiced. Standard errors for the growth metrics are shown in parentheses.

At this point we should also mention one more key metric: fun. We are still working to find the best method to measure it, but here is one way: at the end of the 10-week session at Northgate, we asked the students if they'd like to have more math sessions during the week. Over 45% of them said yes. A large number of students volunteering for more after-school math seems like a solid indication that we are bringing the fun :)

Projections

The increased academic growth rates are encouraging, but as one of our coaches recently asked: what is the goal? Almost all of our students are significantly behind grade level; one very reasonable goal would be to get them fully caught up. To show what that would entail, we've plotted average student trajectories from one of our schools (Northgate) below. The results are broadly similar across all three schools. The graph shows three scenarios:

  1. Business as usual: this is what our student cohort’s trajectory would look like if they continue progressing at their historic growth rate of 0.7 grade levels per year. By the time they start middle school, they would be on average almost two grade levels behind in math.
  2. The Math Agency (academic year): this trajectory assumes that students continue forward at their current growth rate of 1.5 grade levels per academic year. In this case, they are almost on grade level by the time they start sixth grade.
  3. The Math Agency (academic year + summer): this trajectory assumes that students keep their current growth rate and enroll in a summer program similar to the one we piloted this year. This gives a total growth rate of 1.5 grade levels/year + 0.4 grade levels/year = 1.9 grade levels/year. In this scenario, they are materially above grade level, on average, by the time they start middle school.
Projections of student skill level as a function of grade for the students in our program at Northgate Elementary.
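
For readers who want to reproduce the projections, here is a minimal sketch of the arithmetic behind the three curves. The growth rates are the ones quoted above; the starting grade and skill level are hypothetical placeholders, not the actual Northgate cohort averages.

```python
# Minimal sketch of the three projection scenarios.

def project(start_grade: int, start_skill: float, growth_rate: float, end_grade: int = 6) -> dict:
    """Projected skill level at the start of each grade, up to middle school."""
    trajectory = {}
    skill = start_skill
    for grade in range(start_grade, end_grade + 1):
        trajectory[grade] = round(skill, 2)
        skill += growth_rate
    return trajectory

start_grade, start_skill = 3, 2.0  # hypothetical cohort starting point

scenarios = {
    "Business as usual": 0.7,                 # historic growth rate
    "Math Agency (academic year)": 1.5,       # current growth rate
    "Math Agency (year + summer)": 1.5 + 0.4, # plus summer program
}

for name, rate in scenarios.items():
    print(name, project(start_grade, start_skill, rate))
```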

In some sense, these projections aren’t shocking; they are pretty similar to our previous analysis using estimated effect sizes from the research on academic interventions. However, results from academic papers (especially in education) are often quite difficult to reproduce in the real world. All of the growth metrics used here are taken from our actual pilot programs, and at this point we’re fairly confident that we can repeat (and likely improve) them over time.

There are also a number of program details and caveats worth mentioning.

Detailed Discussion (including caveats)

There are many reasons why a student (or even an entire school) can be extremely behind in math. That being said, at this point we've worked with very high-poverty schools (two of the schools above have 80% or more of their students on free or reduced-price lunch), large numbers of English-language learners, and even many homeless students. In education jargon, our students so far have been almost entirely Tier II or Tier III students: nominally the lowest-performing 15% of students, though our schools often have much higher percentages. In other words, it is pretty clear that our program can work with the most struggling students at the most struggling schools.

One important question we’ve had is whether the academic growth we measure with our high-frequency assessments will match up to growth measured via standardized tests. The good news is that we were able to directly compare our assessment data with standardized tests this fall: both the Measures of Academic Progress (MAP) and the Smarter Balanced Assessment (SBA). Overall, we found that our diagnostic measurements generally correlate well with standardized tests. Interestingly, in the few cases where the results were materially different, we did deep-dives with teachers and coaches and concluded that our high-frequency diagnostics better captured the student’s skill level than the standardized tests. You can find a detailed analysis here, but in general it looks likely that the growth we’re measuring now will show up in standardized tests in the spring.
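
As a rough illustration of the kind of comparison involved (the real analysis is in the linked post), here is a minimal sketch that correlates per-student diagnostic levels with standardized-test scores. The student data and scores below are made up purely for illustration.

```python
# Hedged sketch: correlate high-frequency diagnostic grade-level estimates
# with standardized-test scores (e.g. MAP) for the same students.
# All numbers here are hypothetical.
import numpy as np

diagnostic_level = np.array([2.1, 2.8, 3.4, 1.9, 4.0, 3.1])  # grade-level estimates
map_score        = np.array([188, 197, 205, 184, 214, 201])  # same students' MAP scores

r = np.corrcoef(diagnostic_level, map_score)[0, 1]
print(f"Pearson r between diagnostics and MAP: {r:.2f}")
```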

Another important question relates to the differences in growth rates between our three schools. While we don't have enough data to conclusively identify causal relationships, we definitely see interesting trends. On the one hand, we have two intensive "low student:coach ratio, high dosage" schools (Lowell and Cedar Park) that didn't utilize all of our tools for family engagement or student motivation via competitions. On the other hand, Northgate was a "high ratio, low dosage" school, but it heavily leveraged family engagement and student competitions and still saw very significant growth improvements. Of course, there are many differences between the schools that could explain the variation in growth rates, and we'll definitely want to explore these relationships further.

Finally, it is worth laying out some of the many caveats in our current data and analysis :) They include, but aren't limited to:

  • We are making relatively short-term growth measurements (10 weeks) on a relatively small sample (~70 students). This means there is a fair amount of noise in the growth estimates, as shown by our standard errors (a quick sketch of this calculation appears after this list). Our confidence in our results will increase over time and as we work with more students.
  • For the sake of simplicity, our projections show student averages. This obscures much of the variability of student outcomes. It is possible to get the “average” student above grade level but still have some students below grade level. Of course, it is still a big win to improve the average!
  • As always, correlation does not prove causation. It is possible that the increases in growth we are seeing at our schools are due to something unrelated to our program, such as curriculum changes, other interventions, etc. This doesn’t seem very likely at this point, but is always a possibility we need to keep in mind.
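
As referenced in the first caveat above, here is a minimal sketch of how the standard error on a mean growth rate behaves with a ~70-student sample. The per-student growth rates are simulated, not our actual data, and the calculation is shown only as an assumption about how such error bars are typically computed.

```python
# Minimal sketch of the sampling-noise caveat: standard error of the mean
# growth rate for ~70 students, using simulated data.
import numpy as np

rng = np.random.default_rng(0)
growth = rng.normal(loc=1.5, scale=0.8, size=70)  # hypothetical per-student growth rates

mean = growth.mean()
sem = growth.std(ddof=1) / np.sqrt(len(growth))
print(f"mean growth = {mean:.2f} ± {sem:.2f} grade levels/year")
```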

Next Steps

Our results so far are very encouraging, but there are many more things to improve (and measure) in the short term. The most important pieces include:

  • Demonstrating that we can maintain our growth rates for an entire year, and matching those rates to growth measured via standardized tests.
  • Working with more students to increase our sample size, and thus the reliability of our measurements.
  • Seeing if we can improve growth rates by transferring practices (such as high-intensity parent engagement and student competitions) across schools.

To wrap it up, our most recent data indicates that we've been able to drastically increase math growth rates for some of the most disadvantaged students in Seattle. If we can maintain these growth rates (including through summer breaks), the vast majority of our students will be on track to be back at grade level by the time they reach middle school.


Written by Mike Preiner

PhD in Applied Physics from Stanford. Data scientist and entrepreneur. Working to close education gaps in public schools.
