2022–23 Program Summary

Mike Preiner
Jun 27, 2023


The mission of The Math Agency is to close educational gaps in elementary school math. Shameless plug: if you are interested in helping support this mission by coaching at one of our partner schools, we’d love to hear from you!

We recently wrapped up our programming for the 2022–23 academic year. We worked with three different public schools in Seattle, focusing on the highest-need students, and once again the results were very encouraging. Students in our program doubled their historical growth rates, which is observable in both standardized test data and our internal measurements. This is similar to what we saw last year.

This year, students entered our program having learned an average of 0.7 grade levels of math per year. While they were enrolled with us, they learned at an average rate of 1.4 grade levels of math per year. Both of these measurements were made using IXL’s assessment module. Seattle Public Schools also used the Measures of Academic Progress (MAP) assessment from NWEA. This is a widely used standardized test that also includes measures of growth. In this analysis, we’ll use their Growth Index, which compares student growth to that of other students starting with the same grade and skill level.
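For readers who want the arithmetic spelled out, here is a minimal sketch of how an annualized learning rate could be computed from two grade-level estimates. The data and column names are hypothetical and are not our actual pipeline or IXL’s API.

```python
import pandas as pd

# Hypothetical grade-level estimates at program entry and end of year
# (e.g. from periodic IXL assessments); not real student data.
df = pd.DataFrame({
    "student": ["A", "B", "C"],
    "start_level": [2.1, 3.0, 2.6],   # grade-level equivalent at entry
    "end_level": [3.2, 3.9, 3.8],     # grade-level equivalent at year end
    "months_enrolled": [8, 8, 8],
})

# Annualize: grade levels gained per ~9-month school year.
df["grade_levels_per_year"] = (
    (df["end_level"] - df["start_level"]) / df["months_enrolled"] * 9
)

print(df[["student", "grade_levels_per_year"]])
print("Average rate:", round(df["grade_levels_per_year"].mean(), 2))
```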

Let’s start with a high-level overview across all of our schools. What can we see from the data?

Table 1: Summary statistics for our 2022–23 school year programs. *Our program at School 3 started in late January. **3rd grade students not in the Math Agency program at School 3 had a MAP Growth Index of 68%.

There is good agreement between our internal growth data and the growth results from MAP. Our internal growth data suggests an average academic growth change of 2x, while the MAP Growth Index averaged 170–180%*, which means students in our program grew 1.7–1.8x more than similar students. These numbers aren’t strictly comparable, but if we assume students tend to stay on the same track (and there is good evidence for that), we’d expect these numbers to be similar.
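As a rough illustration of why these two numbers can be compared at all, here is a small sketch that computes both metrics side by side. It assumes a simplified definition of the growth index (total observed RIT growth divided by the growth projected for similar students); NWEA’s actual calculation is more involved, and all of the numbers below are made up.

```python
import pandas as pd

# Hypothetical MAP records: observed RIT growth for each student and the
# growth projected for similar students (same grade and starting score).
map_df = pd.DataFrame({
    "student": ["A", "B", "C", "D"],
    "observed_rit_growth": [14, 10, 17, 12],
    "projected_rit_growth": [8, 7, 9, 7],
})

# Simplified cohort growth index: total observed growth vs. total projected
# growth. A value of ~1.7 means students grew ~1.7x more than similar peers.
growth_index = (
    map_df["observed_rit_growth"].sum() / map_df["projected_rit_growth"].sum()
)
print(f"MAP growth index: {growth_index:.0%}")

# Internal metric for comparison: rate during the program divided by the
# historical rate (1.4 / 0.7 = 2x in our data).
print(f"Internal growth change: {1.4 / 0.7:.1f}x")
```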

*We’re not including School 3 in this comparison, as the program there didn’t start until the end of January, and therefore we’d expect a diminished overall impact.

Program dosage seems to matter. We ran two different dosages of our program at School 1 and saw fairly different growth rates as measured by our internal data (though the MAP data doesn’t show as large a difference). This isn’t a perfect apples-to-apples comparison, since the two groups were at different grade levels, but it is consistent with our previous data and a lot of academic research.

It is worth noting that our dosage at School 3 was limited by the late-January start, and we think this is reflected in the growth data: School 3 had a reasonably high growth rate, but given the short timeframe, we think we had less impact overall. Even so, students in our program at School 3 showed significantly higher MAP growth than students who weren’t in it.

Various program formats (after-school, in-school, and remote) all seem like they can be effective. This again matches our conclusions from last year.

So far we’ve been talking about high-level program averages, which can conceal a lot of nuance. With that in mind, let’s look at a more detailed view of the assessment data.

To get the best understanding of our impact, we like to triangulate across multiple data sources. Last year we compared our high-frequency internal measurements to standardized tests, and a similar analysis from this year is shown below, where we plot raw MAP data against the weekly assessment data we generate with IXL. This is important, because even though the average growth metrics from IXL and MAP agree, that doesn’t mean the data agree well at the individual student level.

What do we see? As before, we see a very strong correlation between the student level MAP results and IXL (a correlation coefficient of 0.86), and no clear differences by school. This data continues to build our confidence that our weekly assessments are providing a meaningful measurement of student skills. This is important, because in future posts we’ll show how we are using this high-frequency data in some novel ways to personalize our program to student needs.

Figure 2. Student level MAP RIT scores vs. IXL assessment data across three different school programs.
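For anyone curious about the mechanics behind a comparison like Figure 2, here is a minimal sketch of the student-level correlation using NumPy; the paired scores below are invented for illustration and are not our actual assessment data.

```python
import numpy as np

# Hypothetical paired student-level scores: end-of-year MAP RIT score and
# the grade-level estimate from weekly IXL assessments (invented values).
map_rit = np.array([188, 195, 201, 183, 210, 198, 192, 205])
ixl_level = np.array([2.4, 3.1, 3.6, 2.1, 4.2, 3.3, 2.8, 3.9])

# Pearson correlation between the two measures; in our real data this came
# out around 0.86 across the three school programs.
r = np.corrcoef(map_rit, ixl_level)[0, 1]
print(f"correlation coefficient: {r:.2f}")
```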

Finally, it is important to note that the metrics shown here measure total student learning. This includes learning done in the classroom, in our program, and at home. We know a lot of our students had great teachers, and that surely contributed to the impressive growth! In summary:

We saw good agreement between our internal growth metrics and standardized assessment data: on average, students enrolled in our program doubled their academic growth in math. This makes us increasingly confident that it is possible to fully close educational gaps in our public schools with well-designed, effective intervention programs.

Written by Mike Preiner

PhD in Applied Physics from Stanford. Data scientist and entrepreneur. Working to close education gaps in public schools.