# Can we measure the relationship between math practice and skill growth?

Our last post analyzed our 2020–21 year-end data on relationships, attendance, and practice for our math program at Lowell Elementary. In this post, we’ll dig deep into *progress*, which we’ll define as growth in math skills.

As a quick reminder, here is the theory of action for the program.

One of our major assumptions is that practice drives growth in math skills. This is a pretty “common sense” assumption, but plenty of common-sense ideas turn out not to work in education! It’s also important to understand the *strength* of that relationship and how it varies under different conditions.

# Before we begin…a super important caveat!

As usual, we have some important caveats about our data. The most important is that *growth data is noisy*, especially when measured over short time periods. To understand why, keep in mind that we measure growth as a change in skill level over time:

growth rate = (change in skill level) / (change in time)

When the change in time is relatively small, even tiny changes in measured skill levels (you can read more about the noise level in our data here) can cause big swings in the calculated growth rate. To mitigate this problem, for the rest of this post we only use data for students for whom we have at least 90 days of skill data. We also have our usual caveats about sample size.
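To make the noise problem concrete, here is a small sketch with made-up numbers (not our real data): the same amount of measurement noise swings the calculated growth rate far more over a 30-day window than over a full year.

```python
def growth_rate(skill_change, days):
    """Annualized growth in grade levels/year over a measurement window."""
    return skill_change / days * 365

noise = 0.05  # assume +/-0.05 grade levels of measurement noise

# Same underlying student (~1.2 grade levels/year), two window lengths:
short = [growth_rate(0.10 + e, 30) for e in (-noise, 0.0, noise)]
long = [growth_rate(1.20 + e, 365) for e in (-noise, 0.0, noise)]

print([round(r, 2) for r in short])  # spread of over a grade level/year
print([round(r, 2) for r in long])   # spread of only ~0.1 grade levels/year
```

This is exactly why we require at least 90 days of skill data before computing a growth rate.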

Let’s start by continuing the analysis from our last post, where we illustrated how student attendance and practice time varied dramatically by tutor.

# Growth rates by tutor

To put this year’s growth rates in perspective, we’ll compare them to the students’ *historical* growth rates. To measure a historical growth rate, we simply take the grade level from the student’s first diagnostic and divide it by their actual school grade. For example, a student starting 4th grade at a 2nd grade math level has a historical growth rate of 0.5 grade levels per year.
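The calculation is simple enough to write out; the function name below is just for illustration.

```python
def historical_growth_rate(diagnostic_grade_level, school_grade):
    """Pre-program growth: grade levels gained per year of school so far.

    A student entering 4th grade who tests at a 2nd-grade math level has
    gained 2 grade levels over 4 school years, a rate of 0.5 levels/year.
    """
    return diagnostic_grade_level / school_grade

print(historical_growth_rate(2, 4))  # 0.5 grade levels/year
```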

Let’s now look at growth rates by tutor, shown below. For now we’ll only consider Tutors B, C, and D, since Tutor A was running a different program (a topic for a future post). The first thing we notice is that there *were* differences in the students assigned to each tutor. Tutor D had students starting with noticeably lower previous growth rates (and thus larger gaps). This suggests that, to some extent, Tutor D did have “harder” students than Tutors B and C.

We see the same pattern in growth by tutor that we saw in practice. Tutor B’s students had the highest growth rate (~2x their previous growth rate), Tutor C’s had the next largest (~1.3x their previous growth rate), and Tutor D’s had the smallest (the same as their previous growth rate). Important note: our program did not cover a full academic year, so to calculate an equivalent annual rate we took our monthly growth rate and assumed 9 months of growth.
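For clarity, here is the annualization we use, with a hypothetical monthly rate plugged in (the function name and example number are ours, not from the data):

```python
def annualized_growth(monthly_rate, school_months=9):
    """Equivalent annual growth rate, assuming growth happens only
    during the ~9-month school year."""
    return monthly_rate * school_months

# Hypothetical: a student gaining 0.15 grade levels/month
print(round(annualized_growth(0.15), 2))  # 1.35 grade levels/year
```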

At this point our data is suggesting a strong relationship between growth rates and student practice, which leads us to…

# Growth rates by practice

One of the core assumptions of our program is that effective practice leads to academic growth, and the more, the better. To check this assumption, we plot student-level practice vs. growth rate, segmented by tutor. We see a clear (though noisy) connection between the two. A linear fit is shown in grey.

The linear regression produces two very interesting numbers. The first is the y-intercept (corresponding to **no** additional practice) value of 0.72 grade levels/year. This is reasonably close to our students’ historical growth rates, or “business as usual”.

The second number is the slope of our fit: ~1.0 grade levels of growth per additional 60 minutes of weekly practice. This gives us a clear heuristic for improving our program: if we can measure how much an intervention increases student practice, we can estimate its expected impact on growth. For example, we’ve shown that practice-based competitions can increase student practice by ~20 minutes/week on average. We can now “connect the dots” and estimate that if we could sustain that effect, it would add roughly 1/3 of an *extra* grade level of growth by the end of the year.
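The “connect the dots” arithmetic can be sketched directly from the two fitted numbers (the function names here are ours):

```python
INTERCEPT = 0.72          # grade levels/year with no additional practice
SLOPE_PER_MIN = 1.0 / 60  # grade levels/year per extra minute of weekly practice

def predicted_growth(extra_minutes_per_week):
    """Estimated annual growth from the linear fit."""
    return INTERCEPT + SLOPE_PER_MIN * extra_minutes_per_week

def extra_growth(extra_minutes_per_week):
    """Estimated *additional* growth beyond the no-extra-practice baseline."""
    return SLOPE_PER_MIN * extra_minutes_per_week

# Competitions added ~20 minutes/week of practice on average:
print(round(extra_growth(20), 2))  # ~0.33 extra grade levels/year
```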

At this point we should discuss a couple of very reasonable concerns about our growth data:

- Are the students that demonstrated high growth simply fundamentally “better” students who are more likely to practice more? *This seems unlikely.* Remember, **all** of the students in this program started off extremely behind grade level (typically 1.5 to 2.5 grade levels behind).
- Is it reasonable for an extra hour of practice a week to cause an entire grade level of growth over a year? Let’s look at the numbers: 1 hour/week for 9 months corresponds to almost 40 hours of extra learning over the course of an academic year. It seems plausible that this could produce the corresponding amount of growth.
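The back-of-the-envelope numbers from that second point work out as follows:

```python
# 1 extra hour/week of practice, over a 9-month school year:
weeks_per_month = 52 / 12            # ~4.33 weeks per month
school_weeks = 9 * weeks_per_month   # ~39 weeks of school
extra_hours = 1.0 * school_weeks
print(round(extra_hours))  # ~39 extra hours of learning
```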

Finally, we have one more sanity check on our practice-growth relationship. In a previous post (There Is a Lot of Room at the Top), we showed data for a couple of students performing well *above* grade level. Those students are on the same program (Khan Academy + IXL) that Tutors B, C, and D used. They have a long-term growth rate of ~1.7 grade levels per year, and their average practice time is about 80 minutes per week. This matches our current data quite well.

# What is the takeaway?

We’ve shown that the basic idea of “practice makes progress” holds for our program, and even better, we’ve measured it: on *average*, 60 minutes of extra practice/week yields an extra 1.0 grade levels of growth per year. Having a handle on this number tells us how much our efforts to improve practice are likely to impact student growth. And *that* is going to help us make our program even more impactful in the future.