Finding Patterns in Educational Data to Improve Learning

There is that point in the student assessment process when our students complete testing and it is time to carefully review the data. This is when we see how our kids fared; some surprise us with how well they do, and others do not do as well as we might have expected. This leaves us with the question: What do we do next with our educational data to guide and inform instruction?

When using a transparent assessment, it can be very tempting to go through the data and create an individualized lesson plan for each student based on the results. For example, a student answers a question for standard X incorrectly, so we may conclude that she needs to spend more time on that concept. Another student misses a question designed for standard Y, and we may decide he needs to do some extra work on that skill. In some ways, this approach seems like the logical next step to help our students improve. But is this really the best way of using educational data to improve learning? The reality is that individualized lesson plans are not a sound pedagogical approach; they often lead to anxiety and stress for educators and boredom for students.


Let’s look at a third-grade math question from the New York State Common Core assessment.

[Image: sample third-grade math test question from the New York State Common Core assessment]

This test question is intended to assess a student’s understanding of Common Core standard 3.OA.3:

Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.

On the surface, we assume that if our student selects choice A, she has mastered the standard. In this case, she does not need to work on this concept further. If our student selects choice B, C, or D, we might conclude that she does not understand the concepts related to this standard. In that scenario, we might plan to spend more time on these concepts. We might even take it one step further and look at the erroneous choice to determine the exact nature of the misunderstanding so that we can correct it. If she selected choice B, we might conclude that she added the two groups together to arrive at 9. Despite how tempting it may be to follow this data analysis plan, we must resist it: our students’ thinking is not nearly as neat and tidy as this scenario suggests.

Let’s wade into a bit of this complexity and messiness to appreciate what we can and cannot reasonably conclude from our students’ assessment data. For example, say our student selected choice A. What do we know from this? Superficially, we could say this student selected the correct answer and therefore has mastered this standard and the associated concepts. This implies that we can move on in our instruction to address this student’s trouble areas. But what if the student guessed the correct answer? What if she was so overwhelmed by the concepts related to this question that she guessed and moved on to the next one? Well, that changes our interpretation of her data, doesn’t it?


Alternatively, let’s say she selected answer choice B, C, or D. Then what? We could conclude that she does not understand how to use multiplication and division to solve word problems, because that is exactly what this test question was designed to measure. It would be great if student problem solving were that predictable. But there are many other possible interpretations, each with its own associated next steps:

[Table: possible interpretations of an incorrect test question response, with associated next steps]

As you can see, a student answering any one test question correctly or incorrectly does not conclusively indicate an appropriate next step. If that is the case, then why do we bother to test at all?

Good question…


We assess our students at regular intervals to make sure that all of them are making meaningful progress. Testing ensures that no students fall through the cracks and keeps our highest-achieving students continually challenged and engaged. We also want to know whether our interventions for struggling students are helping them catch up. The key to drawing these conclusions correctly is to look at patterns in educational data and not to over-rely on any one data point.

If a student answers one question on standard 3.OA.3 incorrectly, we do not immediately re-teach that concept. However, if a student answers three to five questions on 3.OA.3 incorrectly, we may do exactly that to improve learning. Or we may notice that most of this student’s incorrect answers were on word problems, and we then look to other sources of data to see whether reading is the primary issue. If half of the students in our class answered a question on 3.OA.3 incorrectly, we may devise some ways for the whole class to gain additional exposure to this concept.
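To make the idea concrete, here is a minimal sketch of what pattern-based decision rules like these might look like in code. The data shape, thresholds, and names are illustrative assumptions drawn from the examples above, not a prescribed implementation.

```python
# Hypothetical sketch: flagging next steps from patterns in assessment data,
# not from any single response. Data shape and thresholds are assumptions.

from collections import defaultdict

# Each record: (student, standard, question_id, answered_correctly)
responses = [
    ("Ava",  "3.OA.3", "q1", False),
    ("Ava",  "3.OA.3", "q4", False),
    ("Ava",  "3.OA.3", "q7", False),
    ("Ben",  "3.OA.3", "q1", True),
    ("Ben",  "3.OA.3", "q4", False),
    ("Cora", "3.OA.3", "q1", False),
    ("Cora", "3.OA.3", "q4", True),
]

RETEACH_THRESHOLD = 3     # misses per student on one standard before re-teaching
CLASS_REVIEW_SHARE = 0.5  # share of class missing one question before class review

def next_steps(records):
    misses_by_student = defaultdict(int)    # (student, standard) -> miss count
    misses_by_question = defaultdict(set)   # (standard, question) -> students who missed
    students = set()

    for student, standard, question, correct in records:
        students.add(student)
        if not correct:
            misses_by_student[(student, standard)] += 1
            misses_by_question[(standard, question)].add(student)

    # A repeated pattern of misses on one standard suggests individual re-teaching.
    for (student, standard), count in misses_by_student.items():
        if count >= RETEACH_THRESHOLD:
            print(f"Re-teach {standard} with {student} ({count} misses)")

    # If a large share of the class missed the same question, plan class-wide review.
    for (standard, question), missed in misses_by_question.items():
        if len(missed) / len(students) >= CLASS_REVIEW_SHARE:
            print(f"Class-wide review of {standard}: "
                  f"{len(missed)}/{len(students)} students missed {question}")

next_steps(responses)
```

The point is not these particular thresholds; it is that every decision triggers on a pattern of responses rather than on a single answer.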


Any one data point is merely a breadcrumb, one that can lead us toward the appropriate next step to improve learning. But it is up to us, as educators, to look at all sources of data to see where the trail of breadcrumbs leads us and our students. By looking for patterns in educational data, we uncover the optimal next steps to improve learning.
