In today’s digital classroom, teachers have access to more data than ever. With a few clicks, we can view detailed reports on student test scores, formative assessments, progress reports from self-paced software, attendance, and so much more. At times, the amount of data can feel overwhelming, especially when each data point only exists as an isolated channel, unrelated to the next.
I am not saying that multiple data measures are a bad thing; in fact, they can help us to differentiate instruction, personalize learning, and really meet each of our students where they are academically. As administrators, it is critical that we help our teachers collect the most meaningful data points by giving them the tools they need to quickly interpret figures to make informed decisions in their classroom.
In my district, Moreno Valley Unified School District (MVUSD) in California, our data showed that our students were really struggling in math. Our state test scores were low and, with the changing rigor of Common Core, parents were coming to me concerned that they were not able to help their child with assignments. I knew we had to do something outside the box—and quickly—to catch our struggling students and prevent them from falling further behind.
Step 1: Finding the right data
We knew that MVUSD’s math scores were low on the California Assessment of Student Performance and Progress (CAASPP). For our students who were not meeting proficiency, the score alone did not paint a clear picture of the specific skills they needed to master to catch back up to grade level. Our teachers needed a tool to pinpoint skill gaps for individual students so we could be more targeted in our interventions.
After much research, it was clear that we would benefit from administering benchmark assessments. Unlike traditional summative assessments like the CAASPP that simply determine content mastery, a good interim assessment allows educators to get a snapshot of what an individual student knows, is able to do, and is ready to learn next.
While there are many assessment options, we chose NWEA MAP Growth because it had the most research behind it. Our students take a computer-adaptive assessment a few times each year that adjusts to each student’s responses. Teachers get detailed reports that identify individual student needs and show projected proficiency through the school year and over multiple years. Administrators get higher-level reports that make it simple to do a temperature check several times throughout the year (instead of just at the end of the school year) and measure longitudinal growth.
Step 2: Connecting interventions to our data
Now that we were collecting the right data to isolate the instructional areas MVUSD students were ready to tackle, we needed to provide our teachers with additional resources to help them differentiate instruction. Teachers can use MAP data to identify common pain points to inform their lesson plans, but the granular data allows us to personalize learning even further. We know the specific topics students need to close skill gaps, but with an average class size of 24, it can be difficult to find the time for one-on-ones with each student.
As a district, we sought out the interventions that could take MAP Growth data to the next level. I think the best example is the one-to-one online tutoring program we provide to students who scored a level 1 or level 2 on their math CAASPP.
We worked with FEV Tutor, who took individual students’ fall RIT scores (grade-level equivalences) to create personalized tutoring plans for each student. Depending on the school site, students worked one-on-one with their own professional tutor during the day or at an after-school program. All tutoring was online, and since it was one-to-one, students could work through the specific learning strands identified on their learning plan with the support of a live instructor.
Each online tutoring session concluded with an exit ticket. Teachers and administrators saw this data on a weekly basis, which allowed our teachers to see—in real time—how their students were progressing through their learning plans. If students were continuing to struggle, it was a warning that students would not likely reach their projected growth goals for the year and that we should explore additional interventions.
At the end of the tutoring program, the team at FEV Tutor did a full analysis to examine the impact. In academic year 2016-2017, MVUSD set a district-wide goal for 50 percent of all students to meet or exceed their fall to spring MAP Growth goals. We are pleased to share that 69 percent of FEV Tutor participants met or exceeded their fall to spring MAP Growth goals in math, compared to 17 percent of students who were identified for tutoring but did not participate.
Step 3: Connecting data points
MAP Growth is a great indicator of students working their way toward proficiency; however, it is important to connect this data to overall student performance. To get a clearer picture of the impact online tutoring had on student achievement, MVUSD’s department of accountability and assessment worked with FEV Tutor to examine the impact that online tutoring had on the CAASPP.
We saw that students who participated in FEV tutoring grew by an average of +26 scale score points from the spring 2016 CAASPP to the spring 2017 CAASPP, compared to +22 points for non-participants. By taking a deeper dive into the data, we found that, across the district, students who participated in 10 or more tutoring sessions had the highest rate of performance-level movement. For students who took 10 or more sessions, the percentage who scored a level 3 (standard met) or level 4 (standard exceeded) grew by 15 percentage points from the spring 2016 to spring 2017 CAASPP. The percentage of students who scored a level 2 (standard nearly met) grew by 13 percentage points. This 13-percentage-point increase is especially significant at MVUSD because most students who participated in the FEV tutoring program scored a level 1 (standard not met) on the 2016 CAASPP.