3 reasons we’re still using outdated assessments

It’s time to rethink accountability and examine where we’ve been—and where we could go—in the world of assessments

For many years, educators have strived to balance the demands of accountability testing with the need for actionable data. The sustained school closures of the COVID-19 pandemic undoubtedly presented new challenges to the assessment landscape, but they also amplified long-standing concerns about the traditional, summative assessment approach.

For too long, educators have had to rely on end-of-year testing data to gauge their students’ knowledge. The problem? Summative assessment data reaches teachers only after it’s too late to use it to inform instruction and improve student outcomes.

When state assessments were canceled in 2020 and districts were left to their own devices to measure and address learning needs, teachers, district administrators, and state leaders alike were forced to ask the question: Is there a better way to gauge what students know throughout the year?

As I’ve connected with state leaders to help them provide consistency in teaching and learning during this time, I’ve worked diligently to provide an answer to that question. It’s easy for me to say yes; there is a better way. There are existing innovative assessment models that enable teachers to use shorter, standards-based assessments throughout the academic year to gauge student mastery. But before we can all dive into a new assessment era, it’s important to understand how we got here and how we can apply what we’ve learned to a brighter future for both teachers and students.

It’s been over four years since the Every Student Succeeds Act (ESSA) replaced No Child Left Behind (NCLB). The most anticipated changes to the law softened the test-based school accountability measures that defined the NCLB era. So why, when given the opportunity to embrace the innovative assessment opportunities outlined under ESSA, are most states still using traditional end-of-year summative assessments? Though there are many answers to that question, I’ve outlined three primary reasons summative assessment models are still embraced today and what it will take to change that.

Change is risky 

Most of us are familiar with the concept of standardized testing and see it as a normal part of the education experience. Teachers spend most of the school year teaching to state standards, and students then take a summative test at the end of the year to show what they have retained. The test takes multiple hours to complete and is repeated over several days for math, ELA, and sometimes science or social studies.

These tests have been validated over time and are used to rank our students, teachers, and schools based on how well students perform. However, they’re not particularly good at providing granular details of what students know and don’t know, standard by standard, nor are they given at a time when teachers could use the information to help redirect student learning. Additionally, assessment results aren’t reported back to districts promptly; sometimes, the data is not available until the school year is over.

The value of an innovative assessment model lies in administering shorter assessments throughout the year that deliver immediate, actionable data teachers can use to inform instruction. However, adopting a new assessment model represents a significant risk for states when the current models, with all of their shortcomings and baggage, have remained well established over the years.

Assessments must remain valid and reliable

One of the main reasons summative assessments have remained so prevalent is the use of well-established psychometric models. Psychometrics, the science of measuring mental capacities and processes, is used to ensure the assessments we rely on for school accountability are valid (accurate) and reliable (consistent), so educators have a clear snapshot of student learning.

Though the psychometric models used in these assessments have been scrutinized over the years, the Item Response Theory (IRT) model remains widely accepted. IRT requires a large number of items to guarantee validity and reliability, resulting in very long end-of-year assessments. Even when IRT is applied to shorter through-course assessments, there is a limit to how short those assessments can be, and the actionable data they can provide to teachers is correspondingly limited.
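To make that tradeoff concrete, here is a minimal sketch in Python, using the standard two-parameter logistic (2PL) IRT model with entirely hypothetical item parameters (not any state’s actual test design). It illustrates how the precision of a student’s ability estimate grows with the number of items, which is why a test can only be shortened so far before reliability suffers.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a student with ability theta answers
    an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def test_information(theta, items):
    """Fisher information of a test at ability theta: the sum of item
    information a^2 * P * (1 - P). More information means a smaller
    standard error of the ability estimate (SE = 1 / sqrt(information))."""
    total = 0.0
    for a, b in items:
        p = p_correct(theta, a, b)
        total += a ** 2 * p * (1.0 - p)
    return total

# Hypothetical item bank: identical items (discrimination 1.0, difficulty 0.0),
# used only to show how precision scales with test length.
for n_items in (5, 15, 45):
    items = [(1.0, 0.0)] * n_items
    se = 1.0 / math.sqrt(test_information(0.0, items))
    print(f"{n_items:2d} items -> standard error of ability estimate ~ {se:.2f}")
```

In this toy example, cutting a 45-item test down to 5 items roughly triples the standard error of the score: precision, and therefore reliability, is bought with test length.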

Lack of funding

The costs of creating new accountability models and assessments, training teachers and administrators, and the potential need to build a supporting technical infrastructure are enough to keep most states from jumping in headfirst. However, the Innovative Assessment Demonstration Authority (IADA) within ESSA gives states permission to pilot and eventually implement new assessment models for use in the statewide accountability system if they can demonstrate that the new model provides a clear picture of what students know and can do.

Since the passage of ESSA, only four states have applied for the IADA, largely because most lack the funding to pilot and launch a new initiative. With a new infusion of funds from the recent stimulus packages, districts and states can consider new, innovative assessment models that will provide educators with actionable data to assess and address learning loss.

As I continue to support the great work state leaders and educators are doing to shape the future of assessment, I hope we all keep in mind that the true purpose of accountability is equity—ensuring that every student receives a high-quality education. We need to know where students are so we can get them where they need to be. Rather than being confined to the end of the learning cycle, assessments should inform the teaching and learning process every step of the way.
