No State Test? No problem!
This post is by guest blogger Chris Birr, EdS. Birr is a member of the ion Board of Directors, a school psychologist, MTSS coordinator, and deep thinker. Chris lives in suburban Columbus, Ohio, with his wife, two daughters, and dog.
Anecdotally, it sounds like most states are canceling end-of-year standardized tests this year. For some schools, that could mean a substantial loss of student data and of a method for assessing achievement in their system. For most districts and schools, however, losing year-end state test data should not be a significant loss. Again, anecdotally and from experience, schools have more than enough student data, and this could be a moment to step back and improve assessment use, fidelity of implementation, and interpretation. Questions are swirling about how to plan for students coming back with summer slide compounded by pandemic slide; assessment can provide some context for identifying your most critical students and for planning for the entire grade level.
Side note: assessment is not the end-all, be-all, but it is a critical component of the problem-solving model (thanks, Florida MTSS). We need to know which students need more intensive instruction and how the entire group is doing from season to season.
Assuming your school has some type of universal screener in place, such as easyCBM, MAP, STAR, or AimsWeb Plus, and screens in the fall, you are in great shape. Even better if you have fall and winter data from this year. Here are a few questions that I’ve heard and seen so far.
How do we know which students are going to be ok once we are back in school?
Use your previous data and screen all students in the fall. Establish a criterion or cut score to identify students who are on track for proficiency and those who are off track and need more intensive instruction. Next year (every year, really), we should be accelerating growth for all students, so fall, winter, and spring data will show whether students are making adequate growth during the year.
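The flagging step above is simple enough to sketch in a few lines of code. Everything here is a hypothetical placeholder: the cut score of 170 and the student names and scores are made up for illustration, not drawn from any real screener's norms.

```python
# Minimal sketch: flag students as on track or off track against a fall cut score.
# The cut score (170) and all student data below are hypothetical placeholders.

FALL_CUT_SCORE = 170  # replace with your screener's linked or normative cut

fall_scores = {"Student A": 182, "Student B": 165, "Student C": 171}

# Students at or above the cut are on track; everyone else is flagged.
on_track = {s for s, score in fall_scores.items() if score >= FALL_CUT_SCORE}
needs_intensive = set(fall_scores) - on_track

print(sorted(on_track))         # on track for proficiency
print(sorted(needs_intensive))  # flagged for more intensive instruction
```

In practice this logic lives in your screener's reporting tools or a spreadsheet; the point is that once the cut score is set, the sorting decision is mechanical and repeatable season to season.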
What is the best way to set a cut score?
Select either a normative or a criterion-referenced cut score. This cut score will indicate which students are on track to reach proficiency and which are off track and in potential need of more intensive instruction (i.e., intervention).
Criterion-referenced (preferred). Most screeners provide a linking study. For some, the screener and state test results are equated, so a fall screening score indicates the range a student is likely to score in on the spring state test. Students who reach a certain proficiency cut score are likely to demonstrate proficiency on the state test.
If your screener does not provide a linking study, research-based methods can be used to develop seasonal cut scores. This is more difficult and requires some statistical judo. Partner with a local university or graduate students who can help. Bonus points if you can arrange a research opportunity using district data; check your district policies regarding research first.
Normative cut scores (less preferred). If a linking study is not provided, research-based methods are not realistic, and/or time is limited, a normative cut score can be selected. The drawback is that the cut score is based on a percentile rank and not directly linked to performance on the state test. Comparing a national score to a local population obscures the data a bit, and predictions are less reliable. Best case: look at the state test, determine the state percentile needed for proficiency, and apply that percentile to the fall scores in each grade. For example, if a student must score at the 65th state percentile for proficiency on the spring test, set the fall cut at the 65th percentile on your screening assessment. This is not ideal, but it is a shade better than arbitrarily setting the cut score at the 50th percentile.
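The percentile mapping in the example above can be sketched with Python's standard library. The norm sample below is invented for illustration; in practice you would use your screener's published norm table or your district's own score distribution, and the 65th percentile target is the hypothetical state proficiency percentile from the example.

```python
# Sketch: derive a normative fall cut score at a target percentile.
# The norm sample is illustrative, and 65 mirrors the hypothetical
# state proficiency percentile from the example above.
import statistics

norm_scores = [142, 150, 155, 158, 160, 163, 167, 170, 174, 181]

# statistics.quantiles with n=100 returns the 1st through 99th percentile
# cut points; index 64 is therefore the 65th percentile.
percentiles = statistics.quantiles(norm_scores, n=100)
cut_score = percentiles[64]

print(round(cut_score, 1))  # use this as the fall screening cut
```

With a real norm table you would simply read the score at the target percentile rather than compute it, but the mapping is the same: state proficiency percentile in, fall screening cut score out.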
If using ORF (oral reading fluency) for screening, look at the 50th percentile as a cut score. Faster is not always better in this situation, but students reading below this score may need repeated reading or another intervention. Also include accuracy in the decision rules: if a student scores below 93% accuracy, more information may be needed before making instructional decisions, since low accuracy could indicate a decoding deficit (Christ, 2016).
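The two-part ORF rule above (fluency benchmark plus accuracy floor) can be expressed as a small decision function. The 93% accuracy floor comes from the text; the 107 words-correct-per-minute benchmark is a hypothetical placeholder for whatever your grade- and season-specific 50th-percentile norm actually is.

```python
# Sketch of the ORF decision rule described above: words correct per minute
# (WCPM) below the 50th-percentile benchmark, OR accuracy below 93%, flags
# a student for follow-up. The 107 WCPM benchmark is a hypothetical value;
# look up the norm for your grade and season.

WCPM_BENCHMARK = 107   # hypothetical 50th-percentile WCPM for the grade/season
ACCURACY_FLOOR = 0.93  # below this, suspect a decoding deficit (Christ, 2016)

def orf_decision(words_correct: int, words_attempted: int) -> str:
    accuracy = words_correct / words_attempted
    if accuracy < ACCURACY_FLOOR:
        return "low accuracy: gather decoding data before choosing an intervention"
    if words_correct < WCPM_BENCHMARK:
        return "accurate but slow: consider repeated reading or fluency intervention"
    return "on track"

print(orf_decision(112, 115))  # accurate and above benchmark
print(orf_decision(95, 100))   # accurate but below benchmark
print(orf_decision(80, 95))    # accuracy ~84%, so decoding comes first
```

Checking accuracy before fluency matters: a repeated-reading intervention aimed at speed is the wrong fit if the underlying problem is decoding.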
We have seasonal cut scores, but the state test has changed and we don’t have results from this year. Can we use the old screening cut scores in the fall?
Cut scores generally hold up over time even when the state changes the test. Regarding cut score stability, it is wise to continue using the same cut scores on the seasonal screening assessments until new linking studies are conducted or the state test has a few years of use (Klingbeil, Van Norman, Nelson, & Birr, 2018). If the cut score indicated proficiency on a prior state test, the prior fall cut scores will likely provide a good indication of whether students are on track for spring proficiency. Using past cut scores is not perfect, but educators in your system will already be familiar with the scores and able to interpret results from the fall screening. The proficiency standards assessed should also be similar even if the test or scales change. The goal here is not perfection, but good enough. It may be an unpopular statement, but using reliable and valid assessment data to make decisions is considerably more efficient and effective than professional judgment (Begeny, Krouse, Brown, & Mann, 2011).
The point is not to overemphasize testing and data when we return. On the contrary, these recommendations are intended to streamline assessment procedures, limit the amount of testing completed, and improve efficiency in data-based decision-making. When we return, baseline data on all students should be used to develop an effective plan of instruction for ALL. Current achievement levels are a starting point, but educators also need to look at whether students are demonstrating strong growth throughout the next year. Set up systems that maximize efficiency and effectiveness so teachers can spend less time on data analysis and more time planning to change trajectories for all students.