Kentucky’s great improvement in NAEP reading participation
Probable impact on scores
One of the really good things in the recent release of the 2013 National Assessment of Educational Progress (NAEP) reading scores is a large reduction in the number of Kentucky’s students excluded from this national assessment due to learning disabilities. The Kentucky Department of Education did a yeoman’s job of reducing the exclusion rate to nearly match the nationwide average in 2013. That probably impacted scores, however.
As the graph below shows, beginning in 1998 Kentucky excluded students with disabilities (SD) at rates well above the national public school average. The imbalance grew worse in 2009, when the US public school average exclusion rate began to decline, and it deteriorated further through 2011.
The disparity in exclusion rates put the validity of Kentucky’s NAEP reading scores in question. Kentucky was excluding far more students from a group that, on average, scores much lower than other student groups on the NAEP. That was certain to inflate scores, though no consensus has ever been reached about the precise amount of the inflation.
The situation changed radically in 2013. Thanks to the Kentucky Department of Education’s efforts to increase NAEP participation, Kentucky’s 2013 exclusion rate dropped five points to nearly match the US public school average, which itself dropped a point from the 2011 figure.
Along the way, however, some interesting things happened to Kentucky’s NAEP reading scores. The scores for the state’s learning disabled students dropped dramatically, and it looks like that had an impact on Kentucky’s overall scores, as well.
This table shows Kentucky’s 2011 and 2013 Grade 4 NAEP Reading scale scores separately for Kentucky’s SD who took the NAEP and for students who tested as non-disabled. Also shown are the percentages each of these two groups comprised of Kentucky’s tested samples in each year. The last sections of the table show overall statistics on the exclusion rates and the overall average “All Student” scores reported for Kentucky in both years.
To begin, in 2011 Kentucky’s SD who took the NAEP scored 207 and made up only seven percent of the tested group. In contrast, the state’s non-disabled students scored much higher, at 227, and made up 93 percent of those tested.
Other data in the table show that in 2011 a total of 15 percent of Kentucky’s students were identified as SD in the raw sample NAEP initially selected for testing. In the end, the SD who were excluded amounted to a whopping eight percent of all the students, disabled and not, in that initial raw sample. In other words, more than half of the SD initially sampled never took the test.
Two years later, Kentucky’s SD score on the NAEP dropped dramatically, from 207 all the way down to 189. SD also comprised a notably larger share of the students tested, making up 11 percent of the total, an increase of four percentage points over 2011.
In 2013 Kentucky’s non-disabled students actually posted a two-point score increase, to 229. However, that was not reflected in Kentucky’s overall “All Student” score, which actually dropped a point under the double impact of more SD in the tested sample and their correspondingly much lower average score.
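To make the arithmetic concrete, here is a minimal sketch of the weighted-average calculation that turns the subgroup scores and tested shares into an “All Student” score. It uses the rounded published figures (including the 229 derived from the two-point gain over 227), so the results land within about a point of the officially reported 225 in 2011 and 224 in 2013; as the Technical Notes below explain, the actual NAEP calculations use unrounded data.

```python
# A sketch of the "All Student" score as a weighted average of the SD and
# non-disabled subgroup scores, weighted by each group's share of those tested.
# Rounded published inputs, so results only approximate the reported figures.

def all_student_score(sd_score, sd_share, non_sd_score):
    """Weighted average of SD and non-disabled scores by tested share."""
    return sd_score * sd_share + non_sd_score * (1 - sd_share)

# 2011: SD scored 207 and were 7 percent of those tested; non-disabled scored 227.
print(round(all_student_score(207, 0.07, 227), 1))  # ~225.6 (reported: 225)

# 2013: SD scored 189 and were 11 percent of those tested; non-disabled scored
# 229 (the reported two-point gain over 227).
print(round(all_student_score(189, 0.11, 229), 1))  # ~224.6 (reported: 224)
```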
What if Kentucky’s 2011 exclusion rate had been similar to the new 2013 rate?
The new score information offered a chance to do a little what-if analysis. I wondered what Kentucky’s overall “All Student” average might have looked like in 2011 if the score and demographic data had reflected the much more inclusive 2013 testing. I created the next table to explore that.
Because the non-disabled students’ scores changed only slightly between 2011 and 2013, I assumed the SD scores would have been similarly stable between those years. That makes the 2013 SD score of 189 a reasonable proxy for what a fully inclusive 2011 SD sample would have posted.
I also used the 2013 percentages-tested data, but I kept the non-disabled score at 227, as that accurately reflected how all the non-disabled students performed in 2011. Taking a weighted average with these figures, I determined that a probable “All Student” score, had Kentucky not excluded such a high proportion of its SD in 2011, would be 222, not the 225 reported that year.
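Here is the same sort of sketch for the what-if scenario. Again, this uses the rounded published figures, which come out slightly above 222; the unrounded data behind the tables evidently produce the 222 cited above.

```python
# What-if 2011: keep the 2011 non-disabled score of 227, but apply the 2013
# tested shares (11 percent SD) and use the 2013 SD score of 189 as a proxy
# for a fully inclusive 2011 SD sample.
what_if_2011 = 189 * 0.11 + 227 * 0.89
print(round(what_if_2011, 1))  # ~222.8, versus the 225 actually reported
```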
Thus, it looks like excessive exclusion built somewhere around three points of inflation into Kentucky’s 2011 scores.
That three-point difference would be notable in state rankings. Using the NAEP Data Explorer’s statistical significance test tools, I determined that Kentucky’s officially reported “All Student” Grade 4 NAEP reading score in 2011 ranked 17th. Had the score been three points lower, Kentucky would have tied Wisconsin, which ranks much lower, in 30th place. Instead of outscoring 21 states by a statistically significant amount on the 2011 NAEP, Kentucky would only have outscored about 11 states.
So, while there is promise in the new data because we can now fairly compare our NAEP reading results to other states, we also need to recognize that past claims of progress have been overblown.
I think the new NAEP experience raises another question. Do we really need to have so many of our learning disabled students take our state tests with a reader reading the test aloud to them? The new NAEP shows a lot more of our kids can handle a test that does not allow readers. Perhaps we need to do state testing both ways, so our kids with disabilities can show what they comprehend but also show us whether they can decode printed text.
After all, in the real world someone who cannot read printed text is in a lot of trouble.
And, as I promised earlier, I still need to explain the paradox of how Kentucky’s only two dominant racial groups can both score below average while the state’s “All Student” reading scores come out above the rest of the nation, so stay tuned.
Technical Notes:
The actual scores used by the NAEP, and by me to calculate overall “All Student” scores, are unrounded figures carried out to a large number of decimal places. In the tables above, all scores are rounded to the nearest whole number to match standard NAEP report card practice.
Most of the data in the tables above come from the NAEP Data Explorer.
The exception is the exclusion rate data highlighted in blue in the first table, which come from NAEP report card products for 2011 and 2013.