Are Common Core tests too hard?
Reading level for questions may be too high
A concerned Missouri mom, who also happens to be a teacher, decided to check the reading difficulty level of the Common Core tests her sixth grader is starting to take. What she found is disturbing.
A few key points from this mom/teacher’s recent blog post:
• “I have heard many stories of teachers logging in to take the practice test and not being able to pass. They have reported reading passages that are not on grade level and in fact extremely challenging for the average student.”
• “As a reading teacher who is accustomed to judging reading passages for text complexity and difficulty, I immediately noticed that the test I was looking at was not on level.”
This Missouri teacher then used an online readability calculator to determine the reading grade level of some of the sample sixth grade Common-Core-aligned Smarter Balanced Assessment Consortium (SBAC) test questions Missouri is using. The tool presents reading difficulty in a grade-equivalent format.
The Missouri reading teacher typed passages from the test questions directly into the calculator and then checked the reading grade levels the online tool computed. A sketch of how such calculators typically work appears after the quote below. Here is one typical finding:
• “As I typed, I was stunned at the level being calculated. The grade levels started registering at the 8th, 9th, and 11th grade equivalencies!”
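For readers curious about what such a calculator actually does under the hood: tools like this apply published formulas that combine average sentence length with average word length. Below is a minimal sketch of the Flesch-Kincaid Grade Level formula, the index the Missouri teacher relied on. The simple vowel-group syllable counter is my own stand-in for whatever the site really uses, so exact numbers will differ, but the mechanics are the same.

```python
import re

def count_syllables(word):
    """Crude vowel-group estimate; real tools use dictionaries or
    better heuristics, so their counts will differ somewhat."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # drop most silent final e's
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Published formula: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59, read as a U.S. grade level."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

# A hypothetical prompt-style sentence, not an actual test item:
sample = ("Analyze the perspectives presented in the passage and "
          "evaluate how effectively the author supports each claim.")
print(round(flesch_kincaid_grade(sample), 1))  # long words push this well past grade 6
```

Long sentences raise the first term and long words raise the second, which is why a dense test prompt can register years above its nominal grade.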
That finding got my attention. Kentucky isn’t using SBAC, of course, but I was curious about the reading level of the KPREP items used in Kentucky’s Common Core assessments. My interest grew because Kentucky contracts with Pearson to create and score the KPREP tests, and Pearson reportedly has links to SBAC as well.
So, I went to the Kentucky Department of Education’s listing of “K-PREP Sample Items” to see what the readability scores for Kentucky’s test items might look like. (Note: the Kentucky Department of Education once used the term “K-PREP.” It has since dropped the dash and now lists this as a straightforward acronym, “KPREP.”)
I pulled a KPREP Grade 6 writing prompt for my first examination. Here is a screenshot of how this writing prompt was scored by the readability-score.com web tool (with my additional notes in the red-bordered balloon items):
Notice that the web tool actually offers five different readability calculations. The Flesch-Kincaid Grade Level calculation, listed first, shows the writing prompt demands the reading level expected of students five months into their eighth grade year – a grade level of 8.5. The Gunning-Fog Score is even more striking, showing the level expected of 11th graders five months into the school term.
Only the SMOG Index shows roughly the right reading difficulty for this sixth grade writing item. The five indexes, however, average out to a reading skill requirement at the ninth grade level. Just reading and understanding this question demands reading proficiency roughly three years beyond the average sixth grader’s ability.
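To see why the five indexes can disagree, and why the site reports an average, it helps to look at how differently each one weights sentence length and word complexity. Here is a hedged sketch of the Gunning-Fog and SMOG formulas alongside Flesch-Kincaid, averaged the way the tool appears to average them; the syllable heuristic repeats the simplification from the earlier sketch, and none of this is the site’s actual code.

```python
import math
import re

def syllables(word):
    """Same crude vowel-group heuristic as the earlier sketch."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def grade_levels(text):
    """Three of the tool's five indexes, plus their mean. The formulas
    are published; the tokenizing shortcuts are my own assumption."""
    sents = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    w = len(words)
    syls = sum(syllables(x) for x in words)
    hard = sum(1 for x in words if syllables(x) >= 3)  # "complex" words

    fk = 0.39 * (w / sents) + 11.8 * (syls / w) - 15.59
    fog = 0.4 * ((w / sents) + 100 * (hard / w))
    # SMOG is normed on 30-sentence samples, hence the 30/sents scaling;
    # it is the least reliable of the three on a short prompt.
    smog = 1.043 * math.sqrt(hard * (30 / sents)) + 3.1291
    return {"Flesch-Kincaid": fk, "Gunning-Fog": fog, "SMOG": smog,
            "average": (fk + fog + smog) / 3}

# Another hypothetical prompt, not an actual KPREP item:
for name, score in grade_levels("Explain how the author's argument "
                                "anticipates and refutes objections.").items():
    print(f"{name}: {score:.1f}")
```

Flesch-Kincaid counts every syllable, Gunning-Fog counts only words of three or more syllables, and SMOG counts those same hard words on a square-root scale, so a prompt packed with long words can land several grades apart on the different scales.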
Things got more interesting when I looked at Question 4 from the KPREP fourth grade math released items.
Here is the screenshot of readability-score.com reading grade level calculations for one of the KPREP fourth grade math questions.
Again, my added comments are in the red-bordered balloons.
According to the various indexes, the reading grade level required to understand this Kentucky KPREP math question is much higher than the average reading capability of fourth grade students. Furthermore, every one of the five indexes shows a required reading level many grades above the fourth grade.
This could create a situation where a student’s real math ability is buried behind an inability to read and understand the questions.
Last, but not least, I also looked at a fourth grade reading question example. Here is how that looked:
Once again, the calculator shows fourth graders would have to read at a level far above their actual grade – in this case at the level expected of students six months into their seventh grade year – in order to deal successfully with this question.
In closing, I want to make it clear that I am not a reading teacher, so I cannot vouch for the true accuracy of the scores from readability-score.com. But there is evidence that at least some of the indexes are valuable.
For example, the Missouri parent who first raised these sorts of issues is a reading teacher and clearly has confidence in at least one of the calculations, the Flesch-Kincaid Grade Level.
This teacher also pointed out that the Flesch-Kincaid Grade Level calculation was used in a report from the National Center for Education Statistics comparing NAEP to other international tests.
In fact, I found several examples of federal reports that use Flesch-Kincaid such as “Comparison of the PISA 2009 and NAEP 2009 Reading Assessments” and the “Technical Report and User's Guide for the Program for International Student Assessment (PISA)” for 2009. So, this index, at the very least, has credibility with federal education officials.
I was also surprised to learn that the Automated Readability Index has been around for nearly half a century and was recommended for use by the United States Air Force to ensure its technical manuals could be understood by the airmen in the service.
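The Automated Readability Index is the easiest of the five to reproduce, because it counts characters rather than syllables, and needing no dictionary is part of what made it practical for the automated equipment of that era. Here is a minimal sketch using the published formula; the example sentence is hypothetical, not an actual KPREP item.

```python
import re

def automated_readability_index(text):
    """ARI: 4.71*(characters/words) + 0.5*(words/sentences) - 21.43,
    where characters means letters and digits only; the result is
    read as a U.S. grade level."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z0-9']+", text)
    chars = sum(len(re.sub(r"[^A-Za-z0-9]", "", w)) for w in words)
    return (4.71 * (chars / len(words))
            + 0.5 * (len(words) / sentences) - 21.43)

print(round(automated_readability_index(
    "Determine the perimeter of the rectangular garden described above."), 1))
```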
So, I suspect there really is a problem with the readability of at least some KPREP questions. Nevertheless, I’d like to hear from some local experts – reading teachers here in Kentucky. Is KPREP “leveling” with us, or not? After all, if students cannot even read and understand the questions, they clearly cannot demonstrate what they really know and are able to do.
And – for sure – I don’t want our KPREP math, science and social studies scores mostly reflecting reading ability rather than a student’s ability in those non-English-language-arts subjects.