What qualities REALLY count in education? Part 1
On problems with education research, state education rankings and teacher preparation
An annual rite of passage occurred on January 12, 2012, when Education Week released its report on education across the United States, known as Quality Counts (subscription required). Kentucky’s education boosters wasted little time jumping on the new state rankings in the report, which showed that Kentucky moved up 20 places in the Quality Counts state rankings in just one year (Really???).
It’s hard to imagine that so much celebration would be warranted for a state that earned only an unimpressive “C+” in the Quality Counts scoring process. The jump in the rankings does sound impressive (assuming you believe a state can change its education system that much, that quickly), but only until you ask some very basic questions:
What qualities really count in education, and does Quality Counts do a good job of identifying and grading them? For that matter, do many of those involved with education actually know what REALLY makes up a quality education system?
Before I go any further into this, I want to make it clear that I find Education Week’s newspaper effort to be very valuable. I have communicated with a number of EdWeek reporters over the years, and I have found them very knowledgeable about education in general and about their specific fields of emphasis. They communicate their knowledge with considerable skill.
Unfortunately, I am told the rankings section of Quality Counts isn’t created by line reporters at Education Week (they do write some of the interesting articles, however). The rankings might be less controversial if EdWeek’s regular reporters were more involved.
In any event, I have a number of concerns with EdWeek’s ranking system, and I’ll be discussing some of those in the next few days.
I’m going to start with an easy-to-understand problem: Why does EdWeek continue to use its own Cumulative Promotion Index (CPI) calculations for high school graduation rates when extensive research completed for the US Department of Education in 2006 shows the Averaged Freshman Graduation Rate (AFGR) calculation is more accurate? Furthermore, the AFGR data is readily available for the years considered in this year’s Quality Counts report. Just check out Table 112 in the 2010 Digest of Education Statistics.
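For readers who want to see how the two calculations actually differ, here is a minimal sketch of both formulas as I understand them from the NCES and EdWeek Research Center documentation. The function names and the enrollment counts below are mine, purely for illustration; they are not Kentucky’s actual numbers.

```python
def afgr(diplomas, grade8, grade9, grade10):
    """Averaged Freshman Graduation Rate (NCES definition).

    diplomas -- regular diplomas awarded in the graduation year
    grade8   -- grade 8 enrollment five school years earlier
    grade9   -- grade 9 enrollment four school years earlier
    grade10  -- grade 10 enrollment three school years earlier
    """
    # Averaging three adjacent grades smooths out the 9th-grade
    # enrollment "bulge" caused by students repeating the grade.
    estimated_freshmen = (grade8 + grade9 + grade10) / 3
    return 100 * diplomas / estimated_freshmen


def cpi(e9, e10, e10_next, e11, e11_next, e12, e12_next, diplomas):
    """Cumulative Promotion Index (the EdWeek Research Center approach).

    Multiplies four one-year, grade-to-grade promotion ratios
    (9->10, 10->11, 11->12, 12->diploma) drawn from a single pair of
    school years, so it is a synthetic-cohort estimate.
    """
    return 100 * (e10_next / e9) * (e11_next / e10) \
               * (e12_next / e11) * (diplomas / e12)


# Hypothetical counts, just to show the two measures can diverge:
print(round(afgr(diplomas=40000, grade8=52000, grade9=56000, grade10=51000), 1))
print(round(cpi(e9=56000, e10=51000, e10_next=50000,
                e11=48000, e11_next=47000, e12=45000, e12_next=44000,
                diplomas=40000), 1))
```

The key difference: the CPI chains together single-year ratios, so a one-year enrollment fluctuation ripples through the whole estimate, while the AFGR anchors the rate to a real entering cohort.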
Using a less accurate calculation matters for Kentucky’s EdWeek rankings. In fact, “High School Graduation” gets extra weight in the EdWeek rankings because graduation counts in two areas of the report.
The first is in the “Chance for Success” part of the rankings.
Here Quality Counts says Kentucky’s 2007-08 high school graduation rate was 72.8 percent, which ranked above the national average rate of 71.7 percent.
The same numbers are ranked again in the “K-12 Achievement” section.
Also ranked in this section is the change in graduation rates between 2000 and 2008. These numbers are also based on EdWeek’s Cumulative Promotion Index.
So, EdWeek includes graduation rate calculations in multiple places to derive the overall state rankings.
Now, here are my problems:

1. The more accurate AFGR for Kentucky in 2007-08 was 74.4 percent, and that was LOWER, NOT HIGHER, than the national average rate of 74.7 percent.

2. In 1999-2000, Kentucky’s AFGR was 69.7 percent, so the improvement by 2008 was only 4.7 percentage points. EdWeek says Kentucky’s graduation rate improved almost twice as much, by 9.1 percentage points (the quick check below walks through the arithmetic). Using the EdWeek formula, only 5 states showed more improvement, but with the AFGR, 15 states did better.

3. Last year I suggested to the EdWeek Quality Counts staff that they drop their less-accurate Cumulative Promotion Index and start using the AFGR. That didn’t happen.
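To make problem number two concrete, here is the subtraction in question, using the published Digest figures quoted above:

```python
# Kentucky AFGR figures from Table 112, 2010 Digest of Education Statistics
afgr_1999_2000 = 69.7
afgr_2007_08 = 74.4

improvement = round(afgr_2007_08 - afgr_1999_2000, 1)
print(improvement)        # 4.7 percentage points
print(improvement < 9.1)  # True: barely half of EdWeek's claimed 9.1-point gain
```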
So, while most knowledgeable people are now using the AFGR until all states can finally compute high-quality graduation rate data (which Kentucky can’t report until 2014), EdWeek is still putting out less accurate “stuff.”
And, since EdWeek more or less triple-counted its less accurate “stuff,” this makes up my problem number one with Quality Counts. There is no excuse for using low-quality data, and counting it three times, when better information is available.
Stay tuned, because I see more problems with Quality Counts, problems that indicate Kentuckians shouldn’t be doing much cheering about our supposedly remarkable one-year soar in the rankings.
(Note: Corrected the name of the Quality Counts index, 24 Jan 12)