The Bluegrass Institute for Public Policy Solutions

More claims from new Prichard/Chamber report about Kentucky education that just don’t hold up

The Prichard Committee for Academic Excellence and the Kentucky Chamber of Commerce recently issued a joint report called “A Citizen’s Guide to KENTUCKY EDUCATION, Reform, Progress, Continuing Challenges.” The paper describes supposed progress and challenges for Kentucky’s public education system and also tries to build support for the state’s current education standards.

Sadly, the Prichard/Chamber paper has a considerable number of problems. For example:

The paper touts supposed progress in Kentucky’s education relative to other states around the nation. However, it is disturbingly easy to challenge the rankings cited by Prichard and the Chamber. I already discussed the problems with the Prichard/Chamber rankings at some length in another blog, “Rankings of Kentucky’s educational performance still flawed – Déjà vu,” so I will not elaborate further here.

The Prichard/Chamber paper also contains a very brief history of education in Kentucky – sort of – from 1980 onward. But the Prichard/Chamber history misstates or ignores many important events that deserve continuing attention because the issues remain troublesome today. Some examples include:

  • Claims that academic standards were the foundation of the 1990 reform

  • The multiple reasons behind KIRIS testing’s failure in 1998, including the dismissal of several critical reports that still carry important messages for today

  • The total failure of the very expensive Performance Events program in 1996

  • The collapse of the Writing Portfolio program, which also occurred in 1996

The Prichard/Chamber report ignored these things, but they provide important lessons for the education problems Kentucky faces today. So, the rest of this blog discusses some of “The Rest of the Story,” as the late Paul Harvey used to so nicely put it.

To begin, even some of the “history” that is recounted in the Prichard/Chamber paper is off the mark.

Claims that academic standards were the foundation of the 1990 reform are incorrect

Page 3 in the Prichard/Chamber paper talks about key principles and elements of the Kentucky Education Reform Act of 1990 (KERA). The Prichard/Chamber paper says that in 1990:

“Academic expectations reflecting high standards were created by teachers, parents, business leaders and other citizens to define what students should know and be able to do.”

This isn’t what happened. Actually, the original KERA legislation says very little about academic standards. What is included is woefully inadequate to provide any foundation for an education program.

In fact, anything beginning to approach a real academic standards document was not adopted until over half a decade later.

Here is the full listing of the KERA law’s very limited discussion of education standards:

(1) Communication skills necessary to function in a complex and changing civilization;

(2) Knowledge to make economic, social, and political choices;

(3) Understanding of governmental processes as they affect the community, the state, and the nation;

(4) Sufficient self-knowledge and knowledge of his mental and physical wellness;

(5) Sufficient grounding in the arts to enable each student to appreciate his or her cultural and historical heritage;

(6) Sufficient preparation to choose and pursue his life's work intelligently; and

(7) Skills to enable him to compete favorably with students in other states.

This is slightly expanded in Section 3 of the bill, which says:

(1) Upon the effective date of this Act, the Council on School Performance Standards established by Executive Order 89-151 shall be reconvened by the chairman to frame the following six (6) goals for the schools of the Commonwealth in measurable terms which define the outcomes expected of students:

(a) Schools shall expect a high level of achievement of all students.

(b) Schools shall develop their students' ability to:     

1. Use basic communication and mathematics skills for purposes and situations they will encounter throughout their lives;     

2. Apply core concepts and principles from mathematics, the sciences, the arts, the humanities, social studies, and practical living studies to situations they will encounter throughout their lives;     

3. Become a self sufficient individual;     

4. Become responsible members of a family, work group, or community including demonstrating effectiveness in community service;     

5. Think and solve problems in school situations and in a variety of situations they will encounter in life; and     

6. Connect and integrate experiences and new knowledge from all subject matter fields with what they have previously learned and build on past learning experiences to acquire new information through various media sources.

(c) Schools shall increase their students' rate of school attendance.

(d) Schools shall reduce their students' dropout and retention rates.

(e) Schools shall reduce physical and mental health barriers to learning.

(f) Schools shall be measured on the proportion of students who make a successful transition to work, postsecondary education and the military.

That’s it. This is all the original KERA legislation says regarding standards for the new K to 12 education program in Kentucky. Detail is totally lacking.

Obviously, this KERA language didn’t give Kentucky’s 4th, 8th and 12th grade teachers any sort of real clue about what their curriculum should look like to prepare students in those grades for the new state assessments, which KERA also required.

Consider that today, even the current, Common Core-based standards for English language arts and math in Kentucky, while still not sufficient, certainly offer far more detail than Kentucky provided its teachers in 1990. Compare Common Core to the very limited language in KERA and the claim that early 1990s education in Kentucky was standards-driven looks just ludicrous. Aside from an old “Program of Studies” document that predated KERA, there basically weren’t any standards back then. And the Program of Studies only briefly discussed what needed to be included in the various courses that could be offered; the document didn’t specify which, if any, of those courses might be required under KERA.

What made the history of standards in Kentucky’s early education reform era even more disturbing was that additional material was a long time in coming.

The Kentucky Board of Education added little in the early 1990s to KERA’s ridiculously meager standards list, initially providing only a set of 75 single-sentence expansions to the six basic items in the KERA legislation. These 75 items were called “Kentucky's Learning Goals and Academic Expectations.” Those 75 single-sentence items were basically all that teachers initially had to work from to develop education programs for all grades from Kindergarten (which quickly became part of another dubious experiment, the Ungraded Primary for what formerly were grades K to 3) through the 12th grade.

Then, it got even worse. The Goals and Expectations, later called “Kentucky’s Learning Goals and Learner Outcomes,” were further reduced in 1994 to only 57 single-sentence statements (See Page 6 in “Elementary Change: Moving toward Systemic School Reform in Rural Kentucky”)!

So, in reality, hardly any standards information was available to teachers and students during the first two years of testing in 1992 and 1993 with the state’s now long defunct Kentucky Instructional Results Information System (KIRIS) tests. Kentucky’s teachers struggled because they truly had no clue what was fair game on these new tests.

Somewhat more standards-like information became available in 1993 when the state released its first edition of “Transformations, Kentucky’s Curriculum Framework.” But, as researchers at the Thomas B. Fordham Foundation would point out in 1998, Transformations:

“…is a guide to instruction rather than a standards document of the sort we find in the Core Content….” (Page 32)

Furthermore, due to its size and sometimes confusing format, Transformations was not easy to use for curriculum development in any event.

In fact, teachers were still complaining about the lack of information about the state’s true learning goals when the first draft “Content Guidelines” document came out in September 1994. This was the first document that began to resemble what most people expect to see in an education standards document today. By the time the draft Content Guidelines came out, three years of KIRIS testing had already been completed. The final release version, called “Kentucky Core Content for Assessment Version 1.0,” didn’t take effect until 1996. By then, KIRIS was just two years away from total failure.

And, part of that failure might well have been due to the poor quality of Kentucky’s standards that did exist even as of 1998. In that year, the Thomas B. Fordham Foundation issued a series of reports on the quality of education standards in various states. In “State Mathematics Standards, An Appraisal of Math Standards in 46 States, the District of Columbia, and Japan,” Fordham raters gave Kentucky’s math standards an overall grade of “D.” In a companion report on science standards, “State Science Standards, An Appraisal of Science Standards in 36 States,” Kentucky got an even lower score of “F.” Other Fordham reports gave Kentucky’s standards as of 1998 an “F” for both history and geography.

Thus, the “Rest of the Story” shows that Prichard/Chamber’s claim that high-quality academic standards established in KERA formed the basis of reform in 1990 is not correct. There were no high-quality education standards in Kentucky at any time during the KIRIS era. The education standards information that was available in Kentucky was far too limited, vague and of too low a quality to serve as an effective driving engine for anything really productive. Confusion reigned among Kentucky’s educators because no clear, specific and adequately detailed standards were available.

Also missing: Other reasons behind KIRIS’ failure in 1998

Comments on Page 4 of the Prichard/Chamber report omit some of the important reasons why the Kentucky General Assembly shut KIRIS down in 1998. The Prichard/Chamber report says that test scores showed improvement on KIRIS, which is true. But a major credibility problem with those rapid KIRIS score increases was that similar improvement was not generally shown by other tests.

For example, KIRIS showed very rapid rates of improvement in student proficiency rates in math and reading that simply were not matched in the National Assessment of Educational Progress (NAEP), as Figures 1 and 2 show.

Figure 1: G4 Reading, KIRIS to NAEP

In Grade 4 reading between 1992 and 1998, NAEP showed a proficiency rate increase of only 6 points in Kentucky while the KIRIS proficiency rate exploded from just 3 percent in 1992 to 33 percent in 1998. Even worse, the small rise in NAEP scores in 1998 was accompanied by a very large increase in the exclusion of students with learning disabilities, which led to charges that even Kentucky’s small NAEP improvement might be an illusion.
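For readers who want to check the arithmetic behind Figure 1, here is a minimal sketch in Python that recomputes the gains from the proficiency rates quoted above. The numbers come from this blog's text, not from a new data pull, and the variable names are ours.

```python
# Kentucky Grade 4 reading proficiency rates (percent), as cited above.
kiris_1992, kiris_1998 = 3, 33   # KIRIS proficiency rates
naep_gain_points = 6             # NAEP gain over the same span, per the text

kiris_gain_points = kiris_1998 - kiris_1992   # 30 points
ratio_vs_1992 = kiris_1998 / kiris_1992       # 1998 rate is 11x the 1992 rate

print(f"KIRIS gain: {kiris_gain_points} points ({ratio_vs_1992:.0f}x the 1992 rate)")
print(f"NAEP gain:  {naep_gain_points} points")
print(f"KIRIS gain was {kiris_gain_points / naep_gain_points:.0f} times NAEP's gain")
```

A 30-point KIRIS gain against a 6-point NAEP gain is the "huge gulf" Catterall describes later in this blog.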

Figure 2: G4 Math, KIRIS to NAEP

In elementary school math (KIRIS testing of math shifted from the fourth to the fifth grade between 1994 and 1995), a NAEP comparison also raised concerns. KIRIS proficiency skyrocketed from just 5 percent in 1992 to 20 percent in 1998. While only two years of NAEP fourth grade data were available for comparison in 1998, the rather small, 3-point increase in proficiency between 1992 and 1996 wasn’t even a statistically significant change. Meanwhile, the KIRIS elementary school math proficiency rate nearly tripled between 1992 and 1996, eroding confidence in Kentucky’s testing program.

An additional concern – supported by both Figures 1 and 2 – was a suspicion that KIRIS scoring was purposely made excessively hard in the early test years and then later relaxed to create an image of improvement. The fact that both the 1992 math and reading proficiency rates in KIRIS were much lower than rates on the rather demanding NAEP certainly supports this conjecture. The fact that by its last year KIRIS was reporting proficiency rates notably higher than the NAEP’s reinforces it further.

Another credibility problem was that KIRIS seemed to be identifying the wrong school systems as having problems. KIRIS’ flagging of high-wealth schools such as Dixie Heights High School in Kenton County and a school in the state’s wealthiest school system, the Anchorage Independent School District, led to major concerns that something was fundamentally wrong. Funneling scarce education resources to supposed problems in Dixie Heights and Anchorage just didn’t make sense, but that is what was happening under KIRIS.

Still more missing Kentucky education history

The Prichard/Chamber paper has a number of very interesting and significant holes in its historical coverage. A good clue to this is found by examining the year bands listed in headings on Pages 2 through 5 of the report. The first year band listed is “1980-1990.” The next is “1998-1999.” It’s as though 1991 through 1997 didn’t exist.

However, those early KERA years certainly did exist, and some of the major problems that occurred in this period need careful consideration today because Kentucky seems poised to repeat some of the very same mistakes that surfaced during those tumultuous years under KIRIS.

Aside from the scoring inflation in KIRIS we’ve already mentioned, a few things from the 1991 to 1997 period ignored by the Prichard/Chamber paper include:

  • Failure of the very expensive Performance Events program in 1996

  • Collapse of the Writing Portfolio program, which also occurred in 1996

  • Total lack of coverage of numerous critical reports that have important messages for today

There is a similar paucity of coverage for the period of the Commonwealth Accountability Testing System (CATS), the assessment and accountability system that operated in Kentucky from 1999 to 2008. A few key items not mentioned include:

  • Continued and obvious inflation in the CATS Kentucky Core Content Test (CATS KCCT) results compared to the NAEP

  • Continuing, never-resolved inflation in the scoring of the writing portfolio program

Kentucky needs to consider these historical items because they remain important today.

For example, as I noted after the 2015 NAEP results were released, we are starting to see some signs that the KPREP tests are now becoming inflated, just as happened with both KIRIS and CATS. The new KPREP inflation was discussed in a four-part blog series late last year (Part 1, Part 2, Part 3 and Part 4).

Also, plans in progress at the Kentucky Department of Education in 2016 call for using assessment items similar to the failed Performance Events in Kentucky’s new science assessments.

Already, there are problems with the re-introduction of performance-event-like items. Getting such complicated and problematic assessments up and running has created massive delays. A 2011-12 start for the new science assessment was mandated in Senate Bill 1 from the 2009 Regular Legislative Session. Clearly, that legislative requirement was not met. Now, even the US Department of Education is expressing displeasure about the delay in launching the new science tests.

Furthermore, problems that beset the old Performance Events were never solved, so Kentucky is probably sailing into dangerous, and expensive, testing territory again with this attempt to bring back a historically unworkable idea for state assessments.

Another current problem that past history can inform is the use of teacher self-scoring for accountability items. Human nature tells us that isn’t going to work, and Kentucky’s past history with the teacher-self-scored Writing Portfolio program confirms it. Every audit ever conducted of the old Writing Portfolio program found notable inflation in teacher-awarded scores. The department of education never found a workable solution. Nevertheless, Kentucky is currently using school staff self-scoring for its Unbridled Learning Program Reviews and its Professional Growth and Effectiveness System. Both are producing obviously inflated scores.

Some of the reports critical of KIRIS

A number of very important reports were created in the first decade of KERA, but it seems the Prichard/Chamber report would like us to forget them. Here’s some of “The Rest of the Story” in this area.

The first report in this unmentioned group was issued by the University of Kentucky in early 1995. It was created after the Kentucky Department of Education pressured the university to start using KIRIS results as an entry requirement. This report was:

“An Evaluation of the Kentucky Education Reform Assessment System (KIRIS) and Academic Performance at the University of Kentucky,” by Roseann R. Hogan, et al., which came out on March 1, 1995. It is not online.

The Hogan report compared KIRIS scores for the UK fall entering freshmen of 1993 to ACT performance and high school grade point averages. A key finding that pretty much sums up the value of KIRIS says:

“Using KIRIS scores does not increase the ability of the University to predict how well a student will perform in specific introductory courses in English, mathematics, social science and physical sciences.”

The next report was possibly the most important report on KIRIS ever created. It was assembled by six of the nation’s most highly regarded education testing experts at the request of the Kentucky Office of Education Accountability (OEA), which was, and remains today, an important part of the Kentucky Legislative Research Commission. This key report was:

“Review of the Measurement Quality of the Kentucky Instructional Results Information System, 1991-1994,” by Ronald K. Hambleton et al., released on June 20, 1995.

Hambleton’s massive report, which recounted many problems with the KIRIS program, is often referred to as the “OEA Panel Report.” The OEA Panel Report probably marks the beginning of the real decline in KIRIS’ statewide reputation. Some key findings:

  • "The misclassification rates of schools in some reward categories are high." (Page 9-2)

  • "Although limited, evidence from other assessments (National Assessment of Educational Progress and the American College Tests) fails to show any reflection of the large gains observed on KIRIS. This suggests that at least in the grades and subject areas to which these data pertain, KIRIS markedly overstates actual gains in student achievement." (Page 9-2)

  • "Accumulation of equating errors…make(s) year to year comparisons of KIRIS results of questionable validity." (Page 9-3)

  • "The Panel considers the performance standards used to classify students as Novice, Apprentice, Proficient, and Advanced to be untrustworthy. This makes the information provided to parents and others about student performance potentially misleading." (Page 9-3)

  • "For numerous reasons, the Panel concluded that the KIRIS portfolio assessments are currently inappropriate for use in the KIRIS accountability system….The scoring of portfolios remains too flawed for use in a high-stakes system. In particular, scores provided by teachers in students' own schools -- the scores used in the accountability system -- remain biased upward by a substantial and variable amount." (Page 9-4)

A few key recommendations include:

  • "The portfolios should not be used at this time in the accountability index." (Page 9-7)

  • "Any assessment data used for accountability purposes should be scored externally to the schools in which the data are collected." (Page 9-7)

  • "Material provided to the press and the public should provide alternative interpretations of the results when such alternative interpretations are plausible." (Page 9-8)

  • "There is a great need to establish routine auditing procedures on all aspects of KIRIS including assessment development, standard-setting, equating, etc. Because of the high-stakes nature of KIRIS and the resulting potential for inflated gains in scores, it is essential that mechanisms be established for ongoing auditing of observed gains on KIRIS." (Page 9-8)

  • "There has been a shift toward process at the expense of content in the curricula and this shift needs to be reconsidered." (Page 9-9)

In the end, the OEA Panel Report found:

  • “Considerably more progress is needed to establish KIRIS as a technically sound accountability and assessment system.” (Page 9-10)

The next major report fully sounded the death knell for KIRIS. It came out in 1998. This final report was:

“Kentucky Instructional Results Information System: A Technical Review,” by James S. Catterall et al., released in January 1998.

This was the last report produced before KIRIS ended. It also was created at the request and with the funding of the Kentucky Legislative Research Commission.

Catterall generally agreed with findings from the earlier OEA study from 1995, pointing to continuing problems with establishing the validity and reliability of KIRIS that had not been resolved after the 1995 Hambleton report was issued.

Some areas of concern to Catterall included the continued use of Writing Portfolios, given the scoring problems identified in audits and the OEA Panel Report’s call for removing portfolios from the Kentucky school accountability program. Catterall concurs:

  • “Portfolios were cited as having had negative effects on instruction almost as often as having had positive effects." (Page 67)

Other issues involved the constant changing of administration conditions for KIRIS, such as shifting the grades in which assessments were given.

Regarding the obviously growing score inflation, Catterall notes:

  • “When index scores in reading go up by 15 percent and CTBS or NAEP scores only go up by 3 percent, there appears to be a huge gulf between the two tests.” (Page 13)

Catterall also noted other apparent disconnects between both KIRIS scores and KERA-preferred instructional practices and real student improvement, saying:

  • “KIRIS at present has documented neither strong enough ties between student performance and school index scores, nor strong enough ties between changes in school instructional practices and increased performance.” (Page 13)

Of special note, this last of the KIRIS-era reports says:

  • “Given our understanding of what validity evidence has been gathered since the OEA report, we believe more should have been done in the interim and much more remains to be done.” (Page 35)

In other words, some of the underlying assumptions of the KIRIS assessment program remained inadequately validated after more than six years of testing.

In its section on Performance Events, the Catterall study talks about inconsistent scoring issues, commenting that:

  • "The patterns involved performance event results that were large in magnitude and inconsistent with results from open-response in the same content area, and inconsistent with results from performance events in the past." (Page 41)

The report later adds:

  • "Only after the fact was the absence of adequate technical support recognized, confronted, and resolved by the decision not to use the performance events." (Page 42)

This last set of comments, related to the Performance Events, is important today because, as previously mentioned, Kentucky’s education leaders are considering using items very similar to Performance Events in the state’s new science assessments. Aside from the enormous challenge of scoring Performance Events fairly, accurately and consistently, there is an even greater problem: sustainability. These events must be changed over time to prevent score inflation due to teaching to the test. However, replacing these complex events with new ones that sample the same subject matter at the same level of difficulty is an enormous challenge. Failure to do this well means there will be no valid trend lines in the data, rendering the performance-oriented assessments worthless.

Sustainability problems doomed Performance Events in 1996 in Kentucky, and there is every reason to anticipate exactly the same problems will dog the new science test formats, as well.

It must be understood that the problems identified by Catterall and others in the 1990s have not been resolved since. Continuing, very expensive problems – problems for which Kentucky’s education history provides no resolution – can be anticipated with the new science plan.

Important CATS era material also missing

As noted on Page 4 of the Prichard/Chamber paper, KIRIS was indeed replaced by the Commonwealth Accountability Testing System, or CATS, after 1998 testing was completed. However, CATS, and its associated testing program, the Kentucky Core Content Tests (KCCT), continued to have problems.

One major issue was that CATS was not changed enough from KIRIS. Writing Portfolios remained in the assessment and accountability program, and the teacher self-awarded scores remained inflated until the final demise of this failed assessment element with the passage of Senate Bill 1 during the 2009 Regular Legislative Session.

Scoring standards for the KCCT also started out right away with obvious inflation and only got worse over time, as Figure 3 shows.

Figure 3: KIRIS to CATS Scoring Inflation

Right from the start in 1999, CATS showed serious inflation in scores. The situation further deteriorated over time so that by 2007, CATS was generally showing proficiency rates for elementary and middle school reading and math that were about twice as high as those the NAEP was reporting.

By the time the legislature met in 2009, it was clear that CATS’ testing credibility had eroded just as badly as KIRIS’ had before it.

In addition, the Kentucky Council on Postsecondary Education was reporting that large percentages of incoming freshmen required remedial courses before they could enter regular college classrooms.

Clearly – even in 2009 – whatever real education standards applied to the CATS KCCT, those standards were not aligned to what students actually needed for life.

The legislature reacted by throwing out not only the assessments, but the underlying standards, as well.

Following passage of Senate Bill 1 in 2009, the legislature and citizens anticipated adoption of high-quality, Kentucky-developed standards targeted at getting K to 12 students college-ready. That isn’t what happened, however. Kentucky’s educators instead went out of state and adopted the Common Core State Standards for English Language Arts and Mathematics. The state’s educators later adopted the Next Generation Science Standards, which were also developed out of state.

As a note, we used the word-search feature in Adobe Acrobat to search the Prichard/Chamber report for the terms “Common Core” and “Next Generation Science” and found not one instance of either. Instead, the report hides the fact that at present Kentucky’s reading, writing and math standards are basically still cut-and-paste adoptions of the Common Core, while the state’s current science standards are a cut-and-paste adoption of the NextGen Science Standards. Renaming these the Kentucky Core Academic Standards in no way changes that fact.
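For readers who want to reproduce that search without Adobe Acrobat, here is a minimal sketch using the open-source pypdf library. The filename is hypothetical, and any PDF text-search tool would do the same job.

```python
# A minimal sketch, assuming the Prichard/Chamber report has been saved
# locally as "citizens_guide.pdf" (hypothetical filename) and that the
# pypdf library is installed (pip install pypdf).
from pypdf import PdfReader

reader = PdfReader("citizens_guide.pdf")
# Concatenate the extracted text of every page into one string.
full_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Count literal occurrences of each term anywhere in the report.
for term in ("Common Core", "Next Generation Science"):
    print(f"{term!r}: {full_text.count(term)} occurrence(s)")
```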

Perhaps in the future Kentucky will make changes to the standards that render them non-common with Common Core and NextGen Science, but that has not happened to date. Until it does, pretending that the state’s current standards are somehow different from Common Core and NextGen Science is just that: pretending.

To close this blog: the Bluegrass Institute doesn’t think we should ignore things that vitally affect our children’s education. That includes past lessons, too often learned with considerable heartache. In the best interests of all, we need to work from a full and accurate picture of our education history and of where we currently stand so that we can avoid repeating expensive mistakes of the past.

We are disappointed that the new Prichard/Chamber paper doesn’t get the picture nearly accurate enough to serve as a reliable basis for decision making going forward. Kentucky’s citizens deserve to hear “The Rest of the Story,” and we have tried to provide part of it here.

To learn more about Kentucky’s history with education reform, we suggest:

Selling ‘Performance’ Assessments with Inaccurate Pictures from Kentucky

(Note: this popular paper can also be found in the archives of the Nelson County Gazette, on the IDEAS website of the Research Division of the Federal Reserve Bank of St. Louis and at the Nonpartisan Education Review).

We also suggest:

"Assessing CATS: Questions that must be answered so that No Child is Left Behind in Kentucky"

and

KERA (1990 – 2010): What Have We Learned

(Minor update July 12, 2016 adds indenting for easier reading of KERA act elements)