The Economist: Quality of research isn’t so great
The Economist has published an interesting article, “Try Again,” about the generally unreliable quality of many research papers in psychology.
The problem is that many such papers get only a “peer review.” No one actually tries to replicate the research to see whether the results can be reproduced, even though replication is how real science is done.
Thus, according to the Economist article, a study of the studies found that many psychology findings could not be reliably reproduced.
Writes the Economist:
“In truth, these results will surprise few of those involved in research, for whom bias at the heart of academic publishing is an open secret. High-profile journals are more likely to accept articles that show new, positive results than ones which demonstrate no correlation or effect. Since the careers of researchers depend on getting their work published, the temptation to, for example, massage things by removing inconvenient outliers which those concerned persuade themselves are freak results, can be overwhelming.”
I bring this up here because education research is predominantly pursued in the same manner as psychological research. In fact, most studies in education get published with only a peer review, at best. Replication efforts are rare. Indeed, because many education studies fail to identify the schools and districts where they were conducted, true replication is often impossible.
This flawed research process leads to a lot of wasted effort as we struggle to improve our education system. We are hearing calls today to adopt radical education ideas that Kentucky already tried in the early days of KERA, ideas that never worked.
For example, “Fuzzy Math” ideas are back again. The Internet is alive with examples of crazy math workbook and assessment questions. Parents are bewildered about how, or even whether, this stuff really works and why it is worth the confusion. Teachers don’t really seem to know, either; they can’t explain it to parents.
We hear, yet again, that “research shows” we need advanced assessment elements like performance events to test science adequately. Researchers have either forgotten or are choosing to ignore a brutal lesson learned in Kentucky in the early 1990s: such expensive items are not sustainable in assessments. The “Performance Events” in the old KIRIS assessments were costly and collapsed after just four years. Still, “research” touting the value of such failed ideas keeps appearing, and more kids are put at risk as a result.
This isn’t the way good science gets done, but it is the way too much “science” is being done. That’s bad for the field of psychology, and it’s bad for education, too. Buyer, and parent, beware.