Is There Something Wrong With Science?
December 21, 2010. Posted by Metabiological in Science.
Tags: science, scientific method, The New Yorker
Let me begin by saying that I am a believer in science. Not in a religious way, but in a Kuhnian way. For those of you who haven’t read Thomas Kuhn’s “The Structure of Scientific Revolutions,” go out and get it now. It is arguably the most important book on the history and philosophy of science published in the last hundred years. What I mean when I say I believe in science is what Kuhn meant when he talked about why certain theories win out over others: not because they explain phenomena perfectly, but because they explain them better than their contemporaries. Science is not perfect; it just happens to be better than any other method we’ve tried so far (similar to my feelings on democracy and capitalism).
So when an article comes along with the provocative subtitle “Is something wrong with the scientific method?” I certainly take notice. After all, this is not only my future livelihood being called into question but the basis of my worldview. I’m an empiricist, and science is empirical. To call it into question is to call into question the very thing that separates science from every other method of knowledge acquisition: reliability.
Thankfully, upon reading the article, the title turned out to be what titles often are: little more than a hook to get people interested. What the article examines is a phenomenon in science called the decline effect. The short version is that effects found via experimentation start off very strong, but as more and more studies are performed on them, the strength diminishes, in some cases to the point where it disappears altogether.
At first glance this would seem to call into question the validity of the scientific process, but what is more likely is that a few different factors are at work. The first one that popped into my head, and the first mentioned in the article, is regression to the mean. This is basic statistics: a single experiment can be strongly influenced by outliers. As more studies add to the sample size, that influence is diminished until a more accurate picture of the phenomenon is revealed. This probably accounts for a fair amount of the problem, but not all of it.
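To make this concrete, here is a toy simulation (my own sketch, not from the article): a hypothetical “true” effect is measured in a small early study and then in successively larger replications. The small study can land far from the truth by chance; the bigger samples settle down toward it.

```python
import random

random.seed(42)

TRUE_EFFECT = 0.3  # hypothetical underlying effect size (assumed for illustration)
NOISE_SD = 1.0     # per-subject noise

def study_estimate(n):
    """Observed mean effect from a study with n subjects."""
    return sum(random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n)) / n

# A small early study is at the mercy of outliers...
early = study_estimate(10)

# ...while larger replications converge on the true effect.
sizes = (100, 1000, 10000)
replications = [study_estimate(n) for n in sizes]

print(f"true effect:        {TRUE_EFFECT}")
print(f"early study (n=10): {early:.3f}")
for n, est in zip(sizes, replications):
    print(f"replication (n={n}): {est:.3f}")
```

The standard error of the mean shrinks like one over the square root of the sample size, which is why the n=10,000 replication sits within a hair of the true value while the n=10 study can be wildly off.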
To explain the rest of the discrepancy, the article then examines the problem of publication bias. Anyone at all involved in science knows about this, and it is most definitely a problem. As a general rule, journals are more likely to publish studies that have found a statistically significant effect than studies that have found nothing. This is just human nature in action, as people prefer to find something rather than nothing (even though, to paraphrase a great scientist, “in science finding nothing is still finding something”).
This still doesn’t fully explain the decline effect, so the article finishes with another common practice: selective reporting. This sounds a lot worse than it actually is. The fact is that scientists are human too, and when interpreting data they are more likely to interpret it in a way that supports their conclusions. This is an unconscious act that everyone is guilty of, and though scientific training is supposed to minimize it, there is no doubt it still happens. This isn’t helped by the fact that interpreting data is A LOT harder than many people think it is.
So is there a problem with science? No. The three possible explanations for the decline effect give us one statistical effect and two examples of human error. The problem is not the tool but the ones using it, and while this is cause for some concern, it is not a death blow to science. The nice thing about the scientific process, the thing that separates it from all other methods, is that it is self-correcting. As knowledge accumulates, old ideas are reinterpreted, altered, and struck down. Even the decline effect itself is evidence of the self-correcting power of science. Is it perfect? Of course not, but I challenge you to find a better method.