Journals mostly have a strong preference for articles that find something, meaning an outcome other than one consistent with the null hypothesis. That is, they prefer 'positive' results.
There are arguments for this preference, but following it comes at a cost. First, the cost: discovering that the null hypothesis holds in some specific case is not the same thing as discovering nothing. You discover nothing when you don't do any work at all, or do it so badly (no control condition for an intervention, too few subjects for meaningful analysis, etc.) that you might as well not have done any. When you do work that likely could have found something if it was there, and you don't find it, that's a modest discovery. It should be part of the scientific record. Its absence, among other things, confounds literature surveys and meta-analyses.
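As an illustrative aside (this sketch is mine, not part of the original argument): "likely could have found something if it was there" is what statisticians call power, and "too few subjects" can be made concrete with a quick power calculation. The function below approximates the power of a two-sided two-sample test at alpha = 0.05 using the normal approximation; the effect size and sample sizes are made-up numbers for illustration.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(effect_size, n_per_group):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05.

    effect_size is the standardized difference between groups (Cohen's d);
    n_per_group is the number of subjects in each of the two groups.
    This is the usual normal approximation, good enough for a rough check.
    """
    z_crit = 1.959964  # critical value for alpha = 0.05, two-sided
    return normal_cdf(effect_size * sqrt(n_per_group / 2.0) - z_crit)

# A study of a medium effect (d = 0.5) with 20 subjects per group has
# well under a 50% chance of detecting it; with 100 per group, over 90%.
print(round(power_two_sample(0.5, 20), 2))
print(round(power_two_sample(0.5, 100), 2))
```

The point of the sketch: a null result from the underpowered study tells us almost nothing, while the same null result from the well-powered study is the kind of modest discovery that deserves a place in the record.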
This is not, of course, an original point. It has been made eloquently by various people. Among recent examples see this piece by Ben Goldacre. Here's one quotation about the loss represented by un-published trials:
We may never know what was in that unpublished data, but those missing numbers will cost lives in quantities larger than any emotive health story covered in any newspaper. Doctors need negative data to make prescribing decisions. Academics need to know which ideas have failed, so they can work out what to study next, and which to abandon. In their own way, academic journals are exactly as selective as the tabloid health pages.

Some have argued that failing to publish the results of any research involving human subjects, whatever those results are, is unethical. Here is an example in the BMJ. I think a stronger point could be made: at the very least, there is an obligation to make available the results of any science that is to any extent publicly supported. For one survey attempting to estimate how much unpublished research there is, see this report in Nature. While I'm throwing links around, some of what I say here is related to points made in Michael Nielsen's article on The Future of Science (well worth reading).
The preference, on the other hand, makes sense for at least two reasons. One is that 'unexpected' results are regarded as better than expected ones, and confirming the null hypothesis is, from this perspective, a very boring and predictable thing to do. This preference is not unconditional: it is sometimes worthwhile to independently duplicate a result, and indeed often required. It's also not clear that 'unexpected' is a very important category. What is expected depends on prior beliefs, so the very same empirical work might switch between being expected and unexpected as background theory changes, or as other positive results get taken on board.
Another reason is scarcity of space. Even if null results are in some sense part of science, they're less likely to get cited or built on than positive results. They are indeed less interesting, and when space is scarce (as it mostly is with paper journals), and when reviewer time and patience are finite whatever the medium of publication, a bias in favour of the interesting is rational.
The internet takes away the space scarcity problem, and it might be that the reviewer scarcity problem can be managed. So here is my at least semi-hare-brained proposal, in its first draft form:
- There should be a web-based Open Access Journal of Null Results.
- The journal should be non-disciplinary.
- Any scientific team or individual can submit a brief report of any research that led to a null result.
- Submissions should be publication quality in the following respects:
(a) Authors and affiliations should be fully detailed.
(b) The submission should have a proper title, abstract, account of methods, and data analysis.
(c) Where appropriate to the discipline it should be made clear what ethical approval was obtained, and whether there were any conflicts of interest.
(d) Where possible the primary data (excepting anything that violates consent or privacy protocols) should be archived with the submission.
- People or teams who submit should be asked to provide details of at least three peer-reviewed publications in the past five years by independent authors (i.e. with no overlap with the author list of the submission). Those authors are then asked to briefly review the methods of the submission, and say whether in their view the research had a decent chance of finding something if it was there. Submissions that pass this test have that fact noted in the journal, along with the names of the reviewers and the date their opinions were given. (So this is not a blind review process at all.) Submissions passed by two reviewers count as peer-reviewed publications in the Journal.
- As far as possible, the operation runs automatically, with minimal volunteer human oversight.
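Since the proposal is that the operation run largely automatically, the core publication rule is simple enough to sketch in a few lines. Everything below (the class names, the fields) is hypothetical scaffolding of my own, just to show that the decision logic is mechanical: a submission becomes a peer-reviewed publication once two independent reviewers, none of whom is an author, have said the work had a decent chance of finding something.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str    # named openly; this is not a blind process
    date: str        # date the opinion was given, recorded with the submission
    approved: bool   # "had a decent chance of finding something if it was there"

@dataclass
class Submission:
    title: str
    authors: list
    reviews: list = field(default_factory=list)

def is_peer_reviewed(sub):
    """A submission counts as a peer-reviewed publication once at least
    two reviewers with no overlap with the author list have passed it."""
    independent_approvals = [
        r for r in sub.reviews if r.approved and r.reviewer not in sub.authors
    ]
    return len(independent_approvals) >= 2
```

The only human judgment in the loop is the reviewers' yes/no on the methods; everything else (recording names, dates, and the pass/fail status) is bookkeeping a web application could handle on its own.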
I don't know who might host such a thing, whether there are obvious flaws in the idea, or who might possibly foot the bill. I'm still not entirely recovered from New Year celebrations. But for now, let the blogosphere have its say...