Monday, January 5, 2009

Proposal: The Journal of Null Results

There's a lot of research that gets 'done' but doesn't become 'real', because it isn't submitted for publication. One common reason is that the research in some sense fails to find anything. More specifically, what is found isn't far from the 'null hypothesis' that there is no interesting relationship between the variables measured, or no effect of the experimental manipulation.

Journals mostly have a strong preference for articles that do find something, which means something other than an outcome consistent with the null hypothesis. That is, they prefer 'positive' results.

There are arguments for this preference, but following it comes at a cost. First the cost: discovering that the null hypothesis holds in some specific case is not the same thing as discovering nothing. You discover nothing when you don't do any work at all, or do it so badly (having no control condition for an intervention, or having too few subjects for meaningful analysis, etc.) that you might as well not have done any. When you do work that likely could have found something if it was there, and find nothing, that's a modest discovery. It should be part of the scientific record. Its absence, among other things, confounds literature surveys and meta-analyses.
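
The point about work that 'likely could have found something' is really a point about statistical power, and it can be made concrete with a little simulation. Here's a minimal sketch (in Python; the sample sizes and the 'medium' effect of d = 0.5 are just conventional illustrative numbers, not anything from a real study):

```python
# Simulate many two-group experiments in which a real medium-sized effect
# (d = 0.5) exists, and count how often a two-sample t-test detects it
# at the usual alpha = 0.05 threshold.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size, n_sims=5000, alpha=0.05):
    """Fraction of simulated experiments whose t-test rejects the null."""
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        if ttest_ind(control, treated).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print(estimated_power(n_per_group=10, effect_size=0.5))  # roughly 0.18
print(estimated_power(n_per_group=64, effect_size=0.5))  # roughly 0.80
```

A null result from the second design rules out a medium-sized effect with some force; the very same null result from the first design tells us almost nothing. It's reports of the second kind that deserve a place in the record.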

This is not, of course, an original point. It has been made eloquently by various people. Among recent examples see this piece by Ben Goldacre. Here's one quotation about the loss represented by unpublished trials:
We may never know what was in that unpublished data, but those missing numbers will cost lives in quantities larger than any emotive health story covered in any newspaper. Doctors need negative data to make prescribing decisions. Academics need to know which ideas have failed, so they can work out what to study next, and which to abandon. In their own way, academic journals are exactly as selective as the tabloid health pages.
Some have argued that failing to publish the results (no matter what they are) of any research involving human subjects is unethical. Here is an example in the BMJ. I think a stronger point could be made. At the very least, there is an obligation to make available the results of any science that is to any extent publicly supported. For one report on a survey attempting to figure out how much unpublished research there is, see this report in Nature. While I'm throwing links around, some of what I say here is related to some points made in the article on The Future of Science (well worth reading) by Michael Nielsen.

The preference, on the other hand, makes sense for at least two reasons. One is that 'unexpected' results are regarded as better than expected ones, and confirming the null hypothesis is, from this perspective, a very boring and predictable thing to do. This preference is not unconditional - it's sometimes acceptable, and indeed often required, to duplicate a result independently. It's also not clear that 'unexpected' is a very important category - what is expected depends on prior beliefs, and so the very same empirical work might switch back and forth between being expected and unexpected as background theory changes, or other positive results get taken on board.

Another reason is scarcity of space. Even if null results are in some sense part of science, they're less likely to be cited or built on than positive results. They are indeed less interesting, and when space is scarce, as it mostly is with paper journals - and reviewer time and patience are finite irrespective of the medium of publication - a bias in favour of the interesting is rational.

The internet takes away the space scarcity problem, and it might be that the reviewer scarcity problem can be managed. So here is my at least semi-hare-brained proposal, in its first draft form:
  1. There should be a web-based Open Access Journal of Null Results.
  2. The journal should be non-disciplinary.
  3. Any scientific team or individual can submit a brief report of any research that led to a null result.
  4. Submissions should be publication quality in the following respects:
    (a) Authors and affiliations should be fully detailed.
    (b) The submission should have a proper title, abstract, account of methods, and data analysis.
    (c) Where appropriate to the discipline it should be made clear what ethical approval was obtained, and whether there were any conflicts of interest.
    (d) Where possible the primary data (excepting anything that violates consent or privacy protocols) should be archived with the submission.
  5. People or teams who submit should be asked to provide details of at least three peer-reviewed publications in the past five years by independent authors (i.e. with no overlap with the author list of the submission). Those authors are then asked to briefly review the methods of the submission, and to say whether in their view the research had a decent chance of finding anything if it was there. Submissions that pass this test have that fact noted in the journal, along with the names of the reviewers and the date on which their opinions were given. (So this is not a blind review process at all.) Submissions passed by two reviewers are considered peer-reviewed publications in the Journal.
  6. As far as possible, the operation runs automatically, with minimal volunteer human oversight.
I regard (5) as the most serious problem. Experiments with volunteer review haven't gone well (again, see Nielsen's The Future of Science), which is why I think reviews should be requested from named individuals rather than left to volunteers. But the proposed review system is a little unusual, and I think some careful debugging would be needed to make it fly.
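
To make the review step concrete, here is a minimal sketch of the workflow in point (5), as I currently imagine it. The class names and checks are mine, not any existing system's; the substantive rules are just the ones above - at least three nominated reviewers with no overlap with the author list, non-blind reviews published with names and dates, and two passes making the submission a peer-reviewed publication:

```python
# A sketch of the proposed non-blind review workflow (point 5).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Review:
    reviewer: str
    passed: bool    # did the research have a decent chance of finding anything?
    given_on: date  # published along with the reviewer's name: nothing is blind

@dataclass
class Submission:
    authors: frozenset[str]
    nominated_reviewers: set[str] = field(default_factory=set)
    reviews: list[Review] = field(default_factory=list)

    def nominate(self, reviewer: str) -> None:
        # Independence: no overlap with the author list of the submission.
        if reviewer in self.authors:
            raise ValueError(f"{reviewer} is on the author list")
        self.nominated_reviewers.add(reviewer)

    def record(self, review: Review) -> None:
        if len(self.nominated_reviewers) < 3:
            raise ValueError("at least three independent reviewers are required")
        if review.reviewer not in self.nominated_reviewers:
            raise ValueError("reviews must come from nominated reviewers")
        self.reviews.append(review)

    @property
    def is_peer_reviewed(self) -> bool:
        # Two passing reviews make it a peer-reviewed publication in the Journal.
        return sum(r.passed for r in self.reviews) >= 2
```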

I don't know who might host such a thing, or whether there are obvious flaws in this idea, or who might possibly foot the bill. I'm still not entirely recovered from New Year celebrations. But for now, let the blogosphere have its say...

21 comments:

Anony Mouse said...

Great idea.

You forgot the link to Nielsen's piece the first time around...

Anony Mouse said...

Oh... never mind re:Nielsen. I see the link IS there...

Anonymous said...

For a discussion of this topic that occurred late in 2007 on the "Publishing in the New Millennium forum", see: tinyurl.com/9hfz2j. Some existing journals are suggested.

Doctor Spurt said...

Thanks, Anonymous. Among the links in the piece suggested are the following:

http://www.wired.com/science/discoveries/magazine/15-10/st_essay

and

http://bitesizebio.com/2007/10/29/iress-and-negative-data/

Doctor Spurt said...

As the link below points out, there are some journals of negative results in some fields, and some of them are Open Access: http://www.earlham.edu/~peters/fos/2009/01/call-for-oa-to-null-results.html

Neuroskeptic said...

My worry is that the main barrier to the publication of negative results is not journals but scientists themselves, who just don't submit them for publication, and who may not be willing to admit even to themselves that they've got negative results, leading to endless post hoc tests until they get a "positive result" which they then submit.

As far as I can see, the obvious solution to this is pre-registration of all trials in those fields of science where it's a problem.

as I say here

This would create issues of its own - scientists wouldn't like revealing their plans to their rivals in advance - but it would solve a lot of problems.

It's already being done with clinical trials, and it could easily be extended to, e.g., genetic association studies, epidemiology, and neuroimaging.

Doctor Spurt said...

Thanks, Neuroskeptic. I'm doubtful about the hopes for a registration system beyond the one for clinical trials, although that doesn't mean it's a bad idea.

I agree that there's a problem of attitude. But that can be fought - any work with genuine risk might 'fail', and we need to work to protect that idea, and to celebrate interesting failures, dead-ends, etc., as well as accept the inevitable accumulation of boring failures that will come with the process of discovery.

Jean-Claude Bradley said...

One of the main obstacles to getting people to contribute is the amount of time and effort required to write a paper and submit. If you simply make your lab notebook (which you have to keep anyway) public and easily indexed by the major search engines, you can get many of the same benefits with much less effort. Here are examples of Open Notebooks: ONSchallenge, UsefulChem.

Doctor Spurt said...

Thanks, Jean-Claude. Open Notebooks reduce the problems in the fields that have them. And I agree that there's a time/effort problem. But we're not talking *that* much time (writing some descriptive boilerplate about work that's already been done), and I'd hope that having somewhere for the work to go (even if not a 'proper' journal) might make it seem worth it. What we'd really need is a change in values - if it was regarded as shameful *not* to briefly write up and make public negative results, that might help. (Say it was considered 10% as bad as serious falsifying, so that doing it 10 times would be career ending?)

Jean-Claude Bradley said...

It would be interesting to see how different people feel about writing papers in general. I can only speak for myself but it always ends up taking much longer than I expect it will.

Doctor Spurt said...

Agreed. I don't know of any good surveys. Anecdotally, I'm with you - it always takes longer, and is harder, than I'm expecting.

Anonymous said...

BMC Research Notes? Specifically goes for short reports, encourages maximum publication of datasets in reusable additional files, and reviews only for scientific soundness (and ethics, of course).

(How "non-disciplinary" are we going? BMC RN is biology and medicine only.)

Doctor Spurt said...

I'd say any discipline where there's a negative result issue, which is certainly a bigger list than biology and medicine (psychology and economics are the ones I'd be most vexed about myself). There'd be nothing wrong with having more than one place for negative/null results to go, and things like BMC Research Notes are a useful model.

Anony Mouse said...

Another proposal: a journal of unfinished projects: http://bjoern.brembs.net/news.php?item.482.3

Doctor Spurt said...

Let's not lower the bar *too* far. We'll end up with the Journal of Stuff Someone or Other Thinks it Would be Cool if Somebody Did (but nobody has hauled thumb ex-arse yet).

Simon Halliday said...

I am with you strongly on this one. Esther Duflo wrote on this topic, specifically in reference to randomized economic trials and other economic experiments. Economics distinctly lacks such a journal and sorely needs one. I'm with you on Psychology too.

Carnelian Kitten said...

See The Journal of Articles in Support of the Null Hypothesis: http://www.jasnh.com/

Doctor Spurt said...

Cool, and thanks Kitten! So psychologists are OK. Anyone know how many psychologists know that there is such a journal?

Brett said...

It's called the Journal of Articles in Support of the Null Hypothesis.