Wellcome Open Research

Rewarding best practice with Registered Reports

Dorothy Bishop, Professor of Developmental Neuropsychology at the University of Oxford, published the first full Registered Report on a platform with an open, post-publication peer review model. Her research, published earlier this year in Wellcome Open Research, looks at the impact of sex chromosomes on neurodevelopment.

Dorothy tells us why she is an enthusiast for this new form of journal article, which is gaining popularity, with 142 journals so far adopting the format as part of their regular submission process.

I’ve been reading a book by Michael J Mahoney called ‘Scientist as Subject’, in which he noted serious problems with our current models of publishing. In particular, he argued that reviewers and editors tend to be heavily influenced by results, being more ready to accept papers that agree with their prejudices, especially if the findings are unambiguous and clear-cut. Mahoney proposed instead that:

‘… manuscripts should be evaluated solely on the basis of their relevance and their methodology. Given that they ask an important question in an experimentally meaningful way, they should be published – regardless of their results. In the peer review system, papers sent to referees would contain only an introduction and the procedure section (perhaps supplemented with a brief description of how the data would be presented or analysed). After the reviewers have rendered their opinions, the results would be appended. An even better option would be to have contracted publication. In this the researcher submits his idea and experimental procedures to the editor prior to their execution. If the editor (with or without reviewers) approves, the researcher is guaranteed subsequent publication of the work. This strategy would avoid mountains of unrewarded (and uncommunicated) enquiry and it could provide the researcher with invaluable assistance in the early design of his experiment. It also places evaluative emphasis on the question rather than on the answer…’ (pp. 105–106).

A remarkable thing about Mahoney’s book is that it was written in 1976, yet it anticipated very accurately a publishing model known as Registered Reports, which was introduced by Chris Chambers at the journal Cortex in 2014 and is now being adopted by a rapidly expanding number of journals, including Wellcome Open Research. I’ve been an early adopter, having now published three papers in this format: at Cortex, at Royal Society Open Science, and most recently at Wellcome Open Research.

On the basis of my experiences, I am a huge enthusiast for this approach, which I think is good for both science and scientists.

For the scientist, there are three principal advantages. First, you are forced to think through your planned study far more carefully than is usually the case. I’ve found it helpful to write the analysis script in advance and run it on a simulated dataset. This means you approach the study having anticipated many of the features of the data that might need to be specified or controlled. As a consequence, I’ve done far more pilot studies than I used to.
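As a rough illustration of what such a dry run might look like, here is a minimal Python sketch, with an assumed two-group comparison; the sample size, effect size and statistical test are placeholders, not those used in any of the studies described here:

# Illustrative dry run of a pre-registered analysis on simulated data.
# The sample size, effect size and test below are placeholder assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)

# Simulate phenotype scores for the two groups we expect to compare
n_per_group = 30
controls = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
cases = rng.normal(loc=0.5, scale=1.0, size=n_per_group)  # assumed effect

# Run the comparison exactly as specified in the registered analysis plan
t_stat, p_value = stats.ttest_ind(cases, controls)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Running the planned analysis against simulated data in this way tends to flush out the decisions that still need to be made – exclusion rules, corrections for multiple comparisons, how measures are combined – before any real data have been seen.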

Our Registered Report in Wellcome Open Research was unusual in that it was an analysis of a pre-existing dataset. The format is only feasible if you’ve not had a chance to peek at the data in advance; in this case, my geneticist collaborators had the DNA for our sample, and I had the phenotype data, but we had not linked these together. We had a clear hypothesis and thought that writing an analysis plan would be easy, but in fact it took months, because we kept finding that a good plan involved important decisions, such as which genetic variants to focus on, how to control for multiple comparisons, and how to combine data from different phenotypes.

Our hypothesis was a long shot, especially given a small sample size (from a rare clinical group). We did not find the associations that we pre-registered, but we felt the data and methods were still valuable for other researchers in this area. And had we found a positive association, we felt sure that others would be more likely to believe it was not just a fluke that had been turned up by data dredging.

A second benefit for researchers is that you get reviewer feedback at a point in the research process when it can be helpful. In the first study we did, reviewers suggested two control procedures that we then incorporated. If we had not done so, I’m sure we’d have had reviewers telling us after the event that our lack of controls made the interpretation ambiguous.

Third, once you’ve had your registered analysis plan approved, publication is guaranteed regardless of results. A common concern is that it takes time to write the analysis plan and get it reviewed, and so data collection is delayed. This is undoubtedly true, but there is a huge compensation in terms of time saved when the study is completed.

Registered Reports contrast with the conventional publishing model, where researchers may be left hoiking their paper around different journals, only to be greeted by an endless stream of reviewer criticisms. And this tends to happen when the funds have run out, the postdoc has moved on, and enthusiasm for the project is evaporating.

Perhaps the most important benefits are for science itself. With Registered Reports, there is no scope for publication bias favouring positive findings (because the decision is made before the results are known), nor for the kind of analytic flexibility (p-hacking) that plagues many areas of science and leads to findings that fail to replicate. Because the reviewers have only the introduction, methods and analysis plan to evaluate, they will scrutinise these closely, and are likely to insist that the study has adequate power and reliable measures.

The main criticism I’ve heard of Registered Reports is that they may kill creativity, by forcing researchers to follow a constrained analytic path. This, however, is a misunderstanding: with a Registered Report you are free to report any results you like. The key issue is that it is transparent which analyses were pre-registered in order to test a specific hypothesis, and which were more exploratory. What we escape from is the bad practice of deciding on the hypothesis after inspecting the data, which can give a distorted impression of the strength of the results.
