Four years ago I published what turned out to be one of my most popular blogposts: 'Rethinking Retractions'. In that post I related the story of how I managed to mess up the analysis in one of my papers. The horrifying realisation came when I gave my code to a colleague: a bug had invalidated all our results. I had to retract the paper, before spending another year reanalysing the data correctly and finally republishing our results in a new paper.
Looking back now at the original blogpost, I can see the situation with some more distance and detachment. The most important thing I have to report, five years after the original cock-up and retraction, is that I never suffered any stigma from having to retract a paper. Sometimes scientists talk about retractions as if they are the end of the world. Of course, if you are forced to retract half of your life's work because you have been found to have been acting fraudulently, then you may have to kiss your career goodbye. But the good news is that most scientists seem smart enough to tell the difference between an honest error and fraud! There are several proposals going around now to change the terminology around corrections and retractions of honest errors to avoid stigma, but the most important thing to say is that, by and large, the system works: if you have made an honest mistake, you should go ahead and correct the literature, and trust your colleagues to see that you did the right thing.
Since I wrote that blogpost I have found there are a lot of people out there who want to talk about retractions, the integrity of the scientific literature and the incentives researchers face around issues to do with scientific honesty.
Here are a few of the things that have resulted from that original blogpost:
- David Duvenaud (who spotted the original bug in my code) created a presentation and a paper on the pitfalls of writing analysis code, and the sanity checks an analyst can use to avoid the same thing happening to them
- The story was picked up by Times Higher Education
- I was invited to tell my story at a symposium at the World Conference on Research Integrity 2017. The symposium was organised by Elizabeth Moylan, who, with co-authors, wrote a proposal for a new system of post-publication article alterations.
- After speaking at the symposium, the story of my retracted paper was covered by the founders of Retraction Watch in STAT and then by Science
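I won't reproduce the specific checks from that presentation here, but one widely used sanity check of this kind is the "known-answer" test: simulate data where you already know the right answer, run your analysis code on it, and confirm the truth is recovered. A minimal sketch in Python (the `estimate_slope` function is a hypothetical stand-in for whatever your analysis actually computes):

```python
import random

def estimate_slope(xs, ys):
    # The "analysis" under test: ordinary least-squares slope.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def known_answer_check(true_slope=2.5, n=1000, noise=0.1, seed=0):
    # Simulate data with a known slope, then check the analysis
    # recovers it. If this assertion fails, the analysis is buggy.
    rng = random.Random(seed)
    xs = [rng.uniform(0, 10) for _ in range(n)]
    ys = [true_slope * x + rng.gauss(0, noise) for x in xs]
    est = estimate_slope(xs, ys)
    assert abs(est - true_slope) < 0.05, f"estimate {est} is far from truth"
    return est
```

The point is not this particular statistic but the habit: any analysis pipeline can be fed synthetic data with a planted answer, and a bug like mine has a good chance of being caught before publication rather than after.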
Speaking at the World Conference on Research Integrity
Meanwhile, I'm just hoping I still have something to offer the scientific community beyond being 'the retraction guy'...