Ediqo Q&A Series

Q&A with Dr. Ivan Oransky from Retraction Watch

In this interview, we talk to Ivan Oransky, co-founder of Retraction Watch, about retractions as a “window into the scientific process” and publishing ethics.

Biography

Ivan Oransky is co-founder of Retraction Watch as well as vice president and global editorial director of MedPage Today, and Distinguished Writer In Residence at New York University’s Arthur Carter Journalism Institute. He spent 2009 to 2013 as executive editor of Reuters Health. Previously, he was managing editor, online, of Scientific American. From 2002 to 2008, he worked for The Scientist, first as web editorial director (2002-2004) and then as deputy editor (2004-2008). He has written for publications including the Boston Globe, The Guardian, The Lancet, Nature, The New York Times, and Slate. He completed his BA at Harvard University.

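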

  1. You started Retraction Watch as a blog in August 2010. Can you briefly describe how this idea was born?

My colleague Adam Marcus and I got together after having come across some very public cases of retractions, such as one involving faked data where a researcher had 20 retractions at once, and I said, what if we started a blog? I had been blogging at Embargo Watch at the time, and Adam said, sure. There are great stories behind a lot of these retractions, and the journals are not very transparent about the process or about the notices themselves. But we said, let's see what happens. We launched it in August of 2010 and very quickly it really took off. People started circulating the blog and it was great. Early on, Adam was quoted as saying we would blog once, maybe a couple of times a month, we'd find some interesting stories, and our mothers would read it. Very quickly it became clear that there was a treasure trove of material and we were spending lots and lots of time on it, very happily and very gladly. It sort of reinforces itself: when you write something that people find interesting, someone sends you something else that is interesting, and all of a sudden you have 100 stories and only have time to do three or four.

  2. In your inaugural editorial of Retraction Watch, you describe retractions as a “window into the scientific process”. Can you elaborate on this notion for our readers?

The scientific process that we are really looking at is the self-correction part of science. Science, I think, is for the most part very proud of its self-correcting nature. It’s one of the few human endeavors that actually has a built-in self-correction mechanism. You look at politics, you look at other fields, and they don’t really do this. So that’s a good thing, but it doesn’t work nearly as well as a lot of people would like us to think it does. Understanding how retractions, as one window into the self-correction process, work, or don’t work as the case may be, tells us a lot about how willing scientists are to make changes and retractions, and how willing journals and institutions are to admit mistakes. That’s a pretty powerful way to look at this human endeavor.

  3. Retraction Watch has grown quickly over the years and received funding in 2014 to build a database of retractions. How far along is the project and what will be the main use cases for this database?

In terms of the formal database, we are still working with the Centre for Open Science to design prototypes, and we signed a memorandum of understanding last year. We are now working on what exactly the database will include, how it will be searchable, and what kinds of outputs it will offer, so that’s an ongoing process. In terms of the use cases, the first one we have always talked about is that anyone who is planning to cite a paper, or to look into a particular field, will be able to search for papers in that field and see if any of them have been retracted. They will also, hopefully, be able to go backwards in time and get alerts whenever any of the papers they have previously cited have been retracted, which will be particularly useful for analyses and systematic reviews. Another is the ability to do scholarship on retractions as a process. What trends are there? Are there fields that have more retractions? Are the reasons for retractions changing? We see all these things on an anecdotal basis as we report on them, and the blog itself functions as a sort of database. It’s not 100% comprehensive, but we do categorize our posts. There are a number of papers being written about retractions, and the database will allow people to do that kind of scholarship in a systematic way.

  4. Can you offer any insights into some key statistics so far? For example, what is the prevalence of different types of unethical behavior?

That’s not really the process we are following. In other words, we are not building one piece of it and analyzing that as we go; we are going to build the whole scaffold and structure and put everything into it first. There are interesting studies by other people that I can point to. One, from 2012 by Fang, Steen and Casadevall, found that two thirds of retractions are due to misconduct, which is more than we thought. And the reason it turned out to be different from what we thought is that they actually went and looked at Retraction Watch coverage and coverage by other media, so they could identify the real reason for retractions as opposed to the one the retraction notice gave, or didn’t give in some cases. A lot of that information was based on Retraction Watch, and that suggests we are providing a lot to the ecosystem that wasn’t there before.

  5. Do you see more cases of unethical behavior and retractions in academia today than, say, 15 years ago?

I think it is a misconception that because there are more retractions now, there is more misconduct. I can understand the misconception, but it’s one that we try to fight against at every opportunity, as I do now. There is no evidence that misconduct is more common now than it was in 2000 or back in ’85*. Partly that is because no one was tracking it, and the fact that there are more retractions is largely because we are looking for these things. Papers are online now, so you can actually find more evidence of misconduct. Again, it’s like any screening test. Autism rates seem to be rising, but actually it’s because we are screening for it and the definitions are changing. We have a better understanding of retractions now, but there’s not actually any evidence that misconduct is on the rise.

*[Added September 5, 2016]: Ferric Fang, who has done a great deal of scholarship on retractions, and is on the board of directors of the Center For Scientific Integrity, our parent non-profit organization, reminds me that a paper he, Elisabeth Bik, and Arturo Casadevall published in June showed that the rate of inappropriate image duplication — often linked to misconduct — does seem to have risen, starting in 2000, and perhaps earlier. Thus, we now have the first empirical evidence that misconduct may in fact be on the rise.

  6. How important are whistleblowers to the functioning of Retraction Watch, and what measures do you take to protect them?

Whistleblowers are very important to the functioning of science, even though a lot of scientists wish they would go away. We speak with whistleblowers frequently. To be honest, we are often not in a position to investigate their claims ourselves; we do that sometimes. But we are very happy to refer people to PubPeer, which came along a couple of years after we did and really creates a great space for whistleblowers and others to bring allegations or voice their concerns anonymously and then, hopefully, have the authors and journals respond. We are not equipped to look into all these allegations. But we protect our sources 100 percent, as best as it is possible to do. Different countries have different laws and, even here in the States, different states have different laws. We are quite well protected because we are journalists, something that, again, differs from country to country. It’s very good in the US. Many states, including New York, where we operate, have something called a shield law, which means that a journalist with a confidential source cannot be compelled, even by a court order, to reveal the source’s name; such a demand will basically just be thrown out of court. We are very serious about that, and we have been practicing journalism for a long time.

  7. The European Union adopted strong data protection regulations in 2016 (“the right to be forgotten”). How does this affect the work of Retraction Watch, and do you personally think we should forgive researchers for their past mistakes?

I’m not a lawyer, so I hope I’m getting this right, but “the right to be forgotten” is about search results, and I don’t know whether it would apply to us or to a retraction. If a journal publishes a retraction, then I don’t think the laws in Germany would have anything to say about that, as long as it is accurate. I think for the sake of science, it’s about not wasting funds and time and resources; people should know when a paper has been retracted. On the other hand, a retraction is not a punishment. In practice it ends up being one, but it’s not meant that way. The point of a retraction is to correct the scientific record. So, although I don’t know the law well enough, I don’t see any reason why the original report of the retraction would be affected by that.

  8. There seems to be strong resistance among key stakeholders (publishers, journal editors, authors, university administrators) to issuing corrections and retractions, since they only stand to lose from them. Retraction notices are therefore often not very detailed. One of your endeavors is investigating retractions that provide limited detail. Have you observed an improvement since the inception of Retraction Watch?

In some quarters we have. There are some journals that, I guess, got tired of us criticizing them all the time. One in particular is the Journal of Biological Chemistry. They used to publish one-line notices saying that this paper has been withdrawn by the authors or the editors; they now include actual information. Some publishers were honestly doing a pretty good job already, and while we still think they could do better, in general the big publishers do okay with this. Again, we can disagree about details, but their average is pretty good. One thing we have seen is that we are cited in the literature a lot; we have been cited well over 100 times, and we have been referred to directly in retractions themselves. I would like to think we are having an effect there. At least people are reading us and understanding what we are saying. Have I systematically compared the quality of retraction notices now with before we launched? I have not. That would be a really great project.
