Two recent events put in stark relief the differences between the old way of doing things and the new way of doing things. What am I talking about? The changing world of science publishing, of course.
Let me introduce the two examples first, and make some of my comments at the end.
Example 1: Publishing a Comment about a journal article
My SciBling Steinn brought to our collective attention a horrific case of a scientist who spent a year fighting with the editors of a journal, trying to have a Comment published about a paper that was, in his view, erroneous (for the sake of the argument it does not matter whether the author of the original paper or the author of the Comment was right – this is about the way the system works, er, does not work). You can read the entire saga as a PDF – it will make you want to laugh and cry and, in the end, scream with frustration and anger. Do not skip the Addendum at the end.
Thanks to Shirley Wu for putting that very long PDF into a much more manageable and readable form so you can easily read the whole thing right here:
See? That is the traditional way for science to be ‘self-correcting’… Sure, this is a particularly egregious example, but it is the system itself that allows such an example to sit somewhere on the edge of the continuum – this is not a unique case, just a little more extreme than usual.
Janet wrote a brilliant post (hmmm, it’s Janet… was there ever a time I linked to her without noting it was a “brilliant post”? Is it even possible to do?) dissecting the episode and hitting all the right points, including, among others, these two:
Publishing a paper is not supposed to bring that exchange to an end, but rather to bring it to a larger slice of the scientific community with something relevant to add to the exchange. In other words, if you read a published paper in your field and are convinced that there are significant problems with it, you are supposed to communicate those problems to the rest of the scientific community — including the authors of the paper you think has problems. Committed scientists are supposed to want to know if they’ve messed up their calculations or drawn their conclusions on the basis of bad assumptions. This kind of post-publication critique is an important factor in making sure the body of knowledge that a scientific community is working to build is well-tested and reliable — important quality control if the community of science is planning on using that knowledge or building further research upon it.
The idea that the journal here seems to be missing is that they have a duty to their readers, not just to the authors whose papers they publish. That duty includes transmitting the (peer reviewed) concerns communicated to them about the papers they have published — whether or not the authors of those papers respond to these concerns in a civil manner, or at all. Indeed, if the authors’ response to a Comment on their paper were essentially, “You are a big poopyhead to question our work!” I think there might be a certain value in publishing that Reply. It would, at least, let the scientific community know about the authors’ best responses to the objections other scientists have raised.
Example 2: Instant replication of results
About a month ago, a paper came out in the Journal of the American Chemical Society, which suggested that a reductant acted as an oxidant in a particular chemical reaction.
Paul Docherty, of the Totally Synthetic blog, posted about a different paper from the same issue of the journal on the day it came out. The very second comment on that post pointed out that something must be fishy about the reductant-as-oxidant paper. And then all hell broke loose in the comments!
Carmen Drahl, in the August 17 issue of C&EN, describes what happened next:
Docherty, a medicinal chemist at Arrow Therapeutics, in London, was sufficiently intrigued to repeat one of the reactions in the paper. He broadcast his observations and posted raw data on his blog for all to read, snapping photos of the reaction with his iPhone as it progressed. Meanwhile, roughly a half-dozen of the blog’s readers did likewise, each with slightly different reaction conditions, each reporting results in the blog’s comment section.
The liveblogging of the experiment by Paul and the commenters is here. Every single one of them failed to replicate the findings, and they came up with possible reasons why the authors of the paper got an erroneous result. The paper, while remaining on the Web, was not published in the hard-copy version of the journal, and the original authors, the journal, and the readers are working on figuring out exactly what happened in the lab – which may actually be quite informative and novel in itself.
Compare and contrast
So, what happened in these two examples?
In both, a paper with presumably erroneous data or conclusions passed peer-review and got published.
In both, someone else in the field noticed it and failed to replicate the experiments.
In both, that someone tried to alert the community that is potentially interested in the result, including the original authors and the journal editors, in order to make sure that people are aware of the possibility that something in that paper is wrong.
In the first example, the authors and editors obstructed the process of feedback. In the second, the authors and editors were not in a position to obstruct the process of feedback.
In the first example, the corrector/replicator tried to go the traditional route and got blocked by gatekeepers. In the second example, the corrector/replicator went the modern route – bypassing the gatekeepers.
If you knew nothing about any of this, and you were a researcher moving in from a semi-related field who found the original paper via search, what are the chances you would know that the paper is being disputed?
In the first example – zero (until last night). In the second example – large. But in both cases, in order to realize that the paper is contested, one has to use Google! Not just read the paper itself and hope it’s fine. You gotta google it to find out. Most working scientists do not do that yet! Not part of the research culture at this time, unfortunately.
Even if the Comment had been published in the first example, the chances that a reader of the paper would then search later issues of the journal for comments and corrections are very small. Thus even with a published Comment (and a Reply by the authors), nobody but a very small inner circle of people currently working on that very problem would ever know.
Back in grad school I was a voracious reader of the literature in my field, including some very old papers. Every now and then I would bump into a paper that seemed really cool, and then wonder why nobody ever followed up on it or even cited it! I’d ask my advisor, who would explain that people had tried to replicate it without success, or that this particular author was known to fudge data, etc. That is tacit knowledge – something known only to a very small number of people in an Inner Circle. It is a kind of knowledge that is transmitted orally, from advisor to student, or in the hallways at meetings. People who come into the field from outside do not have access to that information. Neither do people in the field who live in far-away places and cannot afford to come to conferences.
Areas of research also go in and out of fashion. A line of research may bump into walls and be abandoned by the community, only to get picked up decades later once technological advances allow for further studies of the phenomenon. In the meantime, the Inner Circle has dispersed, and the tacit knowledge is lost. Yet the papers remain. And nobody knows any more which paper to trust and which one not to. Thus one cannot rely on the published literature at all! It would all need to be re-tested all over again! Yikes! How much money, time and effort would have to be put into that!?
Now let’s imagine that the lines of research in our two Examples go that way: they get abandoned for a while. Let’s assume that 50 years from now a completely new generation of scientists rediscovers the problem and re-starts studying it. All they have to go on are some ancient papers. No Comment was ever published about the paper in the first Example. There was lots of blogging about both afterwards. But in 50 years, will those blogs still exist, or will all the links found on Google (or whatever is used to search stuff online in 50 years) be rotten? What are the chances that the researchers of the future will be able to find all the relevant discussions and refutations of these two papers? Pretty small, methinks.
But what if all the discussions and refutations and author replies are on the paper itself? No problem then – it is all public and all preserved forever. The tacit knowledge of the Inner Circle becomes public knowledge of the entire scientific community. A permanent record available to everyone. That is how science should be, don’t you think?
You probably know that, right now, only BMC, BMJ and PLoS journals have this functionality. You can rate articles, post notes and comments and link/trackback to discussions happening elsewhere online. Even entire Journal Clubs can happen in the comments section of a paper.
Soon, all scientific journals will be online (and probably only online). Next, all the papers – past, present and future – will become freely available online. The limitations of paper will be gone, and nothing will prevent publishers from implementing more dynamic approaches to scientific publishing – including commentary attached directly to each article.
If all the journals started implementing comments on their papers tomorrow I would not cry “copycats!” No. Instead, I’d be absolutely delighted. Why?
Let’s say that you read (or at least skim) between a dozen and two dozen papers per day. You find them through search engines (e.g., Google Scholar), through reference managers (e.g., CiteULike or Mendeley), or as suggestions from your colleagues via social networks (e.g., Twitter, FriendFeed, Facebook). Every day you will land on papers published in many different journals (it really does not matter any more which journal a paper was published in – you have to read all the papers, good or bad, in your narrow domain of interest). Then one day you land on a paper in PLoS and you see the Ratings, Notes and Comments functionality there. You shake your head – “Eh, what’s this weird newfangled thing? What will they come up with next? Not for me!” And you move on.
Now imagine if every single paper in every single journal had those functionalities. You see them between a dozen and two dozen times a day. Some of the papers actually have notes, ratings and comments submitted by others, which you – being a naturally curious human being – open and read. Even if you are initially a skeptical curmudgeon, your brain will gradually get trained. The existence of comments becomes the norm. You become primed… and then, one day, you will read a paper that makes you really excited. It has a huge flaw. It is totally crap. Or it is tremendously insightful and revolutionary. Or it is missing an alternative explanation. And you will be compelled to respond. ImmediatelyRightThisMoment!!!11!!!!11!!. In the old days, you’d just mutter to yourself, perhaps tell your students at the next lab meeting. Or even brace yourself for the long and frustrating process (see Example 1) of submitting a formal Comment to the journal. But no, your brain is now primed, so you click on “Add comment”, you write your thoughts and you click “Submit”. And you think to yourself “Hey, this didn’t hurt at all!” And you have just helped thousands of researchers around the world, today and in the future, have a better understanding of that paper. Permanently. Good job!
That’s how scientific self-correction in real time is supposed to work.