Over at Jabberwocky Ethan White has a nice summary of some new ways of publishing. He has a lot to say about collaborative publishing where the reviewers become part of the file. It is interesting. He leaves out Peerage of Science and doesn’t much discuss what is for profit and what is not.
Here is what I wonder. If these things make it easier, will we have more factoids out there in the literature without proper frameworks? Some people are not clear in their papers as to what their data mean. Referees might have useful opinions on this. There may be disagreement and discussion that might be worth seeing in the literature. But so often the referee is simply pointing out that the authors have not put their paper in context. They have not read the literature. They have not cited appropriately. These sorts of things should be fixed. There is no point in hearing about them, or making them public.
So, the review process may be iterative in helpful ways that no one gains by seeing. Even the best authors need to be told if they have missed articles important for framing their work. Publishing is a tricky business, sometimes controversial, sometimes sloppy. Let's share the controversy but clean up the sloppiness privately.
Interesting point Joan, and one that I often hear in a very different venue that I’m involved with – scientific software/code development. When writing code it is considered a best practice to use something called version control, which keeps track of every little change made to the code, including lots of cleaning up of sloppiness, dead ends, and other things that most folks won’t be interested in seeing (apologies if this was all familiar; I’m unsure how much computational work your group does). So, one of the questions/arguments I hear is that when we’re making our code public maybe we should just post the final version, since all of that other stuff will make it more difficult and unwieldy to understand the final product. My response is that because of the way these systems are built, anyone who isn’t interested in that information is never exposed to it. It is tucked away in the history of the code. But if someone does see value in it, they can go find it. This holds for all of the implementations of open review that I’ve seen, with the exception of Faculty of 1000 Research, where the reviews are slightly more in your face, but even there they sit all the way at the bottom of the article.
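The "tucked off in the history" idea can be sketched with a couple of git commands. This is a minimal, hypothetical example (the file name, commit messages, and identity settings are made up for illustration): the working copy shows only the final version, while the messy intermediate work remains retrievable in the log.

```shell
# Toy repository: sloppy drafts live in history, not in the final product.
mkdir review_demo
cd review_demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo Author"

echo "first sloppy draft" > analysis.txt
git add analysis.txt
git commit -q -m "rough draft, known dead ends"

echo "final cleaned-up analysis" > analysis.txt
git commit -q -am "final version"

# Anyone browsing the code sees only the final version...
cat analysis.txt
# ...but the full history is there for anyone who goes looking.
git log --oneline
```

A casual reader never has to look at `git log` at all, which is the point: the record is preserved without being in anyone's way.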
People seem to find value in access to the review history, as can be seen from the fact that 15-20% of PeerJ’s page views go to it. Given that people get something out of seeing reviews, and that they are presented in ways that keep them out of the way of people who don’t want to see them, posting the information seems like a win-win. Sure, some of it will be irrelevant, but that’s up to the reader to decide.
I’m completely with Ethan here. The benefits of getting science published faster are immense. Yes, it will require people to approach the literature with a different mindset – no longer assuming that everything they read has been vetted by peer review. But we all know that vetting isn’t that successful anyway. I think if we get papers published fast, without extensive review, and then build a framework for open peer review that happens at the time of publication and continues for the useful lifetime of the paper, we’ll all be far better off.
I’m not sure I see a reason why sharing reviews is a bad thing, other than “sloppy.” Published papers are usually far too tidy, in my opinion, with some substantial areas glossed over and some puffed up to sell the paper. Earlier, I made a different and bigger argument for sharing reviews:
http://smallpondscience.com/2013/04/15/transparency-in-research-publish-your-reviews/