
Friday, Aug 19, 2011

Academic publishing 2.0

An interesting conversation on Google+, started by Vincent Knight, has prompted me to think about the academic review process and the journal industry in general, and I’ve come to the conclusion that the whole system is ready for a rethink.

The world is moving more and more towards free and ready access to information, and the journal publishing model just doesn’t support this. Most people at good universities probably never think about this because they have full subscriptions to all the journals in their field, but for those outside academia it can be impossible to get access to this knowledge. Journal subscriptions each cost thousands of dollars; instead of facilitating access to knowledge, journals are now restricting it. It is almost reminiscent of the days when guilds hoarded knowledge for themselves, although not quite to that extreme; the knowledge is available, it just has to be paid for.

And why should we have to pay for access to these papers? The authors themselves never see any of that money.

So why don’t we just do away with journals? Well, the problem is what to replace them with. Journals do actually perform two vital functions:

  1. They provide a filter for relevance, and
  2. They provide a filter for quality.

The filter for relevance allows us to know where to find information relevant to our field. A researcher really only needs to monitor the publications of perhaps 10-15 key journals in their field. You don’t find papers on cosmology randomly turning up in issues of Transportation Science.

The relevance filter has become much less important in recent years. Specialised search engines, and even Google itself, mean that it doesn’t really matter where a paper is published; it is still discoverable. A good system of tags and keywords, combined with saved searches and alerts, renders the relevance-filter function of journals unnecessary.
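To make that concrete, here is a minimal sketch of what such a “saved search” could look like, written against arXiv’s public query API (http://export.arxiv.org/api/query). The category and keywords are placeholders I’ve chosen for illustration; swap in whatever matters to your own field.

```python
# A minimal "saved search" sketch against the arXiv query API.
# The category (math.OC) and keywords below are illustrative only.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the arXiv feed

def saved_search(keywords, category="math.OC", max_results=10):
    """Yield (title, link) pairs for recent arXiv papers matching the keywords."""
    terms = " AND ".join('all:"%s"' % k for k in keywords)
    query = urllib.parse.urlencode({
        "search_query": "cat:%s AND %s" % (category, terms),
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    url = "http://export.arxiv.org/api/query?" + query
    with urllib.request.urlopen(url) as resp:
        feed = ET.parse(resp)
    for entry in feed.getroot().iter(ATOM + "entry"):
        title = entry.find(ATOM + "title").text.strip()
        link = entry.find(ATOM + "id").text.strip()
        yield title, link

# Run this on a schedule (cron, say) and you have a keyword alert,
# regardless of where (or whether) the papers were formally published.
for title, link in saved_search(["vehicle routing"]):
    print(title, "->", link)
```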

The quality filter is still important. In a world where anyone could publish anything, it would be impossible to keep up with every paper that merely sounds relevant. We know that if a paper makes it into a peer-reviewed journal then at least it will be of a certain quality, and, if relevant, probably worth our time. We want some “social proof” for the material we spend our attention on; if the reviewers (who are presumably well respected in our field) approve of the paper, then we can have a degree of confidence in it. The quality filter also gives the public, and those who are not experts in a given field, a way to know that they can trust the findings of the research, and that it carries the implicit approval of that field. This last benefit is especially important in controversial areas such as climate change research.

However, the review process also introduces a significant disadvantage: the time between submission and publication can be anywhere from a few months to more than a year, and this slows down the cycle of innovation. Consider how much further your field might have progressed in the last 20 years if papers were magically published in their final form as soon as they were submitted. Unfortunately, this long delay is the price of the quality filter that the review process provides.

Let us step back and consider the criteria we might want a publishing system to meet:

  1. There should be an efficient way to discover material that is relevant to our interests;
  2. There should be a filter for quality, so that we can spend our limited time reading worthwhile papers;
  3. There should be a way for “outsiders” to know whether the research is generally accepted by the research community;
  4. Research should be available for public consumption as soon as possible after submission;
  5. Anyone should be able to access the research, at no cost;
  6. Anyone should be able to publish research, at no cost.

The current peer-reviewed journal publishing system meets criteria 1-3, and arguably 6, but not criteria 4 and 5.

I can hear you asking, “What about arXiv?” From its website: “arXiv is an e-print service in the fields of physics, mathematics, non-linear science, computer science, quantitative biology, quantitative finance and statistics.” arXiv is a fantastic resource; researchers can submit preprints of their work so that they are available to everyone immediately, and then (optionally) once the papers make it through the peer-review process they get the stamp of approval and quality. I encourage the reader to check out this recent discussion on Google+ about how a preprint service could help the field of Operations Research. Feel free to join in there if you have an opinion.

The combination of a preprint service and traditional journal publishing seems to meet all the criteria above, but I can’t help thinking it’s a bit clunky. We are still relying on the journal side for the quality filter, and we don’t learn anything about a paper’s quality until after the review period.

Surely with today’s connectivity and technology we can come up with a more streamlined system. Insights gained from content aggregators like Digg and Reddit, combined with the network and reputation effects of services like Twitter and LinkedIn, should be able to add some value; a toy sketch of one such mechanism follows.
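For instance, here is one way an aggregator might weight a paper’s votes by each reviewer’s standing, Reddit-style. This is purely my illustration, not a worked-out design: the field names and the scoring rule are assumptions made for the sake of the example.

```python
# Toy sketch: reputation-weighted approval, in the spirit of Digg/Reddit
# voting. All names and the scoring rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vote:
    reviewer_reputation: float  # e.g. accumulated from the reviewer's own papers
    approve: bool               # True = endorse, False = flag concerns

def paper_score(votes):
    """Reputation-weighted approval in [-1, 1]; 0.0 if no votes yet."""
    total = sum(v.reviewer_reputation for v in votes)
    if total == 0:
        return 0.0
    signed = sum(v.reviewer_reputation * (1 if v.approve else -1) for v in votes)
    return signed / total

# Two well-regarded endorsements outweigh one low-reputation objection.
votes = [Vote(5.0, True), Vote(2.0, True), Vote(1.0, False)]
print(round(paper_score(votes), 2))  # 0.75
```

A score like this would give readers an at-a-glance quality signal immediately, rather than months later, while still grounding it in the judgement of respected members of the field.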

Update: I’ve given an idea of what the solution could look like here.
