I arrived late to the JISC Libraries of the Future online video stream, so this may not be a full picture of the discussion, but what I heard was mainly the Scholarly Communication Problem. This problem is extremely well known and I have heard it talked about--forgive me--endlessly, without (m)any solutions proposed. It goes like this:
Universities / research organisations pay researchers.
Funders grant money to researchers.
Universities / research organisations give money to libraries.
So far so good. Then a strange dance:
researchers submit work to publishers, who perform several functions:
- they assemble the various files together (text, charts, figures etc.)
- they select notable papers
- they send them out for peer review and manage the review (e.g. ensuring comments are in on time)
- they edit the copy itself for clarity and do QA
- they apply necessary metadata, get a DOI etc.
- they do layout on the copy so it looks nice
They don't actually perform the peer review; that is done by the researchers.
Then libraries and funders pay publishers for this bundle of services, despite the fact that one chunk (peer review) is performed by the researchers. Funders may also pay extra for OA, and some journals charge direct publication fees in any case.
The impact factor of publications lends a certain authority, and is then used in determining both individual and institutional "worthiness". This necessity of publication drives what I believe the gentleman from Harvard called a "cocaine marketplace" - once you're hooked by publishers, they can keep raising and raising the prices.
Then funders and libraries may pay AGAIN, to have the researcher submit the document to the repository, followed by what is essentially a mini-publishing workflow run by librarians: again assembling the files, again doing QA, applying the necessary metadata and perhaps assigning a handle.
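To make that metadata step concrete, here is a minimal sketch in Python (purely illustrative; the field values and the handle are invented) of the kind of Dublin Core record a repository wraps around a deposited manuscript:

    # Illustrative only: build a simple Dublin Core record for a repository
    # deposit. The field values and handle below are invented for the example.
    from xml.etree.ElementTree import Element, SubElement, tostring

    DC_NS = "http://purl.org/dc/elements/1.1/"

    def dublin_core_record(title, creators, date, identifier):
        record = Element("metadata")
        def add(tag, text):
            SubElement(record, "{%s}%s" % (DC_NS, tag)).text = text
        add("title", title)
        for creator in creators:
            add("creator", creator)
        add("date", date)              # date of deposit or publication
        add("identifier", identifier)  # e.g. a handle or DOI
        add("type", "Text")
        return tostring(record, encoding="unicode")

    print(dublin_core_record("An Example Manuscript", ["Doe, Jane"],
                             "2009-04-01", "http://hdl.handle.net/1234/5678"))

In a real repository this record would typically be exposed for harvesting via OAI-PMH; the point is simply that the step is mechanical and scriptable.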
So: organisations effectively pay researchers to research, to peer review, and to submit to journals,
and then pay the publishers to get their own research back. They may then pay researchers and a number of other people to do a mini-version of this all over again, paying to get their own research back into their own repository. In effect, they pay for the same research two or three times.
THIS PROBLEM IS WELL KNOWN
Talking about this over and over and over again does nothing to solve the problem. In fact I think talking about it any more without providing concrete (preferably implemented) solutions should be banned.
To fix it, you need to:
1) identify what services researchers could self-organise to perform (or be organised by libraries to perform), like peer review
2) identify what services, if any, are still needed from publishers, and pay only for those
3) go directly to public OA publishing of the peer-reviewed manuscript
4) find a new way to measure the "worthiness" of individual researchers' and organisations' output
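On point 4, one family of candidates is researcher-level or article-level metrics rather than journal-level ones. As a hedged illustration (the citation counts below are made up), here is a short Python sketch of the h-index, a simple measure that attaches to a researcher's own articles rather than to the journals they appear in:

    # Illustrative only: the h-index is the largest h such that at least
    # h of a researcher's articles have at least h citations each.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Made-up citation counts for one researcher's articles.
    print(h_index([42, 17, 9, 6, 3, 1, 0]))  # prints 4

Whether any single number can capture "worthiness" is exactly the open question, but measures of this kind at least sidestep the journal impact factor entirely.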
Lots of people have identified this issue; I can point to my own thoughts from three years ago, in my article on peer review for Nature (June 21, 2006).
There are a small number of groups taking a "just do it" approach to solving this problem. There need to be many more. There's also a whole set of related issues: how do you manage versions in this environment, how do you manage discoverability when content may be scattered all over the place, and how do you think about "notability" when a document may be an ongoing, dynamic collaboration rather than a static final publication with a fixed set of authors? And this is without even touching the issues of "data as publication". There's plenty of work to be done.
Great post! It sums up a fairly complicated issue very clearly. Also, I hadn't seen your 2006 paper on peer review before now - thanks for pointing it out, I really enjoyed it!
Lisa
Posted by: Lisa Green | April 02, 2009 at 02:34 PM
Hi Richard, thanks for the nice post. Some brief comments on existing "implementations":
1a) manuscript formatting can be done automatically by means of templates (like those used in TeX or wiki environments, though XML would be preferable); their usability is crucial to adoption by researchers. Metadata creation, and passing it on to providers of unique identifiers for content (e.g. DOI) or content providers (e.g. ResearcherID), can also be automated.
1b) editorial review can be skipped
1c) peer review is already performed by researchers, and I do not see how it could be otherwise. However, I do not see a reason why peer review could not be done after a piece of research has been made public via posting to some suitable destination on the internet (http://arxiv.org/ has been doing this for almost two decades now). An interesting post in this regard is at http://bit.ly/Ppfk.
2) Digital archiving of back issues is an area where publishers may still be helpful, though librarians (and the likes of Google) may do the job, too. Another arena for them might be the creation of multimedia versions of scientific content, as http://jove.com/ are striving to do.
3) Yes, implemented in http://www.doaj.org/ but also http://www.scholarpedia.org/.
4) No suitable implementation that I know of, but I think this is just a matter of time, and at least PLoS are working on it (cf. http://bit.ly/4uiW2r).
Posted by: Daniel Mietchen | April 03, 2009 at 11:59 AM