The latest Outsell Insight (about OA issues, but itself very much for-pay), by Daniel Pollock, has some rather controversial things to say about scientific communication. He starts by discussing the controversy over Declan Butler's article "PLoS stays afloat with bulk publishing", highlighting Timo Hannay's posting about it.
The gist of Pollock's piece, in my reading, is that publishers may have to consider becoming (at most) solely peer-review services, with the resulting articles living in repositories; the benefit is that the articles can then be discovered and mined openly. A few snippets:
Much scholarly communication takes place outside the STM publishers' domain, via conferences, proceedings, data sets and so forth, none of which fit the process of the peer reviewed research article. Scientists have long (always?) been collaborative creatures - and the digital age means that scientists, and science itself, no longer need publishers to handle the distribution and sharing of information.
Perhaps funders are not advocating the open repository as part of some grievance towards STM publishers, but in order to create a new ecosystem in which the next generation of R&D productivity tools can evolve.
Automated knowledge discovery processes require unfettered access to content.... And - to anticipate the common objection - don't think that harboring "the definitive version of the article" is necessary either! Text mining tools are increasingly capable of disambiguating multiple sources...
we issue a warning to the proponents of the peer-reviewed journal article: beware of overstating your value to the process of science! The longer its focus on its narrow part of the scholarly communications process continues, the more the STM publishing community will seem out of touch, and the more likely that - whether it charges for access or not - it will become a prisoner in its own walled garden.
Nature Publishing Group Sets the Cat Amongst the Pigeons of Open Access, But Maybe We're All Missing the Point - by Daniel Pollock, Vice President & Lead Analyst - London, UK - July 18, 2008
I'm afraid the above bits don't really give the full shape of his argument. He makes some compelling points, but I think he is overly optimistic on the attention and technology side of things. First, journals aggregate interesting science: many scientists still very much like having a group of qualified editors and peer reviewers filter the deluge. Secondly, while knowledge discovery requires unfettered machine access to content, I don't see why that necessarily implies unfettered access for humans: you can perfectly well offer an API that lets machines mine full text while still putting up a paywall for human readers. Thirdly, the versions issue is very challenging, and we are a long way from reliable automatic disambiguation and identification of authoritative copies. Finally, many conference proceedings are already peer-reviewed, and we can certainly imagine peer review extending to other areas, such as data sets.
I do think that the idea of the imprimatur of peer review existing outside the journal package is an interesting one; it's one of the concepts I covered in my article "Technical solutions: Evolving peer review for the internet":
An article or blog entry submitted to, and passed by, a stand-alone peer review service might be recorded in a public registry, or be digitally signed as part of the certification process.
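The registry-plus-certification idea in that quote can be sketched in a few lines. This is an assumption-laden toy: a real service would use a public-key signature scheme (e.g. RSA or Ed25519) rather than the HMAC used here to keep the example self-contained, and the registry, key, and function names are all hypothetical.

```python
import hashlib
import hmac

# Toy certification service: hash the accepted manuscript, attach an
# HMAC "certificate" (standing in for a real public-key signature),
# and record the hash in a public registry.

SERVICE_SECRET = b"review-service-signing-key"  # hypothetical key
PUBLIC_REGISTRY: dict[str, str] = {}            # manuscript hash -> certificate

def certify(manuscript: bytes) -> str:
    """Hash the manuscript, sign the hash, and record it publicly."""
    digest = hashlib.sha256(manuscript).hexdigest()
    certificate = hmac.new(SERVICE_SECRET, digest.encode(), "sha256").hexdigest()
    PUBLIC_REGISTRY[digest] = certificate
    return certificate

def verify(manuscript: bytes) -> bool:
    """Check that a copy of the article matches a registered, certified hash."""
    digest = hashlib.sha256(manuscript).hexdigest()
    cert = PUBLIC_REGISTRY.get(digest)
    if cert is None:
        return False
    expected = hmac.new(SERVICE_SECRET, digest.encode(), "sha256").hexdigest()
    return hmac.compare_digest(cert, expected)
```

Anyone holding a copy of the article could then confirm, against the registry, that this exact text passed review, regardless of which repository it was downloaded from.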
As well, I do think the point that publishers have to think more about (and provide services for) the overall system of scholarly communication is well taken, and we have already seen both Nature Publishing Group and Elsevier (amongst others) taking strides in this area.