My brief opinion piece "Technical solutions: Evolving peer review for the internet" has been posted to the Nature peer review debate. Nature (2006) | doi:10.1038/nature04997
You can leave your feedback about it on the Peer Review Debate Comments blog section.
I have supplementary bookmarks at http://www.connotea.org/user/scilib/tag/peerfocus
I want to explore some of the concepts in the article in a bit more depth.
Also at the end of this posting you will find some acknowledgements, as many people contributed to this article.
There were three main points I made:
- the Wikipedia "Wisdom of Crowds" approach is not a replacement for peer review
- peer review is a service that is currently bundled up with journal publication, but it could in theory be separated out
- peer review doesn't give us any information about the article other than "this is ok" - there are many additional metrics made possible by technology that can help to rank articles; such rankings may become part of overall systems of discovery and peer discussion around articles
There is a fourth item which was part of my original charge for the article, but which I didn't really cover:
- in what ways can online peer discussions contribute to peer review?
Taking these one-by-one.
1. I think we have to be very careful about the Wisdom of Crowds idea.
In Technical solutions: Wisdom of the crowds, Chris Anderson explored a lot of these issues.
One of my main concerns is that Wisdom of Crowds is sometimes oversold, in the way that Service-Oriented Architecture (SOA) is. Just put together a system, sprinkle some magic Wisdom of Crowds dust on it, and hey presto, the system is continuously improved by everyone who uses it.
The stark reality of Wikipedia and of open-source projects is that, in general, a very small core group of people, operating under quite a strict set of management and controls, drives the creation and growth of the system. If you look in depth at the stats on how many people make major, ongoing contributions to Wikipedia, I think you will find the core group is quite small. Similarly, if you look at the open-source projects on SourceForge, you will discover most of them are one-man shows, or tiny teams. Linux, the poster child for open source, is really driven by quite a small technical team of experts at its core.
2. Since one of my primary areas of activity is in Service-Oriented Architecture for library technology, I really like the idea of applying service thinking to the scientific workflow. Herbert Van de Sompel explores this in Technical solutions: Certification in a digital era.
The way I think about it is that currently when an article is submitted to a journal, you get peer review as a bundled service that is part of the publication workflow. I don't see any reason why this couldn't be a separate, stand-alone service: I write my article, I submit it to the Elsevier Peer Review Service, and it comes back certified as passing peer review.
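To make the unbundling idea a little more concrete, here is a minimal sketch in Python of what a stand-alone certification service might look like. Everything in it is assumption and illustration: the class names, the "two accepts" rule, and the "Example Review Service" are mine, not any real publisher's system.

```python
# Hypothetical sketch only: a stand-alone peer review service, decoupled from
# journal publication. All names and rules here are illustrative assumptions,
# not any real publisher's API.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Review:
    reviewer: str          # anonymized reviewer identifier
    recommendation: str    # e.g. "accept", "revise", "reject"
    comments: str


@dataclass
class Certification:
    manuscript_id: str
    service: str           # which review service issued the certificate
    outcome: str           # "certified" or "not certified"
    issued: date
    reviews: list[Review] = field(default_factory=list)


class PeerReviewService:
    """A review service that certifies manuscripts independently of any journal."""

    def __init__(self, name: str, min_accepts: int = 2):
        self.name = name
        self.min_accepts = min_accepts

    def certify(self, manuscript_id: str, reviews: list[Review]) -> Certification:
        # Certify only if enough reviewers recommend acceptance.
        accepts = sum(1 for r in reviews if r.recommendation == "accept")
        outcome = "certified" if accepts >= self.min_accepts else "not certified"
        return Certification(manuscript_id, self.name, outcome, date.today(), reviews)


# The author, funder, or repository pays whichever service they choose;
# the resulting certificate can travel with the article to any repository.
service = PeerReviewService("Example Review Service")
cert = service.certify("ms-2006-001", [
    Review("reviewer-1", "accept", "Sound methods."),
    Review("reviewer-2", "accept", "Minor revisions suggested."),
])
print(cert.outcome)
```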
The logical question then is: who pays? I think you have the same options as now. In the author-pays (or "author's research funder" pays) model, you pay up front to cover the costs of peer review (which are mostly administrative: contacting reviewers, gathering their feedback, managing the communications). In the library-pays (or repository-pays) model, the organization that is going to be the primary host for the article pays to have it certified, in order to maintain the quality of the contents of its repository. I realize, of course, that I am glossing over a lot of complexity. One may certainly wonder about the possibility of "buying" peer review - in theory the market might drive business to services with low certification standards. But I think scientific reputation militates against this: certification from a well-established service is going to carry the most weight.
In any case, I think this is an idea that should be explored, as we try to understand how scientific communication is evolving: does it become article-centric? repository-centric? publisher-centric? author-centric?
Peer review is, in my opinion, central to maintaining the quality of science. But science is also about communication and about advancing our knowledge. An article may be wonderful and receive an excellent peer review, but if it doesn't lead to further discussion and additional work, is it contributing substantially to science as a whole? This is one area where the Internet may be able to shine, but it is also an area of huge challenges.
One can certainly imagine a dream research workflow environment where, as you work, relevant articles appear in your workspace. Making that happen is very difficult. What counts as most relevant: articles with a high number of journal citations? Articles linked to by a lot of web sites? Articles being discussed in science blogs? Articles assigned three or more stars by your science peers?
That being said, discovering relevant information is one of the key challenges of our age, as we drown in the Internet flood. We need to create tools that enable communities to discover the information of relevance to them. An article that is tremendously helpful to an ordinary citizen suffering from cancer may be of less relevance to a leading-edge cancer researcher. The best scientific resources for teaching about cancer at a high-school level may not be the same as those for a graduate course. The underlying articles may all be excellent peer-reviewed science, but their relevance is highly contextual.
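To make the contextual-relevance point concrete, here is a toy Python sketch of combining several such signals into a single ranking score. The signal names, numbers, and weights are all invented assumptions; the point is only that the same two articles rank differently depending on whose weights you apply, and choosing those weights is exactly the hard part.

```python
# Illustrative sketch only: combining several "relevance" signals into one
# ranking score. The signals and weights are arbitrary assumptions; the real
# question is which signals matter for which community.

def relevance_score(article: dict, weights: dict) -> float:
    """Weighted sum of whatever signals we happen to have for an article."""
    return sum(weights.get(signal, 0.0) * value
               for signal, value in article["signals"].items())

articles = [
    {"title": "Article A",
     "signals": {"journal_citations": 40, "inbound_links": 5,
                 "blog_mentions": 1, "peer_stars": 2}},
    {"title": "Article B",
     "signals": {"journal_citations": 3, "inbound_links": 120,
                 "blog_mentions": 15, "peer_stars": 4}},
]

# A researcher's view might weight citations heavily; a teacher's view might
# weight peer star ratings instead. Same articles, different rankings.
researcher_weights = {"journal_citations": 1.0, "inbound_links": 0.1,
                      "blog_mentions": 0.2, "peer_stars": 0.5}
teacher_weights = {"journal_citations": 0.1, "inbound_links": 0.1,
                   "blog_mentions": 0.5, "peer_stars": 2.0}

for w in (researcher_weights, teacher_weights):
    ranked = sorted(articles, key=lambda a: relevance_score(a, w), reverse=True)
    print([a["title"] for a in ranked])
```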
4. Discussion lists, discussion groups, discussion boards, repository discussions etc.
In part, this is an extension of the article discovery topic above, because presumably one reason you want to discover articles is in order to discuss them. I deliberately avoided any detailed coverage of the discussion topic in my opinion piece, because it would have led me into a multi-paragraph morass around moderation. Conducting discussions that are both open and relevant is a huge challenge in an Internet environment. Even back in the original USENET days, when it was primarily university students contributing, a group like sci.physics was a chaotic mess of useful peer communication, frivolity, and pointless flamewars. And that was with a tiny, specialized, self-selected community. Now try to extend it to thousands or millions of people.
This is a really hard problem. The Internet is as prone to the Foolishness of Crowds as it is to the occasional bursts of Wisdom. As I write this, Science Blogs lists both Stephen Hawking and Ann Coulter among its most-discussed topics.
I have to say, Stephen Hawking and Ann Coulter (a US right-wing pundit, for those fortunate enough not to know) are not two people one would often find in the same company, and Ms. Coulter may be many things, but I think even her most partisan admirers would not count scientist amongst them. This opens up a whole side discussion about ranking: popularity measures the most-discussed, and in the US the most-discussed may have more to do with heated political debate than with science. Another example: for a while, a top "most bookmarked" article on delicious and Furl was Scientists respond to Gore's warnings of climate catastrophe, even meriting a Slashdot article.
The article reads as if it is a serious scientific counterpoint to Gore's statements, and many of the comments on delicious reflect this. On Slashdot on the other hand, the top rated comments converged on the information that this was in fact a bit of partisan political hackery, cooked up by a consultant whose funding comes from oil companies.
Peer review would, I think, have immediately rejected this article. Peer discussion... well, it's all over the map. On Slashdot, high-ranked comments can bubble to the top as part of a sophisticated moderation system. In my Slashdot view, it doesn't take much scrolling down to reach:
And Who Happens to Fund the Article's Author?
(Score:5, Informative)
by goMac2500 (741295) on 17:46 Wednesday 14 June 2006 (#15535430)
Why Exxon Mobile of course!
del.icio.us and Furl have no such moderation systems for comments. So on delicious we have comments expressing enthusiasm for the article such as
"Actual Climatologists respond to Gore's warnings of climate catastrophe"
"the real climate experts weigh in on Al Gore's mis-representation and distortion of climate science"
as well as some negative comments
"right wing website trashing gore"
On Furl, the comments are pretty negative about the article
"blabbering about science without offering anything legitimate. What crap."
"how does this bullshit get written?"
Three different communities, three different comment systems, three different views of the article.
If you read delicious, you might get the impression that the article was credible. Reading Slashdot and Furl, you would get the (more accurate) impression that the article is highly biased and its science is dubious at best.
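The mechanism behind that difference can be reduced to something very simple. Here is a toy Python sketch, loosely modelled on Slashdot-style score-and-threshold browsing (the comments and scores below are invented, and this is not Slashdot's actual algorithm): a community filter changes which view of the article a reader gets, while an unmoderated list gives every remark equal prominence.

```python
# Toy sketch of the difference a moderation system makes. Comments and scores
# are invented; the scheme loosely mimics Slashdot-style ratings, where
# readers browse at a minimum score threshold.

comments = [
    {"text": "Actual climatologists respond to Gore", "score": 1},
    {"text": "And who happens to fund the article's author? Exxon.", "score": 5},
    {"text": "right wing website trashing gore", "score": 2},
    {"text": "What crap.", "score": 0},
]

def moderated_view(comments, threshold):
    """Slashdot-style view: only comments at or above the threshold, best first."""
    visible = [c for c in comments if c["score"] >= threshold]
    return sorted(visible, key=lambda c: c["score"], reverse=True)

def unmoderated_view(comments):
    """delicious/Furl-style view: everything, in the order it was posted."""
    return list(comments)

# Browsing at threshold 3 surfaces the funding disclosure first;
# the unmoderated view gives equal prominence to every remark.
for c in moderated_view(comments, threshold=3):
    print(c["score"], c["text"])
```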
What's next?
I think it would be very interesting to explore the idea of controlled social discussion spaces for science - MySpace for scientists? Stay tuned...
I am deeply indebted to a number of people for in-depth background discussions that helped me shape this article.
They are:
- Dr. Bruce Dancik, Editor-in-Chief of NRC Research Press, about peer review in general
- Cathy Norton, Library Director, MBLWHOI Library, and Director of Information Systems, Marine Biological Laboratory, Woods Hole Oceanographic Institution, and David Remsen, Information Systems Program Developer, Marine Biological Laboratory, Woods Hole Oceanographic Institution, about the uBioRSS tool (part of the uBio project)
- Carl Lagoze, Senior Research Associate, Cornell Computing and Information Science, and Sandy Payette, Researcher and Co-Director of the Fedora Project, Cornell Information Science, about the Pathways project
- Euan Adie, bioinformatician, University of Edinburgh, about the PostGenomic tool
My organization, NRC CISTI, Canada's National Science Library, provided me with time to work on this article.
I also had lots of help from Nature:
Thanks to Sian Lewis (Ichabod is itchy) for her help shepherding my article through the Nature publication process, and to Maxine Clarke for her key role in providing me with this opportunity.
Previously:
June 15, 2006 new articles up in Nature Peer Review Debate
June 05, 2006 the Nature of peer review
Very interesting post, Richard, and thank you for the generous acknowledgement.
I think your observations about the "wisdom of the crowds" models are pertinent. One could also extrapolate to citation analysis. (Many already have, of course.) Google Scholar ranks its returns on the basis of how many people have cited the articles matching your keyword. The whole citation business is based on numbers of cites. Yet, as has been pointed out by many before me here, what of the reason for the cite --- is the paper wrong? controversial? etc.
I look forward to the MySpace thoughts. I also agree with your instincts about wikis; maybe that is also a way to go --- a sort of cross between social moderation and a core group of peer review. This is one thing that the Nature experiment may help to provide some evidence on, either way: what is the "value" of free comments that anyone can post on a submitted manuscript, compared with those that an editor obtains from two or three selected peer reviewers (the editors and reviewers forming the equivalent of your small wiki core group of experts)?
But of course, who has time to spend on making insightful comments on "submitted" (un-peer-reviewed) papers? A scientist who has committed to review a paper for a journal is one thing, but would people be prepared to provide anything like that deep level of analysis spontaneously? With the number of articles submitted, and indeed published, every year, it is hard to see it happening for all but the most exciting few. Shades of a circular argument beginning to develop here!
Slight side issue, but Chris Anderson's upcoming book "The Long Tail", which I haven't yet read, may go into the aspect of peer review/certification to which you allude above from a slightly different angle: that many papers, even in the most prestigious journals, don't get cited (or read). This is relevant to your point about "value" if a paper does not lead to, or develop further, a line of research. Does this all mean that there are too many papers being published? Or does it mean that people need an indication that they only need to bother reading 5 per cent of the published literature? Or something else?
Posted by: Maxine | June 27, 2006 at 08:11 AM