In the session "Reinventing scientific publication (Web 2.0, 3.0, and their impact on science)" led by James Hendler at SciFoo, one of the items was an idea from Geoffrey Bilder: that publishers could provide a "peer review logo" to be attached to, e.g., blog postings (at this point I am interpreting based on my own understanding), some sort of digital signature indicating that content has been peer reviewed. (I know the list well because, despite having thought about this topic a lot, I'm afraid my major contribution to the evening was transcribing it.) Here is the relevant excerpt, with a sketch of what such a signature might look like just after it:
2) ID, logoing, review status tag, trust mechanisms
- other peer review status
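To make the digital-signature idea concrete, here is a minimal sketch of how such a badge might work, assuming (and this is entirely my assumption, not anything Geoffrey proposed) that a publisher signs the article text with a private key and publishes the corresponding public key. All the names are made up for illustration; I'll use Python's cryptography package:

    # Hypothetical sketch: a publisher signs the text of a reviewed article,
    # and anyone (e.g. a blog reader) can verify the "peer reviewed" claim
    # against the publisher's public key. The names and workflow here are
    # invented for illustration, not an API any publisher actually offers.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The publisher holds the private key and publishes the public key.
    publisher_key = ed25519.Ed25519PrivateKey.generate()
    public_key = publisher_key.public_key()

    # The "logo" would carry (or link to) a signature over the article text.
    article = b"canonical text of the peer-reviewed article"
    badge_signature = publisher_key.sign(article)

    def badge_is_valid(content: bytes, signature: bytes) -> bool:
        """Return True if the signature matches the content under the publisher's key."""
        try:
            public_key.verify(signature, content)
            return True
        except InvalidSignature:
            return False

    print(badge_is_valid(article, badge_signature))           # True
    print(badge_is_valid(b"altered text", badge_signature))   # False

The point of the sketch is just that anyone can check the badge against the publisher's public key, so the trust question reduces to trusting the key itself, which is presumably where the "trust mechanisms" item on the list comes in.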
I wonder if we should make a wiki listing all of the grand (and not-so-grand) challenges of web science communication and discovery, so that people can pick off projects; the SciFoo prototypes list is one angle on this. Of course, in the perpetual-beta web world it's probably faster to just create a wiki than to start a discussion about whether one should be created. It's in that "just do it" spirit that I'm pleased to find a peer review logo initiative is already in the works, although its angle is to indicate that you're writing about a reviewed work, not that your work itself has been reviewed. From Planet SciFoo:
Cognitive Daily - A better way for bloggers to identify peer-reviewed research, by Dave Munger
[we] have decided to work together to develop such an icon, along with a web site where we can link to bloggers who've pledged to use it following the guidelines we develop
via Bora Zivkovic, via Peter Murray-Rust
(it's strange and also good to be blogging now about people that I've finally met)
UPDATE: I do have a vague idea in a similar space, which would be a "repeatability counter".
As I have learned more about peer review, I have come to understand that it has many aspects, but preventing fraud is not one of them. Peer review can help to produce a paper that is well written and has "reasonable" science, but it can't stop a determined fraudster. (This isn't my insight; it comes from a presentation I saw by Adrian Mulligan of Elsevier, "Perceptions and Misperceptions - Attitudes to Peer Review".) What does address fraud, and keeps science progressing, is falsifiability: someone else does the experiment and sees whether they get the same results. Now, I realise there are many different classes of results, but it's interesting that many of these are not publishable, and are maybe not captured in the current system:
- We tried to repeat the experiment, but it failed because we didn't have enough information on the protocol
- We tried to repeat the experiment, but it failed and we think the paper is in error
- We successfully repeated the experiment
- (probably more scenarios I haven't considered)
So I think it would be interesting to have a sort of "results linking service": you would click and get links to all the people who had tried to reproduce the results, with indications of whether or not they succeeded. We use citation count as a rough proxy for this, but it's imperfect, not least because citations carry no semantic tagging, so you don't know whether a paper was cited for being correct or incorrect. I think this kind of experiment linking might add a lot of value to Open Notebook Science and to protocols reporting (whether in the literature, like Nature Protocols, or in a web system, like myExperiment). Otherwise I worry that the sheer amount of raw information in a lab notebook makes it hard to extract value from it. A rough sketch of what such a service might record follows.
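For what it's worth, here is a minimal sketch of the data model I imagine behind such a repeatability counter, assuming a small controlled vocabulary of outcomes matching the scenarios above. Everything here (the names, the DOI-keyed structure, the outcome tags) is purely illustrative:

    # Hypothetical sketch of a "results linking service": reproduction
    # attempts are recorded against a paper's DOI with a semantic outcome
    # tag, so a repeatability counter can be derived instead of relying
    # on raw citation counts. The vocabulary and names are my own invention.
    from collections import defaultdict
    from enum import Enum

    class Outcome(Enum):
        SUCCEEDED = "successfully repeated the experiment"
        FAILED_PROTOCOL = "failed: not enough information on the protocol"
        FAILED_IN_ERROR = "failed: we think the paper is in error"

    # DOI -> list of (reporting lab, outcome) reproduction reports.
    attempts = defaultdict(list)

    def report(doi, lab, outcome):
        """Record one attempt to reproduce the results of a paper."""
        attempts[doi].append((lab, outcome))

    def repeatability_counter(doi):
        """Tally reproduction outcomes for one paper: the counter itself."""
        counts = {outcome: 0 for outcome in Outcome}
        for _lab, outcome in attempts[doi]:
            counts[outcome] += 1
        return counts

    report("10.1000/example.doi", "Lab A", Outcome.SUCCEEDED)
    report("10.1000/example.doi", "Lab B", Outcome.FAILED_PROTOCOL)
    print(repeatability_counter("10.1000/example.doi"))

The interesting part is the semantic outcome tag: unlike a bare citation count, the counter distinguishes "we confirmed this" from "we couldn't even attempt it" from "we think it's wrong".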
UPDATE: Christina in the comments rightly chides me on my loose use of "falsifiability". Basically I'm trying to get at two aspects of testability: 1) is there enough information in the paper to test the author's claims? and 2) what are the results of such a test?
Falsifiability in the Popperian sense is making knowledge claims that are capable of being falsified -- things that are testable, that can be shown right or wrong. Things can be totally wrong but falsifiable (if something has been proven false, then it was clearly falsifiable, as opposed to other things which are "not even wrong")... anyway, I think you want reproducible, not falsifiable.
Posted by: Christina Pikas | August 09, 2007 at 01:33 PM