My opinion

By Prof. Eyal Shahar
Corresponding Author Prof. Eyal Shahar
University of Arizona, Arizona, United States of America 85724
Submitting Author Prof. Eyal Shahar


Shahar E. On Peer-Review and the Publication Machinery. WebmedCentral MISCELLANEOUS 2011;2(12):WMC002679
doi: 10.9754/journal.wmc.2011.002679

This is an open-access article distributed under the terms of the Creative Commons Attribution License(CC-BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Submitted on: 18 Dec 2011 03:13:20 PM GMT
Published on: 19 Dec 2011 03:42:19 PM GMT


"...that an opinion has been widely held is no evidence whatever that it is not utterly absurd." - Bertrand Russell (1929)
I vividly remember my first rejected article. The year was 1988, the pre-email era, and the big brown envelope arrived in the mail. It contained one copy of the manuscript, three hostile reviews, and a boilerplate rejection letter. They did not like my study on the effect of strict (versus routine) cleansing of the venipuncture site on contaminated blood cultures [1]. And I did not like their reviews. I told my mentor, a senior professor, that the reviews had no merit and that the peer-review process was unfair. He said that I was too young to express an opinion on the matter.
About 150 publications later, I am bald enough and bold enough to repeat the same statements in public. The reviews had no merit, the process was unfair, and that happened again many times in the following years. Not very long ago, I also felt secure enough to publish a harsh critique of common editorial practice [2]. But now I have a clearer picture of science and statistics. For instance, I know that my first rejected article contained a useless large P-value (no, my reviewers didn’t know that), and that null hypothesis testing has little to do with science—even when the p-value is small.
In this fairly short article, I will state several truths about the publication machinery. If you bear with me through the end, I will also offer a substitute for the peer-review idol. To avoid the annoying qualifiers “mostly”, “generally”, “usually”, and the like, I will (usually) make sweeping generalizations. Feel free to insert the qualifiers yourself.
Almost everything gets published these days—somewhere: on the printed pages of a bound journal or on electronic pages in cyberspace. Traditional peer review, formerly portrayed as a gatekeeper, can no longer be claimed to play the gatekeeping role. Nor do thoughtful scientists need a gatekeeper anymore. They run a PubMed search or a Google search on keywords, and skim articles in their areas of interest, regardless of where they were published. If you are interested in the effect of the menopause on carpal tunnel syndrome, you will look for relevant text everywhere. Then, you will sort the information yourself, as well as you can.
These days, peer review mainly serves the battle among journals over their rankings, and sometimes over their profits, too. Intentionally or unintentionally, all of us help them declare winners and losers. We worship journals according to the percentage of articles that pass through their filters of peer review, and we worship their articles accordingly ("She has a paper in XXXX!"). That's a simple-minded belief. Bad science spares no journal, no matter how dense the filter is; good science is not restricted to certain journals; and the journal's "impact factor" (cheap rhetoric, by the way) is not an indicator of the quality of any of its articles [2-4]. Ironically perhaps, big-name journals often reject remarkable breakthroughs in science, because a breakthrough is not only difficult to make, but also difficult to recognize. (No wonder Nobel prizes are awarded decades after the discoveries are made.)
Publishers of journals thrive on reputation, an idea taken from the business world: nothing to cherish, from the consumer's viewpoint. The brand name is considered to be good because many customers thought it was good. And why did many customers think it was good? Because the brand name was considered to be good. But is the article you are reading in this big-name journal really good? Not necessarily. Moreover, just like any business, a journal can be moved from the top of the list to the bottom of the list in the blink of an eye. What would happen to a journal if all of us decided not to send any more papers its way? The publisher would literally be forced to shut its doors.
No longer do we need to accept self-promoting apologies about printing space (“We receive many meritorious manuscripts and can publish only 15% of them”). New online journals pop up all the time, and they all beg us to submit our papers, each explaining why it is the best game in town. A seller’s market has turned into a buyer’s market.
Biomedical scientists are not genuinely interested in deciding on the merit of an article before it is published, or in "helping to improve it" before, or after, it is published. Editors drag them to the task, and they reluctantly agree, some more often than others. "Peer review is a service to science", according to lip service, but if science is the beneficiary, you must be eager to read that new paper in your area of expertise before others do. No, you are not. Why not? Because chances are that it contains zero to minuscule knowledge. The article looks like science and reads like science, but no big discovery is waiting for you there. Inborn hostility aside, it makes no difference to you whether it is published somewhere or not. You just help an editor play the pseudo-important peer-review game—as all of us do sooner or later.
I dislike dishonest language, and my list of dishonest words includes the pair "peer-reviewer". First, what makes someone my peer and what makes me someone else's peer? What are the criteria to distinguish a peer from a non-peer in science? There are no good criteria. Publishing on the same topic does not make two people peers: one may be a good scientist and the other a bad one. One may have studied research methods in depth, and the other may have no more than a casual understanding of research design. It is the reputation game all over again. If you have published enough, you have got it.
Second, a reader of an article, before publication or after publication, is not a reviewer. He or she is a reader. The word “reviewer” unjustly elevates one reader above the author (and other readers), but the honor does not last very long. One day A is invited to be a superior reviewer and B assumes the role of an inferior author, whereas a day later the titles are swapped: B is invited to be a superior reviewer and A drops to the level of an inferior author. Who invented that rhetorical nonsense (and I don’t mean abusing the verb “invite”)? Let’s think for a moment: If you are so incompetent (and biased) that you are unable to judge the merit of your own science, how could you be competent enough (and unbiased enough) to judge the merit of others’ science? Do you honestly accept the duality of competent reviewer and incompetent author within the mind of a single scientist? I don’t.
WebmedCentral took a big step in the right direction. No more anonymous, unpublished hostile critiques that restrict your academic freedom (in the name of science!) and help journals fight over their rankings, prestige, and profits. But more may be done. I don't want to read cursory reader's notes that took an hour or two to write, accompanied by check marks ("justified conclusions?", "satisfactory length?", "adequate references?"). If you really care to comment on a published article, don't wait for an invitation. Sit down and take the time to write an article, long or short, that criticizes or praises that publication. Invest in your writing as much time as the author has invested, and be ready to read another article that criticizes your writing, and so on. The scientific exchange takes place between carefully written arguments—sometimes years apart [5, 6]—not between an article and comments and check marks. Nor must it end with a one-time "author's reply", as one editor insisted when I tried to correct another author's mistake. (He must have forgotten that living scientists sometimes argue with dead authors.)
WebmedCentral took a step in that direction too: It is called "Post Publication Peer Review of Published Literature". (Inhale!) I hope the new category will eventually replace the current model of both pre-publication and post-publication critique of an article. And some day perhaps, the words "Peer Review" will also be replaced with something better. To show you what I had in mind when I suggested adding that category, here are several examples of mine, published under various titles: discussion paper [6], article [7], commentary [8, 9], essay review [10], and letters to the editor [11, 12]. They are all "articles on articles" (or on books): non-anonymous critiques of published scientific literature. Whether right or wrong, short or long, each of them took a lot more thinking time and writing time than you and I typically devote to traditional peer review. I am looking forward to reading similar serious exchanges on WebmedCentral.
Finally, what is the role of a publishing house? Is it merely a venue for housing anything that a scientist or a physician wishes to publish? Does any signed clutter of words deserve to be published in the name of academic freedom?
Of course not. A publishing house that wishes to earn the readers’ respect should include two crucial positions: First, an editor who would weed out text that is simply too bad to be published, the kind of text that readers would not want to read: poor writing and poor expression of thought. Second, a copy editor who would correct embarrassing errors and suggest (not impose) grammatical improvements. Readers expect a minimal bar from any publisher; they will not respect a publishing house that does not show them that minimal kind of respect.


1. Shahar E, Wohl-Gottesman BS, Shenkman L. Contamination of blood cultures during venepuncture: fact or myth? Postgrad Med J 1990;66:1053-8.
2. Shahar E. On editorial practice and peer review. Journal of Evaluation in Clinical Practice 2007;13:699-701.
3. Shahar E, Pankow JS, McGovern PG. Restriction of fat in the diet of infants. Lancet 1995;345:1116-7.
4. Shahar E. Your letter failed to win a place. Br Med J 1997;315:1608-9.
5. Robins JM, Hernán MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology 2000;11:550-60.
6. Shahar E, Shahar DJ. Marginal structural models: much ado about (almost) nothing. Journal of Evaluation in Clinical Practice Epub Aug 24, 2011.
7. Shahar E. Estimating causal parameters without target populations. Journal of Evaluation in Clinical Practice 2007;13:814-6.
8. Shahar E. Commentary: interpreting the interpretation. Journal of Evaluation in Clinical Practice 2007;13:693-4.
9. Shahar E. A method to detect an unknown confounder: something from nothing? Journal of Evaluation in Clinical Practice Epub March 13, 2011.
10. Shahar E. Research and medicine: human conjectures at every turn. The International Journal of Person Centered Medicine 2011;1(2):250-3.
11. Shahar E, Shahar DJ. On the definition of effect modification. Epidemiology 2010;21:587.
12. Shahar E, Shahar DJ. More on selection bias (submitted and proofread under the title "On the hazard ratio and disease-related colliding bias"). Epidemiology 2010;21:429-30.

Source(s) of Funding

Did someone pay me to write this piece? No.
Am I paid in my academic position? Yes.
Was my writing paid by some “research grant”? No.

Competing Interests

WebmedCentral advisory board member. I think that my interests would be best served by not publishing this article.



What is article Popularity?

Article popularity is calculated from the points an article has accumulated and its age:

Popularity = (P - 1) / (T + 2)^1.5

P is the sum of the individual weighted scores for article views, downloads, comments, and reviews:

Score            Weight
View points      x 1
Download points  x 2
Comment points   x 5
Review points    x 10

P = (View points x 1) + (Download points x 2) + (Comment points x 5) + (Review points x 10)

T is the time since submission in hours. One point is subtracted from P to discount the submitter's own vote, and the age factor (T + 2)^1.5 decays the score as the article gets older.
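As an illustration, the popularity formula above can be sketched as follows (the function and parameter names are my own; this is not WebmedCentral's actual code, only a reading of the stated formula):

```python
def popularity(views, downloads, comments, reviews, hours_since_submission):
    """Sketch of: Popularity = (P - 1) / (T + 2)^1.5.

    P weights each activity type (views x1, downloads x2, comments x5,
    reviews x10); subtracting 1 discounts the submitter's own vote, and
    the (T + 2)^1.5 age factor decays the score over time.
    """
    points = views * 1 + downloads * 2 + comments * 5 + reviews * 10
    return (points - 1) / (hours_since_submission + 2) ** 1.5
```

For example, an article with 10 views, 5 downloads, 2 comments, and 1 review, submitted 2 hours ago, accumulates P = 40 points and scores (40 - 1) / 4^1.5 = 4.875.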

How Article Quality Works?

Authors, readers, reviewers, and WMC editors can rate each article; these ratings determine its feedback score. In most cases, articles receive ratings in the range of 0 to 10, and the article quality is the average of all ratings received:

Quality = Average(Author/Reader ratings, Reviewer ratings, WMC Editor ratings)
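A corresponding sketch of the quality score, assuming all ratings are pooled into a single plain average (names and the empty-case behavior are my own assumptions):

```python
def quality(ratings):
    """Average of all ratings (author/reader, reviewer, and WMC editor),
    each typically on a 0-to-10 scale. Returns 0.0 if no ratings exist."""
    return sum(ratings) / len(ratings) if ratings else 0.0
```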