Why scientists are sometimes wrong

Being viewed as the scientific position on a topic is a highly prized asset for any side in a debate. More often than not, both debating sides will claim to be the science-backed side. In a long-running debate, people generally "prove" this assertion by linking to one or more peer-reviewed papers matching their proffered opinion. This is something the modern online scientific community encourages. You will often hear science folk damn an idea because the person stating it cannot produce a link to any peer-reviewed evidence. This is a perfectly understandable point, and one I have made myself a few times.

Having a peer-reviewed scientific article behind you can slightly raise the credibility of a viewpoint. However, the bar is only raised from something the entirety of science agrees is nonsense to something on which the agreement that it is nonsense is not quite universal. Naturally, on the web this low bar is still very useful and serves as an excellent time-saving pre-discussion spam filter. If you take nothing else from this post, take this: having a supporting peer-reviewed paper does not magically make a viewpoint right or even scientifically valid. Even in a perfectly working peer review system, a published article would prove nothing other than that a couple of scientists happen (or once happened) to believe the research in the paper was valid enough to warrant publication.

In practice, peer review itself has many problems, not least of which is deciding who is a suitable peer to review a paper. This is rarely an easy question, as anyone who has ever had to pick reviewers will tell you. Some of the questions you have to ask when picking reviewers:

  1. Will the person actually do the reviewing within the time period allotted?
  2. Will the person do it properly or will they, on the last day, glance over it and lazily declare it as fine?
  3. Will the person know enough about the method of analysis or experimental method employed to ensure they were carried out correctly?
  4. Will the person understand the authors' stated conclusions and the possibly unstated implications these might have?
  5. Will the person have enough background knowledge in this field to know if this is really "novel"1 work?
  6. Is the work to be reviewed very similar to the reviewer's own work program or interests? Do they have a conflict of interest in holding up this work or, alternatively, in promoting the field?
  7. Does the person have any arguments with the authors or their institutes? Have they had any in the past, or are they likely to be influenced by someone who has?
  8. Can the person accept when they are wrong, or will they hold out for the sake of pride?
  9. Can the person actually provide constructive criticism, or will they be hurtful and petty?

For any paper it is almost impossible to find the ideal reviewer. Some selection criteria, for example 5 and 6 above, are pretty close to mutually exclusive. To help in this impossible task, many journals allow the submitting authors to suggest suitable reviewers2. Sometimes the journals themselves help by maintaining private online guides of reviewers, so that the people tasked with finding them (normally assistant editors) can select appropriate individuals for the submitted papers. These guides vary from journal to journal and range from a simple list of key skills entered by each reviewer to short synopses revealing major personality defects and warning of potential issues, e.g. "Dr. X: very confrontational. Should not review articles by Prof. Y or his group due to a past heated exchange."

So how does an assistant editor find suitable reviewers? Generally they will compromise and pick the best of the available choices laid before them, hoping that the selected candidates' scientific integrity will overcome the other issues that might affect their reviewing performance. They then fire off an email to ask the chosen candidates. With luck the candidates will accept; if not, they may suggest someone they think is better3. If the editor draws a total blank, they will select the next best candidates and continue doing so, getting increasingly desperate and less picky, until someone finally agrees.

So there is always a chance that any given paper will be reviewed and accepted by people who are quite simply inappropriate for the task. The result is that, due to these known flaws in the system, good papers can get rejected and papers containing flawed science can sneak into peer-reviewed journals. Given the vast number of papers published every week, it is a certainty that a handful of truly awful ones will get in. It is very difficult to monitor the quality of reviewing because of its anonymous nature and the sheer volume of papers submitted, and there is very little incentive for time-pressed reviewers to do a good job. The more prestigious the journal, the more effort the reviewers and editors tend to put in, but this is merely a tendency and mistakes are made at all levels.

Bad reviewers are far from the only weakness in peer review. Even with perfect reviewing, undescribed experimental mistakes, unrepresentative sampling and even out-and-out fraud are almost impossible for reviewers to detect. The occasionally suggested notion that reviewers could test worrying results by replicating experiments in their own lab within the reviewing time frame is utter nonsense (the reviewer's limited time, money and equipment being the main barriers).

In theory a bad paper that sneaks by should be detected by the readership and trigger damning replies or comment pieces, but (given the scale of the problem) this rarely happens, for three main reasons4. Firstly, bad papers are often instantly dismissed and ignored, effectively vanishing silently into the journal archives and causing little real damage on the way. Drawing attention to them by flagging them with a reply may well actually increase their impact. Secondly, public criticism of other scientists is actually quite risky for your career, especially if you are a young scientist. Unlike reviews, comment articles are published and are not generally anonymous. If you comment on a paper and turn out to be even slightly wrong, God help you: the chances are quite high that the original author will use their right of reply to gleefully highlight your mistake5. Even if you are entirely right, the chances of you ever working with the original authors or their friends just went down. Odd as it may seem, scientists are in fact people; try going around pointing out your colleagues' professional mistakes in public and see how many thank you and ask to work with you more in future. Finally, especially if the problem is a serious experimental error, you can often turn your comment into an independent paper that merely references the original, thus setting the record straight and giving you an extra full paper.

The long and short of this is that there are thousands of uncommented peer-reviewed papers stating mutually exclusive results, and obviously some of them are simply wrong. This is abundantly obvious to most working scientists, as they will know of an ongoing "open issue" in their own field. Go to pretty much any decent-sized scientific conference and you are almost guaranteed to hear a heated debate between two senior scientists. Hopefully (though sadly not always) these are not emotional, opinionated attacks but rather constructive attempts to convince their opponent of the rationality of their viewpoint using the scientific evidence they have gathered.

The thing to realise is that both arguing scientists may well be logical, reasonable6, backed by peer-reviewed science and, crucially, entirely wrong. The peer-reviewed evidence they formed their opinions on is, as I have explained above, imperfect and sometimes wrong, and so consequently are the opinions. My main point is this: no scientist, no matter how senior or decorated, is all of science, nor can they speak for science. No scientist is always right; we are all human and make mistakes. This harsh truth is why one of the most celebrated scientific societies, the Royal Society, has the motto "Nullius in verba": take no one's word for it.

There is in fact only one thing that matters in any scientific field: the scientific consensus. In part two I will expand on exactly what this is and why it is so crucial to scientific thinking.



Footnotes

  1. You can argue very solidly that scientific papers shouldn't have to be novel. However, many journals explicitly state that they will only publish novel, non-incremental work.
  2. You do not have to be a genius to notice that this particular system is massively open to abuse.
  3. Often their own, completely unknown, new PhD student or postdoc. After all, they can read up on whatever they do not already know, right? Yes this is bad, and yes it does happen.
  4. These are also the reasons why the comment/reply pieces that do appear tend to be on papers that have received a bit of publicity, and are generally written by well-established scientists.
  5. Sometimes these exchanges can become really quite petty and nasty. (The authors are pretty much guaranteed to be in your field. Hence the folk you have annoyed may well be the folk anonymously doing your next paper's review. How do you reckon that will go? Remember item 7 in the ideal reviewer list?)
  6. Of course, some may not be logical or may have hidden agendas, but we can ignore those "bad scientists" for now.