Why do errors exist in peer reviewed publications?

People make mistakes.

Manuscript authors, reviewers, and editors are people, and people are not perfect. Even if every person involved in the publication of a manuscript catches 99% of all errors, it's still possible that some errors will go unnoticed. This likelihood of course goes up when authors or reviewers are careless, but it can never be eliminated entirely, even with very meticulous review.


In my experience reviewing papers, it is virtually impossible to scrutinise every aspect of a submission. This is particularly true in fields where the supplementary methods run to hundreds of pages and the code may be tens of thousands of lines long. It is unlikely that all reviewers have both the in-depth knowledge of the specific subfield and the enormous amount of time required to properly evaluate everything in the paper. The problem is worst in fields that are relatively new and ground-breaking, where a pool of capable reviewers simply does not yet exist.

For example, I recently reviewed a paper for a top journal (Nature Genetics) that involved a complex and quite specialised piece of software for genetic analysis. There was no way I could evaluate every line of code the authors had written, so no reviewer could reasonably be expected to detect bugs and errors in it. We have to look at the results presented and their justification, and from those infer that the code works as the authors say it does. A rather large amount of faith is placed in the authors that their code does what they claim. I could have spent a whole month reviewing the paper non-stop, and I still imagine minor things would have slipped by.

This issue is compounded by the facts that (a) reviewers don't get paid, (b) they are often very overworked, and (c) there isn't much motivation beyond professionalism and academic ideals (although these can of course be strong motivating factors) to spend a large amount of time meticulously reviewing papers.

tl;dr: Reviewers mostly do the best job they can, but given the poor structure under which most peer review is performed, I certainly wouldn't treat a peer review as a cast-iron certificate that everything in the paper is correct. Fortunately, the vast majority of people in academia are honest and don't try to deceive readers or reviewers.


Ideally, this would not happen, but it is nearly unavoidable.

For instance, if you are refereeing a proof, chances are you won't read it line by line. In fact, when I can only read a mathematical argument that way, it is because I have not yet understood it, and reading that way is a really bad method for verifying global correctness, coherence, or originality. A lot may be lost, and even if everything is correct, at the end it may be hard to say with any certainty how flexible the argument is, whether all the assumptions are truly needed, or whether some steps are really as difficult as depicted, and why. Usually we read modularly, trying to grasp the global structure of the argument while using lemmas and the like as black boxes. Only when that structure makes sense do we proceed to the lemmas. Or we may not even read the proofs of the lemmas, because we see how to prove them ourselves. Now, if one such proof has a problem as written in the paper, there is a chance we will miss it, because we see how to prove the lemma anyway. Some referees do not care much about mistakes at this level, since they are easily fixable; what matters more is that the overall argument is sturdy.

It would be much better, overall, if papers included extended discussions of motivation, intuition, proof strategies, and so on. People reading them would find the arguments easier to digest, and the possibility of missing a mistake would be reduced. But technical writing is difficult to begin with, some journals have page limits, and sometimes there are time constraints (related to tenure or promotion considerations, for example) that limit the time authors can spend on such additions. Of course, not having such remarks throughout the paper may make it difficult for referees and other readers to grasp some of the details, which in turn may lead to missed mistakes.

Ideally, we as referees should read the paper several times, at least once line by line, but we are rarely in a position to devote so much time to the process. When I can afford the time, I may even comment on typos or style, though I much prefer that the substance of my comments concern the mathematics of the paper and its potential for generalizations, extensions, or connections with other work. A few happy times I've seen how to improve some of the proofs presented in the paper, but I imagine I have missed important details as well.

Of course, it may well be that a paper has an error that is not at the level of a typo or a lemma not quite proved as it should be. An error may be significant and we may still miss it. Sometimes we find an argument similar to something we are familiar with, and skip verifying details we expect to be routine, and end up missing something serious. Or we misunderstand. It is really not that uncommon or surprising.

Papers are not written in formal languages that are machine-verifiable, though some people argue that they should be. Whatever the case, currently most of our proofs are conversational, and technicalities may be omitted on occasion. Many papers are very dense, and it takes years of careful examination by many people to detect flaws, gaps, or genuine mistakes. Peer review is not meant to be a perfect guarantee of correctness, and it is a mistake to think of it as having that goal.
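For a sense of the contrast, here is what a machine-verifiable proof looks like in a proof assistant (Lean 4 is used here purely as an illustration): the kernel checks every step, so a gap that a human reader might gloss over simply fails to compile.

```lean
-- A fully machine-checked theorem in Lean 4.
-- Unlike a conversational proof, no step can be "left to the reader":
-- if the proof term does not type-check, the kernel rejects it.
theorem example_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The price of this certainty is effort: formalising even a routine argument at this level of detail can take far longer than writing the conversational version, which is part of why most published proofs remain informal.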

Here is a quote regarding the refereeing process of the journal Discrete Analysis, highlighting precisely this last point:

In some cases, it is not reasonable to expect a reviewer to check the correctness of a paper down to the last detail. In such cases, editors may be satisfied with indirect evidence that a paper is likely to be correct. (For example, it may be that the general outline of the argument is convincing, but that the technical details involved in converting the outline into a complete proof are very complicated.) Thus, publication in Discrete Analysis should not be considered an absolute guarantee of correctness, just as in practice it is not a guarantee for any other journal.