How does peer review detect cheating when replicating a study isn't an option?

I take issue with your first statement, that for a "...study to be accepted as 'truth' the standard process is peer review". The purpose of peer review is not to judge what is true or not, but to evaluate (simply put) whether a study is well conducted, uses adequate methods, acknowledges relevant previous research, and draws conclusions that are supported by the data and analysis. Another way of putting this might be that peer review is about validation of the claims made, not about validation of truth. Some other thoughts and guidelines on the purpose of peer review can be found here: on scope and responsibilities, from PNAS (see "Peer Reviewer Instructions" and "Reviewer responsibilities"), and a relatively clear statement on the scope of peer review, from The Royal Society (see "Reviewing instructions").

In a scientific context, "truth" is something that follows from repeated studies that confirm previous results, and it rests on a network of theory and observation that is (to a large extent) congruent. Generally, though, I would say it is more appropriate to define the scientific method as a way to search for "truth" than as a method to determine what is "true" (opinions will probably vary on this).

When it comes to cheating, especially with regard to data, the possibility of detecting it during peer review is limited, even in the ideal situation where the data have been made fully available. If researchers, for example, fully fabricate data or tamper with raw data, this will not be caught in peer review, since only the modified data will be available. Add to this that the time available for peer review is very limited, so a full statistical re-analysis of the data is not possible. This is also one of the reasons why replication studies and other studies with supporting evidence are needed before results are accepted as "true".


Quite simply, it doesn't and it can't. Further, the aim of peer review is not to detect fraud. Peer review can answer the questions:

  1. Does this study answer an interesting question that has not already been answered elsewhere?
  2. Does the study use the correct methodology to answer the question? Are there flaws or gotchas in the implementation? For example, I am currently having a back and forth over the most appropriate way to remove a particular sort of bias from a data set. These are the sorts of subtleties that a non-expert reader might not be able to detect.
  3. Are the conclusions drawn supported by the data and analysis provided? Are there subtle reasons why what the authors claim doesn't follow? For example, it might take an expert infectious disease epidemiologist to tell where a particular interpretation of results about Covid is falling victim to the Texas Sharpshooter Fallacy.

Journals can and sometimes do detect particularly egregious cases of fraud (like some categories of image manipulation), and reviewers will hopefully catch cases where authors are being evasive, cherry-picking data, ignoring flaws in the data, or suggesting things that don't quite follow, but outright lies are more or less impossible to catch.

This is why things generally don't become accepted as truth on the basis of a single study. While outright replications are rare, future studies will use previous studies as starting points, and if those previous studies are incorrect, it will become apparent as the house of cards built on them fails to stand up.

In fact, we rarely ever accept anything as TRUTH. Science doesn't find truth, and all papers are wrong. Instead, science as a whole, averaged over everything, asymptotically approaches truth; on a small scale it is not a smooth approach but a random walk. A biased one, to be sure: more like two steps forward and one back than the opposite.

This is why breaking into a new field can be difficult. You need to absorb the complete milieu of the field. You need to get a feel for what the field as a whole believes, rather than what an individual paper says. That's not to say the field is always right and the individual paper wrong, but siding with the field will make you right more often than wrong.


TL;DR: Finding outright fraud is not the job of peer review; it is not difficult to cheat in a publication, and in general it is not easy to discover. However, fraud in important work will ultimately be found out. Fraud in unimportant work may linger for a while because nobody will bother to use or reproduce the results.

Peer review can rarely identify fabricated data directly (there are exceptions; see the case of Jan Hendrik Schön, where graphs were identically reproduced in different contexts, or cases where image manipulation can be clearly established).

However, note that fabricating data is the ultimate scientific crime, even worse than plagiarism. If the question is important, you waste other researchers' valuable time and direct them away from other, more productive lines of work.

Furthermore, if the question is important, you will be found out. It may take time, but you will be found out. This is how science works: it makes mistakes, results are foggy, but the fog will clear at some point. If you have ever fabricated data, you will have a very hard time ever being believed again - actually, I would venture so far as to say you will never be believed again. No one wants to waste their time on work by someone who is not just sloppy (as in the cold fusion case), which is bad enough, but actively misleads their peers.

If the question is unimportant, and one is out of the eye of scientific scrutiny, then one may survive for a while in the system (there have been cases where whole careers were built on this over long periods); however, then, what's the point? What's a charlatan without an audience?

Peer review is mostly a sanity check for the coarsest omissions, mistakes, or really clumsy fakes. But discovering the latter is not the purpose of peer review. Given the incentives above not to lie, peer review assumes that the authors have given their best shot at being truthful, and it tries to catch honest mistakes; another role is evaluating the quality of the research (which is often very subjective and may have a latency of decades before it becomes more "objectively" evaluable).

[Addendum: One major class of issues could theoretically be discovered by peer review in a way similar to the detection of vote tampering, namely via statistics such as Benford's law. However, unlike in voting, where results matter immediately and on a large scale, peer reviewers do not typically invest the time to run detailed evaluations of whether the statistics have been tampered with. Scientific work is not treated as adversarially as vote manipulation or intelligence work would be, and it would be a huge waste of time to do so, as there is enough to do with the exploration of the unknown.]
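
For the curious, here is a minimal sketch of what such a Benford check could look like (in Python; `benford_chi_square` is an illustrative name, not an established forensic tool, and real screening would need far more care about sample size and data type):

```python
import math
from collections import Counter

def benford_chi_square(values):
    """Chi-square statistic comparing the leading-digit distribution
    of `values` against Benford's law, P(d) = log10(1 + 1/d).

    A large statistic (8 degrees of freedom) merely flags the data
    for closer scrutiny; it proves nothing on its own.
    """
    # Extract the most significant digit of each nonzero value.
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)

    chi_sq = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford's expected count
        observed = counts.get(d, 0)
        chi_sq += (observed - expected) ** 2 / expected
    return chi_sq
```

Even this toy check illustrates the point above: running it presupposes access to raw numbers at a granularity reviewers rarely see, and a suspicious statistic by itself cannot distinguish fraud from legitimately non-Benford data or plain chance.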