Why don't all disciplines follow a double-blind review system?

In fields with a vibrant pre-print culture (e.g., physics or math), most papers are already publicly posted on the internet with the authors' names attached before they are submitted and reviewed. In that context, double-blind reviewing isn't even a sensible option.

Even without pre-prints, the same is true of talks in some fields. For many papers, many potential referees will have already seen a conference talk on the results. This may be especially true in math, where writing up a paper in full detail is especially onerous, so it's common for people to give talks on results while the paper is still in preparation.


Theoretical computer science uses single-blind reviewing almost exclusively. Reviewers know the authors of the papers they review, but authors do not know who reviews their papers. (As with many things, we copy this attitude from mathematics.)

I think the main reasons we don't use double-blind reviewing are that (1) we never have, (2) we have a habit of posting preprints (although not to the same extent as math and physics), and (3) there's a general consensus that it's just not necessary.

The standard argument that double-blind reviewing is unnecessary is that the decision to accept or reject a given paper is more objective than in other fields. There isn't an experiment to judge. Either the algorithm is faster or it isn't; either the theorem is true or it isn't; either the proof is actually a proof or it isn't. (I don't buy this argument, especially for page-limited conference submissions, but there it is.)

You should be able to read an article and, assuming the experiment was conducted accurately and ethically, decide whether it represents a significant scientific advance.

These are not the only criteria by which scientific research is judged.


Update: As @a3nm points out in his comment, theoretical computer science is slowly transitioning toward a "lightweight double-blind reviewing" protocol that is already common in other computer science research areas. "Lightweight double-blind" requires the authors to submit their papers without identifying information, citing their own work in the third person, but it does not prevent either posting preprints to arXiv or presenting work at seminars and workshops.

ALENEX, DiSC, ESA, FAccT, FODS, and LICS already follow this protocol, as do several conferences at the intersection of theoretical computer science and machine learning. Major conferences like SODA are at least seriously discussing the idea, but change is slow, and many (especially senior) researchers are strongly opposed to the idea.

For more information, see this report on double-blind reviewing at ALENEX 2018 and this FAQ from POPL 2018.


While this article by Kathryn McKinley is quite old now, it provides a much more nuanced view of the processes that would support double-blind review. In brief, it's not as simple as "make everything double-blind". There are stages where blinding is important, and there are stages in the review process where it's useful NOT to be double-blind.

Roughly speaking, you want double-blind review during the initial evaluation, because people are more likely to jump to conclusions on a first look. But later on, it's helpful to know who did the work, because even in mathematical work there's an element of trust involved in evaluating a paper (especially in theoretical CS, where papers are far too short, and deadlines too close, to permit a rigorous evaluation of the proofs).