why is the impact factor of mathematical journals "often" lower than the impact factor of journals in other disciplines

There are a lot of different factors, and I know of no reliable way to determine which is the best explanation. For example, one theory is that mathematics simply has less impact (in the non-technical sense) than most scientific fields. I don't believe this, but it's hard to give a principled refutation. The whole subject strikes me as a little silly, with lots of opinions and numbers with no clear meaning.

One important factor is clearly that mathematicians write fewer, often longer papers than most scientists. Another is the two-year cycle mentioned by walkmanyi (citations after two years do not count for impact factor, which is incompatible with both the time lag in mathematics publication and the time required to carry out research in mathematics in the first place).

Another factor is the size of the field. The highest impact factors should occur in an enormous field with some incredibly important research and also a ton of less important papers that cite the great ones. Mathematics is just not that large a field (compared with biology or medicine, certainly), and it furthermore fragments into a lot of subfields it's difficult to move between. When someone makes an amazing discovery in algebraic geometry, you aren't going to get a flood of mathematicians from other areas rushing in to take the next steps, because algebraic geometry requires a lot of background. I don't think that's a bad thing for mathematics as a whole (the things the would-be algebraic geometers are doing instead are probably as valuable as following the latest trends would be), but it cuts down on the opportunities for amassing citations quickly.

Ultimately, I doubt there's any conclusive or satisfying way to determine how much of a role each of these factors plays.

For some published commentary on impact factors in mathematics, see Nefarious Numbers by Arnold and Fowler and Impact Factor and How it Relates to Quality of Journals by Milman. The first paper focuses on the flaws of impact factors and their abuse/manipulation, while the second explains how impact factor calculations relate to mathematical publication practices (and some of the incentives for journal editors). Neither directly answers the question here, but they both shed some light on it.


I must admit that I tend to disagree with the previous answers: while the descriptions of the specifics of the mathematical community are accurate, I do not see why they should affect the impact factor (except for the time needed before an article is cited, whose influence is clear). In particular, the size of the field does not in itself have any impact on the average impact factor of papers. In fact, math papers are cited less mainly because math papers cite less.

Let me back up my point, first assuming we are looking at a field that is closed (it only cites itself and is only cited by itself) and stationary (no change in the number of papers published per year or in the average number of references per article). Consider the publication graph of a given year: it is a bipartite graph whose vertices are the papers published in year 0 (first part) and in years -1 and -2 (second part), and whose edges are the citations from the former to the latter. Then the (article) average of impact factors AIF in this domain is the ratio

AIF = (#citations from year 0 papers to year -1 and -2 ones)/(#year -1 and -2 papers)

which is equal to (#edges)/(#papers published in two years), since the field is assumed to be stationary. This is also half the average number of references to the two preceding years that a paper in the field has: with N papers per year and an average of r such references per paper, there are Nr edges and 2N papers in the denominator, so AIF = r/2.

So the article-average impact factor of a closed and stationary field is governed solely by the reference habits of the field. In particular, it is not affected by the overall size of the field (e.g. math as opposed to biology).
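To make the bookkeeping concrete, here is a minimal sketch of the closed, stationary toy model with made-up numbers (the values of N and r are purely hypothetical):

```python
# Toy model of a closed, stationary field (hypothetical numbers).
N = 100   # papers published per year, constant over time
r = 4.0   # average references per paper to the two preceding years

# Citations counted towards year-0 impact factors: every year-0 paper
# contributes r references to papers from years -1 and -2.
citations = N * r            # edges in the bipartite graph

# Citable items: everything published in years -1 and -2.
citable = 2 * N

aif = citations / citable    # article-average impact factor
print(aif)                   # 2.0, i.e. r / 2
```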

Given the distribution of references, an expanding field will tend to have a bigger impact factor, as will a field that is often cited by other ones. I do not feel that speed of expansion is an important factor for math compared to other fields, but fundamental mathematics is probably seldom cited from outside the field. This has little impact if one considers math against the rest of the world, though, since math papers seldom cite outside the field either.
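To illustrate the expansion effect alone, here is the same toy model with a hypothetical yearly growth factor g (all numbers invented):

```python
# Same toy model, but the field grows by a factor g each year
# (hypothetical numbers; reference habits unchanged).
g = 1.2   # yearly growth in the number of papers published
N = 100   # papers published in year -2
r = 4.0   # average references per paper to the two preceding years

papers_minus2 = N
papers_minus1 = N * g
papers_year0  = N * g * g

citations = papers_year0 * r                 # references made by year-0 papers
citable   = papers_minus1 + papers_minus2    # papers published in years -1 and -2

aif = citations / citable
print(aif)   # ~2.62 instead of 2.0: growth alone inflates the average IF
```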

Another factor can be the distribution of papers among journals: for example, if a field has only two journals, one very large and one very small that only gets the very top articles, then the (unweighted) journal-average IF will be much higher than the article average. I doubt this explains much of the difference between math and the other fields, since mathematics has a strong hierarchy of journals.
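With made-up numbers for the two-journal example above, the gap between the unweighted journal average and the article average looks like this:

```python
# Two hypothetical journals in the same field: a large ordinary one
# and a tiny selective one (all numbers invented).
journals = {
    "large_journal":       {"papers": 1000, "impact_factor": 1.0},
    "small_elite_journal": {"papers": 10,   "impact_factor": 20.0},
}

# Unweighted journal average: each journal counts once.
journal_avg = sum(j["impact_factor"] for j in journals.values()) / len(journals)

# Article-weighted average: each paper counts once.
total_papers = sum(j["papers"] for j in journals.values())
article_avg = sum(j["papers"] * j["impact_factor"]
                  for j in journals.values()) / total_papers

print(journal_avg)  # 10.5
print(article_avg)  # ~1.19
```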

So, what we really have to explain is why math papers cite fewer papers in the two-year range than papers in (most) other fields do. This in turn explains why they are cited less.

Then the answer seems quite clear: math papers often take a long time to read and digest. The core of a biology paper is usually easier to grasp, and such papers are easier to cite. There is also a small-subfield effect: a mathematician can work on problems that involve few previous papers. This is different from the size of the field, because it is more a matter of the degree of specialization.


why is the impact factor of mathematical journals "often" lower than the impact factor of journals in other disciplines?

Besides your observations 2 and 3, my take on this would stem from the observation that the pace of work in mathematics tends to be slower than in disciplines such as biology, where there are often several competing groups working on very similar problems. The impact factor is calculated over recent citations, but in disciplines with a slow pace of development, by the time a paper gets cited it has often already fallen out of the recent period considered (the last 2 or 5 years).
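To illustrate this window effect with invented citation-age profiles (these distributions are purely hypothetical, not data), a minimal sketch:

```python
# Hypothetical citation-age profiles: the fraction of a paper's eventual
# citations that arrive k years after publication (numbers are invented).
fast_field = {1: 0.35, 2: 0.30, 3: 0.20, 4: 0.10, 5: 0.05}
slow_field = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.30, 5: 0.35}

window = 2  # the standard impact-factor window, in years

def fraction_in_window(profile, window):
    """Share of citations that fall inside the counting window."""
    return sum(share for age, share in profile.items() if age <= window)

print(fraction_in_window(fast_field, window))  # ~0.65
print(fraction_in_window(slow_field, window))  # ~0.15
```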