Why do concepts make C++ compile slower?

Since this question is pretty old (from 2011) and concepts were recently released as of this writing (2020), I would like to clarify a couple of things, so as not to mislead people or discourage them from using concepts.

The concepts that were originally considered and the concepts released now are quite different beasts. Concepts as released in C++20 are also known as “concepts lite” because they are a reduced feature set compared to the initial design. So, what was taken away?

The main difference is that the original design of concepts was intended to check not only the correctness of a template's usage but also the correctness of the template's definition. For example, suppose you have a function template whose type parameter Animal needs to have a make_sound member function. You can write a constrained function template like so:

template <typename Animal>
requires requires(Animal& x) {
  x.make_sound();
}
void animal_tricks(Animal& x) {
  x.make_sound(); // listed in the requires-expression
  x.do_trick();   // not listed anywhere in the requirements
}

Now, with the initial design of concepts, the definition of the function template animal_tricks would be incorrect, because we are using a do_trick member function that is not part of the requires-expression. With C++20 concepts lite, this definition is fine. The compiler will not check the correctness of the animal_tricks body, because in a concepts-lite world it is up to the developer to correctly specify the requirements on the type. That difference in what gets checked can have a significant effect on compilation time. In 2016, there were two papers arguing over whether concepts should enter C++17: “Why I want Concepts, and why I want them sooner rather than later” and “Why I want Concepts, but why they should come later rather than sooner.” Neither so much as mentions compile-time performance, which is a good indicator that it was not an issue back then.
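To see what that means in practice, here is a sketch (Parrot is a hypothetical type): it satisfies the requires-expression above, so the constraint check succeeds, yet instantiation still blows up inside the body with a classic template error.

struct Parrot {
  void make_sound() {}
  // no do_trick(): the requires-expression above is still satisfied
};

int main() {
  Parrot p;
  animal_tricks(p); // constraint check passes; x.do_trick() then fails
                    // at instantiation with an old-style template error
}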

Also, the current concepts design might come with some performance advantages. According to the rule of Chiel, the slowest thing in compilation is SFINAE, because the compiler needs to at least try to instantiate a (usually significant) number of types, only to discard them later. Concepts (depending on how they are implemented) might not need to instantiate any templates at all, which could in fact end up being a performance advantage.
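As a minimal sketch of the difference (the function names are made up; this is not a benchmark): the SFINAE formulation drags the compiler through substitution machinery for every candidate, while the concept is a predicate it can evaluate directly.

#include <concepts>
#include <type_traits>

// SFINAE: T is substituted into the enable_if machinery for every
// candidate, and failed substitutions are silently thrown away.
template <typename T, typename = std::enable_if_t<std::is_integral_v<T>>>
void process_sfinae(T) {}

// Concepts: the constraint is a boolean predicate the compiler can
// evaluate (and cache) without any throwaway instantiations.
template <std::integral T>
void process_concepts(T) {}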


Note: the following answer (and the question it answers) pertains to the old C++0x version of concepts and has little relation to the version of the feature added to C++20.


First of all, Herb didn't say that concepts themselves made compiling slower. He said that conceptizing the C++ standard library made any code using the C++ standard library compile slower.

The reason for that comes down to several things.

1: Constraining templates takes compile time.

When you declare a class like this:

template<typename T> class Foo {...};

The compiler parses Foo and does very little. Even with two-phase lookup, the compiler doesn't do a whole lot when compiling class Foo. It stores the definition for later, of course, but the initial pass is relatively fast.
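For instance (frob is a hypothetical name), the body of an unconstrained template can reference members that no type will ever have, and the compiler accepts it until somebody instantiates it:

template <typename T>
void frob(T& t) {
  t.frobnicate(); // a dependent expression: not checked at definition,
                  // only when frob is instantiated with a concrete T
}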

When you do constrain the template with a concept:

template<ConceptName C> class Foo {...};

The compiler must do real work up front. It must check that every use of the type C conforms to the concept ConceptName. That's extra work the compiler would otherwise have deferred until instantiation time.

The more concept checking you have, the more compile time you spend to verify that the types match the concepts.
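The C++0x-style definition checking never shipped, so there is no compiler to demonstrate it on; but even the use-site half that survived into C++20 costs something per instantiation. A sketch, with ConceptName as a hypothetical stand-in:

#include <concepts>

// The compiler must evaluate this predicate for every type Foo is
// instantiated with.
template <typename T>
concept ConceptName = std::default_initializable<T> && std::movable<T>;

template <ConceptName C> class Foo { C value; };

Foo<int> a;  // constraint evaluated for int...
Foo<char> b; // ...and again for char: more checks, more compile time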

2: The standard C++ library uses a lot of concepts.

Look at the number of iterator concepts: input, output, forward, bidirectional, random access, contiguous. And the committee was considering breaking them down into many more than that. Many algorithms would have multiple versions for different iterator concepts (see the sketch below).

And this doesn't include range concepts (of which there is one for every kind of iterator concept except output), character concepts for std::string, and various other kinds of things. All of these have to be compiled and checked.
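The library concepts that eventually shipped in C++20 show the shape of this; here is a sketch (my_advance is a made-up name) of how a single algorithm grows one version per iterator concept:

#include <iterator>

// Fallback for anything that can only step one element at a time.
template <std::input_iterator I>
void my_advance(I& it, std::iter_difference_t<I> n) {
  while (n-- > 0) ++it; // O(n)
}

// Preferred when both overloads apply, because random access
// subsumes input in the concept hierarchy.
template <std::random_access_iterator I>
void my_advance(I& it, std::iter_difference_t<I> n) {
  it += n; // O(1): jump directly
}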


What concepts really needed to make them fast is modules: the ability for the compiler to generate a module file containing a sequence of pre-checked symbols, then load that file directly without going through the standard compilation process. Straight from parsing to symbol creation.

Remember: for every header you #include, the compiler must read that file and compile it. Even though the header is the same thing every time it does this, it still must dutifully read the file and process it. If we're talking about a concept-ized std::vector, it has to do all of the concept checking of the template. It still has to do all of the standard symbol lookup you do when compiling. And so forth.

Imagine if the compiler didn't have to do this. Imagine if it could just load a bunch of symbols and definitions directly from the disk. No compiling at all; just bringing in symbols and definitions for other code to use.

It would be like precompiled headers, only better. Precompiled headers are restricted to one per .cpp file, whereas you can use as many modules as you like.

Sadly, modules were yanked from C++0x pretty early in the process. And without modules, constraining the standard library with concepts will always compile more slowly than the unconstrained version.
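(Modules did eventually arrive, in C++20. A minimal sketch of the shipped form, with hypothetical file and module names:)

// math.cppm: a module interface unit, compiled once into a binary
// representation rather than re-parsed per translation unit
export module math;
export int square(int x) { return x * x; }

// main.cpp: importing loads that representation directly
import math;
int main() { return square(7); }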

Note that Herb misunderstands the purpose of modules (not hard, since most of the initial conceptions of the feature were about the things he talked about: cross-platform DLLs and such). Their core purpose is to improve compile times, not to make cross-platform DLLs work. Nor is it intended that modules themselves be cross-platform.