Why is '\n' preferred over "\n" for output streams?

None of the other answers really explain why the compiler generates the code it does in your Godbolt link, so I thought I'd chip in.

If you look at the generated code, you can see that:

std::cout << '\n';

Compiles down to, in effect:

const char c = '\n';
std::cout.write(&c, 1);  // in effect: a pointer-and-length write of a 1-byte buffer

and to make this work, the compiler has to generate a stack frame for your function chr() (the character has to live in memory so that its address can be taken), which is where many of the extra instructions come from.

On the other hand, when compiling this:

std::cout << "\n";

the compiler can optimise str() to simply 'tail call' operator<< (const char *), which means that no stack frame is needed.
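
For reference, the two wrapper functions in question look, in essence, like this (a minimal reconstruction; the exact code is in your Godbolt link):

#include <iostream>

void chr() {
    std::cout << '\n';   // the char must be spilled to the stack, so chr() needs a frame
}

void str() {
    std::cout << "\n";   // can be compiled as a tail call to operator<< (const char *)
}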

So your results are somewhat skewed by the fact that you put the calls to operator<< in separate functions. It's more revealing to make these calls inline, see: https://godbolt.org/z/OO-8dS
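
Something like this (a sketch of what the inline version looks like; the linked Godbolt page has the real code):

#include <iostream>

int main() {
    std::cout << '\n';   // still materialises a 1-byte buffer on the stack...
    std::cout << "\n";   // ...but neither call pays for a separate wrapper frame
}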

Now you can see that, while outputting '\n' is still a little more expensive (because there is no dedicated member overload ostream::operator<< (char); character insertion goes through a pointer-and-length path), the difference is less marked than in your example.


Keep in mind, though, that what you see in the assembly is only the setup of the call, not the execution of the actual function.

std::cout << '\n'; is still slightly faster than std::cout << "\n"; (but see the edit below).

I've created this little program to measure the performance, and it came out slightly faster on my machine with g++ -O3. Try it yourself!

Edit: Sorry, I noticed a typo in my program, and it's not that much faster after all! I can barely measure any difference anymore; sometimes one is faster, other times the other.

#include <chrono>
#include <iostream>

class timer {
    private:
        decltype(std::chrono::high_resolution_clock::now()) begin, end;

    public:
        void
        start() {
            begin = std::chrono::high_resolution_clock::now();
        }

        void
        stop() {
            end = std::chrono::high_resolution_clock::now();
        }

        template<typename T>
        auto
        duration() const {
            return std::chrono::duration_cast<T>(end - begin).count();
        }

        auto
        nanoseconds() const {
            return duration<std::chrono::nanoseconds>();
        }

        void
        printNS() const {
            std::cout << "Nanoseconds: " << nanoseconds() << std::endl;
        }
};

int
main(int argc, char** argv) {
    timer t1;
    t1.start();
    for (int i{0}; 10000 > i; ++i) {
        std::cout << '\n';
    }
    t1.stop();

    timer t2;
    t2.start();
    for (int i{0}; 10000 > i; ++i) {
        std::cout << "\n";
    }
    t2.stop();
    t1.printNS();
    t2.printNS();
}
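
A practical note on measuring this: if you redirect stdout to /dev/null (as in the runs below), the timing lines printed via std::cout are discarded along with the newlines. One way around that (an assumption on my part, not necessarily what was done here) is to print the timings to std::cerr instead:

void printNS() const {
    // std::cerr survives a `./bench > /dev/null` redirection of stdout
    std::cerr << "Nanoseconds: " << nanoseconds() << std::endl;
}

Then build and run with, e.g., g++ -O3 bench.cpp -o bench && ./bench > /dev/null (bench.cpp is just a placeholder name).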

Edit: As geza suggested, I tried 100000000 iterations for both, sent the output to /dev/null, and ran it four times. '\n' was slower once and faster three times, but never by much; it might be different on other machines:

Nanoseconds: 8668263707
Nanoseconds: 7236055911

Nanoseconds: 10704225268
Nanoseconds: 10735594417

Nanoseconds: 10670389416
Nanoseconds: 10658991348

Nanoseconds: 7199981327
Nanoseconds: 6753044774

Overall, I guess I wouldn't care too much either way.


Yes, for this particular implementation and for your example, the char version is a little bit slower than the string version.

Both versions call a write(buffer, bufferSize)-style function. For the string version, bufferSize is known at compile time (1 byte), so there is no need to find the zero terminator at run time. For the char version, the compiler creates a little 1-byte buffer on the stack, puts the character into it, and passes this buffer to the write function. So the char version is a little bit slower.
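
To make that concrete, the two statements boil down to something like this (a simplified sketch of the effect, not the actual library code; real implementations go through internal helpers such as libstdc++'s std::__ostream_insert):

#include <iostream>

void char_version() {
    char buf = '\n';           // 1-byte buffer created on the stack
    std::cout.write(&buf, 1);  // write(buffer, bufferSize)
}

void string_version() {
    // the compiler knows strlen("\n") == 1 at compile time,
    // so there is no run-time scan for the terminator
    std::cout.write("\n", 1);
}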