Why is ONE basic arithmetic operation in for loop body executed SLOWER THAN TWO arithmetic operations?

This effect only happens at -O0 (or with volatile), and is a result of the compiler keeping your variables in memory (not registers). You'd expect that to just introduce a fixed amount of extra latency into the loop-carried dependency chains through i, x, and y, but modern CPUs are not that simple.
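
For example, here's a minimal sketch (my own repro, not the code from the question) that forces the same memory-bound behaviour even in an optimized build by using volatile:

#include <cstdint>

void slow_like_debug_build() {
    volatile uint64_t x = 0;
    // volatile forces every read and write of x and i to actually hit memory,
    // just like -O0 code-gen does for ordinary locals
    for (volatile uint64_t i = 0; i < 1000000000; i = i + 1) {
        x = x + 31;
    }
}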

On Intel Sandybridge-family CPUs, store-forwarding latency is lower when the load uop runs some time after the store whose data it's reloading, not right away. So an empty loop with the loop counter in memory is the worst case. I don't understand what CPU design choices could lead to that micro-architectural quirk, but it's a real thing.

This is basically a duplicate of Adding a redundant assignment speeds up code when compiled without optimization, at least for Intel Sandybridge-family CPUs.

This is one of the major reasons why you shouldn't benchmark at -O0: the bottlenecks are different than in realistically optimized code. See Why does clang produce inefficient asm with -O0 (for this simple floating point sum)? for more about why compilers make such terrible asm on purpose.

Micro-benchmarking is hard; you can only measure something properly if you can get compilers to emit realistically optimized asm loops for the thing you're trying to measure. (And even then you're only measuring throughput or latency, not both; those are separate things for single operations on out-of-order pipelined CPUs: What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand?)
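
As a rough illustration of that distinction (my own sketch, nothing from the question): the first loop below is one serial dependency chain through x, so it runs at imul latency (about 3 cycles per multiply on Sandybridge-family); the second splits the same kind of work across 4 independent accumulators, so the out-of-order core can overlap them and approach imul throughput of 1 per clock.

#include <cstdint>

// latency-bound: each multiply needs the result of the previous one
uint64_t imul_latency(uint64_t iters, uint64_t m) {
    uint64_t x = 1;
    for (uint64_t i = 0; i < iters; ++i)
        x *= m;                       // m is a runtime value, so this stays an imul
    return x;
}

// throughput-bound: 4 independent chains do 4x the work per iteration,
// but each iteration is nowhere near 4x slower
uint64_t imul_throughput(uint64_t iters, uint64_t m) {
    uint64_t a = 1, b = 1, c = 1, d = 1;
    for (uint64_t i = 0; i < iters; ++i) {
        a *= m;  b *= m;  c *= m;  d *= m;
    }
    return a + b + c + d;
}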

See @rcgldr's answer for measurement + explanation of what would happen with loops that keep variables in registers.

With clang, benchmark::DoNotOptimize(x1 += 31) also de-optimizes into keeping x in memory, but with GCC it does just stay in a register. Unfortunately @SashaKnorre's answer used clang on QuickBench, not gcc, to get results similar to your -O0 asm. It does show the cost of lots of short NOPs being hidden by the bottleneck through memory, and a slight speedup when those NOPs delay the reload next iteration just long enough for store-forwarding to hit the lower-latency good case. (QuickBench I think runs on Intel Xeon server CPUs, with the same microarchitecture inside each CPU core as desktop versions of the same generation.)
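
Something along these lines (my own sketch, not @SashaKnorre's actual QuickBench code) shows the shape of that test: the empty asm statement with a "+m" constraint forces x to be stored and reloaded every iteration, and the NOP padding controls how long after the store the next reload happens.

#include <cstdint>

uint64_t store_forward_test(uint64_t iters) {
    uint64_t x = 0;
    for (uint64_t i = 0; i < iters; ++i) {
        x += 31;
        asm volatile("" : "+m"(x));          // force a store + reload of x each iteration
        asm volatile("nop; nop; nop; nop");  // padding: more NOPs delay the next reload
    }
    return x;
}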


Presumably all the x86 machines you tested on had Intel CPUs from the last 10 years, or else there's a similar effect on AMD. It's plausible there's a similar effect on whichever ARM CPU your RPi uses, if your measurements really were meaningful there. Otherwise, maybe another case of seeing what you expected (confirmation bias), especially if you tested with optimization enabled there.


I tested this with different levels of code optimization (-O0,-O1,-O2,-O3) [...] But I always got similar result

I added that optimizations notice in the question to avoid "do not measure not optimized code" answers because optimizations is not what I ask about.

(later from comments) About optimizations: yes, I reproduced that with different optimization levels, but as the loops were optimized away, the execution time was too fast to say for sure.

So actually you didn't reproduce this effect for -O1 or higher, you just saw what you wanted to see (confirmation bias) and mostly made up the claim that the effect was the same. If you'd accurately reported your data (measurable effect at -O0, empty timed region at -O1 and higher), I could have answered right away.

See Idiomatic way of performance evaluation? - if your times don't increase linearly with increasing repeat count, you aren't measuring what you think you're measuring. Also, startup effects (like cold caches, soft page faults, lazy dynamic linking, and dynamic CPU frequency) can easily lead to the first empty timed region being slower than the second.
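
A quick sanity check for that (my sketch, with made-up repeat counts) is to time the same loop at two different repeat counts and verify that the ratio of the times matches the ratio of the counts:

#include <chrono>
#include <cstdint>
#include <cstdio>

static void work(uint64_t reps) {
    volatile uint64_t x = 0;          // volatile keeps the loop from being optimized away
    for (uint64_t i = 0; i < reps; ++i)
        x = x + 31;
}

static double seconds_for(uint64_t reps) {
    auto t0 = std::chrono::steady_clock::now();
    work(reps);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    double t_small = seconds_for(100000000);     // 1e8 iterations
    double t_big   = seconds_for(1000000000);    // 1e9 iterations
    std::printf("ratio = %.2f\n", t_big / t_small);   // should be close to 10
}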

I assume you only swapped the loops around when testing at -O0, otherwise you would have ruled out there being any effect at -O1 or higher with that test code.


The loop with optimization enabled:

As you can see on Godbolt, gcc fully removes the loop with optimization enabled. Sometimes GCC leaves an empty loop alone (perhaps figuring a delay loop was intentional), but here it doesn't even loop at all. Time doesn't scale with anything, and both timed regions look the same, like this:

orig_main:
   ...
        call    std::chrono::_V2::system_clock::now()       # demangled C++ symbol name
        mov     rbp, rax                                    # save the return value = start
        call    std::chrono::_V2::system_clock::now()
        # end in RAX

So the only instruction in the timed region is saving start to a call-preserved register. You're measuring literally nothing about your source code.

With Google Benchmark, we can get asm that doesn't optimize the work away, but which doesn't store/reload to introduce new bottlenecks:

#include <benchmark/benchmark.h>

static void TargetFunc(benchmark::State& state) {
  uint64_t x2 = 0, y2 = 0;
  // Code inside this loop is measured repeatedly
  for (auto _ : state) {
    benchmark::DoNotOptimize(x2 += 31);
    benchmark::DoNotOptimize(y2 += 31);
  }
}
// Register the function as a benchmark
BENCHMARK(TargetFunc);
# just the main loop, from gcc10.1 -O3 
.L7:                         # do{
        add     rax, 31        # x2 += 31
        add     rdx, 31        # y2 += 31
        sub     rbx, 1
        jne     .L7          # }while(--count != 0)

I assume benchmark::DoNotOptimize is something like asm volatile("" : "+rm"(x) ) (GNU C inline asm) to make the compiler materialize x in a register or memory, and to assume the lvalue has been modified by that empty asm statement. (i.e. forget anything it knew about the value, blocking constant-propagation, CSE, and whatever.) That would explain why clang stores/reloads to memory while GCC picks a register: this is a longstanding missed-optimization bug with clang's inline asm support. It likes to pick memory when given the choice, which you can sometimes work around with multi-alternative constraints like "+r,m". But not here; I had to just drop the memory alternative; we don't want the compiler to spill/reload to memory anyway.
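
For reference, a hand-rolled equivalent for GNU-C-compatible compilers might look roughly like this (my sketch; the real Google Benchmark implementation differs in the details):

// an empty asm statement the compiler must assume reads and modifies the value,
// so it can't constant-propagate through it or delete the work that produced it
template <class T>
inline void do_not_optimize(T& value) {
    asm volatile("" : "+r,m"(value) : : "memory");
    // "+r,m" offers the register-or-memory choice; clang tends to pick memory
    // anyway, which is the missed optimization discussed above
}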

For GNU C compatible compilers, we can use asm volatile manually with only "+r" register constraints to get clang to make good scalar asm (Godbolt), like GCC. We get an essentially identical inner loop, with 3 add instructions, the last one being an add rbx, -1 / jnz that can macro-fuse.

static void TargetFunc(benchmark::State& state) {
  uint64_t x2 = 0, y2 = 0;
  // Code inside this loop is measured repeatedly
  for (auto _ : state) {
    x2 += 16;
    y2 += 17;
    asm volatile("" : "+r"(x2), "+r"(y2));
  }
}

All of these should run at 1 clock cycle per iteration on modern Intel and AMD CPUs, again see @rcgldr's answer.

Of course this also disables auto-vectorization with SIMD, which compilers would do in many real use cases. Or if you used the result at all outside the loop, it might optimize the repeated increment into a single multiply.
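
For example (my own demo, not the question's code), if the sum is actually returned, gcc and clang at -O2 don't loop at all:

#include <cstdint>

uint64_t sum_of_adds(uint64_t n) {
    uint64_t x = 0;
    for (uint64_t i = 0; i < n; ++i)
        x += 31;
    return x;    // compiles to essentially  return n * 31;
                 // (a single multiply or an equivalent shift/subtract), with no loop
}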

You can't measure the cost of the + operator in C++ - it can compile very differently depending on context / surrounding code, even before considering loop-invariant hoisting of work. e.g. x + (y<<2) + 4 can compile to a single LEA instruction on x86.
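
For instance (my example, not from the question), gcc -O2 compiles this whole expression to a single instruction:

int lea_example(int x, int y) {
    return x + (y << 2) + 4;    // x86-64 gcc -O2: roughly  lea eax, [rdi+4+rsi*4]
}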


The question is actually why my computers execute two operations faster than one, first of all in code where these operations are not optimized away

TL:DR: it's not the operations, it's the loop-carried dependency chain through memory that stops the CPU from running the loop at 1 clock cycle per iteration, doing all 3 adds in parallel on separate execution ports.

Note that the loop counter increment is just as much of an operation as what you're doing with x (and sometimes y).


ETA: This was a guess, and Peter Cordes has made a very good argument about why it's incorrect. Go upvote Peter's answer.

I'm leaving my answer here because some found the information useful. Though this doesn't correctly explain the behavior seen in the OP, it highlights some of the issues that make it infeasible (and meaningless) to try to measure the speed of a particular instruction on a modern processor.


Educated guess:

It's the combined effect of pipelining, powering down portions of a core, and dynamic frequency scaling.

Modern processors pipeline so that multiple instructions can be executing at the same time. This is possible because the processor actually works on micro-ops rather than the assembly-level instructions we usually think of as machine language. Processors "schedule" micro-ops by dispatching them to different portions of the chip while keeping track of the dependencies between the instructions.

Suppose the core running your code has two arithmetic/logic units (ALUs). A single arithmetic instruction repeated over and over requires only one ALU. Using two ALUs doesn't help because the next operation depends on completion of the current one, so the second ALU would just be waiting around.

But in your two-expression test, the expressions are independent. To compute the next value of y, you do not have to wait for the current operation on x to complete. Now, because of power-saving features, that second ALU may be powered down at first. The core might run a few iterations before realizing that it could make use of the second ALU. At that point, it can power up the second ALU and most of the two-expression loop will run as fast as the one-expression loop. So you might expect the two examples to take approximately the same amount of time.
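
In C++ terms, the distinction this guess leans on looks like the sketch below (mine, using the "+r" asm trick from Peter's answer to keep the values in registers). The point is only the dependency pattern, not a claim about which loop is faster.

#include <cstdint>

void one_chain(uint64_t iters) {
    uint64_t x = 0;
    for (uint64_t i = 0; i < iters; ++i) {
        x += 31;                  // each add must wait for the previous one
        asm volatile("" : "+r"(x));
    }
}

void two_chains(uint64_t iters) {
    uint64_t x = 0, y = 0;
    for (uint64_t i = 0; i < iters; ++i) {
        x += 31;                  // these two adds are independent of each other,
        y += 31;                  // so they can issue to separate ALUs in the same cycle
        asm volatile("" : "+r"(x), "+r"(y));
    }
}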

Finally, many modern processors use dynamic frequency scaling. When the processor detects that it's not running hard, it actually slows its clock a little bit to save power. But when it's used heavily (and the current temperature of the chip permits), it might increase the actual clock speed as high as its rated speed.

I assume this is done with heuristics. In the case where the second ALU stays powered down, the heuristic may decide it's not worth boosting the clock. In the case where two ALUs are powered up and running at top speed, it may decide to boost the clock. Thus the two-expression case, which should already be just about as fast as the one-expression case, actually runs at a higher average clock frequency, enabling it to complete twice as much work in slightly less time.

Given your numbers, the difference is about 14%. My Windows machine idles at about 3.75 GHz, and if I push it a little by building a solution in Visual Studio, the clock climbs to about 4.25 GHz (eyeballing the Performance tab in Task Manager). That's a 13% difference in clock speed, so we're in the right ballpark.


I split the code up into C++ and assembly. I just wanted to test the loops, so I didn't return the sum(s). I'm running on Windows, where the calling convention passes the first integer args in rcx, rdx, r8, r9, so the loop count is in rcx. The code adds immediate values to 64-bit integers on the stack.

I'm getting similar times for both loops, with less than 1% variation; either one can come out up to 1% faster than the other.

There is an apparent dependency factor here: each add to memory has to wait for the prior add to the same location to complete, so two adds to different memory locations can proceed essentially in parallel.

Changing test2 to do 3 adds to memory ends up about 6% slower; with 4 adds to memory, about 7.5% slower.

My system is Intel 3770K 3.5 GHz CPU, Intel DP67BG motherboard, DDR3 1600 9-9-9-27 memory, Win 7 Pro 64 bit, Visual Studio 2015.
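
The C++ driver isn't shown here; a minimal one might look something like this (my sketch: the function names match the asm below, but the repeat count and timing code are assumptions):

#include <chrono>
#include <cstdint>
#include <cstdio>

extern "C" void test1(uint64_t count);   // count arrives in rcx, the first Windows x64 integer arg
extern "C" void test2(uint64_t count);

static double seconds_for(void (*fn)(uint64_t), uint64_t count) {
    auto t0 = std::chrono::steady_clock::now();
    fn(count);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    const uint64_t count = 1000000000;   // 1e9 iterations (hypothetical)
    std::printf("test1: %.3f s\n", seconds_for(test1, count));
    std::printf("test2: %.3f s\n", seconds_for(test2, count));
}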

        .code
        public  test1
        align   16
test1   proc
        sub     rsp,16
        mov     qword ptr[rsp+0],0
        mov     qword ptr[rsp+8],0
tst10:  add     qword ptr[rsp+8],17
        dec     rcx
        jnz     tst10
        add     rsp,16
        ret     
test1   endp

        public  test2
        align 16
test2   proc
        sub     rsp,16
        mov     qword ptr[rsp+0],0
        mov     qword ptr[rsp+8],0
tst20:  add     qword ptr[rsp+0],17
        add     qword ptr[rsp+8],-37
        dec     rcx
        jnz     tst20
        add     rsp,16
        ret     
test2   endp

        end

I also tested add-immediate-to-register: 1 or 2 registers came in within 1% of each other (either could be faster, but we'd expect them both to execute at 1 iteration / clock on Ivy Bridge, given its 3 integer ALU ports; What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand?).

3 registers took 1.5 times as long, somewhat worse than the ideal 1.333 cycles / iteration from 4 uops (including the loop counter's macro-fused dec/jnz) on 3 back-end ALU ports with perfect scheduling.

4 registers took 2.0 times as long, bottlenecked on the front-end: Is performance reduced when executing loops whose uop count is not a multiple of processor width?. Haswell and later microarchitectures would handle this better.

        .code
        public  test1
        align   16
test1   proc
        xor     rdx,rdx
        xor     r8,r8
        xor     r9,r9
        xor     r10,r10
        xor     r11,r11
tst10:  add     rdx,17
        dec     rcx
        jnz     tst10
        ret     
test1   endp

        public  test2
        align 16
test2   proc
        xor     rdx,rdx
        xor     r8,r8
        xor     r9,r9
        xor     r10,r10
        xor     r11,r11
tst20:  add     rdx,17
        add     r8,-37
        dec     rcx
        jnz     tst20
        ret     
test2   endp

        public  test3
        align 16
test3   proc
        xor     rdx,rdx
        xor     r8,r8
        xor     r9,r9
        xor     r10,r10
        xor     r11,r11
tst30:  add     rdx,17
        add     r8,-37
        add     r9,47
        dec     rcx
        jnz     tst30
        ret     
test3   endp

        public  test4
        align 16
test4   proc
        xor     rdx,rdx
        xor     r8,r8
        xor     r9,r9
        xor     r10,r10
        xor     r11,r11
tst40:  add     rdx,17
        add     r8,-37
        add     r9,47
        add     r10,-17
        dec     rcx
        jnz     tst40
        ret     
test4   endp

        end