Is it a sensible optimization to check whether a variable holds a specific value before writing that value?

Are there any use cases that would benefit from the if statement?

It is when assignment is significantly more costly than an inequality comparison returning false.

An example would be a large* std::set, which may require many heap allocations to duplicate.

*for some definition of "large"

Will the compiler always optimize-out the if statement?

That's a fairly safe "no", as are most questions that contain both "optimize" and "always".

The C++ standard makes rare mention of optimizations, but never demands one.

What if var is a volatile variable?

Then it may perform the if, although volatile doesn't achieve what most people assume.


Yes, there are definitely cases where this is sensible, and as you suggest, volatile variables are one of those cases - even for single threaded access!

Volatile writes are expensive, both from a hardware and a compiler/JIT perspective. At the hardware level, these writes might be 10x-100x more expensive than a normal write, since write buffers have to be flushed (at least on x86; the details will vary by platform). At the compiler/JIT level, volatile writes inhibit many common optimizations.
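As a minimal sketch of the pattern under discussion (the class and field names here are my own, purely for illustration), checking a single volatile field before writing it looks like this:

    // Illustrative only: skip the (expensive) volatile write when the value
    // is already current. The volatile read is comparatively cheap,
    // especially on x86.
    class Cell {
        private volatile int value;

        void set(int newValue) {
            if (value != newValue) {   // volatile read: relatively cheap
                value = newValue;      // volatile write: the expensive part
            }
        }

        int get() {
            return value;
        }
    }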

Speculation, however, can only get you so far - the proof is always in the benchmarking. Here's a microbenchmark that tries your two strategies. The basic idea is to copy values from one array to another (pretty much System.arraycopy), with two variants - one which copies unconditionally, and one that checks to see if the values are different first.

Here are the copy routines for the simple, non-volatile case (full source here):

        // no check
        for (int i=0; i < ARRAY_LENGTH; i++) {
            target[i] = source[i];
        }

        // check, then set if unequal
        for (int i=0; i < ARRAY_LENGTH; i++) {
            int x = source[i];
            if (target[i] != x) {
                target[i] = x;
            }
        }

The results of using the above code to copy an array of length 1000, with Caliper as my microbenchmark harness, are:

        benchmark arrayType    ns linear runtime
      CopyNoCheck      SAME   470 =
      CopyNoCheck DIFFERENT   460 =
        CopyCheck      SAME  1378 ===
        CopyCheck DIFFERENT  1856 ====

This also includes about 150ns of overhead per run to reset the target array each time. Skipping the check is much faster - about 0.47 ns per element (or around 0.32 ns per element after we remove the setup overhead, so pretty much exactly 1 cycle on my box).

Checking is about 3x slower when the arrays are the same, and 4x slower when they are different. I'm surprised at how bad the check is, given that it is perfectly predicted. I suspect that the culprit is largely the JIT - with a much more complex loop body, it may be unrolled fewer times, and other optimizations may not apply.

Let's switch to the volatile case. Here, I've used AtomicIntegerArray as my arrays of volatile elements, since Java doesn't have any native array types with volatile elements. Internally, this class is just writing straight through to the array using sun.misc.Unsafe, which allows volatile writes. The assembly generated is substantially similar to normal array access, other than the volatile aspect (and possibly range check elimination, which may not be effective in the AIA case).

Here's the code:

        // no check
        for (int i=0; i < ARRAY_LENGTH; i++) {
            target.set(i, source[i]);
        }

        // check, then set if unequal
        for (int i=0; i < ARRAY_LENGTH; i++) {
            int x = source[i];
            if (target.get(i) != x) {
                target.set(i, x);
            }
        }

And here are the results:

    arrayType     benchmark    us linear runtime
         SAME   CopyCheckAI  2.85 =======
         SAME CopyNoCheckAI 10.21 ===========================
    DIFFERENT   CopyCheckAI 11.33 ==============================
    DIFFERENT CopyNoCheckAI 11.19 =============================

The tables have turned. Checking first is ~3.5x faster than the usual method. Everything is much slower overall - in the check case, we are paying ~3 ns per loop, and in the worst cases ~10 ns (the times above are in us, and cover the copy of the whole 1000-element array). Volatile writes really are more expensive. There is about 1 ns of overhead included in the DIFFERENT case to reset the array on each iteration (which is why even the simple copy is slightly slower for DIFFERENT). I suspect a lot of the overhead in the "check" case is actually bounds checking.

This is all single threaded. If you actually had cross-core contention over a volatile, the results would be much, much worse for the simple method, and just about as good as the above for the check case (the cache line would just sit in the shared state - no coherency traffic needed).
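To make that concrete, here is a minimal sketch of the kind of shared flag where checking first avoids that coherency traffic; the class and method names are my own illustration, not something from the benchmark above:

    // Hypothetical example: many threads may call markDirty() repeatedly.
    // Once the flag is already true, the volatile read hits a cache line
    // that can stay in the shared state, and no write (hence no coherency
    // traffic) is needed.
    class DirtyFlag {
        private volatile boolean dirty;

        void markDirty() {
            if (!dirty) {       // cheap volatile read
                dirty = true;   // volatile write only on the first transition
            }
        }

        boolean isDirty() {
            return dirty;
        }
    }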

I've also only tested the extremes of "every element equal" vs "every element different". This means the branch in the "check" algorithm is always perfectly predicted. If you had a mix of equal and different, you wouldn't get just a weighted combination of the times for the SAME and DIFFERENT cases - you would do worse, due to misprediction (both at the hardware level, and perhaps also at the JIT level, which can no longer optimize for the always-taken branch).

So whether it is sensible, even for volatile, depends on the specific context - the mix of equal and unequal values, the surrounding code and so on. I'd usually not do it for volatile alone in a single-threaded scenario, unless I suspected a large number of sets were redundant. In heavily multi-threaded structures, however, reading and then doing a volatile write (or other expensive operation, like a CAS) is a best practice, and you'll see it in quality code such as the java.util.concurrent structures.
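As a rough sketch of that read-before-write idiom (my own illustrative code, not something lifted from java.util.concurrent), a "track the maximum" helper might look like:

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative check-then-act: the volatile read is cheap, and the
    // expensive CAS is only attempted when the stored value actually needs
    // to change.
    class MaxTracker {
        private final AtomicInteger max = new AtomicInteger(Integer.MIN_VALUE);

        void offer(int candidate) {
            int current = max.get();      // cheap read first
            while (candidate > current) { // no CAS at all if no update is needed
                if (max.compareAndSet(current, candidate)) {
                    return;               // we won the race
                }
                current = max.get();      // lost the race; re-check the new value
            }
        }
    }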


In general, the answer is no. If you have a simple data type, the compiler is able to perform any necessary optimizations itself. And in the case of types with a heavy operator=, it is the responsibility of operator= to choose the optimal way to assign the new value.