Curious null-coalescing operator custom implicit conversion behaviour

This is most definitely a bug. Consider the following program:

using System;

public class Program {
    static A? X() {
        Console.WriteLine("X()");
        return new A();
    }
    static B? Y() {
        Console.WriteLine("Y()");
        return new B();
    }
    static C? Z() {
        Console.WriteLine("Z()");
        return new C();
    }

    public static void Main() {
        C? test = (X() ?? Y()) ?? Z();
    }
}
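
The definitions of A, B and C aren't shown above; here is a minimal reconstruction (the Value field and the exact logging format are assumptions, chosen to match the quoted output):

public struct A {
    public int Value;   // defaults to 0, hence the "(0)" in the log lines
}

public struct B {
    public int Value;
    public static implicit operator B(A a) {
        Console.WriteLine("A to B ({0})", a.Value);
        return new B { Value = a.Value };
    }
}

public struct C {
    public static implicit operator C(B b) {
        Console.WriteLine("B to C ({0})", b.Value);
        return new C();
    }
}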

This code will output:

X()
X()
A to B (0)
X()
X()
A to B (0)
B to C (0)

That made me think that the left operand of each ?? expression is evaluated twice. This code proved it:

B? test = (X() ?? Y());

outputs:

X()
X()
A to B (0)

This seems to happen only when the expression requires a conversion between two nullable types; I've tried various permutations with one of the operands being a string, and none of them reproduced this behaviour.
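
For example, one permutation along these lines (a sketch of the kind of variant I mean; S() is hypothetical and not part of the program above) involves no conversion between two nullable types, and its left operand is evaluated exactly once:

static string S() {
    Console.WriteLine("S()");
    return null;
}

// in Main:
string test2 = S() ?? "fallback";

outputs:

S()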


Thanks to everyone who contributed to analyzing this issue. It is clearly a compiler bug. It appears to happen only when there is a lifted conversion between two nullable types on the left-hand side of the coalescing operator.

I have not yet identified where precisely things go wrong, but at some point during the "nullable lowering" phase of compilation -- after initial analysis but before code generation -- we reduce the expression

result = Foo() ?? y;

(where Foo() plays the role of X() above, and y is an expression of the nullable target type) to the moral equivalent of:

A? temp = Foo();
result = temp.HasValue ?
    new int?(A.op_Implicit(Foo().Value)) :
    y;

Clearly that is incorrect; the correct lowering is

result = temp.HasValue ?
    new int?(A.op_Implicit(temp.Value)) :
    y;
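
To see the difference in running code, here is a self-contained sketch that hand-expands both lowerings (Foo() and the A-to-int conversion are hypotheticals standing in for the fragment above; the cast syntax replaces the A.op_Implicit calls, since an operator method cannot be invoked by name in C# source):

using System;

class LoweringDemo {
    struct A {
        public static implicit operator int(A a) { return 0; }
    }
    static A? Foo() {
        Console.WriteLine("Foo()");
        return new A();
    }

    static void Main() {
        int? y = 123;
        A? temp = Foo();    // "Foo()" printed once

        // Incorrect lowering: the consequent calls Foo() again instead of
        // reading the temporary, so "Foo()" is printed a second time.
        int? wrong = temp.HasValue ? new int?((int)Foo().Value) : y;

        // Correct lowering: the consequent reads the stored temporary and
        // Foo() is not called again.
        int? right = temp.HasValue ? new int?((int)temp.Value) : y;
    }
}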

My best guess, based on my analysis so far, is that the nullable optimizer is going off the rails here. That optimizer looks for situations where we know that a particular expression of nullable type cannot possibly be null. Consider the following naive analysis: we might first say that

result = Foo() ?? y;

is the same as

A? temp = Foo();
result = temp.HasValue ? 
    (int?) temp : 
    y;

and then we might say that

conversionResult = (int?) temp 

is the same as

A? temp2 = temp;
conversionResult = temp2.HasValue ? 
    new int?(A.op_Implicit(temp2.Value)) :
    (int?) null

But the optimizer can step in and say "whoa, wait a minute, we already checked that temp is not null; there's no need to check it for null a second time just because we are calling a lifted conversion operator". We'd then optimize it away to just

new int?(A.op_Implicit(temp2.Value))

My guess is that somewhere we are caching the fact that the optimized form of (int?)Foo() is new int?(A.op_Implicit(Foo().Value)), but that is not actually the optimized form we want; we want the optimized form of Foo()-replaced-with-a-temporary-and-then-converted.

Many bugs in the C# compiler are the result of bad caching decisions. A word to the wise: every time you cache a fact for later use, you are potentially creating an inconsistency should something relevant change. In this case, the relevant thing that changed after the initial analysis is that the call to Foo() should always be realized as a fetch of a temporary.
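
A toy illustration of that trap (purely illustrative; this has nothing to do with the compiler's actual internals): memoize a computed fact, let the state it depended on change, and the cache cheerfully keeps serving the stale answer.

using System;
using System.Collections.Generic;

class StaleCacheDemo {
    static int multiplier = 2;
    static readonly Dictionary<int, int> cache = new Dictionary<int, int>();

    // Caches n * multiplier -- a "fact" that silently depends on
    // mutable state elsewhere.
    static int Scaled(int n) {
        int result;
        if (!cache.TryGetValue(n, out result)) {
            result = n * multiplier;
            cache[n] = result;
        }
        return result;
    }

    static void Main() {
        Console.WriteLine(Scaled(10)); // 20
        multiplier = 3;                // something relevant changed...
        Console.WriteLine(Scaled(10)); // ...still 20: a stale cached fact
    }
}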

We did a lot of reorganization of the nullable rewriting pass in C# 3.0. The bug reproduces in C# 3.0 and 4.0 but not in C# 2.0, which means that the bug was probably my bad. Sorry!

I'll get a bug entered into the database and we'll see if we can get this fixed up for a future version of the language. Thanks again everyone for your analysis; it was very helpful!

UPDATE: I rewrote the nullable optimizer from scratch for Roslyn; it now does a better job and avoids these sorts of weird errors. For some thoughts on how the Roslyn optimizer works, see my series of articles, which begins here: https://ericlippert.com/2012/12/20/nullable-micro-optimizations-part-one/