Compiler Value Type Resolution and Hardcoded "0" Integer Values

It's because a constant zero is implicitly convertible to any enum type:

enum SqlDbType
{
    Zero = 0,
    One = 1
}

class TestClass
{
    public TestClass(string s, object o)
    { System.Console.WriteLine("{0} => TestClass(object)", s); } 

    public TestClass(string s, SqlDbType e)
    { System.Console.WriteLine("{0} => TestClass(Enum SqlDbType)", s); }
}

// This is perfectly valid:
SqlDbType valid = 0;
// Whilst this will not compile (1 is not a constant zero):
SqlDbType ohNoYouDont = 1;

var a1 = new TestClass("0", 0);
// 0 => TestClass(Enum SqlDbType)
var a2 = new TestClass("1", 1); 
// 1 => TestClass(object)
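// 1 has no implicit conversion to SqlDbType, so only the object
// constructor is applicable and it wins by default.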

(Adapted from Visual C# 2008 Breaking Changes - change 12)

When the compiler performs overload resolution, both the SqlDbType and the object constructors are applicable function members for the argument 0, because:

an implicit conversion (Section 6.1) exists from the type of the argument to the type of the corresponding parameter

(Both SqlDbType x = 0 and object x = 0 are valid)

The conversion to the SqlDbType parameter is better than the conversion to the object parameter because of the better-conversion rules. In what follows, S is the type of the argument expression, T1 is object, T2 is SqlDbType, and C1 and C2 are the conversions from S to T1 and T2 respectively:

  • If T1 and T2 are the same type, neither conversion is better.
    • object and SqlDbType are not the same type
  • If S is T1, C1 is the better conversion.
    • 0 is not an object
  • If S is T2, C2 is the better conversion.
    • 0 is not a SqlDbType
  • If an implicit conversion from T1 to T2 exists, and no implicit conversion from T2 to T1 exists, C1 is the better conversion.
    • No implicit conversion from object to SqlDbType exists
  • If an implicit conversion from T2 to T1 exists, and no implicit conversion from T1 to T2 exists, C2 is the better conversion.
    • An implicit conversion from SqlDbType to object exists, so the SqlDbType is the better conversion
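
If you actually want the object overload for a constant 0 at a call site, an explicit cast takes the enum conversion out of play. A minimal sketch, reusing TestClass from above:

// Casting to object means overload resolution sees an argument of
// static type object, so the object constructor is the only match.
var a3 = new TestClass("0 cast to object", (object)0);
// 0 cast to object => TestClass(object)

// Casting to the enum states the other intent explicitly.
var a4 = new TestClass("1 cast to enum", (SqlDbType)1);
// 1 cast to enum => TestClass(Enum SqlDbType)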

Note that what exactly constitutes a constant 0 has (quite subtly) changed in Visual C# 2008 (Microsoft's implementation of the C# spec) as @Eric explains in his answer.


RichardTowers' answer is excellent, but I thought I'd add a bit to it.

As the other answers have pointed out, the reason for the behaviour is (1) zero is convertible to any enum, and obviously to object, and (2) any enum type is more specific than object, so the method that takes an enum is therefore chosen by overload resolution as the better method. Point two is, I hope, self-explanatory, but what explains point one?

First off, there is an unfortunate deviation from the specification here. The specification says that any literal zero, that is, the number 0 actually literally appearing in the source code, may be implicitly converted to any enum type. The compiler actually implements that any constant zero may be thusly converted. That is because of a bug whereby the compiler would sometimes allow constant zeroes and sometimes not, in a strange and inconsistent manner. The easiest way to solve the problem was to consistently allow constant zeroes. You can read about this in detail here:

https://web.archive.org/web/20110308161103/http://blogs.msdn.com/b/ericlippert/archive/2006/03/28/the-root-of-all-evil-part-one.aspx
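
To make the deviation concrete, here's a small sketch using the SqlDbType enum from the first answer (the exact diagnostic text varies by compiler version):

const int ConstantZero = 0;
int variableZero = 0;

SqlDbType a = 0;             // Fine per the spec: a literal zero.
SqlDbType b = ConstantZero;  // Accepted by the compiler, even though the
                             // spec only promises this for a literal zero.
// SqlDbType c = variableZero; // Does not compile: not a constant expression.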

Second, the reason for allowing zeros to convert to any enum is to ensure that it is always possible to zero out a "flags" enum. Good programming practice is that every "flags" enum have a value "None" which is equal to zero, but that is a guideline, not a requirement. The designers of C# 1.0 thought that it looked strange that you might have to say

for (MyFlags f = (MyFlags)0; ...

to initialize a local. My personal opinion is that this decision has caused more trouble than it was worth, both in terms of the grief over the abovementioned bug and in terms of the oddities it introduces into overload resolution that you have discovered.
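
For concreteness, here is a hypothetical flags enum (MyFlags and its members are invented for illustration) showing what the rule buys:

[System.Flags]
enum MyFlags
{
    None = 0,   // the recommended explicit zero member
    A = 1,
    B = 2
}

MyFlags f = 0;             // Legal only because of the zero-conversion rule.
MyFlags g = MyFlags.None;  // The clearer spelling when a None member exists.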

Finally, the designers of the constructors could have realized that this would be a problem in the first place, and made the signatures of the overloads such that the developer could clearly decide which ctor to call without having to insert casts. Unfortunately this is a pretty obscure issue, and so a lot of designers are unaware of it. Hopefully anyone reading this will not make the same mistake: do not create an ambiguity between object and any enum if you intend the two overloads to have different semantics.
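
As a hedged sketch of one way to avoid the trap (the class and method names below are invented for illustration), distinctly named members leave overload resolution nothing to guess at:

class SaferTestClass
{
    // Separate names instead of overloaded constructors, so the caller
    // states which meaning is intended regardless of the argument's value.
    public static void WriteAsObject(string s, object o)
    { System.Console.WriteLine("{0} => WriteAsObject", s); }

    public static void WriteAsSqlDbType(string s, SqlDbType e)
    { System.Console.WriteLine("{0} => WriteAsSqlDbType", s); }
}

// The call site is now unambiguous, whatever the literal argument is:
SaferTestClass.WriteAsSqlDbType("0", 0);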


This is apparently a known behavior and affects any set of function overloads that contains both an enumeration parameter and an object parameter. I don't understand it all, but Eric Lippert summed it up quite nicely on his blog.