C#: Why can equal decimals produce unequal hash values?

To start with, C# isn't doing anything wrong at all. This is a framework bug.

It does indeed look like a bug, though: whatever normalization is involved in comparing for equality ought to be used in the same way for hash code computation. I've checked and can reproduce it (using .NET 4), including via the Equals(decimal) and Equals(object) methods as well as the == operator.

It definitely looks like it's the d0 value which is the problem, as adding trailing 0s to d1 doesn't change the results (until it has as many as d0, of course). I suspect some corner case is being tripped by the exact bit representation there.

I'm surprised it isn't (and as you say, it works most of the time), but you should report the bug on Connect.
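
For reference, a minimal repro of the kind of mismatch being described; the values below are chosen to sit at maximum precision, in the spirit of the question's d0 and d1:

using System;

class HashRepro
{
    static void Main()
    {
        decimal d0 = 295.5000000000000000000000000m; // stored at high scale
        decimal d1 = 295.5m;                         // stored compactly

        Console.WriteLine(d0 == d1);      // True
        Console.WriteLine(d0.Equals(d1)); // True
        // On affected runtimes (.NET 4 at the time) this can print False:
        Console.WriteLine(d0.GetHashCode() == d1.GetHashCode());
    }
}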


Another bug (?) that results in different byte representations for the same decimal on different compilers: try compiling the following code on VS 2005 and then on VS 2010, or look at my article on Code Project.

using System;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        decimal one = 1m;

        PrintBytes(one);
        PrintBytes(one + 0.0m); // compare this on different compilers!
        PrintBytes(1m + 0.0m);

        Console.ReadKey();
    }

    public static void PrintBytes(decimal d)
    {
        // BinaryWriter.Write(decimal) writes the 16-byte internal
        // representation (lo, mid, hi, flags), so two equal decimals
        // stored at different scales produce different dumps.
        MemoryStream memoryStream = new MemoryStream();
        BinaryWriter binaryWriter = new BinaryWriter(memoryStream);

        binaryWriter.Write(d);

        byte[] decimalBytes = memoryStream.ToArray();

        Console.WriteLine(BitConverter.ToString(decimalBytes) + " (" + d + ")");
    }
}
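
The difference between those dumps comes down to the scale factor stored in the decimal's flags word, which you can inspect directly with decimal.GetBits (it returns the four 32-bit parts: lo, mid, hi, flags, with bits 16-23 of flags holding the power-of-ten scale). A small sketch that could sit next to PrintBytes:

public static void PrintScale(decimal d)
{
    int[] bits = decimal.GetBits(d);    // [lo, mid, hi, flags]
    int scale = (bits[3] >> 16) & 0xFF; // bits 16-23 of flags: scale 0..28
    Console.WriteLine("{0} has scale {1}", d, scale);
}

PrintScale(1m) reports scale 0 while PrintScale(1.0m) reports scale 1: the same value, stored once as 1 with scale 0 and once as 10 with scale 1, which is exactly what the differing byte dumps show.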

Some people use the following normalization code, d = d + 0.0000m, which does not work properly on VS 2010. Your normalization code (d = d / 1.000000000000000000000000000000000m) looks good; I use the same one to get the same byte array for equal decimals.
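
For convenience, here is that division trick wrapped in a helper; just a sketch of the same normalization, not an official API:

public static decimal Normalize(decimal d)
{
    // Dividing by a 1 written at maximum scale makes the runtime rescale
    // the quotient, so equal values end up with one canonical representation.
    return d / 1.000000000000000000000000000000000m;
}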


Ran into this bug too ... :-(

Tests (see below) indicate that this depends on the maximum precision available for the value. The wrong hash codes only occur near the maximum precision for the given value. As the tests show, the error seems to depend on the digits to the left of the decimal point. Sometimes only the hash code for maxDecimalDigits - 1 is wrong, sometimes the one for maxDecimalDigits is wrong.

var data = new decimal[] {
//    123456789012345678901234567890
    1.0m,
    1.00m,
    1.000m,
    1.0000m,
    1.00000m,
    1.000000m,
    1.0000000m,
    1.00000000m,
    1.000000000m,
    1.0000000000m,
    1.00000000000m,
    1.000000000000m,
    1.0000000000000m,
    1.00000000000000m,
    1.000000000000000m,
    1.0000000000000000m,
    1.00000000000000000m,
    1.000000000000000000m,
    1.0000000000000000000m,
    1.00000000000000000000m,
    1.000000000000000000000m,
    1.0000000000000000000000m,
    1.00000000000000000000000m,
    1.000000000000000000000000m,
    1.0000000000000000000000000m,
    1.00000000000000000000000000m,
    1.000000000000000000000000000m,
    1.0000000000000000000000000000m,
    1.00000000000000000000000000000m,
    1.000000000000000000000000000000m,
    1.0000000000000000000000000000000m,
    1.00000000000000000000000000000000m,
    1.000000000000000000000000000000000m,
    1.0000000000000000000000000000000000m,
};

// Multiply every scaled representation of 1 by i and compare the resulting
// hash code against that of i * 1.0m. Equality always holds; near maximum
// precision the hash codes sometimes don't match.
for (int i = 0; i < 1000; ++i)
{
    var d0 = i * data[0];
    var d0Hash = d0.GetHashCode();
    foreach (var d in data)
    {
        var value = i * d;
        var hash = value.GetHashCode();
        Console.WriteLine("{0};{1};{2};{3};{4};{5}", d0, value, (d0 == value), d0Hash, hash, d0Hash == hash);
    }
}
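
If you need well-behaved decimal keys before this is fixed, one workaround is to route hashing through the same normalization; a sketch (NormalizedDecimalComparer is my name for it, not anything from the framework):

using System;
using System.Collections.Generic;

sealed class NormalizedDecimalComparer : IEqualityComparer<decimal>
{
    public bool Equals(decimal x, decimal y)
    {
        return x == y; // numeric equality already ignores scale
    }

    public int GetHashCode(decimal d)
    {
        // Same division trick as above: hash one canonical representation.
        return (d / 1.000000000000000000000000000000000m).GetHashCode();
    }
}

Pass an instance to Dictionary<decimal, TValue> or HashSet<decimal> and equal keys will hash identically regardless of how they were produced.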