int, short, byte performance in back-to-back for-loops

The majority of this time is probably spent writing to the console. Try doing something other than that in the loop...

Additionally:

  • Using DateTime.Now is a bad way of measuring time. Use System.Diagnostics.Stopwatch instead
  • Once you've got rid of the Console.WriteLine call, a loop of 127 iterations is going to be too short to measure. You need to run the loop lots of times to get a sensible measurement.

Here's my benchmark:

using System;
using System.Diagnostics;

public static class Test
{    
    const int Iterations = 100000;

    static void Main(string[] args)
    {
        Measure(ByteLoop);
        Measure(ShortLoop);
        Measure(IntLoop);
        Measure(BackToBack);
        Measure(DelegateOverhead);
    }

    static void Measure(Action action)
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            action();
        }
        sw.Stop();
        Console.WriteLine("{0}: {1}ms", action.Method.Name,
                          sw.ElapsedMilliseconds);
    }

    static void ByteLoop()
    {
        for (byte index = 0; index < 127; index++)
        {
            index.ToString();
        }
    }

    static void ShortLoop()
    {
        for (short index = 0; index < 127; index++)
        {
            index.ToString();
        }
    }

    static void IntLoop()
    {
        for (int index = 0; index < 127; index++)
        {
            index.ToString();
        }
    }

    static void BackToBack()
    {
        for (byte index = 0; index < 127; index++)
        {
            index.ToString();
        }
        for (short index = 0; index < 127; index++)
        {
            index.ToString();
        }
        for (int index = 0; index < 127; index++)
        {
            index.ToString();
        }
    }

    static void DelegateOverhead()
    {
        // Nothing. Let's see how much
        // overhead there is just for calling
        // this repeatedly...
    }
}

And the results:

ByteLoop: 6585ms
ShortLoop: 6342ms
IntLoop: 6404ms
BackToBack: 19757ms
DelegateOverhead: 1ms

(This is on a netbook - adjust the number of iterations until you get something sensible :)

That seems to show that it makes basically no significant difference which type you use.


First of all, it's not .NET that's optimized for int performance, it's the machine that's optimized because 32 bits is the native word size (unless you're on x64, in which case it's long or 64 bits).
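
Relatedly, the language itself assumes int for arithmetic. A minimal sketch of my own (not from the original post): C# evaluates arithmetic on byte and short operands as int, so you need a cast to narrow the result back down:

class PromotionSketch
{
    static void Main()
    {
        byte a = 10, b = 20;
        // byte c = a + b;       // compile error: byte + byte is evaluated as int
        byte c = (byte)(a + b);  // explicit narrowing cast required

        short s = 5;
        // s = s + 1;            // compile error for the same reason
        s += 1;                  // compound assignment inserts the cast for you

        System.Console.WriteLine("{0} {1}", c, s);
    }
}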

Second, you're writing to the console inside each loop - that's going to be far more expensive than incrementing and testing the loop counter, so you're not measuring anything realistic here.
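
To get a feel for the scale of that, here's a rough sketch of my own (not part of the original answer) that times console writes against plain increments; the exact numbers will vary from machine to machine, but the writes dominate by orders of magnitude:

using System;
using System.Diagnostics;

class ConsoleCostSketch
{
    static void Main()
    {
        const int N = 10000;

        // Time N writes to the console
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
        {
            Console.WriteLine(i);
        }
        sw.Stop();
        long writeMs = sw.ElapsedMilliseconds;

        // Time N plain increments
        int x = 0;
        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
        {
            x++;
        }
        sw.Stop();

        Console.WriteLine("Console writes: {0}ms", writeMs);
        Console.WriteLine("Increments: {0}ms (x = {1})", sw.ElapsedMilliseconds, x);
    }
}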

Third, a byte has a range of 0 to 255, so you can loop up to 255 times (if you try to loop all the way through 255 itself, the increment will overflow back to 0 and the loop will never end) - but you don't need to stop at 128.
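
To make the overflow pitfall concrete, a minimal sketch (my addition, not from the original answer):

// This loop never terminates: a byte can never reach 256, so the condition
// is always true; when b is 255, b++ wraps around to 0 (integer overflow
// is unchecked by default in C#).
for (byte b = 0; b <= 255; b++)
{
    // ...
}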

Fourth, you're not doing anywhere near enough iterations to profile. Iterating a tight loop 128 or even 255 times is meaningless. What you should be doing is putting the byte/short/int loop inside another loop that iterates a much larger number of times, say 10 million, and checking the results of that.

Finally, using DateTime.Now within calculations is going to result in some timing "noise" while profiling. It's recommended (and easier) to use the Stopwatch class instead.
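
For reference, here's a small sketch of my own (not from the answer) showing what Stopwatch gives you over DateTime.Now: a dedicated high-resolution timer rather than the system clock, which on Windows typically only updates every 10-15ms:

using System;
using System.Diagnostics;

class TimerResolutionSketch
{
    static void Main()
    {
        // Stopwatch reports whether a high-resolution timer is available
        // and how many ticks it counts per second.
        Console.WriteLine("IsHighResolution: {0}", Stopwatch.IsHighResolution);
        Console.WriteLine("Frequency: {0} ticks/second", Stopwatch.Frequency);

        Stopwatch sw = Stopwatch.StartNew();
        // ... code under test goes here ...
        sw.Stop();
        Console.WriteLine("Elapsed: {0}", sw.Elapsed);
    }
}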

Bottom line, this needs many changes before it can be a valid perf test.


Here's what I'd consider to be a more accurate test program:

using System;
using System.Diagnostics;

class Program
{
    const int TestIterations = 5000000;

    static void Main(string[] args)
    {
        RunTest("Byte Loop", TestByteLoop, TestIterations);
        RunTest("Short Loop", TestShortLoop, TestIterations);
        RunTest("Int Loop", TestIntLoop, TestIterations);
        Console.ReadLine();
    }

    static void RunTest(string testName, Action action, int iterations)
    {
        Stopwatch sw = new Stopwatch();
        sw.Start();
        for (int i = 0; i < iterations; i++)
        {
            action();
        }
        sw.Stop();
        Console.WriteLine("{0}: Elapsed Time = {1}", testName, sw.Elapsed);
    }

    static void TestByteLoop()
    {
        int x = 0;
        for (byte b = 0; b < 255; b++)
            ++x;
    }

    static void TestShortLoop()
    {
        int x = 0;
        for (short s = 0; s < 255; s++)
            ++x;
    }

    static void TestIntLoop()
    {
        int x = 0;
        for (int i = 0; i < 255; i++)
            ++x;
    }
}

This runs each loop inside a much larger loop (5 million iterations) and performs a very simple operation inside the loop (incrementing a variable). The results for me were:

Byte Loop: Elapsed Time = 00:00:03.8949910
Short Loop: Elapsed Time = 00:00:03.9098782
Int Loop: Elapsed Time = 00:00:03.2986990

So, no appreciable difference.

Also, make sure you profile in release mode; a lot of people forget and test in debug mode, which will give significantly less accurate results.
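
One way to catch that mistake automatically - a small sketch of my own (the class and method names are just illustrative) that warns at startup if you're in a Debug build or running under a debugger:

using System;
using System.Diagnostics;

static class BenchmarkGuard
{
    // Call this at the start of Main, before running any measurements.
    public static void WarnIfNotOptimized()
    {
#if DEBUG
        Console.WriteLine("Warning: Debug build - results will be misleading.");
#endif
        if (Debugger.IsAttached)
        {
            Console.WriteLine("Warning: debugger attached - JIT optimizations may be suppressed.");
        }
    }
}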