What are the differences between the different anti-aliasing / multisampling settings?

This is a great question because other than "is AA on or off?" I hadn't considered the performance implications of all the various anti-aliasing modes.

There's a good basic description of the three "main" AA modes at So Many AA Techniques, So Little Time, but pretty much all AA these days is MSAA or some tweaky optimized version of it:

  1. Super-Sampled Anti-Aliasing (SSAA). The oldest trick in the book - I list it as universal because you can use it pretty much anywhere: forward or deferred rendering, it also anti-aliases alpha cutouts, and it gives you better texture sampling at high anisotropy. Basically, you render the image at a higher resolution and down-sample with a filter when done (see the sketch after this list). Sharp edges become anti-aliased as they are down-sized. Of course, there's a reason why people don't use SSAA: it costs a fortune. Whatever your fill rate bill, it's 4x for even minimal SSAA.

  2. Multi-Sampled Anti-Aliasing (MSAA). This is what you typically have in hardware on a modern graphics card. The graphics card renders to a surface that is larger than the final image, but when shading each "cluster" of samples (which will end up as a single pixel on the final screen), the pixel shader runs only once (see the sketch below). We save a ton of fill rate, but we still burn memory bandwidth. Because the shader runs at 1x, this technique does not anti-alias anything coming out of the shader, so alpha cutouts stay jagged. This is the most common way to run a forward-rendering game. MSAA does not work for a deferred renderer because lighting decisions are made after the MSAA is "resolved" (down-sized) to its final image size.

  3. Coverage Sample Anti-Aliasing (CSAA). A further optimization on MSAA from NVIDIA [ed: ATI has an equivalent]. Besides running the shader at 1x and the framebuffer at 4x, the GPU's rasterizer tests coverage at 16x. Since edge coverage is known at a much finer granularity, blending along edges produces more intermediate shades, at close to plain 4x MSAA cost.

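Here's a toy Python sketch contrasting the first two modes. Everything in it is made up for illustration (the inside() half-plane test standing in for triangle rasterization, the expensive_shader() stand-in, the 2x2 ordered-grid sample offsets); the point is only where the shader cost lands, not how real hardware works:

    # Toy rasterizer: shade one hard edge with no AA, 4x SSAA, and 4x MSAA.
    import numpy as np

    W = H = 8                      # tiny framebuffer so the output stays readable
    BG, FG = 0.0, 1.0              # background / foreground "colors"

    def inside(x, y):              # half-plane test standing in for a triangle edge
        return y > 0.7 * x + 1.0

    def expensive_shader(x, y):    # stand-in for the pixel shader
        return FG

    # 2x2 ordered-grid sample positions inside each pixel
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

    no_aa, ssaa, msaa = (np.zeros((H, W)) for _ in range(3))
    for py in range(H):
        for px in range(W):
            cx, cy = px + 0.5, py + 0.5
            # No AA: one coverage test, one shader run, hard in/out result.
            no_aa[py, px] = expensive_shader(cx, cy) if inside(cx, cy) else BG
            # SSAA 4x: the shader runs at EVERY sample -- 4x the shading cost.
            ssaa[py, px] = np.mean([expensive_shader(px + ox, py + oy)
                                    if inside(px + ox, py + oy) else BG
                                    for ox, oy in offsets])
            # MSAA 4x: coverage is tested at every sample, but the shader runs
            # ONCE per pixel; the resolve blends by the coverage fraction.
            cov = np.mean([inside(px + ox, py + oy) for ox, oy in offsets])
            msaa[py, px] = cov * expensive_shader(cx, cy) + (1 - cov) * BG

    print(no_aa, ssaa, msaa, sep="\n\n")

Edge pixels come out fractional under both SSAA and MSAA (that's the anti-aliasing), but SSAA paid four shader invocations per pixel where MSAA paid one; and because MSAA shades only once per pixel, anything the shader itself produces (like alpha cutouts) stays jagged.
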
This AnandTech article has a good comparison of AA modes on relatively recent video cards, showing the performance cost of each mode for ATI and NVIDIA (average frames per second at 1920x1200, higher is better; the AMSAA columns are the adaptive/coverage-sample variants):

                   ---MSAA---    --AMSAA---    ---SSAA---
             none  2x  4x  8x    2x  4x  8x    2x  4x  8x
             ----  ----------    ----------    ----------
ATI 5870       53  45  43  34    44  41  37    38  28  16
NVIDIA GTX 280 35  30  27  22    29  28  25    --  --  --

So basically, you can expect a performance loss of:

  • no AA → 2x AA

    ~15% slower

  • no AA → 4x AA

    ~20% slower

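Those bullet numbers fall straight out of the table; here's the arithmetic, with the fps values transcribed into a throwaway Python dict:

    # Slowdown relative to no AA, computed from the fps table above.
    fps = {
        "ATI 5870":       {"none": 53, "2x": 45, "4x": 43, "8x": 34},
        "NVIDIA GTX 280": {"none": 35, "2x": 30, "4x": 27, "8x": 22},
    }
    for card, f in fps.items():
        for mode in ("2x", "4x", "8x"):
            loss = 100.0 * (f["none"] - f[mode]) / f["none"]
            print(f"{card}: none -> {mode} MSAA = {loss:.0f}% slower")
    # ATI 5870:       2x = 15%, 4x = 19%, 8x = 36%
    # NVIDIA GTX 280: 2x = 14%, 4x = 23%, 8x = 37%
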
There is indeed a visible quality difference between zero, 2x, 4x, and 8x anti-aliasing. And the tweaked MSAA variants, aka "adaptive" or "coverage sample", offer better quality at more or less the same performance level. Additional samples per pixel = higher quality anti-aliasing.

Graphic comparing AA and MSAA sampling of a pixel

Comparing the different modes on each card, where "mode" is written as (color samples) + (extra coverage samples) used to generate each pixel:

Mode   NVIDIA   AMD
--------------------
2+0    2x       2x
2+2    N/A      2xEQ
4+0    4x       4x
4+4    8x       4xEQ
4+12   16x      N/A
8+0    8xQ      8x
8+8    16xQ     8xEQ
8+24   32x      N/A

In my opinion, beyond 8x AA, you'd have to have the eyes of an eagle on crack to see the difference. There is definitely some advantage to having "cheap" 2x and 4x AA modes that can reasonably approximate 8x without the performance hit, though. That's the sweet spot for performance and a visual quality increase you'd notice.


If you dig through this article you will probably be able to gather most of the information you seek, but I'll try to summarize the relevant bits.

First, you should understand that MSAA is a type of supersampling anti-aliasing (SSAA). SSAA, also known as full-scene anti-aliasing (FSAA), removes "jags" from an image by rendering the image at a higher resolution:

Full-scene anti-aliasing by supersampling usually means that each full frame is rendered at double (2x) or quadruple (4x) the display resolution, and then down-sampled to match the display resolution. So a 2x FSAA would render 4 supersampled pixels for each single pixel of each frame. While rendering at larger resolutions will produce better results, more processor power is needed which can degrade performance and frame rate.

MSAA is a more efficient form of FSAA, but the multiplier has roughly the same meaning: as you probably know, a higher multiplier gives better results but demands more processing power.
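
To put numbers on that multiplier at the 1920x1200 benchmark resolution above (remembering that, per the quote, "2x" doubles the resolution in each dimension):

    # Pixels shaded per frame under supersampling at 1920x1200.
    w, h = 1920, 1200
    for m in (1, 2, 4):
        print(f"{m}x supersampling: {w * m * h * m:,} pixels shaded per frame")
    # 1x:  2,304,000
    # 2x:  9,216,000  (4 shaded pixels per display pixel)
    # 4x: 36,864,000  (16 shaded pixels per display pixel)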

CSAA is an even more efficient refinement of MSAA, which uses some advanced 3D magic (just read the paper if you really must know) to deliver better results:

In summary, CSAA produces antialiased images that rival the quality of 8x or 16x MSAA, while introducing only a minimal performance hit over standard (typically 4x) MSAA.

In essence, if you equate your multipliers, MSAA will produce better results than CSAA (though it is implied that the results will not be significantly better), but will demand significantly more processing power.

QCSAA ("quality" CSAA) is simply CSAA with twice as many color/depth samples for the same total sample count (compare 16x = 4+12 against 16xQ = 8+8 in the mode table above), so QCSAA looks better than plain CSAA at the same multiplier.

Wikipedia's Supersampling article actually provides a great image showing why more samples mean better accuracy:

Multiple samples
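
To see the same point numerically, here is a tiny Python estimate of how much of one pixel a given edge covers; the edge equation and the ordered-grid pattern are arbitrary choices for illustration:

    # More samples per pixel -> a better estimate of edge coverage.
    import numpy as np

    def coverage_estimate(n):
        """Fraction of a unit pixel under the edge y < 0.5 + 0.3*x,
        estimated with an n x n grid of samples (true value: 0.65)."""
        pts = (np.arange(n) + 0.5) / n          # sample centers in [0, 1]
        xs, ys = np.meshgrid(pts, pts)
        return float(np.mean(ys < 0.5 + 0.3 * xs))

    for n in (1, 2, 4, 8):
        print(f"{n * n:2d} samples/pixel -> coverage {coverage_estimate(n):.3f}")
    #  1 sample  -> 1.000 (hard in/out: the jaggy case)
    #  4 samples -> 0.500; 16 -> 0.625; 64 -> 0.641 (true: 0.65)

One sample forces a hard in-or-out answer (the jaggy); more samples converge on the true coverage fraction, which becomes the blended edge color.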

All this being said, the order in which DiRT2 lists its anti-aliasing options looks perplexing, to say the least. Since I doubt you can personally tell the difference once the multiplier hits 8x and upwards, I'd stick to CSAA/QCSAA for the performance gain.

Finally, here is a nice comparison shot of the various techniques on a specific, simple, image (from the article in the first link):

Anti-aliasing results


NVIDIA has created another algorithm, FXAA (Fast Approximate Anti-Aliasing). Unlike MSAA, CSAA, and their variations, it works purely at the pixel level, never touching geometry: it finds jagged edges in the finished image and smooths them. It is faster than the rest. Like FSAA, it has no problems with alpha cutouts, shader effects, etc. However, the results are more blurry.

I see two uses for FXAA: first, if you want AA, but the penalty of MSAA/CSAA is too high for your hardware; second, if a game heavily relies on alpha and shaders and your hardware is not godly enough for FSAA.
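
For a feel of how a purely image-space pass can do this, here is a heavily simplified Python sketch of the idea. It is not NVIDIA's actual FXAA shader (real FXAA estimates the local edge direction and samples along it); this toy just detects luminance contrast and blends toward a box blur, and the function name and threshold are made up:

    # FXAA-flavored post-process: operate on the finished image only.
    import numpy as np

    def fxaa_like(img, threshold=0.1):
        """img: float RGB array of shape (H, W, 3), values in [0, 1]."""
        # Luminance for edge detection (Rec. 601 weights).
        luma = img @ np.array([0.299, 0.587, 0.114])

        # Local contrast: luminance spread over the pixel + 4-neighborhood.
        p = np.pad(luma, 1, mode="edge")
        neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                          p[1:-1, :-2], p[1:-1, 2:], p[1:-1, 1:-1]])
        contrast = neigh.max(axis=0) - neigh.min(axis=0)

        # 3x3 box blur standing in for FXAA's directional filtering.
        pc = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
        h, w = luma.shape
        blurred = sum(pc[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)) / 9.0

        # Blend toward the blur only where contrast says "edge here";
        # geometry is never touched, so alpha cutouts get smoothed too.
        edge = (contrast > threshold)[..., None]
        return np.where(edge, blurred, img)

    # Usage: smoothed = fxaa_like(rendered_frame)  # frame = final image

Since the filter only ever sees final colors, it also softens legitimate texture detail near high-contrast areas, which is where the blurriness comes from.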

Aliased, 4xMSAA, FXAA:

FXAA Comparison