Decreasing cache misses through good design

For data-bound operations:

  1. use arrays and vectors rather than lists, maps, and sets (see the sketch below)

  2. process by rows rather than by columns

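A rough sketch of why contiguous containers win here: summing a vector walks a linear stream of memory, while summing a linked list chases a pointer per element. The functions are only illustrative:

    #include <list>
    #include <numeric>
    #include <vector>

    // Contiguous storage: each 64-byte cache line brings in several elements at
    // once, and the hardware prefetcher can stay ahead of the linear scan.
    double sum_vector(const std::vector<double>& v) {
        return std::accumulate(v.begin(), v.end(), 0.0);
    }

    // Node-based storage: one pointer chase per element, with nodes usually
    // scattered across the heap, so many of the loads miss the cache.
    double sum_list(const std::list<double>& l) {
        return std::accumulate(l.begin(), l.end(), 0.0);
    }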

Here are some things that I like to consider when working on this kind of code.

  • Consider whether you want "structures of arrays" or "arrays of structures". Which one is better depends on how each part of the data is accessed (both layouts are sketched after this list).
  • Try to keep structure sizes to multiples of 32 bytes so they pack cache lines (typically 64 bytes) evenly.
  • Partition your data into hot and cold elements. If you have an array of objects of class o, and you use o.x, o.y, o.z together frequently but only occasionally need to access o.i, o.j, o.k, then consider keeping x, y, and z together and moving the i, j, and k parts to a parallel auxiliary data structure (this split appears in the layout sketch after this list).
  • If you have multi-dimensional arrays of data, then with the usual row-order layouts access will be very fast when scanning along the preferred dimension and very slow along the others. Mapping the data along a space-filling curve instead helps to balance access speeds when traversing in any dimension; a Morton-order sketch follows this list. (Blocking techniques are similar -- they're just Z-order with a larger radix.)
  • If you must incur a cache miss, then try to do as much as possible with that data in order to amortize the cost.
  • Are you doing anything multi-threaded? Watch out for slowdowns from cache coherence protocols. Pad flags and small counters so that they end up on separate cache lines (see the padding sketch after this list).
  • SSE on Intel provides prefetch intrinsics if you know far enough ahead of time what you'll be accessing (see the prefetch sketch after this list).
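
A minimal sketch of the layout choices from the first and third bullets. The x/y/z and i/j/k names come from the bullet above; the particle naming and float types are assumptions for the example:

    #include <vector>

    // "Array of structures": good when x, y and z of one element are used
    // together, because a single cache line holds all the fields of that element.
    struct ParticleAoS {
        float x, y, z;
    };
    std::vector<ParticleAoS> particles_aos;

    // "Structure of arrays": good when a pass touches only one field across
    // many elements, because each array is a dense, prefetch-friendly stream.
    struct ParticlesSoA {
        std::vector<float> x, y, z;
    };
    ParticlesSoA particles_soa;

    // Hot/cold split: the frequently used x, y, z stay in the main array, and
    // the rarely touched i, j, k move to a parallel auxiliary array indexed the
    // same way, so they no longer dilute the cache lines read by the hot loop.
    struct ParticleHot  { float x, y, z; };
    struct ParticleCold { float i, j, k; };
    std::vector<ParticleHot>  hot;    // hot[n] and cold[n] describe the same element
    std::vector<ParticleCold> cold;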
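
One common way to implement the space-filling-curve idea for a two-dimensional array is Morton (Z-order) indexing, which interleaves the bits of the two indices. This sketch assumes indices that fit in 16 bits:

    #include <cstdint>

    // Spread the low 16 bits of x apart so there is a zero bit between each pair.
    static std::uint32_t part1by1(std::uint32_t x) {
        x &= 0x0000FFFF;
        x = (x | (x << 8)) & 0x00FF00FF;
        x = (x | (x << 4)) & 0x0F0F0F0F;
        x = (x | (x << 2)) & 0x33333333;
        x = (x | (x << 1)) & 0x55555555;
        return x;
    }

    // Morton (Z-order) index: interleave the bits of row and column so that
    // elements that are near each other in 2-D tend to be near each other in
    // memory, whichever dimension the traversal follows.
    std::uint32_t morton_index(std::uint32_t row, std::uint32_t col) {
        return (part1by1(row) << 1) | part1by1(col);
    }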
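
For the false-sharing point, a typical fix is to align each thread's flag or counter to the cache-line size. The 64-byte line size below is an assumption about the target CPU:

    #include <atomic>
    #include <cstdint>

    // Without alignment, both counters would usually land on the same 64-byte
    // cache line, so two threads incrementing "their own" counter would still
    // invalidate each other's copy of that line (false sharing).  alignas(64)
    // gives each counter a cache line to itself.
    struct alignas(64) PaddedCounter {
        std::atomic<std::uint64_t> value{0};
    };

    PaddedCounter counters[2];   // counters[0] and counters[1] never share a line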
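
The SSE intrinsic for this is _mm_prefetch. The sketch below prefetches a table entry a few iterations ahead of an indirect access pattern that the hardware prefetcher cannot predict; the look-ahead distance of 8 is a guess that would need tuning:

    #include <xmmintrin.h>   // _mm_prefetch, _MM_HINT_T0

    // Sum table entries selected by an index array, prefetching the entry that
    // will be needed eight iterations from now into all cache levels (T0).
    float sum_indirect(const float* table, const int* idx, int n) {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i) {
            if (i + 8 < n)
                _mm_prefetch(reinterpret_cast<const char*>(&table[idx[i + 8]]),
                             _MM_HINT_T0);
            sum += table[idx[i]];
        }
        return sum;
    }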

Allow the CPU to prefetch data efficiently. For example, you can decrease the number of cache misses by processing multi-dimensional arrays by rows rather than by columns (sketched below), by unrolling loops, and so on.
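
As a sketch of the row-versus-column point: C and C++ store two-dimensional arrays row-major, so the first loop below streams through memory while the second jumps a full row between accesses. The array size is arbitrary:

    constexpr int N = 1024;
    double grid[N][N];

    // Row-wise traversal: consecutive iterations touch consecutive addresses,
    // so each cache line is fully used and the prefetcher stays ahead.
    double sum_by_rows() {
        double s = 0.0;
        for (int r = 0; r < N; ++r)
            for (int c = 0; c < N; ++c)
                s += grid[r][c];
        return s;
    }

    // Column-wise traversal of the same row-major array: each access strides
    // by N * sizeof(double) bytes, so nearly every load can miss the cache.
    double sum_by_columns() {
        double s = 0.0;
        for (int c = 0; c < N; ++c)
            for (int r = 0; r < N; ++r)
                s += grid[r][c];
        return s;
    }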

This kind of optimization depends on the hardware architecture, so it's best to use a platform-specific profiler such as Intel VTune to detect cache problems.