Memory-Optimized Tables - can they really be so difficult to maintain?

No, In-Memory OLTP really is this unpolished. If you are familiar with Agile, you will know the concept of a "minimal shippable product"; in-memory is exactly that. I get the feeling that MS needed a response to SAP's HANA and its ilk, and this is what they could get debugged in the timeframe for a 2014 release.

As with anything else, in-memory has costs and benefits associated with it. The major benefit is the throughput that can be achieved; one of the costs is the overhead of schema and index change management, as you mentioned. This doesn't make it a useless product, in my opinion; it just reduces the number of cases where it will provide a net benefit. Just as columnstore indexes are now updatable and indexes can now be filtered, I have no doubt that the functionality of In-Memory OLTP will improve over the coming releases.


SQL Server 2016 is now generally available. Just as I supposed, In-Memory OLTP has received a number of enhancements, most of which implement functionality that traditional tables have enjoyed for some time. My guess is that future features will be released at the same time for both in-memory and traditional tables. Temporal tables are a case in point: new in this version, they are supported on both in-memory and disk-based tables.
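For illustration, here is a minimal sketch (all object names hypothetical) of a system-versioned temporal table that is also memory-optimized, as supported in SQL Server 2016. Note that the history table itself is always an ordinary disk-based table:

    -- Hypothetical sketch: a temporal, memory-optimized table (SQL Server 2016).
    CREATE TABLE dbo.Account
    (
        AccountId INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
        Balance   DECIMAL(19,4) NOT NULL,
        ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH
    (
        MEMORY_OPTIMIZED = ON,
        DURABILITY = SCHEMA_AND_DATA,  -- temporal requires durable data
        SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountHistory)  -- disk-based
    );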


One of the problems with new technology - especially a V1 release that has been disclosed quite loudly as not feature-complete - is that everyone jumps on the bandwagon and assumes that it is a perfect fit for every workload. It's not. Hekaton's sweet spot is OLTP workloads under 256 GB with a lot of point lookups on 2-4 sockets. Does this match your workload?
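To make that sweet spot concrete, here is a hedged sketch (hypothetical names) of the kind of design Hekaton favors: a hash index sized for the expected number of distinct keys, serving equality seeks in roughly constant time. Range scans get no such help from a hash index:

    -- Hypothetical sketch: a session-state table built for point lookups.
    -- BUCKET_COUNT should be sized to roughly 1-2x the expected distinct keys.
    CREATE TABLE dbo.Session
    (
        SessionId UNIQUEIDENTIFIER NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        UserId    INT NOT NULL,
        LastSeen  DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);  -- non-durable cache

    -- The point lookup below is the sweet spot; a range scan on LastSeen is not.
    DECLARE @id UNIQUEIDENTIFIER = NEWID();
    SELECT UserId, LastSeen FROM dbo.Session WHERE SessionId = @id;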

Many of the limitations apply to in-memory tables combined with natively compiled procedures. You can of course bypass some of these limitations by accessing in-memory tables through ordinary interpreted T-SQL ("interop") rather than natively compiled procedures, or at least not exclusively through the latter.
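As a rough illustration of that escape hatch, the query below uses constructs that natively compiled procedures reject in SQL Server 2014 (OUTER JOIN, OR, DISTINCT), yet it runs fine as plain interpreted T-SQL against a memory-optimized table. The table names are hypothetical:

    -- Interop sketch: interpreted T-SQL querying a memory-optimized table
    -- (dbo.OrdersInMem) alongside a disk-based one (dbo.Customers).
    SELECT DISTINCT c.CustomerName
    FROM dbo.Customers AS c
    LEFT OUTER JOIN dbo.OrdersInMem AS o
        ON o.CustomerId = c.CustomerId
    WHERE o.Status = N'Open' OR o.Status IS NULL;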

Obviously you need to test whether the performance gain is substantial in your environment, and if it is, whether the trade-offs are worth it. If you are getting great performance gains out of in-memory tables, I'm not sure why you're worried about how much maintenance you'll perform on INCLUDE columns: your in-memory indexes are by definition covering. INCLUDE columns are really only helpful for avoiding key lookups on range or full scans of traditional non-clustered indexes, and those operations aren't really supposed to be happening against in-memory tables anyway (again, you should profile your workload and see which operations improve and which don't - it's not all win-win). How often do you muck with the INCLUDE columns on your indexes today?
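To make the covering point concrete, here is a hedged, side-by-side sketch with hypothetical tables. On a disk-based table, INCLUDE is how you make a non-clustered index covering; on a memory-optimized table there is no INCLUDE syntax and no need for it, because every index can already reach all columns of the row:

    -- Disk-based: INCLUDE columns make the index covering for this query shape.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.OrdersOnDisk (CustomerId)
        INCLUDE (OrderDate, TotalDue);  -- avoids key lookups on range scans

    -- Memory-optimized: no INCLUDE; the inline index is covering by definition.
    CREATE TABLE dbo.OrdersInMem
    (
        OrderId    INT NOT NULL PRIMARY KEY NONCLUSTERED,
        CustomerId INT NOT NULL INDEX IX_CustomerId NONCLUSTERED,
        OrderDate  DATETIME2 NOT NULL,
        TotalDue   DECIMAL(19,4) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);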

Basically, if it's not worth it for you yet in its V1 form, don't use it. That's not a question we can answer for you, except to tell you that plenty of customers are willing to live with the limitations, and are using the feature to great benefit in spite of them.

SQL Server 2016

If you are on your way toward SQL Server 2016, I have blogged about the enhancements you will see in In-Memory OLTP, as well as the elimination of some of the limitations. Most notably:

  • Increase in maximum durable table size: 256 GB => 2 TB
  • LOB/MAX columns, indexes on nullable columns, removal of BIN2 collation requirements
  • Alter & recompile of procedures
  • Some support for ALTER TABLE - it will be offline, but you should be able to alter columns and drop/re-create indexes, as sketched after this list (this does not seem to be supported on current CTP builds, however, so do not take it as a guarantee)
  • DML triggers, FK/check constraints, MARS
  • OR, NOT, IN, EXISTS, DISTINCT, UNION, OUTER JOINs
  • Parallelism
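As a hedged sketch only - again, not something current CTP builds will necessarily accept - the ALTER TABLE support is expected to look something like this against a hypothetical memory-optimized table:

    -- SQL Server 2016 sketch: offline ALTER of a memory-optimized table.
    ALTER TABLE dbo.OrdersInMem
        ADD Comment NVARCHAR(MAX) NULL;  -- LOB/MAX columns are also new in 2016

    ALTER TABLE dbo.OrdersInMem
        ADD INDEX IX_OrderDate NONCLUSTERED (OrderDate);

    ALTER TABLE dbo.OrdersInMem
        DROP INDEX IX_OrderDate;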

You cannot right-click a memory-optimized table in SQL Server Management Studio to pull up a designer and add new columns as you like. You also cannot click within the table name to rename the table. (This is true of SQL Server 2014 as of this writing.)

Instead, you can right-click the table and script out a CREATE command to a new query window. That CREATE command can then be amended to add any new columns.

So, to modify the table, you could store the data in a new table, temp table, or table variable, drop and re-create the table with the new schema, and finally copy the actual data back in, as sketched below. This three-container shell game is only a little less convenient for most use cases.
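Here is a hedged sketch of that dance on SQL Server 2014, using a hypothetical dbo.Cache table and an ordinary disk-based temp table as the holding container:

    -- 1. Park the rows in a disk-based temp table.
    SELECT CacheKey, CacheValue
    INTO #CacheHold
    FROM dbo.Cache;

    -- 2. Drop and re-create the table with the amended schema.
    DROP TABLE dbo.Cache;

    CREATE TABLE dbo.Cache
    (
        CacheKey   INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
        CacheValue NVARCHAR(2000) NOT NULL,
        ExpiresAt  DATETIME2 NULL  -- the new column
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- 3. Copy the data back in and clean up.
    INSERT INTO dbo.Cache (CacheKey, CacheValue)
    SELECT CacheKey, CacheValue
    FROM #CacheHold;

    DROP TABLE #CacheHold;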

But you'd have no reason to bother with memory-optimized tables in the first place if there weren't a performance problem you were trying to solve.

Then you'll have to weigh whether the limitations and work-arounds are worth it for your use case. Do you have a performance problem? Have you tried everything else? Will this improve your performance by 10-100x? Either way, using it or not using it will likely end up being a bit of a no-brainer.