What are reasons **NOT** to use the MEMORY storage engine in MySQL?

Looking at the feature availability list at http://dev.mysql.com/doc/refman/5.1/en/memory-storage-engine.html two possible problems jump out:

  1. No transaction or foreign-key support, meaning you will have to manage transactional integrity and referential integrity in your own code where needed (which could end up being far less efficient than letting the DB do this for you, though that very much depends on your app's expected behaviour patterns).
  2. Table-level locking only: this could be a significant barrier to scalability if your app needs multiple concurrent writers to the same set of tables, or if your read operations use locks to ensure consistent data is read. In such cases a disk-based table that supports much finer lock granularity will perform far better, provided enough of its content is currently cached in RAM.
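To make the first limitation concrete, here is a sketch with made-up table names: a FOREIGN KEY clause on a MEMORY table is parsed but not enforced (as with other non-InnoDB engines), and every write takes a whole-table lock.

```sql
-- Hypothetical schema: the FOREIGN KEY below is parsed but NOT enforced
-- by the MEMORY engine, so orphaned session rows are possible.
CREATE TABLE sessions (
    session_id INT NOT NULL PRIMARY KEY,
    user_id    INT NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users (user_id)  -- silently ignored
) ENGINE=MEMORY;

-- Any write locks the whole table, stalling all other writers (and
-- lock-taking readers) until it completes:
UPDATE sessions SET user_id = 42 WHERE session_id = 7;
```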

Other than that, assuming you have enough RAM, a memory-based table should be faster than a disk-based one. Obviously you need to factor in taking snapshots to disk to address what happens when the server instance is restarted, which is likely to completely negate the performance benefit overall if the data needs capturing often. (If you can live with losing a day of data in such an instance you could just take a backup once per day, but in most cases that would not be acceptable.)
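A minimal sketch of such a snapshot, assuming a hypothetical MEMORY table `live_data` with a disk-based twin `live_data_snapshot` of identical schema; run it periodically (cron plus the mysql client, or a scheduled EVENT):

```sql
-- Refresh the disk-based copy from the in-memory table.
DELETE FROM live_data_snapshot;
INSERT INTO live_data_snapshot SELECT * FROM live_data;

-- After a server restart the MEMORY table comes back empty; reload it:
-- INSERT INTO live_data SELECT * FROM live_data_snapshot;
```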

An alternative might be to:

  1. Use disk-based tables, but ensure that you have more than enough RAM to hold them all in RAM at any given time ("enough RAM" might be more than you think, as you need to account for any other processes on the machine, OS I/O buffers/cache, and so forth)
  2. Scan the entire contents (all data and index pages) of each table on startup to preload the content into memory, with `SELECT * FROM <table> ORDER BY <pkey fields>` for each table followed by `SELECT <indexed fields> FROM <table> ORDER BY <index fields>` for each index
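For a hypothetical `orders` table with primary key `order_id` and a secondary index on `customer_id`, those warm-up scans would look like:

```sql
-- Full scan in primary-key order pulls every data page into the cache.
SELECT * FROM orders ORDER BY order_id;

-- Scanning only the indexed column in index order touches the index pages.
SELECT customer_id FROM orders ORDER BY customer_id;
```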

This way all your data is in RAM, and you only have to worry about I/O performance for write operations. If your app's common working set is much smaller than the whole DB (which is usually the case - in most applications most users will only be looking at the most recent data most of the time), you might be better off being more selective about how much you preload into memory, allowing the rest to be loaded from disk on demand.


There are plenty of reasons not to use the MEMORY storage engine - and plenty of cases where InnoDB will be faster. You just need to think about concurrency, not trivial single-threaded tests.

If you have a large enough buffer pool, then InnoDB will become entirely memory resident for read operations as well. Databases have caches. They warm themselves up!
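The buffer pool is sized in my.cnf; the value below is purely illustrative and should be tuned to your machine's RAM:

```ini
# my.cnf - let InnoDB hold the working set (ideally the whole DB) in RAM
[mysqld]
innodb_buffer_pool_size = 8G
```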

Also - do not underestimate the value of row-level locking and MVCC (readers don't block writers). It may be "slower" when writes have to persist to disk, but at least you won't be blocked during that write operation like you would be on a MEMORY table (no MVCC; table-level locking).
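A sketch of the difference with two concurrent sessions (table name hypothetical):

```sql
-- InnoDB: row-level locking + MVCC.
-- Session 1:
START TRANSACTION;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- locks row id=1 only
-- Session 2, concurrently:
SELECT balance FROM accounts WHERE id = 2;  -- not blocked: MVCC snapshot read

-- MEMORY engine: session 1's UPDATE would take a table-level lock,
-- so session 2's statements on the same table wait until it finishes.
```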


For the record: I tested MySQL MEMORY tables for storing some information, and I tested PHP's APC (APCu) for storing the same information.

For 58,000 records (varchar + integer + date):

  1. Original information: 24 MB in text (CSV) format.
  2. PHP's APC uses 44.7 MB of RAM.
  3. MySQL's MEMORY table uses 575 MB of RAM.

The table has only a single index, so I don't think that is the main factor.
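One likely contributor, though I have not verified it against this particular table: the MEMORY engine uses fixed-length row storage, so a VARCHAR column is allocated at its full declared width in every row:

```sql
-- In a MEMORY table this column occupies its full declared width
-- (255 characters, times the character-set byte multiplier) in every
-- row, even when the stored value is only a few characters long -
-- which can inflate RAM usage far beyond the raw CSV size.
CREATE TABLE example_mem (
    name VARCHAR(255)
) ENGINE=MEMORY;
```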

Conclusion:

A MEMORY table is not an option for "big" tables because it uses too much memory.