Recommended memory configuration for many SQL Server instances per server

Each server is a quad core virtual machine with 16GB of RAM provisioned on a server rack.

We populate each server with the maximum of 50 instances allowed by SQL Server. Each instance will be accessed sporadically by at most 10 clients simultaneously, and each database will generally be only around 100 MB to 500 MB in size.

IMHO, your total RAM is too low. Please read my answer (with relevant links) to SQL Server Maximum and Minimum memory configuration; the recommendations change when you have multiple instances of SQL Server running on a given host.

Capping SQL Server max memory on a multi-instance server is a balancing act. Bear in mind that prior to SQL Server 2012, max server memory applied only to the buffer pool (from 2012 onward it covers most memory clerks), and some allocations fall outside it entirely, so if SQL Server needs more memory, it is going to use it.

You can even use Lock Pages in Memory (though in your case I would still opt for more RAM before enabling LPIM).
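To see whether an instance is actually staying near its cap, a minimal check against sys.dm_os_process_memory looks like this (it also shows locked-page allocations if LPIM is enabled):

-- Actual process memory usage for the connected instance
SELECT physical_memory_in_use_kb / 1024 AS [Physical Memory In Use (MB)],
       locked_page_allocations_kb / 1024 AS [Locked Pages (MB)],
       memory_utilization_percentage AS [Memory Utilization %]
FROM sys.dm_os_process_memory;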

As a starting point,

  • Baseline your instances. This will help you gauge what is good / acceptable for your workload.

  • Use the OptimizeInstanceMemory script from Aaron's blog to help you get started. The blog post covers how to rebalance max memory dynamically when a failover happens. (A quick way to inspect the current settings per instance is sketched below.)
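For example, this reads the current min/max memory settings for whichever instance you are connected to; run it once per instance:

-- Current min/max server memory settings for the connected instance
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');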

As a side note, you should monitor CPU, memory, and disk utilization, and consider charging each client based on their usage. Alternatively, you can move to Azure :-)


Set a maximum of 300MB per instance and be done with it. Seriously. You can then monitor with something like the query below to determine which instances are candidates for a little more or a little less.

SELECT DB_NAME(database_id) AS [Database Name],
       COUNT(*) * 8 / 1024.0 AS [Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors
WHERE database_id > 4        -- exclude system databases
  AND database_id <> 32767   -- exclude the ResourceDB
GROUP BY DB_NAME(database_id)
ORDER BY [Cached Size (MB)] DESC
OPTION (RECOMPILE);
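As a quick sanity check against the per-instance cap, you can also total the buffer pool for the whole instance:

-- Total buffer pool usage across all databases on this instance
SELECT COUNT(*) * 8 / 1024.0 AS [Total Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors;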

From here: Memory utilization per database - SQL Server

Good article here: http://strictlysql.blogspot.com/2013/05/how-to-size-sql-server-memory-why-is-it.html

Allocating memory to the OS (from the article):

  • 1 GB of memory reserved for the operating system
  • 1 GB for every 4 GB of RAM between 4 and 16 GB
  • 1 GB for every 8 GB of RAM above 16 GB

For this 16 GB server, that works out to 1 GB + 3 × 1 GB = 4 GB reserved for the OS. Split the remaining 12 GB evenly across the 50 instances: roughly 245 MB each.
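Applying that cap is a per-instance sp_configure call; 'max server memory (MB)' is an advanced option, so advanced options have to be on first. The 245 here is just the share worked out above, not a magic number:

-- Enable advanced options so 'max server memory (MB)' is visible
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- Cap this instance at its share of the remaining RAM (245 MB from the calculation above)
EXEC sys.sp_configure N'max server memory (MB)', 245;
RECONFIGURE;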

One configuration you may also consider is turning on "optimize for ad hoc workloads". With this enabled, SQL Server caches only a small compiled plan stub the first time a query runs and stores the full plan only once the query has run at least twice. This keeps 'ad-hoc' or single-use queries from taking up this limited memory.
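Enabling it is another advanced-option toggle, and it takes effect immediately without a restart:

-- Enable advanced options (skip if already done above)
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- Cache only a plan stub on first execution; full plans are kept from the second run
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;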

Also, you can minimize transaction log overhead by setting each database to the SIMPLE recovery model. You can only do this if, in the event of a failure, restoring from the last backup is acceptable. You can read about the other limitations here: https://msdn.microsoft.com/en-us/library/ms189275.aspx.
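The switch itself is one statement per database ([ClientDb] below is a placeholder name):

-- Switch a client database to the SIMPLE recovery model
ALTER DATABASE [ClientDb] SET RECOVERY SIMPLE;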

I think that's fair until you see a reason to change it, particularly if these are individual clients who are in all other respects equal.