Is there any limit to the number of databases you can put on one SQL server?

I've worked on SQL Servers with 8 to 10 thousand databases on a single instance. It's not pretty.

Restarting the server can take as long as an hour or more. Think about the recovery process for 10,000 databases.

You cannot use SQL Server Management Studio to reliably locate a database in the Object Explorer.

Backups are a nightmare, since for backups to be worthwhile you need to have a workable disaster recovery solution in place. Hopefully your team is great at scripting everything.
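
At that scale even routine full backups have to be scripted. A minimal sketch of what that looks like (the backup path is an assumption; a real job would add error handling, logging, and retention):

```sql
-- Sketch: full-backup loop over every online user database.
-- X:\Backups\ is an assumed path; a real job adds error handling and logging.
DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE database_id > 4            -- skip the system databases
      AND state_desc = 'ONLINE';

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@db)
             + N' TO DISK = N''X:\Backups\' + @db + N'.bak'''
             + N' WITH COMPRESSION, CHECKSUM;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM db_cur INTO @db;
END

CLOSE db_cur;
DEALLOCATE db_cur;
```

Now multiply that loop by every other maintenance task: statistics, index maintenance, integrity checks, restores.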

You start doing things like naming databases with numbers, like M01022 and T9945. Trying to make sure you're working in the correct database, e.g. M001022 instead of M01022, can be maddening.

Allocating memory for that many databases can be excruciating; SQL Server ends up doing a lot of I/O, which can be a real drag on performance. Consider a system that records carbon-use details across 4 tables for 10,000 companies. If you do that in one database, you only need 4 tables; if you do it in 10,000 databases, all of a sudden you need 40,000 tables in memory. The overhead of dealing with that number of tables in memory is substantial. And any query you design to run against those tables will require at least 10,000 plans in the plan cache if there are 10,000 databases in use, because plans are cached per database.
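
You can see that multiplication directly in the plan cache. A rough diagnostic, assuming you have `VIEW SERVER STATE` permission:

```sql
-- Count cached plans per database. The same statement text compiled in
-- 10,000 databases shows up as (at least) 10,000 separate plan entries.
SELECT DB_NAME(CONVERT(int, pa.value)) AS database_name,
       COUNT(*)                        AS cached_plans
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'dbid'
GROUP BY DB_NAME(CONVERT(int, pa.value))
ORDER BY cached_plans DESC;
```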

The list above is just a small sampling of problems you'll need to plan for when operating at that kind of scale.

You'll probably run into things like the SQL Server service taking a very long time to start, which can cause Service Control Manager timeout errors. You can increase the service startup timeout yourself by creating the following registry entry:

Subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
Name:   ServicesPipeTimeout
Data:   The number of milliseconds before timeout occurs during service startup

For example, to wait 600 seconds (10 minutes) before the service times out, type 600000.
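
Equivalently, saved as a .reg file and merged (600000 decimal is 000927c0 in hex):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
"ServicesPipeTimeout"=dword:000927c0
```

A reboot is required before the new timeout takes effect.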

Since writing my answer I've realized the question is talking about Azure. Perhaps doing this on SQL Database is not so problematic; perhaps it is more problematic. Personally, I'd probably design a system using a single database, perhaps sharded vertically across multiple servers, but certainly not one-database-per-customer.

So there are pros and cons to both methods. Without knowing more about your application or the services you're looking to provide, I can't give a definitive answer, but I'll throw out some of my thoughts on the matter.

My case for why you should use one database for all clients.

  • Easy maintenance. Having one DB means you only have to run your maintenance tasks in one place instead of many. Imagine the nightmare of handling 1,000 different databases to back up. How about updating statistics on 1,000 DB's, or rebuilding indexes, or running DBCC CHECKDB?

  • Deploying Code. Let's say you have a problem with a stored procedure in your application code or reporting. You need to make a quick change... Now you have to deploy that change to 1000+ DB's. No, thanks, I'd rather not.

  • Easy Visibility. Just picture SSMS trying to open 1000+ DB's (shudder). It would practically make Object Explorer useless, and it would take a surprising amount of time for SSMS just to open and render the list. Keep in mind, that's if you're able to come up with a decent naming convention.


My case for why you should use one database per client.

  • Security. It would be easier to prevent folks from looking at other customers' data if each customer had a separate DB. However, there are some very simple things you can do in a single database to prevent this from happening.

  • Performance. It could be argued that limiting it to one DB per customer means SQL Server has to scan through less data to get to the information you're querying. However, with proper data structure and good indexing (and possibly partitioning) you can likely eliminate this as a problem altogether if done carefully. I would recommend giving each table that contains customer-specific data some sort of leading CompanyID to reduce that overhead.
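
To illustrate that leading-CompanyID idea (the table and column names here are made up for illustration):

```sql
-- Hypothetical multi-tenant table: CompanyID leads the clustered key,
-- so every per-customer query seeks into one contiguous range of rows.
CREATE TABLE dbo.CarbonUsage
(
    CompanyID  int           NOT NULL,
    UsageID    bigint        NOT NULL,
    RecordedAt datetime2(0)  NOT NULL,
    TonnesCO2  decimal(12,3) NOT NULL,
    CONSTRAINT PK_CarbonUsage
        PRIMARY KEY CLUSTERED (CompanyID, UsageID)
);

-- Filtering on CompanyID first keeps the seek inside one tenant's rows.
DECLARE @CompanyID int = 42;
SELECT RecordedAt, TonnesCO2
FROM dbo.CarbonUsage
WHERE CompanyID = @CompanyID;
```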

Ultimately I think your best bet is having one DB for your application and just splitting out customer data inside the DB itself. The trouble it gives you will be nothing in comparison to the nightmare of managing 1000+ databases.
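
On the security point above, one of those "simple things" could be Row-Level Security (SQL Server 2016 and later). A sketch against a hypothetical `dbo.CustomerData` table, assuming the application stores the tenant ID in `SESSION_CONTEXT` when it connects:

```sql
-- Predicate function: a row is visible only when its CompanyID matches
-- the value the application stored in SESSION_CONTEXT.
CREATE FUNCTION dbo.fn_TenantFilter (@CompanyID int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE @CompanyID = CONVERT(int, SESSION_CONTEXT(N'CompanyID'));
GO

-- Attach the predicate to the (hypothetical) tenant table.
CREATE SECURITY POLICY dbo.TenantIsolation
    ADD FILTER PREDICATE dbo.fn_TenantFilter(CompanyID)
        ON dbo.CustomerData
    WITH (STATE = ON);
GO

-- The application sets the tenant once per connection:
EXEC sys.sp_set_session_context @key = N'CompanyID', @value = 42;
```

With the policy on, queries from a connection with the wrong (or missing) CompanyID simply return no rows from other tenants.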

Maximum Capacity Specifications for SQL Server states that there is a limit of 32,767 databases per instance.

As for whether it will affect performance, the answer is yes, but the ways it will affect performance, and whether it would be substantial, would depend on a myriad of factors.

I would go with the one database unless there is a good reason to split it out to 10,000 databases. One backup or 10,000 backups? One integrity check, or 10,000? There may be a good reason to use 10,000 small DBs, but you haven't given enough detail to determine that. The question you've asked is quite broad, and there's simply not enough information for anyone to know what the best answer is.