Syncing two databases in SQL Server

Well, I might not get it completely, but I'll try to answer it.

You said you need a high-performance solution which runs often (at least every 2 minutes) and an approach which is fast and avoids locking. But you don't want a black-box system.

Instead of a black-box system that is used on millions of installations with good results, you want to reinvent the wheel and build your own solution? Hm, that sounds a bit odd.

That said, here are my suggestions.

  1. Replication, even if you said you won't use it. It's by far the easiest and best solution you can use for this. Replication is easy to set up, replicates quickly, and you don't have to reinvent the wheel. If you're just worried about locking, you can try enabling the READ_COMMITTED_SNAPSHOT option. You can read more about it here. This will use up a part of your tempdb, but your table stays readable and writable while the replication works in the background.

See the example below:

-- Enable versioning-based isolation; note that switching READ_COMMITTED_SNAPSHOT
-- on requires that no other connections are active in the database.
ALTER DATABASE yourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE yourDatabase SET READ_COMMITTED_SNAPSHOT ON;
  2. CDC (Change Data Capture) can also be a solution, but this way you need to build nearly everything yourself. And in my experience, CDC can be a fragile thing under some circumstances. CDC captures all data on a watched table (you need to specify each watched table manually). Afterwards you get the value before and the value after an INSERT, UPDATE or DELETE. CDC holds this information back for a configurable retention period. The approach could be to use CDC on the tables you need to watch and manually replicate those changes to the other database. By the way, CDC uses SQL Server Replication technology under the hood too. ;-) You can read more about it here.

Warning: CDC is not aware of DDL changes. This means that if you alter a table and add a new column, CDC will keep watching the table but ignore all changes to the new column; it records only NULL as the value before and after. You need to reinitialize the capture instance after DDL changes to a watched table.
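To illustrate, here is a minimal sketch of enabling CDC and reading the captured changes. The table dbo.Customers and its capture instance name are assumptions for the example:

EXEC sys.sp_cdc_enable_db;    -- enable CDC on the current database (requires sysadmin)

EXEC sys.sp_cdc_enable_table  -- watch one table; creates capture instance dbo_Customers
    @source_schema = N'dbo',
    @source_name   = N'Customers',
    @role_name     = NULL;    -- NULL = no gating role

-- Read all changes captured so far; the function name is generated
-- from the capture instance name.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Customers');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Customers(@from_lsn, @to_lsn, N'all');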

  3. The way you described above is something like capturing a workload with SQL Server Profiler and replaying it against another database, as you would for a benchmark. It could work, but there are too many side effects for my taste. What do you do if you capture a procedure call on your client and then run the same command on your principal database while it is out of sync? The procedure may run through, but it may delete/update/insert rows which were not present on your client. And how do you handle multiple clients with one principal? I think this is too tricky. In the worst case, you will probably destroy your integrity.
  4. Another idea could be application-based, or using triggers, depending on how many tables you want to sync. You can write all changes to a separate staging table and run a SQL Server Agent job every x minutes to sync the rows from the staging table to your master (see the sketch below). But this may be a bit too heavy if you try to sync, say, 150 tables; you would have a big overhead.
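A rough sketch of that trigger idea could look like this (all table and column names are placeholders, not from your schema):

-- Staging table which collects the changed keys
CREATE TABLE dbo.SyncStaging (
    Id        int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname   NOT NULL,
    KeyValue  int       NOT NULL,
    Operation char(1)   NOT NULL,  -- 'I', 'U' or 'D'
    ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- One trigger per watched table; an Agent job drains dbo.SyncStaging every x minutes
CREATE TRIGGER dbo.trg_Customers_Sync
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows only in "inserted" -> INSERT, in both -> UPDATE, only in "deleted" -> DELETE
    INSERT INTO dbo.SyncStaging (TableName, KeyValue, Operation)
    SELECT N'Customers',
           COALESCE(i.CustomerId, d.CustomerId),
           CASE WHEN i.CustomerId IS NOT NULL AND d.CustomerId IS NOT NULL THEN 'U'
                WHEN i.CustomerId IS NOT NULL THEN 'I'
                ELSE 'D'
           END
    FROM inserted i
    FULL OUTER JOIN deleted d ON d.CustomerId = i.CustomerId;
END;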

Well, these are my 2 cents. Hopefully you now have a good overview, and maybe one of these solutions works for you.


I will try to enumerate some options here with their advantages and disadvantages as I perceive them:

  1. SQL Server Replication - this is the best and most optimized native SQL Server tool for the task. But there are several problems: a. for all your clients, regardless of whether they are SQL Express databases or not, you will need a SQL Server CAL license; this can be avoided with per-processor licensing. b. You cannot sync a SQL CE client, as per here. c. SQL Express or LocalDB cannot act as a publisher or distributor, so you have less control over the replication process on the client.
  2. Microsoft Sync Framework - seems to me better suited to the smaller databases of mobile apps. It adds quite a lot of tables to your database and is not as efficient as replication. As it is implemented outside SQL Server as a separate component, it is also more difficult to configure. I have no experience with it myself; I tried it and decided not to use it.

  3. Database change tracking - a built-in SQL Server feature that does the change tracking for you, covering inserts, updates and deletes. Everything else, like sending and applying the changes and resolving conflicts, you will have to code yourself (a minimal sketch follows after this list).

  4. Rowversion (timestamp) columns - if you disallow all deletes (no sync of deleted records), you can implement your own solution based only on rowversion information (see the second sketch below). Rowversion columns are used by SQL Server Replication too, so you will need to add them anyhow.
  5. CDC as mentioned in Ionic's answer - I have no experience with it, as it is available only in Enterprise or Developer editions.

  6. Using your own trick with logging executed stored procedures - this depends very much on the nature of your database application. As soon as the procedures behave even slightly differently on the two databases, you can end up with a big mess in the data. And how would you deal with conflicts?
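For point 3, a minimal change-tracking sketch might look like this (dbo.Customers and CustomerId are assumptions for the example; the last synced version has to be persisted between runs):

-- Enable change tracking on the database and on one table
ALTER DATABASE yourDatabase
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Customers ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Fetch all changes since the last synced version
DECLARE @last_sync bigint = 0;
SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION  -- I / U / D
FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync) AS ct;

-- Store this value as the starting point for the next run
SELECT CHANGE_TRACKING_CURRENT_VERSION();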
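And for point 4, the rowversion approach could be as simple as this (again with placeholder table and column names, and remember it cannot detect deletes):

-- Add a rowversion column; SQL Server updates it automatically on every write
ALTER TABLE dbo.Customers ADD RowVer rowversion;

-- Pull everything changed since the last synced rowversion
-- (@last_rv has to be persisted between sync runs)
DECLARE @last_rv binary(8) = 0x0000000000000000;
SELECT CustomerId, Name, RowVer
FROM dbo.Customers
WHERE RowVer > @last_rv
ORDER BY RowVer;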

From your question it seems that you need to sync just a few tables, not whole big databases. For that you should analyze your needs in more detail than you have specified in the question, for example:

  • Can deletes happen and what happens then?
  • Can conflicts happen, how to prevent them and how to solve them?
  • How will I deal with table structure changes?
  • ...

If you eventually find out that deletes and conflicts are not a problem for you and that your structure will not change much, you can consider writing your own logic, but it can easily grow to 1000 lines of code.


Thank you all for your feedback.

I successfully solved the synchronization process by capturing the executed stored procedures not as a batch but one by one, which worked great in my case. Since integrity and everything else was carefully considered, the system has been working in real time up to now.