SQL Server database change workflow best practices

Within our team, we handle database changes like this:

  • We (re-)generate a script that creates the complete database and check it into version control together with the other changes. We have 4 files: tables, user-defined functions and views, stored procedures, and permissions. This is completely automated - only a double-click is needed to generate the script.
  • If a developer has to make changes to the database, she does so on her local db.
  • For every change, we create update scripts. Those are easy to create: the developer regenerates the db script of her local db, and all the changes are easy to identify thanks to version control. Most changes (new tables, new views, etc.) can simply be copied to the update script; other changes (adding columns, for example) need to be written manually.
  • The update script is tested either on our common dev database or by rolling back the local db to the last backup, which was created before the database changes were started. If it passes, it's time to commit the changes.
  • The update scripts follow a naming convention so everybody knows in which order to execute them (see the sketch after this list).
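
As an illustration of such an update script, here is a minimal sketch; the sequential file-name prefix, table, and column are invented for the example, not taken from our actual project:

    -- 0042_add_discount_to_orders.sql
    -- Adds a nullable Discount column; existing rows are unaffected.
    ALTER TABLE dbo.Orders
        ADD Discount DECIMAL(9, 2) NULL;
    GO

The numeric prefix makes the execution order unambiguous, even when several update scripts land in the same change set.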

This works fairly well for us, but it still needs some coordination if several developers heavily modify the same tables and views. This doesn't happen often, though.

The important points are:

  • The database structure is only modified by scripts, except on the developer's local db. This is important.
  • SQL scripts are versioned by source control - the db can be created as it was at any point in the past.
  • Database backups are created regularly - at least before making changes to the db (a minimal example follows this list).
  • Changes to the db can be done quickly, because the scripts for those changes are relatively easy to create.
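
A minimal version of that backup-before-changes step; the database name and path (MyAppDb, D:\Backups) are placeholders chosen for the example:

    -- Full backup taken before starting schema changes.
    BACKUP DATABASE MyAppDb
        TO DISK = 'D:\Backups\MyAppDb_before_changes.bak'
        WITH INIT, CHECKSUM;

    -- Rolling the local db back to that backup if an update script fails:
    RESTORE DATABASE MyAppDb
        FROM DISK = 'D:\Backups\MyAppDb_before_changes.bak'
        WITH REPLACE;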

However, if you have a lot of long-lasting development branches for your projects, this may not work well.

It is far from a perfect solution, and some special precautions must be taken. For example, if an update may fail depending on the data present in a database, it should be tested on a copy of the production database.
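
A typical example of such a data-dependent update (the table and column are invented for the illustration) is adding a NOT NULL column without a default: it succeeds on an empty dev database but fails against production data, because the existing rows have no value for the new column:

    -- Succeeds on an empty table, but fails as soon as dbo.Orders
    -- contains rows, because no DEFAULT is supplied for them.
    ALTER TABLE dbo.Orders
        ADD CustomerRef INT NOT NULL;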

In contrast to Rails migrations, we do not create scripts to reverse the changes of an update. But this isn't always possible anyway, at least with respect to the data (the content of a dropped column is lost even if you recreate the column).


Version Control and your Database

The root of all evil is making changes through the UI. SSMS is a DBA tool, not a developer one. Developers must use scripts to make any sort of change to the database model/schema. Versioning your metadata and having an upgrade script from every version N to version N+1 is the only approach that is proven to work reliably. It is the solution SQL Server itself deploys to keep track of metadata changes (resource db changes).
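
A common way to implement the N to N+1 approach (the table name and the specific schema change here are illustrative, not a prescribed standard) is a single-row version table that every upgrade script checks and bumps:

    -- One-time setup: record the current schema version.
    CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL);
    INSERT INTO dbo.SchemaVersion (Version) VALUES (1);
    GO

    -- Upgrade script from version 1 to version 2: apply the change
    -- and bump the version together, so reruns are harmless.
    BEGIN TRANSACTION;
    IF (SELECT Version FROM dbo.SchemaVersion) = 1
    BEGIN
        ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
        UPDATE dbo.SchemaVersion SET Version = 2;
    END
    COMMIT TRANSACTION;

Running the script twice is then harmless, and the deployed version is always visible by querying dbo.SchemaVersion.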

Comparison tools like SQL Compare or vsdbcmd and .dbschema files from VS Database projects are just last resorts for shops that fail to follow a properly versioned approach. They work in simple scenarios, but I have seen them all fail spectacularly in serious deployments. One simply does not trust a tool to make a change to a 5+ TB table if the tool tries to copy the data...
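
The failure mode is usually a full table rebuild: for some changes, diff tools generate a script along these lines (a sketch of the pattern, not the output of any particular tool) instead of an in-place ALTER:

    -- Rebuild pattern: create a new table, copy every row, swap names.
    CREATE TABLE dbo.Tmp_BigTable (
        Id INT NOT NULL,
        Payload VARCHAR(200) NULL  -- e.g. a retyped or reordered column
    );

    INSERT INTO dbo.Tmp_BigTable (Id, Payload)
        SELECT Id, Payload FROM dbo.BigTable;  -- copies the entire table

    DROP TABLE dbo.BigTable;
    EXEC sp_rename 'dbo.Tmp_BigTable', 'BigTable';

On a multi-terabyte table, the INSERT ... SELECT alone makes such a script unusable, which is why a hand-written in-place ALTER is the only realistic option there.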


RedGate sells SQL Compare, an excellent tool to generate change scripts.

Visual Studio also has editions that support database compares; this functionality was formerly called Database Edition.

Where I work, we abolished the Dev/Test/UAT/Prod separation long ago in favor of a very quick release cycle. If we put something broken into production, we fix it quickly. Our customers are certainly happier, but in the risk-averse corporate enterprise, it can be a hard sell.