Best practices for backing up databases

First, don't version control your database backups.
A backup is a backup - a point in time. Using version control sounds like a nice idea, but realize that it means you will need to restore the whole SVN repository (ZOMG Freaking HUGE) if you have a catastrophic failure and need to get your database back. That may be additional downtime you can't afford.

Second, make sure your backups are getting off site somehow. A backup on the local machine is great if you need to restore data because you messed up and dropped a table. It does you absolutely no good if your server's disks die.
Options include an external hard drive or shipping the backups to a remote machine using rsync. There are even storage service providers like rsync.net that specialize in that.
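For example, a one-liner run from cron can handle the shipping (the host, user, and paths here are placeholders, not anything your setup actually has):

    # Mirror last night's dumps to an off-site box over SSH.
    # 'backup@offsite.example.com' and both paths are hypothetical.
    rsync -az --delete /var/backups/db/ backup@offsite.example.com:/srv/backups/db/

The --delete flag keeps the remote copy an exact mirror of the local directory; drop it if you'd rather the remote side accumulate old dumps.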

Third, regarding frequency of backups: Only you know how often you need to do this.
My current company has a slave database with near-real-time replication of our production data. That slave is backed up every night to a local machine, which then syncs to an off-site storage facility.
In the event of a production hardware failure we activate the slave. Data loss should be minimal, as should downtime. In the event of an accidental table deletion we can restore from the local backup (losing up to 1 day of data). In the event of a catastrophic incident we can restore from the off-site backup (which takes a while, but again will only lose up to 1 day of data).
Whether that kind of backup scheme works for you depends on your data: If it changes frequently you may need to investigate a backup strategy that gets you point-in-time recovery (log-shipping solutions can often do this). If it's mostly static you may only need to back up once a month. The key is making sure that you capture changes to your data within a reasonable time from when they are made, ensuring you don't lose those changes in the event of a major incident.
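With MySQL, for instance, point-in-time recovery is typically built on the binary log: restore the last full dump, then replay logged changes up to just before the incident. A rough sketch (the file names and cut-off time are made up for illustration):

    # my.cnf must already have binary logging enabled, e.g.:
    #   [mysqld]
    #   log_bin = /var/log/mysql/mysql-bin

    # 1. Restore the most recent full dump.
    mysql -u root -p mydb < nightly_dump.sql

    # 2. Replay changes recorded since that dump, stopping just
    #    before the bad statement was issued.
    mysqlbinlog --stop-datetime="2011-06-01 09:59:00" \
        /var/log/mysql/mysql-bin.000042 | mysql -u root -p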


generic advice:

  • do monitor your backups
    • check if they finished successfully [eg tail the output of mysqldump looking for the finishing line; check the exit codes returned by the dump commands] - see the sketch after this list,
    • check if the backup size is reasonable
  • run recovery tests once in a while - maybe every 3-6 months
  • backup to offline media so you don't lose the data in case of a malicious attack
  • keep backups offsite so you don't lose the data in case of a natural disaster
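a minimal sketch of the monitoring point above (db name, paths and addresses are invented; tune the size threshold to your data; assumes mysql credentials in ~/.my.cnf):

    #!/bin/bash
    # Dump, then verify the exit code, the success trailer that
    # mysqldump appends ("-- Dump completed on ..."), and the file size.
    DUMP=/var/backups/db/mydb-$(date +%F).sql
    if ! mysqldump --single-transaction mydb > "$DUMP"; then
        echo "mysqldump failed" | mail -s "backup FAILED" admin@example.com
        exit 1
    fi
    tail -n 1 "$DUMP" | grep -q '^-- Dump completed' ||
        echo "dump looks truncated: $DUMP" | mail -s "backup SUSPECT" admin@example.com
    if [ "$(stat -c%s "$DUMP")" -lt 1000000 ]; then
        echo "dump suspiciously small: $DUMP" | mail -s "backup SUSPECT" admin@example.com
    fi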

specific advice:

  • mysqldumps pumped to svn for versioning sounds like overkill - removing anything from svn is quite difficult. how about using rdiff-backup to keep the last backup and 'diffs' for a few previous ones? (see the sketch after this list)
  • svn - use svnadmin dump - this is 'the proper' way of taking dumps of an svn repository
  • if you want to be extra-safe - use lvm and additionally take lvm snapshots of both the mysql and svn data directories
  • use the innodb storage engine to make backups lock-free [eg with mysqldump --single-transaction]
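a rough sketch of the above, with invented repository and volume names:

    # 'the proper' svn dump:
    svnadmin dump /var/svn/myrepo > /var/backups/svn/myrepo.dump

    # rdiff-backup keeps the latest copy plus reverse diffs for older ones;
    # here increments older than four weeks are expired.
    rdiff-backup /var/backups/db/ /srv/rdiff/db/
    rdiff-backup --remove-older-than 4W --force /srv/rdiff/db/

    # optional extra safety: an lvm snapshot of the data volume
    # (volume group and names are hypothetical).
    lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql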

In preparing a backup strategy you should start by evaluating your recovery point objective (RPO) and recovery time objective (RTO). The RPO indicates how much data the business is willing to lose in the event of an incident, while the RTO indicates how long recovery is allowed to take. For example, a single nightly dump implies an RPO of up to 24 hours, and restoring it over the network may put your practical RTO at several hours. Tighter requirements for RTO and RPO drive up the economic and performance cost of maintaining backups [1].

Generally there are four backup strategies:

  • Server Replica: use another server, physically and logically separated from the main db server; whenever data is written to the db it is also written to the replica.
  • Database Dump: dump the database periodically to a file and send that file to a backup server (see the cron sketch after this list).
  • Database Snapshot: use a snapshot tool such as rsnapshot to take a periodic snapshot of the database's underlying data files and send it to the backup server.
  • Cloud and Agent Based: you only install an agent on your db server and it backs up periodically to the cloud.
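As one concrete instance of the Database Dump strategy (the schedule, user, db name, and host are all illustrative):

    # /etc/cron.d/db-backup: nightly dump, compress, ship.
    # Note that '%' must be escaped as '\%' inside a crontab line.
    0 2 * * * backup mysqldump --single-transaction mydb | gzip > /var/backups/db/mydb-$(date +\%F).sql.gz && rsync -az /var/backups/db/ backup@backuphost:/srv/backups/db/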

Each approach has its own pros and cons; they can be compared from different points of view:

  • Non-Blocking: none of the methods needs to stop write access to the db during the backup, except Database Snapshot, which is not always safe: in MongoDB, for example, even with journaling enabled there is no guarantee that an LVM snapshot is consistent and valid.

  • Incremental: dump and snapshot are typically not incremental, so their backup speed is lower than the rest; the replica and cloud methods are incremental by nature (a sketch follows this list).

  • Workload: snapshot puts no load on the database, since only the underlying files are copied; dump puts the most load; in the other methods the workload is distributed across the database's working hours.
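Footnote to the Incremental point: file-copy approaches can be made incremental with hard links, which is essentially what rsnapshot does internally. A rough sketch with invented paths (and note the Non-Blocking caveat above: run this against a quiesced copy or an LVM snapshot, not live data files):

    # Each day's snapshot hard-links unchanged files against yesterday's,
    # so only changed files consume space and transfer time.
    # 'date -d yesterday' is GNU date syntax.
    TODAY=/srv/snap/$(date +%F)
    YESTERDAY=/srv/snap/$(date -d yesterday +%F)
    rsync -a --link-dest="$YESTERDAY" /var/lib/mysql/ "$TODAY/"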