How do I back up a MySQL database, but at low priority?
Andy, by now I'm guessing you've had plenty of time to find a solution. I recently came up with an approach that's working great for me at TSheets, and figured I'd share it.
cstream is a general-purpose stream-handling tool like UNIX dd, usually used in command-line pipes. What makes cstream useful here is that it lets you specify a maximum bandwidth for all input. That means you can limit the disk I/O of your mysqldump command with a simple pipeline like this:
mysqldump --single-transaction --quick -u <USER> -p<PASS> <Database> | cstream -t 1000000 > backup.sql
Assuming you're backing up a database that uses all InnoDB tables, the above command is safe (it won't block other queries) and will do your mysqldump while limiting its disk reads to just one megabyte per second. Adjust the bandwidth with the -t parameter to whatever value allows your environment to perform the backup without impacting your customers' experience.
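The -t value is plain bytes per second, so picking it is just arithmetic. A minimal sketch, where MBPS is an assumed placeholder for the I/O budget you can spare:

```shell
#!/bin/sh
# Sketch: derive the cstream -t argument from a MB/s budget.
# cstream -t counts bytes per second, so 1 MB/s -> 1000000.
MBPS=2                          # assumed budget; tune for your hardware
RATE=$((MBPS * 1000 * 1000))    # bytes per second, for cstream -t
echo "cstream -t $RATE"
```

Start low and raise the number while watching query latency on the live database.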
If you have a spare server that can cope with the write load of your primary, you can set up replication to it and take backups from the slave instead. This also has the advantage that you can stop replication while you do the backup and get a consistent snapshot of your data across all databases, or all tables in one database, without impacting the master. This is the setup I always recommend for backing up MySQL if you have the resources.
As a nice bonus, you now have a read-only slave you can use for slow long-running queries.
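The pause-dump-resume cycle described above could look like the sketch below. The replication statements are standard MySQL, but the wrapper is hypothetical and runs in dry-run mode, echoing each command instead of executing it; swap the echo out to run it for real.

```shell
#!/bin/sh
# Dry-run sketch of backing up from a replica. Stopping only the SQL
# thread freezes the data on the slave while the IO thread keeps
# fetching binlogs; a plain STOP SLAVE would halt both threads.
run() { echo "+ $*"; }   # replace the echo with "$@" to execute

run mysql -e 'STOP SLAVE SQL_THREAD'                             # freeze data
run mysqldump --single-transaction --all-databases -r backup.sql # dump
run mysql -e 'START SLAVE SQL_THREAD'                            # catch up
```

Because the dump reads only from the slave, you can skip the cstream/pv throttling entirely if nothing latency-sensitive runs there.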
FWIW, you should also be able to do this with pv (http://linux.die.net/man/1/pv):
mysqldump --single-transaction --quick -u <USER> -p<PASS> <Database> | pv --rate-limit 1m > destination (or | nc or | tar cfj backup.bz2 -)
The nice thing about pv is its various options for monitoring progress, plus the -R option, which lets you pass options to an already-running pv process, e.g. --rate-limit to alter the transfer rate on the fly.