Safest way to perform mysqldump on a live system with active reads and writes?

All Data is InnoDB

This is what will give you an exact point-in-time snapshot of the data:

mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql

--single-transaction starts a consistent-read transaction (a REPEATABLE READ snapshot), so the dump captures all data as it existed at that moment while the server continues to accept incoming changes. Those incoming changes do not become part of the dump, which guarantees the same point in time for every table.
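
Under the hood, --single-transaction makes mysqldump open a consistent snapshot before reading any data; roughly, it issues SQL along these lines (a sketch of the equivalent statements, not the literal client output):

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */;
-- every SELECT issued by the dump now reads from this snapshot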

--routines dumps all stored procedures and stored functions.

--triggers dumps all triggers for each table that has them.
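
Since this only yields a consistent snapshot when every table is InnoDB, it is worth verifying that no stray MyISAM tables are hiding anywhere first; a quick check against information_schema (the excluded schemas are the standard system databases):

SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine <> 'InnoDB'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');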

All Data is MyISAM or Mix of InnoDB/MyISAM

You will have to impose a global read lock, perform the mysqldump, and release the global lock:

mysql -uuser -ppass -Ae"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
sleep 5
mysql -uuser -ppass -ANe"SHOW PROCESSLIST" | grep "SELECT SLEEP(86400)" > /tmp/proclist.txt
SLEEP_ID=`cat /tmp/proclist.txt | awk '{print $1}'`
echo "KILL ${SLEEP_ID};" > /tmp/kill_sleep.sql
mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
mysql -uuser -ppass -A < /tmp/kill_sleep.sql
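
Once the dump completes and the lock is released, restoring it later is the usual single command (a sketch; adjust credentials and file name to your environment):

mysql -uuser -ppass < backup_db.sql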

Give it a Try !!!

UPDATE 2012-06-22 08:12 EDT

Since you have less than 50MB of total data, I have another option. Instead of launching a SLEEP command into the background to hold the global read lock for 86400 seconds (that's 24 hours) just to get the process ID and kill it from outside, let's try setting a 5-second timeout in mysql rather than in the OS:

SLEEP_TIMEOUT=5
SQLSTMT="FLUSH TABLES WITH READ LOCK; SELECT SLEEP(${SLEEP_TIMEOUT})"
mysql -uuser -ppass -Ae"${SQLSTMT}" &
mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql

This is a cleaner and simpler approach for very small databases.
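
If you are unsure whether 5 seconds is enough, one way to sanity-check (a sketch, not part of the original recipe) is to time a trial dump first and confirm it finishes well inside the lock window:

time mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > /dev/null

If the reported time approaches SLEEP_TIMEOUT, raise the timeout accordingly.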


  • For InnoDB tables, you should use the --single-transaction option, as mentioned in another answer.
  • For MyISAM tables there is --lock-tables (example below).

See the official mysqldump documentation for details.
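
For example, a MyISAM-only dump might look like this (a sketch; mydb is a placeholder database name):

mysqldump -uuser -ppass --lock-tables --databases mydb > mydb_backup.sql

Note that --lock-tables locks tables only per database as it is dumped, so it does not give a single consistent point in time across multiple databases the way a global read lock does.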


Here's how I did it. It should work in all cases since it uses FLUSH TABLES WITH READ LOCK.

#!/bin/bash

DB=example
DUMP_FILE=export.sql

# Lock all tables and hold the lock in a background task
mysql -uroot -proot "$DB" -e "FLUSH TABLES WITH READ LOCK; DO SLEEP(3600);" &
sleep 3

# Export the database while it is locked
mysqldump -uroot -proot --opt "$DB" > "$DUMP_FILE"

# When finished, kill the background task to release the lock
kill $! 2>/dev/null
wait $! 2>/dev/null

echo "Finished export, and unlocked!"

The shell sleep command just makes sure that the background task running the mysql locking command executes before the mysqldump starts. You could reduce it to 1 second and it should still be fine. Increase it to 30 seconds and try inserting a value into any table from another client during those 30 seconds: you'll see it's locked.
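
If you want to watch the lock in action, something like this from a second client should block until the script releases the lock (a sketch; example and t are the hypothetical database and table names here):

mysql -uroot -proot example -e "INSERT INTO t (id) VALUES (1);"

While it is blocked, SHOW PROCESSLIST will show the insert in the "Waiting for global read lock" state.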

There are two advantages to using this manual background locking instead of the mysqldump options --single-transaction and --lock-tables:

  1. This locks everything if you have a mix of MyISAM and InnoDB tables.
  2. You can run other commands in addition to the mysqldump during the same locking period. This is useful, for instance, when setting up replication on a master node, because you need to get the binary log position with SHOW MASTER STATUS; at the exact state of the dump (before unlocking the database) to be able to create a replication slave (see the sketch below).
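
For point 2, a minimal sketch of what you could add to the script between the mysqldump and the kill (same credentials as the script above; master_status.txt is just a hypothetical output file):

# Still inside the lock window: record the binlog coordinates matching the dump
mysql -uroot -proot -e "SHOW MASTER STATUS\G" > master_status.txt

The File and Position values captured there are what you later feed to CHANGE MASTER TO on the slave.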