Speed up gzip compression

If you have a multi-core machine, using pigz is much faster than traditional gzip.

pigz, which stands for parallel implementation of gzip, is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data. pigz was written by Mark Adler, and uses the zlib and pthread libraries.

pigz can be used as a drop-in replacement for gzip. Note that only the compression can be parallelised, not the decompression.
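For instance (file names here are placeholders), day-to-day usage mirrors gzip, with an optional -p flag to control the thread count:

pigz large.sql            # compresses to large.sql.gz, just like gzip
pigz -d large.sql.gz      # decompresses, but this part runs single-threaded
pigz -p 4 large.sql       # cap compression at 4 threads (default: all cores)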

Using pigz, the command line becomes:

mysqldump "$database_name" | pigz > "$BACKUP_DIR/$database_name.sql.gz"
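If the same machine is serving live queries, you may not want pigz grabbing every core. A small sketch, assuming GNU coreutils' nproc is available:

# Use at most half the available cores for compression, but at least one,
# so the database server keeps some CPU headroom.
cores=$(( $(nproc) / 2 )); [ "$cores" -lt 1 ] && cores=1
mysqldump "$database_name" | pigz -p "$cores" > "$BACKUP_DIR/$database_name.sql.gz"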

From man gzip:

   -# --fast --best
          Regulate the speed of compression using the specified
          digit #, where -1 or --fast indicates the fastest
          compression method (less compression) and -9 or --best
          indicates the slowest compression method (best
          compression). The default compression level is -6
          (that is, biased towards high compression at expense
          of speed).
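These level flags work with pigz as well. If you want to pick a level empirically, a quick comparison (dump.sql is a placeholder file) shows the speed/size trade-off:

time gzip -1 -c dump.sql > dump-fast.sql.gz   # fastest, largest output
time gzip -9 -c dump.sql > dump-best.sql.gz   # slowest, smallest output
ls -lh dump-fast.sql.gz dump-best.sql.gz      # compare the resulting sizes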

If you need the dump to be fast because of database locking issues, and you have a disk fast and large enough to hold the data uncompressed temporarily, you could consider this method instead:

mysqldump "$database_name" > "$BACKUP_DIR/$database_name.sql"
nice gzip "$BACKUP_DIR/$database_name.sql" &

That is, store the backup first (which is faster than gzipping it, provided the disk is fast and the CPU is slow) and then let the gzipping happen in the background.
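One caveat if this runs inside a backup script: the script can reach its end, or a later step can start, while gzip is still working. A sketch of guarding against that with the shell's wait builtin:

mysqldump "$database_name" > "$BACKUP_DIR/$database_name.sql"
nice gzip "$BACKUP_DIR/$database_name.sql" &
# ... other backup steps can run here while gzip works in the background ...
wait   # block until the background gzip finishes before relying on the .gz file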

This might also allow you to use a better compression algorithm, as it no longer matters (directly) how long the compression takes.
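For example (assuming xz is installed; the level is illustrative), you could swap in xz, which compresses considerably better than gzip at the cost of much more CPU time:

# Acceptable here because it runs detached from the dump; like gzip,
# xz replaces the input file, producing $database_name.sql.xz.
nice xz -9 "$BACKUP_DIR/$database_name.sql" &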