Parallel mysqldump backup script available. Testers wanted.

Large databases mean long mysqldump times and long waits on globally locked tables. These problems never really go away when you rely on mysqldump with --all-databases or a list of databases, because it dumps schemas serially. I'm not going to explain serial vs. parallel processing here, since that's a larger topic; suffice it to say that on today's multi-core / multi-CPU servers, a serial mysqldump export only ever uses one core. So, I have a new script that attempts to alleviate those issues, and now I need testers to provide feedback and improvements.
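The core idea — one forked export per schema instead of one serial loop — can be sketched in a few lines of shell. This is a minimal illustration, not the actual Perl script: the schema names are placeholders, and a stub function stands in for mysqldump so the sketch runs without a live MySQL server (in real use the stub body would be something like `mysqldump --single-transaction "$1" > "$OUTDIR/$1.sql"`).

```shell
#!/bin/sh
# Sketch of the fork-and-wait pattern: each schema is exported by its
# own process, so multiple cores can work at once.
OUTDIR=$(mktemp -d)

dump_schema() {
    # Stub standing in for mysqldump so the sketch is self-contained.
    printf -- '-- stub dump of %s\n-- Dump completed\n' "$1" > "$OUTDIR/$1.sql"
}

for db in app1 app2 app3; do
    dump_schema "$db" &   # fork: each export runs as a separate process
done
wait                      # block until every forked dump has finished
```

The `&` / `wait` pair is the whole trick: the shell forks each export and then blocks until all of them complete, which is also why the global lock window shrinks compared to a serial run.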


In order to keep some sanity when dealing with hundreds of database servers, the script takes care of the following:

  1. low global locking time requirements: solved by parallel tasks / forked processes
  2. backup file checking: each mysqldump file is verified to end with the "-- Dump completed" marker
  3. ability to back up all databases or an array of databases
  4. when backing up all databases, the ability to skip directory names like information_schema, lost+found, test, etc. that can appear in the data directory but aren't really schemas
  5. compression of SQL files
  6. size reporting for plain-text and compressed SQL files
  7. overall elapsed time, plus per-schema export time, reported for each backup
  8. debug-level logging of subroutines for troubleshooting if needed
  9. file pruning to keep 28 days of backups, so you don't have to manually remove files to keep the backup partition from filling up
  10. talkback reporting for the Monolith monitoring system, as well as email reporting
  11. logs the final report to /var/log/messages, so the information reaches network-based syslog collection
  12. process exit-code reporting for various failure cases, such as "mysqldump failed because the binlog is not enabled when using --master-data=x"
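A few of the items above (the completion check, compression with size reporting, and the 28-day pruning) can be sketched as small shell helpers. The function names and argument layout here are illustrative assumptions, not the actual script's internals:

```shell
#!/bin/sh
# Illustrative helpers for items 2, 5/6, and 9 above; names are
# assumptions, not taken from the real Perl script.

# Item 2: a finished mysqldump file ends with a "-- Dump completed"
# comment; if the marker is missing, the export was interrupted.
verify_dump() {
    tail -n 1 "$1" | grep -q -- '-- Dump completed'
}

# Items 5/6: compress the dump and report plain vs. compressed sizes.
compress_dump() {
    raw=$(wc -c < "$1")
    gzip -f "$1"
    gz=$(wc -c < "$1.gz")
    echo "$1: $raw bytes plain, $gz bytes gzipped"
}

# Item 9: delete compressed dumps older than 28 days so the backup
# partition never fills up.
prune_backups() {
    find "$1" -name '*.sql.gz' -mtime +28 -delete
}
```

The completion-marker check is cheap insurance: a dump that died mid-export still produces a plausible-looking SQL file, and the trailing comment is the only built-in signal that mysqldump actually finished.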

So, all that said: if you want to help test this script, email me at "the matt reid at g m a i l . c o m".

It's a bit over 800 lines of Perl, so I'm not going to paste it here; email me instead. :)

One thought on "Parallel mysqldump backup script available. Testers wanted."

  1. admin says:

    Before mentioning mk-parallel-dump or Maatkit:

    Here's an email response I just sent to Andrew Houghton that addresses something people keep bringing up:

    If mk-parallel-dump did all of the things I need, I'd use it. Since it doesn't, I wrote this. Its main purpose is to be part of Monolith (which is what the talkback reporting is for), and I don't want to worry about Maatkit's release cycle when I release versions of Monolith. So it makes more sense to have this as a wrapper for mysqldump than to require Maatkit as a dependency of Monolith and wrap that.

    Maatkit is a great piece of software, but in this case, not what I wanted.
