Easy MySQL: how to backup databases to a remote machine

Here’s a simple answer to a simple question: “How do I run a backup of MySQL to another machine without writing to the local server’s filesystem?” This is especially useful if you are running out of space on the local server and cannot write a temporary file to disk during the backup.

Method one – this writes a remote file.
mysqldump [options] [db_name|--all-databases] | gzip -c | ssh user@host.com "cat > /path/to/new/file.sql.gz"

Method two – this writes directly into a remote MySQL server
mysqldump [options] [db_name|--all-databases] | mysql --host=[remote host] --user=root --password=[pass] [db_name]
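As a filled-in sketch of method one (the user names, hostnames, and paths below are placeholders, not values from any real setup):

```shell
# Dump one database, compress it in transit, and write a date-stamped
# file on the remote host -- nothing is written to the local disk.
# "dbuser", "mydb", "backup@backuphost.example.com", and "/backups"
# are all placeholders; substitute your own values.
mysqldump --user=dbuser --password mydb \
  | gzip -c \
  | ssh backup@backuphost.example.com "cat > /backups/mydb-$(date +%F).sql.gz"
```

Note that `$(date +%F)` is expanded by the local shell before the command string is sent over ssh, which is fine here since we only want a date-stamped filename.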

6 thoughts on “Easy MySQL: how to backup databases to a remote machine”

  1. Shlomi Noach says:

    Good write,
    may as well add “--compress” to the mysql invocation in Method two. This will compress the data over the network (and decompress it on the remote server).
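With that flag added, method two would look like this (everything in brackets is still a placeholder):

```shell
# --compress enables client/server protocol compression, so the rows are
# compressed on the wire and decompressed transparently on the remote end.
mysqldump [options] [db_name|--all-databases] \
  | mysql --compress --host=[remote host] --user=root --password=[pass] [db_name]
```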

  2. Darren says:

    Good post Matt,
    Another good way to do this is to mount a dump folder from the remote machine locally via sshfs and FUSE (if they are installed – but if not, they are easily installed too :) especially on Linux).

    sshfs user@remotebox.com:/dump/directory /local/directory
    mysqldump [options] [db_name|--all-databases] | gzip -c > /local/directory/dump.sql.gz
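Once the dump is done, the remote filesystem can be detached again with FUSE’s unmount helper (the mountpoint is the same placeholder as above):

```shell
# Unmount the sshfs-mounted remote directory when the backup completes.
fusermount -u /local/directory
```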

  3. Mark R says:

    Unfortunately this kind of mechanism only works well for small databases on servers which aren’t in particularly active use.

    By default mysqldump will take a global lock for the duration of the dump, which is unacceptable for all but the quietest of servers.

    The alternatives are nontrivial. Taking an inconsistent dump is pointless, and taking a consistent dump either involves “stop-the-world” or relying on the server having exclusively transactional tables (innodb).

    We have found that “rsync” on the data directory works well, as the bulk of the operation can be done with the server “live”, plus a short FLUSH TABLES WITH READ LOCK to do the final rsync.
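A rough sketch of that rsync approach (paths and hostnames are placeholders, and this only yields a usable copy for tables that are safe to copy under a read lock, e.g. MyISAM – see the caveats in the reply below):

```shell
# First pass: copy the bulk of the data directory while the server is live.
rsync -a /var/lib/mysql/ backup@backuphost.example.com:/backups/mysql/

# Final pass: hold FLUSH TABLES WITH READ LOCK in a single client session
# (the lock is released the moment the session ends), and run the catch-up
# rsync from inside that session via the client's "system" command.
# --named-commands enables "system" for non-interactive input.
mysql --named-commands <<'EOF'
FLUSH TABLES WITH READ LOCK;
system rsync -a /var/lib/mysql/ backup@backuphost.example.com:/backups/mysql/
UNLOCK TABLES;
EOF
```

Holding the lock and running the final rsync in the same session matters: issuing the lock in one `mysql -e` invocation and rsyncing afterwards would not work, since the lock disappears as soon as that client disconnects.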

  4. admin says:

    @Mark R: in my experience none of those methods work well for larger schemas. In fact, rsync on the data directory with the server live will corrupt the data at worst and give you an inconsistent copy at best (when dealing with InnoDB, which the majority of larger servers use exclusively for the big tables).

    I’ve found that the best method is to take a maintenance window (oh my god downtime!) and run a backup, move it to a new server, import it, initiate replication and then switch the app servers to hit the new server after data and performance have been sanity tested. If you can never take a maintenance window then you’re effectively screwed anyway so this is all irrelevant.
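The maintenance-window procedure described above might look roughly like this (server names and credentials are placeholders, and this assumes the binary log is enabled on the old server):

```shell
# During the window: dump everything with binlog coordinates recorded,
# and stream it straight into the new server (no local temp file).
mysqldump --all-databases --master-data=1 \
  | gzip -c | ssh newdb.example.com "gunzip -c | mysql"

# On the new server: fill in the old master's address and credentials
# (--master-data=1 already embedded the log file and position in the
# dump), then start replication.
mysql -e "CHANGE MASTER TO MASTER_HOST='olddb.example.com',
          MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"
```

After replication catches up and the data and performance have been sanity tested, the app servers can be pointed at the new host.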

    Anyway, the post was just meant to show a simple command that some may find useful at some point in their admin careers – even with large customers we start with a blank database or with data that is not yet live, so these commands are in fact useful. Not every situation is last-minute midnight work during a server emergency.

  5. admin says:

    @Shlomi: good idea!
    @Darren: I’ve never used sshfs before but it looks rather interesting. I’ll check that out on some dev servers today :)

  6. Bodo says:

    @Mark: did you try --single-transaction with the InnoDB engine?
    We have been doing dumps every night this way without trouble for many years now.
    You may also need the --force switch when doing an import, to skip
    errors for references which are not yet imported.

    Another not-so-well-known method is to dump just a subset of the data
    for testing, for example:

    mysqldump --single-transaction --user=youruser --password=yourpw yourdb bigtable --where="yourdatecolumn >= date_sub(curdate(), interval 30 day)" | gzip -c > bigtable_last_30_days.sql.gz
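Loading that subset back into a scratch database for testing could then look like this (the user, password, and database names are placeholders):

```shell
# Decompress the subset dump and replay it into a test database.
gunzip -c bigtable_last_30_days.sql.gz | mysql --user=youruser --password=yourpw testdb
```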
