Date:      Thu, 12 Jan 2006 15:37:53 -0800 (PST)
From:      Philip Hallstrom <freebsd@philip.pjkh.com>
To:        Hans Nieser <h.nieser@xs4all.nl>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Remote backups, reading from and writing to the same file
Message-ID:  <20060112153427.S54310@wolf.pjkh.com>
In-Reply-To: <43C6E55A.8020500@xs4all.nl>
References:  <43C6E55A.8020500@xs4all.nl>

> For a while I have been doing remote backups from my little server at home 
> (which hosts some personal websites and also serves as my testing webserver) 
> by tarring everything I wanted to be backed up and piping it to another 
> machine on my network with nc(1), for example:
>
> On the receiving machine: nc -l 10000 > backup-`date +%Y-%m-%d`.tar.gz
>
> On my server: tar -c -z --exclude /mnt* -f - / | nc -w 5 aphax 10000
>
> (Some excludes for tar(1) are left out for simplicity's sake)
>
> Among the things being backed up are my mysql database tables. This made me 
> wonder whether the backup could possibly get borked when mysql writes to any 
> of the mysql tables while tar is reading from them.
>
> Do I really have to use MySQL's tools to do a proper SQL dump or stop MySQL 
> (and any other services that may write to files included in my backup) before 
> doing a backup? Do any of the more involved remote-backup solutions have ways 
> of working around this? Or is it simply not possible to write to a file while 
> it is being read?

The short answer is yes, you really do.  The medium answer is: I would if I 
were you :-)

The long answer (at least to the extent I know it) is...

You might be able to take a snapshot of the filesystem mysql's files are 
on and back that up, since the files in the snapshot would at least be 
consistent with each other.  But everything I've read about backing up a 
database suggests that a proper SQL dump is the way to go.
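If you do go the snapshot route on UFS2, something along these lines ought 
to work -- untested and off the top of my head, and the paths assume the 
default /var/db/mysql data directory on a /var filesystem, so adjust for 
your layout:

# Create a snapshot of /var, mount the frozen view read-only through a
# memory disk, tar up the mysql files from it, then clean up.
mount -u -o snapshot /var/.snap/backup /var
MD=`mdconfig -a -t vnode -o readonly -f /var/.snap/backup`
mount -o ro /dev/$MD /mnt
tar -czf - /mnt/db/mysql | nc -w 5 aphax 10000
umount /mnt
mdconfig -d -u $MD
rm /var/.snap/backup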

If you really don't want to do that, you might also be able to use one of 
the various LOCK commands in MySQL to block all writes until you've copied 
the files over.
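For example, something like this (untested sketch; /var/db/mysql is the 
default data directory on FreeBSD and /var/backups is just an example 
path).  The catch is that FLUSH TABLES WITH READ LOCK only holds while the 
session that issued it stays connected, so the copy has to run from inside 
that same mysql session:

# Hold a global read lock while copying the data directory; the client's
# "system" command runs the tar without closing the session.
mysql -u root -p <<'EOF'
FLUSH TABLES WITH READ LOCK;
system tar -czf /var/backups/mysql-data.tar.gz /var/db/mysql
UNLOCK TABLES;
EOF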

But really, a mysqldump ... | gzip > file should result in a very small 
file.  And you could pipe that over the network (or even start mysqldump 
from your backup machine) if you don't want to deal with a temporary file 
on the server.
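For instance (untested; "aphax" is your backup box from the example above, 
"yourserver" stands for your home server, and this assumes the mysql 
credentials are already set up in ~/.my.cnf or passed on the command line):

# Dump, compress, and ship over ssh in one pipeline (no temp file on the
# server):
mysqldump --all-databases | gzip | ssh aphax "cat > backup-`date +%Y-%m-%d`.sql.gz"

# Or pull it from the backup machine instead:
ssh yourserver "mysqldump --all-databases | gzip" > backup-`date +%Y-%m-%d`.sql.gz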

You might also consider rsync, which only copies files that have changed; 
that can be handy if bandwidth is an issue.  You can set it up to keep 
backup copies of the files it replaces as well, and it can run over ssh.
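A rough example (again untested; "aphax" and the /backups paths are just 
placeholders):

# Mirror / to the backup box over ssh, skipping /mnt like the tar example,
# and keep a dated copy of anything that gets replaced or deleted.
rsync -az --delete --exclude=/mnt \
    --backup --backup-dir=/backups/changed-`date +%Y-%m-%d` \
    -e ssh / aphax:/backups/current/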

-philip


