Date:      Thu, 24 May 2007 11:24:32 -0400
From:      Jerry McAllister <jerrymc@msu.edu>
To:        Jason Lixfeld <jason+lists.freebsd-questions@lixfeld.ca>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Backup advice
Message-ID:  <20070524152432.GB4322@gizmo.acns.msu.edu>
In-Reply-To: <28E0DBBA-BB24-4D6B-AE65-07EB5254025C@lixfeld.ca>
References:  <28E0DBBA-BB24-4D6B-AE65-07EB5254025C@lixfeld.ca>

On Wed, May 23, 2007 at 07:27:05PM -0400, Jason Lixfeld wrote:

> So I feel a need to start backing up my servers.  To that end, I've  
> decided that it's easier for me to grab an external USB drive instead  
> of a tape.  It would seem dump/restore are the tools of choice.  My  
> backup strategy is pretty much "I don't want to be screwed if my RAID  
> goes away".  That said I have a few questions along those lines:

A popular sentiment.

> - Most articles I've read suggest a full backup, followed by  
> incremental backups.  Is there any real reason to adopt that format  
> for a backup strategy like mine, or is it reasonable to just do a  
> dump 0 nightly?  I think the only reason to do just one full backup  
> per 'cycle' would be to preserve system resources, as I'm sure it's  
> fairly taxing on the system during dump 0 times.

Yes, dump/restore is generally the way to go, unless your partitions
are not laid out in a way that conveniently separates what you want
to dump from what you do not want to dump.

The main reason to do a full dump followed by a series of incrementals
is to save resources.   This includes dump time as well as media to
receive the dump[s].   If you happen to be using tape for example, a
large full dump may take several tapes for each dump, but an incremental 
may then take only one for each.

There is one more thing to consider.   The way dump works is that it
starts by making a large list of all the stuff it will dump.  Then it
starts writing to media (tape, disk file, network, whatever).  On systems
where files change frequently, especially new ones being added and old
ones being deleted, it is quite possible, even probable that there will
be changes between the time the index list is made and when the dump
of a particular file/directory is written.   dump and restore handle
this with no problem beyond a little warning message, but it makes
the backup a little less meaningful.   You will often see messages
from restore saying it is skipping a file it cannot find.  That is
because the file was deleted from disk after the list was made, but
before the data was written to media.   Files created after the list
was made will not be dumped until the next time dump is run.  Files
that are modified after the list was made will only be dumped if they
were also modified before the list was made.

That said, if the amount I am backing up takes less than about
an hour for a level 0 and I have room for it, I always do the full 
dump each time and ignore the incremental issue.    In cases where
the full dump takes a long time, but there are typically not a lot of
changes on the system, I usually do a level 0, followed only by
a series of level 1 dumps until they tend to get large and then start
another level 0 dump.
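
For what it's worth, a nightly level 0 to a USB disk can be a one-line
cron job.  A sketch only - the mount point and filesystem here are
examples, so adjust to taste:

```shell
#!/bin/sh
# Nightly full (level 0) dump of /usr to a date-stamped file on a USB
# drive assumed to be mounted at /mnt/backup.  Flags: -0 = level 0,
# -u = record the dump in /etc/dumpdates, -a = auto-size (skip the
# tape-length calculation), -L = take a snapshot of the live
# filesystem first, -f = write to this file instead of a tape.
dump -0uaLf /mnt/backup/usr-$(date +%Y%m%d).dump0 /usr
```

Run from root's crontab it needs no interaction; the -u flag is what
keeps /etc/dumpdates current so later incrementals know their baseline.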

> - Can dump incrementally update an existing dump, or is the idea that  
> a dump is a closed file and nothing except restore should ever touch it?

No, dump does not work that way.   It works on complete files.
It keeps a record of when the most recent dumps were done along with
the level of the dump that was done - in a file called /etc/dumpdates.   
Then, when it makes its list of files and directories to dump, it looks
at the date each file was last changed.   If that change is more recent
than the most recent dump at any lower level, it adds the file to the
list and dumps it to the incremental media.   A full (level 0) dump
simply compares against the "epoch" (1970), the nominal beginning of
time for UNIX, so every file and directory has been changed since then
and thus gets added to the list to be dumped.

So, essentially, yes to the second part of the question.  A dump file
might as well be considered a closed file.  Incrementals are additional
closed files. 
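
One practical consequence is that restoring means replaying those
closed files in order: the level 0 first, then each incremental, oldest
to newest.  A sketch, with hypothetical filenames, rebuilding into a
freshly newfs'd filesystem mounted at /mnt:

```shell
#!/bin/sh
# restore -r rebuilds a whole filesystem tree into the current
# directory; -f names the dump file to read.  Apply the full dump
# first, then each incremental in the order it was taken.  The
# filenames below are examples only.
cd /mnt
restore -rf /mnt/backup/usr-20070501.dump0    # the level 0
restore -rf /mnt/backup/usr-20070508.dump1    # then the level 1s, in date order
restore -rf /mnt/backup/usr-20070515.dump1
```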

> 
> - How much does running a backup through gzip actually save?  Is  
> taxing the system to compress the dump and the extra time it takes  
> actually worth it, assuming I have enough space on my backup drive to  
> support a dump 0 or two?

As with other data, it depends on the data.   I never compress dumps.
Maybe I am a little superstitious, but I don't want any extra
complication potentially in the way at the moment when I
find I need something from the dump.    Also, you would have to
uncompress the dump before you could do an 'interactive' restore or
any other partial restore.
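
If you do decide to compress, the usual trick is to do it in a
pipeline at both ends, so the compressed file never has to be expanded
on disk first - though you still pay the CPU cost.  A sketch, with the
same hypothetical paths as above:

```shell
#!/bin/sh
# Compress on the way out: dump writes to stdout (-f -) and gzip
# writes the compressed archive.
dump -0uaLf - /usr | gzip > /mnt/backup/usr.dump0.gz

# Decompress on the way back in: zcat feeds restore on stdin (-f -),
# here starting an interactive restore session.
zcat /mnt/backup/usr.dump0.gz | restore -if -
```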

////jerry

> 
> - Other folks dumping to a hard drive at night?  Care to share any of  
> your experiences/rationale?
> 
> Thanks in advance.
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"


