Date:      Thu, 30 Sep 2004 13:59:05 -0700 (PDT)
From:      "Richard Lynch" <ceo@l-i-e.com>
To:        "Brian McCann" <bjmccann@gmail.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Backup/Restore
Message-ID:  <2034.67.167.52.21.1096577945.squirrel@www.l-i-e.com>
In-Reply-To: <2b5f066d04093013344d048003@mail.gmail.com>
References:  <2b5f066d04093013344d048003@mail.gmail.com>

Brian McCann wrote:
>      Hi all...I'm having a conceptual problem I can't get around and
> was hoping someone can change my focus here.  I've been backing up
> roughly 6-8 million small files (roughly 2-4k each) using dump, but
> restores take forever due to the huge number of files and directories.
>  Luckily, I haven't had to restore for an emergency yet...but if I
> need to, I'm kinda stuck.  I've looked at distributed file systems
> like CODA, but the number of files I have to deal with will make it
> choke.  Can anyone offer any suggestions?  I've pondered running
> rsync, but am very worried about how long that will take...

Do the files change a lot, or is it more like a few files added/changed
every day, and the bulk don't change?

If it's the latter, you might get the best performance from something
like Subversion (a successor to CVS).

Though I suspect rsync would also do well in that case.
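A minimal sketch of the rsync approach, assuming a hypothetical source
tree /data/files and snapshot area /backup/snapshots (both placeholder
paths).  With --link-dest, unchanged files are hard-linked against the
previous run, so each dated directory is a full browsable tree but only
the changed files consume new space -- which keeps both the backup and
any restore fast when most of the millions of files never change:

```shell
#!/bin/sh
# Placeholder paths -- adjust for your setup.
SRC=/data/files/          # trailing slash: copy the contents, not the dir
DEST=/backup/snapshots
TODAY=$(date +%Y-%m-%d)

# Copy only changed files; unchanged files are hard-linked against the
# previous snapshot pointed to by "latest".
rsync -a --delete \
  --link-dest="$DEST/latest" \
  "$SRC" "$DEST/$TODAY"

# Repoint "latest" at the snapshot we just made.
ln -sfn "$DEST/$TODAY" "$DEST/latest"
```

Restoring from this is just a cp or rsync back out of the snapshot
directory -- no dump/restore pass over millions of inodes.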

If a ton of those files are changing all the time, try a test run where
you create a tarball and then back up the tarball.  That may be a simple,
manageable solution.  There are probably other, more complex solutions of
which I am ignorant :-)
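The tarball idea above might look something like this (paths are
placeholders): roll the whole tree into one archive, then dump or copy
that single file.  One big archive turns millions of tiny files into a
single sequential stream, so a restore is one large read instead of
millions of small file creates:

```shell
#!/bin/sh
# Placeholder paths -- adjust for your setup.
SRC=/data/files
OUT=/backup/files-$(date +%Y%m%d).tar.gz

# -C changes into the parent dir first so the archive holds relative
# paths instead of absolute ones.
tar -czf "$OUT" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# Quick sanity check: make sure the archive lists cleanly.
tar -tzf "$OUT" > /dev/null && echo "archive OK"
```

The trade-off is that every run rewrites the whole archive, so this only
pays off if the dump/restore overhead per file is what's killing you.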

-- 
Like Music?
http://l-i-e.com/artists.htm


