Date:      Fri, 21 Mar 2003 12:57:27 +0100
From:      Alexander Haderer <alexander.haderer@charite.de>
To:        Greg 'groggy' Lehey <grog@FreeBSD.org>
Cc:        Maarten de Vries <mdv@unsavoury.net>, Dirk-Willem van Gulik <dirkx@webweaving.org>, freebsd-questions@FreeBSD.ORG
Subject:   Re: Three Terabyte
Message-ID:  <5.2.0.9.1.20030321113340.019d12a0@postamt1.charite.de>
In-Reply-To: <20030320235600.GG60356@wantadilla.lemis.com>
References:  <5.2.0.9.1.20030320125711.019eb9c8@postamt1.charite.de> <20030320111436.N74106-100000@foem.leiden.webweaving.org>

At 10:26 21.03.2003 +1030, Greg 'groggy' Lehey wrote:
>On Thursday, 20 March 2003 at 13:13:18 +0100, Alexander Haderer wrote:
> > At 12:53 20.03.2003 +0100, Maarten de Vries wrote:
> >> This would be for backup. Data on about 50 webservers would be backed up
> >> to it on a nightly basis. So performance wouldn't be important.
> >
> > Sure? Consider this:
> >
> > a.
> > Filling 3TB at 1 MByte/s takes more than 800 hours, i.e. about 33 days.
>
>I do a nightly backup to disk.  It's compressed (gzip), which is the
>bottleneck.  I get this sort of performance:
>
>dump -2uf - /home | gzip > /dump/wantadilla/2/home.gz
>   ...
>   DUMP: DUMP: 1254971 tape blocks
>   DUMP: finished in 217 seconds, throughput 5783 KBytes/sec
>   DUMP: level 2 dump on Thu Mar 20 21:01:31 2003
>
>You don't normally fill up a backup disk at once, so this would be
>perfectly adequate.  I'd expect a system of the kind that Maarten's
>talking about to be able to transfer at least 40 MB/s sequential at
>the disk.  That would mean he could backup over 1 TB in an 8 hour
>period.

Of course you are right. My note a. was meant as a more general hint to
think about transfer rates when dealing with large files/filesystems.
Maarten gave no details about how the webservers are connected to the
backup server. I should have given more details of what I meant: when
backing up 50 webservers over the network to one backup server, the
network may become a bottleneck. If you have to use encrypted
connections (ssh) because the webservers are located elsewhere, you
need CPU power on the server side for each connection.
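
For example (numbers purely illustrative): if the 50 servers share a
100 Mbit/s uplink (roughly 12 MByte/s) and each writes, say, 20 GB per
night, the raw transfer alone is 50 * 20 GB / 12 MB/s, i.e. about 23
hours, before any ssh overhead is added.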

> > b.
> > Using ssh + dump/cpio/tar needs CPU power for encryption, especially when
> > multiple clients save their data at the same time.
>
>You can share the compression across multiple machines.  That's what
>was happening in the example above.

It is a good idea to do the compression on the client side.
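
A minimal sketch of what I mean (host and directory names made up),
with each webserver compressing its own dump before it goes over the
wire, so the backup server only has to decrypt and write:

dump -0uf - /home | gzip | ssh backupserver 'cat > /dump/web01/home.gz'

Choosing a cheaper cipher (e.g. ssh -c blowfish-cbc) further reduces
the per-connection CPU load on the server side.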

As I understand your example, /dump/wantadilla/2 is either a local
directory or one mounted via NFS. The latter requires a local network
if you don't want to do NFS mounts across the Internet. Is this right?
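
By the NFS case I mean something like (server name hypothetical):

mount_nfs backupserver:/dump /dump

which one would normally only do on a trusted local network.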

with best regards

         Alexander

-- 
------------------------------------------------------------------
Alexander Haderer                     Charite
                                       Campus Virchow-Klinikum
Tel.  +49 30 - 450 557 182            Strahlenklinik und Poliklinik
Fax.  +49 30 - 450 557 117            Sekr. Prof. Felix
Email alexander.haderer@charite.de    Augustenburger Platz 1
www   http://www.charite.de/rv/str/   13353 Berlin - Germany
------------------------------------------------------------------

