Date:      Thu, 18 Feb 1999 11:27:27 +1030
From:      Greg Lehey <grog@lemis.com>
To:        tim@iafrica.com.na, "Bryn Wm. Moslow" <bryn@spacemonster.org>
Cc:        freebsd-isp@FreeBSD.ORG
Subject:   Re: DPT 3334UW RAID-5 Slowness / Weird FS problems
Message-ID:  <19990218112727.L515@lemis.com>
In-Reply-To: <36CAAAA2.6798@iafrica.com.na>; from Tim Priebe on Wed, Feb 17, 1999 at 01:40:18PM +0200
References:  <36C88CC6.E1621F6F@spacemonster.org> <36CAAAA2.6798@iafrica.com.na>

On Wednesday, 17 February 1999 at 13:40:18 +0200, Tim Priebe wrote:
> I inferred from your message that your load is such that a single drive
> would spend too much time seeking. It is with this in mind that I have
> made the following comments.
>
> Bryn Wm. Moslow wrote:
>>
>> I recently installed a DPT 3334UW with 64MB cache in a mail server
>> running RAID-5 with a 32K stripe on an external case on which the user
>> mail spool is mounted. The array is comprised of 6 Seagate Ultra Wide
>> 4.5GB SCA drives. The system is a P2 300 with 384MB and uses an
>> additional Seagate UW drive for boot, /usr, /var, swap, and staff home
>> directories. It doesn't go into swap often but if it does it only hits
>> about 5 to 10 MB. The system is running FreeBSD 2.2.8.
>>
>> My first problem was that I initially tried to do 16K per inode to speed
>
> [...]
>
>> I ended up having to use the default settings for newfs to get the
>> system to work, wasting millions of inodes and bringing me to my next
>> problem: Under load the filesystem is horribly slow. I expected some of
>> this with the RAID-5 overhead but it's actually slower than a CCD I just
>> moved from that was using 5 regular 2GB fast SCSI-2 drives, much slower.
>> When running ktrace on the processes (qpopper and mail.local mainly) and
>> watching top I can see that most of the processes are waiting for disk
>> access. I've tried enabling/disabling various DPT options in the kernel
>> but it's all about the same. I'd really like to stick with RAID-5 so
>> using 0 or 1 just isn't what I'm looking for.
>
> This is to be expected with RAID-5. The user mail spool can have more
> write requests than read requests. Every write causes every disk in the
> array to seek.

This should not be the case.  You only need to access the data
drive(s) and the parity drive(s).  As I pointed out in an earlier mail
message, you should try to keep the stripes big, in which case over
99% of all transfers only access one data drive and one parity drive.
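
To make that concrete, here's a small illustrative sketch in C.  The
layout it uses (rotating parity, N-1 data units per stripe) is
invented for the example and is not the actual DPT or Vinum code, but
it shows which drives a single write really touches:

/*
 * Illustrative only: which drives one write touches in a RAID-5
 * array with rotating parity.  Layout details are made up; real
 * DPT and Vinum layouts differ, but the point stands: a write
 * smaller than one stripe unit hits one data drive and one
 * parity drive, not all six.
 */
#include <stdio.h>

#define NDRIVES   6
#define STRIPE_KB 32

static void
drives_touched(int offset_kb, int length_kb)
{
    int touched[NDRIVES] = { 0 };
    int unit, d;

    for (unit = offset_kb / STRIPE_KB;
         unit <= (offset_kb + length_kb - 1) / STRIPE_KB;
         unit++) {
        int stripe = unit / (NDRIVES - 1);  /* NDRIVES-1 data units per stripe */
        int parity = stripe % NDRIVES;      /* parity column rotates per stripe */
        int data = unit % (NDRIVES - 1);

        if (data >= parity)                 /* skip over the parity column */
            data++;
        touched[data] = touched[parity] = 1;
    }
    printf("%3d KB write at %4d KB touches drives:", length_kb, offset_kb);
    for (d = 0; d < NDRIVES; d++)
        if (touched[d])
            printf(" %d", d);
    printf("\n");
}

int
main(void)
{
    drives_touched(100, 8);     /* one 8 KB block: one data + one parity drive */
    drives_touched(100, 200);   /* a large transfer spans several units */
    return 0;
}

Make the stripe bigger and the second case also collapses to a single
data drive plus the parity drive, which is where the >99% figure
comes from.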

> What this means for your performance in comparison to your CCD
> solution is:
>
> average number of seeks per drive
>
>  CCD = ( nr + nw )/N
>
>  RAID-5 = nr/N + nw

This ignores multi-block and multi-stripe transfers, but that's
reasonable.  It also ignores transfer time, which is not reasonable.
As I showed in an earlier message, the seek times on modern disks are
of the same order of magnitude as the rotational latency.
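
For what it's worth, plugging invented numbers into those formulas
(100 reads/s, 100 writes/s, 6 drives; not measurements from this
system) gives an idea of the scale.  The third figure is my own rough
estimate for the large-stripe case, where a small write becomes a
four-I/O read-modify-write (read old data and old parity, write both
back) rather than a seek on every drive:

#include <stdio.h>

int
main(void)
{
    double nr = 100.0;  /* reads per second, invented */
    double nw = 100.0;  /* writes per second, invented */
    double N = 6.0;     /* drives in the array */

    printf("CCD                : %5.1f ops/s per drive\n", (nr + nw) / N);
    printf("RAID-5 (as above)  : %5.1f ops/s per drive\n", nr / N + nw);
    printf("RAID-5, big stripes: %5.1f ops/s per drive\n", (nr + 4.0 * nw) / N);
    return 0;
}

Even in the friendlier estimate RAID-5 is slower than the CCD for a
write-heavy spool, just not by the margin the first formula suggests.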

> If you want the performance of your CCD, and redundancy, then you
> should consider RAID 0+1. Otherwise consider distributing your
> various subdirectories across your 6 drives, no RAID.

RAID-1 is a *lot* more expensive in disk space, and writes still require at
least two seeks, depending on the number of copies you keep (Vinum
allows you to keep up to 8 copies if you can find a reason to do so).
With two copies (the minimum, of course) you're still performing two
seeks per write, but you save by not having to read before writing.
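
As a back-of-the-envelope comparison, the generic per-write I/O counts
look like this (textbook small-write figures, not measurements of this
particular array):

#include <stdio.h>

/* One physical write per copy; mirrors need no pre-read. */
static int
mirror_ios(int copies)
{
    return copies;
}

/* Read old data and old parity, write new data and new parity. */
static int
raid5_small_write_ios(void)
{
    return 4;
}

int
main(void)
{
    printf("RAID-1, 2 copies   : %d I/Os per write\n", mirror_ios(2));
    printf("RAID-1, 8 copies   : %d I/Os per write\n", mirror_ios(8));
    printf("RAID-5 small write : %d I/Os per write\n", raid5_small_write_ios());
    return 0;
}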

Greg
--
See complete headers for address, home page and phone numbers
finger grog@lemis.com for PGP public key





