Date:      Wed, 17 Feb 1999 13:40:18 +0200
From:      Tim Priebe <tim@iafrica.com.na>
To:        "Bryn Wm. Moslow" <bryn@spacemonster.org>
Cc:        freebsd-isp@FreeBSD.ORG
Subject:   Re: DPT 3334UW RAID-5 Slowness / Weird FS problems
Message-ID:  <36CAAAA2.6798@iafrica.com.na>
References:  <36C88CC6.E1621F6F@spacemonster.org>

I inferred from your message that your load is such that a single drive
would spend too much time seeking. It is with this in mind that I have
made the following comments.

Bryn Wm. Moslow wrote:
> 
> I recently installed a DPT 3334UW with 64MB cache in a mail server
> running RAID-5 with a 32K stripe on an external case on which the user
> mail spool is mounted. The array is comprised of 6 Seagate Ultra Wide
> 4.5GB SCA drives. The system is a P2 300 with 384MB and uses an
> additional Seagate UW drive for boot, /usr, /var, swap, and staff home
> directories. It doesn't go into swap often but if it does it only hits
> about 5 to 10 MB. The system is running FreeBSD 2.2.8.
> 
> My first problem was that I initially tried to do 16K per inode to speed

[...]

> I ended up having to use the default settings for newfs to get the
> system to work, wasting millions of inodes and bringing me to my next
> problem: Under load the filesystem is horribly slow. I expected some of
> this with the RAID-5 overhead but it's actually slower than a CCD I just
> moved from that was using 5 regular 2GB fast SCSI-2 drives, much slower.
> When running ktrace on the processes (qpopper and mail.local mainly) and
> watching top I can see that most of the processes are waiting for disk
> access. I've tried enabling/disabling various DPT options in the kernel
> but it's all about the same. I'd really like to stick with RAID-5 so
> using 0 or 1 just isn't what I'm looking for.

This is to be expected with RAID-5. The user mail spool can have more 
write requests than read requests. Every write causes every disk in the
array to seek. What this means for your performance in comparison to
your CCD solution is:

Average number of seeks per drive:

  CCD    = (nr + nw)/N

  RAID-5 = nr/N + nw

where
  nr is the number of reads
  nw is the number of writes
  N  is the number of drives for CCD,
     and one less than the number of drives for RAID-5

For your case with N = 5, if nw = 0 then
  CCD = nr/5,    RAID-5 = nr/5,    one drive = nr

and for nw = nr
  CCD = 2*nr/5,  RAID-5 = 6*nr/5,  one drive = 2*nr

In the worst case (writes dominating), each of your RAID-5 drives will
spend about as much time seeking as a single drive would, while using
N + 1 times as much bandwidth on the SCSI bus. Adding more drives to the
array does not improve the situation.
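
If you want to try other read/write mixes, here is the same arithmetic as
a small Python sketch. It just encodes the model above (including my
assumption that every RAID-5 write makes every drive seek); the function
names and example numbers are only for illustration:

  # Seek-count model: average seeks per drive for CCD vs. RAID-5,
  # assuming every RAID-5 write makes every drive in the array seek.

  def ccd_seeks(nr, nw, n):
      """CCD: each read or write lands on one of the n drives."""
      return (nr + nw) / n

  def raid5_seeks(nr, nw, n):
      """RAID-5: reads spread over n (= drives - 1) disks,
      but every write hits every drive."""
      return nr / n + nw

  n = 5  # 5 CCD drives; 6 RAID-5 drives, so N = 6 - 1 = 5
  for nr, nw in [(1000, 0), (1000, 1000)]:
      print("nr=%d nw=%d: CCD=%.0f  RAID-5=%.0f  single drive=%d"
            % (nr, nw, ccd_seeks(nr, nw, n), raid5_seeks(nr, nw, n), nr + nw))

With equal reads and writes this prints CCD=400, RAID-5=1200, single
drive=2000, matching the figures above.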


If you want the performance of your CCD plus redundancy, you should
consider RAID 0+1. Otherwise, consider distributing the mail spool
subdirectories across your 6 drives individually, with no RAID at all.


Hope this is of some help.

Tim Priebe.

