Date:      Thu, 09 Nov 2006 01:13:54 +0000
From:      Pete French <petefrench@ticketswitch.com>
To:        freebsd-stable@FreeBSD.ORG
Subject:   Disappointing performance of ciss RAID 0+1 ?
Message-ID:  <E1GhyUM-000Oct-6G@dilbert.firstcallgroup.co.uk>

I recently overhauled my RAID array - I now have 4 drives arranged
as RAID 0+1, all 15K 147 GB Fujitsus, split across two buses which
are actively terminated to give U160 speeds (and I have verified
this). The card is a 5304 (128 MB cache) in a PCI-X slot.

This replaces a set of six 7200 rpm drives in RAID 5, which were
limited to 40 MB/s due to non-LVD termination. I would expect to see
a large speed increase, wouldn't I? But it remains about the same -
around 45 MB/s for reading a large file (3 GB or so) and half that
for copying said file. These are 'real world' tests in the sense that
I use the array for building large ISO images and copying them around
- I really don't care what benchmarks say, it's the speed of these
two operations that I want to make fast.
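
For concreteness, the read test is nothing cleverer than timed
sequential reads of the file - a minimal sketch of what I mean (the
1 MB buffer size is an arbitrary choice of mine, and any large file
on the array will do):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define BUFSZ	(1024 * 1024)	/* 1 MB per read() call */

int
main(int argc, char **argv)
{
	struct timeval t0, t1;
	long long total = 0;
	ssize_t n;
	double secs;
	char *buf;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return (1);
	}
	if ((fd = open(argv[1], O_RDONLY)) < 0) {
		perror("open");
		return (1);
	}
	if ((buf = malloc(BUFSZ)) == NULL) {
		perror("malloc");
		return (1);
	}

	gettimeofday(&t0, NULL);
	while ((n = read(fd, buf, BUFSZ)) > 0)	/* sequential read to EOF */
		total += n;
	gettimeofday(&t1, NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%lld bytes in %.2f s = %.1f MB/s\n",
	    total, secs, total / secs / 1e6);

	close(fd);
	free(buf);
	return (0);
}

(Equivalent in spirit to dd if=bigfile of=/dev/null bs=1m.)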

I've tried all the possible stripe sizes (128k gives the best
performance) but still I only get the above speeds. Just one of the
15K drives on its own performs better than this! I would expect the
RAID 0 layer to give me at least some speedup, or in the worst case
be the same, surely?
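
Back-of-envelope, assuming a single one of these 15K drives sustains
somewhere around 70 MB/s sequentially (a guess on my part, not a
measured figure): RAID 0+1 stripes reads across the two mirrored
pairs, so I'd expect something like

    $v_{\text{seq}} \approx \min(2\,v_{\text{drive}},\ B_{\text{U160}}) = \min(2 \times 70,\ 160) \approx 140\ \text{MB/s}$

and 45 MB/s is nowhere near that.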

Booting up Windows and running some tests gives me far better
performance, however, so I am wondering if there is some driver issue
here. Has anyone else seen the same kind of results? I am running the
latest stable for amd64 and the machine has twin Opteron 242s with a
gig of RAM each. Surely it can do better than this?

-pcf.


