Date:      Sat, 11 Feb 2006 12:44:03 -0800
From:      Bakul Shah <bakul@BitBlocks.com>
To:        freebsd-hackers@freebsd.org
Subject:   RAID5 on athlon64 machines
Message-ID:  <200602112044.k1BKi3u8083329@gate.bitblocks.com>

I built an Asus A8N SLI Deluxe based system and installed
FreeBSD-6.1-BETA1 on it.  That works well enough.  Now I am
looking for a decent RAID5 solution.  This motherboard has
two SATA RAID controllers, but one does only RAID1, and the
other supports RAID5 only with s/w assistance from a Windows
driver.  The BIOS does let you designate a set of disks as a
RAID5 group, but FreeBSD does not recognize it as a group in
any case.

I noticed that vinum is gone from -current and we have gvinum
now.  It does not implement all of the vinum commands, but
that is OK as long as it provides what I need.

I played with it a little bit.  Its sequential read
performance is OK: with 3 disks in RAID5 the read rate is
twice the speed of one disk, as expected.  But the write rate
is abysmal!  I get about 12.5MB/s, roughly 1/9 of the read
rate.  So what gives?  Are there some magic stripe sizes for
better performance?  I used a stripe size of 279k, as per the
vinum recommendation.
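A 3-disk RAID5 config of the sort I mean, for reference (the
drive and partition names below are placeholders, and the
syntax is from memory of the vinum config format, so check it
against the man page):

```
drive d0 device /dev/ad4s1h
drive d1 device /dev/ad6s1h
drive d2 device /dev/ad8s1h
volume r5
  plex org raid5 279k
    sd length 0 drive d0
    sd length 0 drive d1
    sd length 0 drive d2
```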

Theoretically the sequential write rate should be the same as
or higher than the sequential read rate.  Given an N+1 disk
array, for an N-block read you XOR N+1 blocks and compare the
result to 0, but for an N-block write you XOR only N blocks
to compute the parity.  So there is less XOR work for large
writes.

Which leads me to ask: is gvinum stable enough for real use
or should I just get a h/w RAID card?  If the latter, any
recommendations?

What I'd like:

Critical:
- RAID5
- good write performance
- orderly shutdown (I noticed the vinum stop command is gone,
  but maybe it is not needed?)
- quick recovery from a system crash.  It shouldn't have to
  rebuild the whole array.
- parity check on reads (a crash may have rendered a stripe
  inconsistent)
- must not correct bad parity by rewriting a stripe

Nice to have:
- ability to operate in "degraded" mode, where one of
  the disks is dead.
- ability to rebuild the array in background
- commands to take a disk offline, associate a spare with a particular disk
- use a spare drive effectively
- allow a bad parity stripe for future writes
- allow rewriting parity under user control.

Thanks!


