Date:      Wed, 10 Feb 2010 02:55:16 -0800
From:      Jeremy Chadwick <freebsd@jdc.parodius.com>
To:        freebsd-stable@freebsd.org
Subject:   Re: hardware for home use large storage
Message-ID:  <20100210105516.GA65506@icarus.home.lan>
In-Reply-To: <201002101127.53444.pieter@service2media.com>
References:  <4B6F9A8D.4050907@langille.org> <4B718EBB.6080709@acm.poly.edu> <4B723609.8010802@langille.org> <201002101127.53444.pieter@service2media.com>

On Wed, Feb 10, 2010 at 11:27:53AM +0100, Pieter de Goeje wrote:
> On Wednesday 10 February 2010 05:28:57 Dan Langille wrote:
> > Boris Kochergin wrote:
> > > Peter C. Lai wrote:
> > >> On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
> > >>> Charles Sprickman wrote:
> > >>>> On Mon, 8 Feb 2010, Dan Langille wrote:
> > >>>> Also, it seems like
> > >>>> people who use zfs (or gmirror + gstripe) generally end up buying
> > >>>> pricey hardware raid cards for compatibility reasons.  There seem to
> > >>>> be no decent add-on SATA cards that play nice with FreeBSD other
> > >>>> than that weird supermicro card that has to be physically hacked
> > >>>> about to fit.
> > >>
> > >> Mostly only because certain cards have issues w/shoddy JBOD
> > >> implementation. Some cards (most notably ones like Adaptec 2610A which
> > >> was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the day)
> > >> won't let you run the drives in passthrough mode and seem to all want
> > >> to stick their grubby little RAID paws into your JBOD setup (i.e. the
> > >> only way to have minimal
> > >> participation from the "hardware" RAID is to set each disk as its own
> > >> RAID-0/volume in the controller BIOS) which then cascades into issues
> > >> with SMART, AHCI, "triple caching"/write reordering, etc on the
> > >> FreeBSD side (the controller's own craptastic cache, ZFS vdev cache,
> > >> vmm/app cache, oh my!). So *some* people go with something
> > >> tried-and-true (basically bordering on server-level cards that let you
> > >> ditch any BIOS type of RAID config and present the raw disk devices to
> > >> the kernel)
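> > >>
> > >> (A quick litmus test, if it helps -- with a true HBA you can talk
> > >> SMART to the bare drive; behind a single-disk RAID-0 volume you
> > >> usually can't.  Rough, untested sketch -- the device name is a
> > >> placeholder and smartctl comes from sysutils/smartmontools:
> > >>
> > >>   # ad4 = whatever the disk shows up as (ada0 under ATA_CAM)
> > >>   smartctl -a /dev/ad4 | egrep -i 'device model|health'
> > >>
> > >> If that reports the controller's logical volume rather than the
> > >> actual disk, you know the RAID firmware is still in the middle.)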
> > >
> > > As someone else has mentioned, recent SiL stuff works well. I have
> > > multiple http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
> > > cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and
> > > 8.0-STABLE machines using both the old ata(4) driver and ATA_CAM. Don't
> > > let the RAID label scare you--that stuff is off by default and the
> > > controller just presents the disks to the operating system. Hot swap
> > > works. I haven't had the time to try the siis(4) driver for them, which
> > > would result in better performance.
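> > >
> > > (In case it's useful, the ZFS side is nothing special -- a rough
> > > sketch, with ad4/ad6/... standing in for whatever device names the
> > > driver hands you (ada0, ada1, ... under ATA_CAM):
> > >
> > >   # four disks on the SiL ports, pooled as RAID-Z2
> > >   zpool create tank raidz2 ad4 ad6 ad8 ad10
> > >   zpool status tank
> > >
> > > And if you want to try siis(4) without a kernel rebuild, I believe
> > > siis_load="YES" in /boot/loader.conf should be all it takes, though
> > > as I said I haven't tested that myself.)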
> > 
> > That's a really good price. :)
> > 
> > If needed, I could host all eight SATA drives for $160, much cheaper
> > than any of the other RAID cards I've seen.
> > 
> > The issue then is finding a motherboard which has 4x PCI Express slots.  ;)
> 
> You should be able to put a PCIe x4 card in a PCIe x16 or x8 slot.
> For an explanation allow me to quote wikipedia:
> 
> "A PCIe card will fit into a slot of its physical size or bigger, but may not 
> fit into a smaller PCIe slot. Some slots use open-ended sockets to permit 
> physically longer cards and will negotiate the best available electrical 
> connection. The number of lanes actually connected to a slot may also be less 
> than the number supported by the physical slot size. An example is a x8 slot 
> that actually only runs at ×1; these slots will allow any ×1, ×2, ×4 or ×8 
> card to be used, though only running at the ×1 speed. This type of socket is 
> described as a ×8 (×1 mode) slot, meaning it physically accepts up to ×8 cards 
> but only runs at ×1 speed. The advantage gained is that a larger range of PCIe 
> cards can still be used without requiring the motherboard hardware to support 
> the full transfer rate -- in so doing keeping design and implementation costs 
> down."

Correction -- more than likely on a consumer motherboard you *will not*
be able to put a non-VGA card into the PCIe x16 slot.  I have numerous
Asus and Gigabyte motherboards which only accept graphics cards in their
PCIe x16 slots; this """feature""" is documented in user manuals.  I
don't know how/why these companies chose to do this, but whatever.

I would strongly advocate that the OP (who has stated he's focusing on
stability and reliability over speed) purchase a server motherboard with a
PCIe x8 slot, and/or a server chassis (it's usually best to buy both from
the same vendor), and be done with it.
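
Whichever board you end up with, it's worth checking after installation
that the slot actually negotiated the link width you paid for.  Something
along these lines should show it (exact output format varies by FreeBSD
release, and the controller's entry will depend on the driver in use):

  # Dump PCI devices with their capability registers; find the SATA
  # controller's entry and look at its PCI-Express capability line,
  # which lists negotiated vs. maximum link width, e.g. "link x4(x8)".
  pciconf -lc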

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |



