Date:      Wed, 10 Feb 2010 12:06:34 +0100 (CET)
From:      Gót András <andrej@antiszoc.hu>
To:        freebsd-stable@freebsd.org
Cc:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Subject:   Re: hardware for home use large storage
Message-ID:  <64053.80.95.75.131.1265799994.squirrel@mail.deployis.eu>
In-Reply-To: <20100210105516.GA65506@icarus.home.lan>
References:  <4B6F9A8D.4050907@langille.org> <4B718EBB.6080709@acm.poly.edu> <4B723609.8010802@langille.org> <201002101127.53444.pieter@service2media.com> <20100210105516.GA65506@icarus.home.lan>

On Wed, February 10, 2010 11:55 am, Jeremy Chadwick wrote:
> On Wed, Feb 10, 2010 at 11:27:53AM +0100, Pieter de Goeje wrote:
>
>> On Wednesday 10 February 2010 05:28:57 Dan Langille wrote:
>>
>>> Boris Kochergin wrote:
>>>
>>>> Peter C. Lai wrote:
>>>>
>>>>> On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
>>>>>
>>>>>> Charles Sprickman wrote:
>>>>>>
>>>>>>> On Mon, 8 Feb 2010, Dan Langille wrote:
>>>>>>> Also, it seems like
>>>>>>> people who use zfs (or gmirror + gstripe) generally end up
>>>>>>> buying pricey hardware raid cards for compatibility reasons.
>>>>>>> There seem to
>>>>>>> be no decent add-on SATA cards that play nice with FreeBSD
>>>>>>> other than that weird supermicro card that has to be
>>>>>>> physically hacked about to fit.
>>>>>
>>>>> Mostly only because certain cards have issues w/shoddy JBOD
>>>>> implementation. Some cards (most notably ones like Adaptec 2610A
>>>>> which was rebranded by Dell as the "CERC SATA 1.5/6ch" back in the
>>>>> day) won't let you run the drives in passthrough mode and seem to
>>>>> all want to stick their grubby little RAID paws into your JBOD
>>>>> setup (i.e. the only way to have minimal participation from the
>>>>> "hardware" RAID is to set each disk as its own
>>>>> RAID-0/volume in the controller BIOS) which then cascades into
>>>>> issues with SMART, AHCI, "triple caching"/write reordering, etc on
>>>>> the FreeBSD side (the controller's own craptastic cache, ZFS vdev
>>>>> cache, vmm/app cache, oh my!). So *some* people go with something
>>>>> tried-and-true (basically bordering on server-level cards that
>>>>> let you ditch any BIOS type of RAID config and present the raw
>>>>> disk devices to the kernel)
>>>>
>>>> As someone else has mentioned, recent SiL stuff works well. I have
>>>> multiple
>>>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
>>>> cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and
>>>> 8.0-STABLE machines using both the old ata(4) driver and ATA_CAM.
>>>> Don't
>>>> let the RAID label scare you--that stuff is off by default and the
>>>> controller just presents the disks to the operating system. Hot
>>>> swap works. I haven't had the time to try the siis(4) driver for
>>>> them, which would result in better performance.
>>>
>>> That's a really good price. :)
>>>
>>>
>>> If needed, I could host all eight SATA drives for $160, much cheaper
>>> than any of the other RAID cards I've seen.
>>>
>>> The issue then is finding a motherboard which has 4x PCI Express
>>> slots.  ;)
>>
>> You should be able to put a PCIe 4x card in a PCIe 16x or 8x slot.
>> For an explanation allow me to quote wikipedia:
>>
>>
>> "A PCIe card will fit into a slot of its physical size or bigger, but
>> may not fit into a smaller PCIe slot. Some slots use open-ended sockets
>> to permit physically longer cards and will negotiate the best available
>> electrical connection. The number of lanes actually connected to a slot
>> may also be less than the number supported by the physical slot size. An
>> example is a x8 slot that actually only runs at ×1; these slots will
>> allow any ×1, ×2, ×4 or ×8 card to be used, though only running at the
>> ×1 speed. This type of socket is
>> described as a ×8 (×1 mode) slot, meaning it physically accepts up to ×8
>> cards but only runs at ×1 speed. The advantage gained is that a larger
>> range of PCIe cards can still be used without requiring the motherboard
>> hardware to support the full transfer rate, in so doing keeping design
>> and implementation costs down."
>
> Correction -- more than likely on a consumer motherboard you *will not*
> be able to put a non-VGA card into the PCIe x16 slot.  I have numerous Asus
> and Gigabyte motherboards which only accept graphics cards in their PCIe
> x16 slots; this """feature""" is documented in user manuals.  I don't know
> how/why these companies chose to do this, but whatever.
>
> I would strongly advocate that the OP (who has stated he's focusing on
> stability and reliability over speed) purchase a server motherboard that
> has a PCIe x8 slot on it and/or server chassis (usually best to buy both
> of these things from the same vendor) and be done with it.

Hi,

We're running an 'old' LSI U320 hardware RAID card (PCIe x4 or x8) in a
plain Gigabyte mobo without any problems. It was plug and play. The mobo
has a P35 chipset and an E7400 CPU; if the exact model numbers are needed,
I'll look them up. (And yes, good old U320 SCSI is lightning fast compared
to any new SATA drives, and that's with only 3x36GB disks in RAID5. I know
it won't win the capacity contest... :) )
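
For what it's worth, the usable space of a single-parity array is simple
arithmetic; here's a tiny illustrative sketch (plain Python, nothing
FreeBSD-specific, the disk sizes are just the examples from this thread):

    # Usable capacity of an N-disk single-parity (RAID5) array:
    # one disk's worth of space goes to parity, the rest holds data.
    def raid5_usable_gb(disk_count, disk_size_gb):
        if disk_count < 3:
            raise ValueError("RAID5 needs at least 3 disks")
        return (disk_count - 1) * disk_size_gb

    print(raid5_usable_gb(3, 36))    # 72 GB   -- the 3x36GB SCSI setup above
    print(raid5_usable_gb(3, 2000))  # 4000 GB -- three 2TB SATA drives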

I think these single-CPU server boards are quite overpriced considering
the few extra features that would make someone buy them.

Anyway, I liked that Atom D510 Supermicro mobo that was mentioned earlier.
I think it would handle any good PCIe card and would fit in a nice
Supermicro tower. I'd also suggest going with as few disks as you can:
2TB disks are here, so you can build a 4TB RAID5 array with only three
drives, and your power bill won't wipe out your bank account.
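
Just to put rough numbers on that last point (the per-drive wattage and
the electricity price below are assumptions, not measurements; plug in
your own figures):

    # Rough yearly electricity cost of spinning disks running 24/7.
    WATTS_PER_DRIVE = 7.0   # assumed average draw of one 3.5" drive
    EUR_PER_KWH = 0.20      # assumed electricity price

    def yearly_cost_eur(drive_count):
        kwh = drive_count * WATTS_PER_DRIVE * 24 * 365 / 1000.0
        return kwh * EUR_PER_KWH

    print(round(yearly_cost_eur(3), 2))   # three 2TB drives (the 4TB RAID5)
    print(round(yearly_cost_eur(8), 2))   # eight smaller drives instead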

Regards,
Andras Got
