Date:      Thu, 27 Dec 2018 10:37:15 -0800
From:      Freddie Cash <fjwcash@gmail.com>
To:        Willem Jan Withagen <wjw@digiware.nl>
Cc:        Sami Halabi <sodynet1@gmail.com>, FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: Suggestion for hardware for ZFS fileserver
Message-ID:  <CAOjFWZ5kA=enx8Nq9rQy1vBrndEM6GeRrjMEVZAY-evusrRsHQ@mail.gmail.com>
In-Reply-To: <d423b8c3-5aba-907c-c80f-b4974571adba@digiware.nl>
References:  <CAEW+ogZnWC07OCSuzO7E4TeYGr1E9BARKSKEh9ELCL9Zc4YY3w@mail.gmail.com> <C839431D-628C-4C73-8285-2360FE6FFE88@gmail.com> <CAEW+ogYWKPL5jLW2H_UWEsCOiz=8fzFcSJ9S5k8k7FXMQjywsw@mail.gmail.com> <4f816be7-79e0-cacb-9502-5fbbe343cfc9@denninger.net> <3160F105-85C1-4CB4-AAD5-D16CF5D6143D@ifm.liu.se> <YQBPR01MB038805DBCCE94383219306E1DDB80@YQBPR01MB0388.CANPRD01.PROD.OUTLOOK.COM> <D0E7579B-2768-46DB-94CF-DBD23259E74B@ifm.liu.se> <CAEW+ogaKTLsmXaUGk7rZWb7u2Xqja+pPBK5rduX0zXCjk=2zew@mail.gmail.com> <d423b8c3-5aba-907c-c80f-b4974571adba@digiware.nl>

On Thu, Dec 27, 2018, 2:55 AM Willem Jan Withagen <wjw@digiware.nl> wrote:

> On 22/12/2018 15:49, Sami Halabi wrote:
> > Hi,
> >
> > What SAS HBA card do you recommend with 16/24 internal ports and 2
> > external ones that is recognized and works well with FreeBSD ZFS?
>
> There is no real advice here, but what I saw is that it is relatively
> easy to overload a lot of the busses involved in this.
>
> I ran into this when building Ceph clusters on FreeBSD, where each disk
> has its own daemon to hammer away on the platters.
>
> The first bottleneck is the disk "backplane". If you do not wire every
> disk with a dedicated HBA-disk cable, then you are sharing the
> bandwidth on the backplane between all the disks. And depending on the
> architecture of the backplane, several disks share one expander, and
> the feed into that will be shared by the disks attached to it. Some
> expanders will have multiple inputs from the HBA, but I have seen cases
> where 4 SAS lanes go in and only 2 get used.
>
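
To put rough numbers on that oversubscription (a quick Python sketch;
the lane speed, lane count, disk count, and per-disk throughput are
illustrative assumptions, not measurements from any particular box):

    # Back-of-envelope: 24 disks behind one expander whose uplink is
    # 4 SAS2 lanes, of which only 2 are actually used (as described above).
    LANE_GBPS = 6.0          # SAS2 line rate per lane, Gb/s (assumed)
    ENCODING = 8.0 / 10.0    # 8b/10b encoding overhead on SAS2
    ACTIVE_LANES = 2         # 4 lanes wired in, only 2 in use
    DISKS = 24               # disks sharing the expander (assumed)

    uplink_mbs = ACTIVE_LANES * LANE_GBPS * ENCODING * 1000 / 8  # MB/s
    per_disk = uplink_mbs / DISKS

    print(f"uplink: {uplink_mbs:.0f} MB/s, per disk: {per_disk:.0f} MB/s")
    # -> uplink: 1200 MB/s, per disk: 50 MB/s -- well below the ~200 MB/s
    #    a single modern HDD can stream sequentially.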

You can get backplanes that use multi-lane SFF-8087 connectors and cables
between the HBA and backplane, but provide individual connections to each
drive bay. You get the best of both worlds (a dedicated link to each
drive, but only 1 cable for every 4 drives). :) No expanders or port
multipliers involved.
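
With a direct-attach backplane the same arithmetic is much friendlier
(a sketch using the same assumed SAS2 numbers as above, with a
hypothetical 16-bay chassis):

    # Direct-attach: each drive bay gets its own dedicated SAS2 lane,
    # four lanes bundled per SFF-8087 cable.
    LANE_GBPS = 6.0
    ENCODING = 8.0 / 10.0
    BAYS = 16                # e.g. a 16-bay chassis (assumed)

    per_drive_mbs = LANE_GBPS * ENCODING * 1000 / 8   # dedicated, not shared
    cables = BAYS // 4                                # SFF-8087 carries 4 lanes

    print(f"{per_drive_mbs:.0f} MB/s per drive, {cables} SFF-8087 cables")
    # -> 600 MB/s per drive, 4 SFF-8087 cables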

The Supermicro 836A backplane is an example of that. It's what we use for
all our ZFS and iSCSI boxes.

AMD EPYC motherboards provide lots of PCIe slots and lanes to fill with
HBAs, without worrying about bottlenecks. :)
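
For scale (again, illustrative numbers rather than a spec sheet): one
8-lane PCIe 3.0 slot already outruns a shelf of spinning disks, and EPYC
has 128 PCIe lanes to hang more HBAs off:

    # Does an HBA in a PCIe 3.0 x8 slot bottleneck 8 direct-attached HDDs?
    PCIE3_GBS_PER_LANE = 0.985   # ~GB/s per PCIe 3.0 lane after encoding
    SLOT_LANES = 8
    HDD_MBS = 250                # sequential MB/s per HDD (assumed)
    HDDS = 8

    slot_gbs = PCIE3_GBS_PER_LANE * SLOT_LANES      # ~7.9 GB/s
    disks_gbs = HDD_MBS * HDDS / 1000               # 2.0 GB/s

    print(f"slot: {slot_gbs:.1f} GB/s, disks: {disks_gbs:.1f} GB/s")
    # -> slot: 7.9 GB/s, disks: 2.0 GB/s -- plenty of headroom per slot.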

--
Cheers,
Freddie

Typos due to phone keyboard.


