From: Borja Marcos <borjam@sarenet.es>
Subject: Re: mfi driver performance too bad on LSI MegaRAID SAS 9260-8i
Date: Tue, 2 Aug 2016 10:27:02 +0200
To: Michelle Sullivan
Cc: "O. Hartmann", Jason Zhang, freebsd-performance@freebsd.org, freebsd-current, FreeBSD-STABLE Mailing List, freebsd-hardware@freebsd.org

> On 01 Aug 2016, at 19:30, Michelle Sullivan wrote:
>
> There are reasons for using either…

Indeed, but my decision was to run ZFS. And getting an HBA in some
configurations can be difficult because vendors insist on using RAID
adapters. After all, that's what most of their customers demand.

Fortunately, at least some Avago/LSI cards can work as HBAs pretty well.
An example is the now venerable LSI 2008.

> Nowadays it seems the conversations have degenerated into those like
> Windows vs Linux vs Mac, where everyone thinks their answer is the
> right one (just as you suggested you (Borja Marcos) did with the Dell
> salesman), where in reality each has its own advantages and
> disadvantages.

I know, but this is not the case here. Still, it's quite frustrating to
try to order a server with an HBA rather than a RAID controller and to
receive an answer such as "the HBA option is not available". That's why
people are zapping, flashing and, generally, torturing these cards
rather cruelly ;)
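For the record, the usual "torture" is a crossflash to plain IT-mode
firmware so the card behaves as a dumb HBA. On a SAS2008-based card it
goes roughly like this (a sketch only, from memory; the image name
2118it.bin is the usual 9211-8i IT firmware and will differ for other
cards, and erasing or writing the wrong image can brick the controller):

    sas2flash -listall                 # list controllers and current firmware
    sas2flash -o -e 6 -c 0             # erase the flash on controller 0
    sas2flash -o -f 2118it.bin -c 0    # write the IT-mode firmware image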
So, in my case, it's not about what's better or worse. It's just a
simpler issue. The customer (myself) has made a decision, which can be
right or wrong. The manufacturer fails to deliver what I need. If it
were only one manufacturer, well, off with them, but the issue is
widespread in the industry.

> Eg: I'm running 2 ZFS servers on 'LSI 9260-16i's... big mistake! (the
> ZFS, not the LSIs)... one is a 'movie server', the other a 'postgresql
> database' server... The latter, most would agree, is a bad use of ZFS;
> the die-hards won't, but then they don't understand database servers
> and how they work on disk. The former has mixed views: some argue that
> ZFS is the only way to ensure the movies will always work; personally
> I think of all the years before ZFS when my data on disk worked
> without failure until the disks themselves failed... and RAID stopped
> that happening... what suddenly changed, are disks and RAM suddenly
> not reliable at transferring data? .. anyhow, back to the issue: there
> is another part with this particular hardware that people just throw
> away…

Well, silent corruption can happen. I've seen it once, caused by a
flaky HBA, and ZFS saved the cake. Yes, there were reliable replicas.
Still, rebuilding would have been a pain in the ass.

> The LSI 9260-* controllers have been designed to provide hardware
> RAID. The caching, whether using the CacheCade SSD or just the onboard
> ECC memory, is *ONLY* used when running some sort of RAID set and
> LVs... this is why LSI recommends 'MegaCli -CfgEachDskRaid0', because
> it does enable caching.. A good read on how to set up something
> similar is here: https://calomel.org/megacli_lsi_commands.html
> (disclaimer: I haven't parsed it all, so the author could be clueless,
> but it seems to give generally good advice.) Going the way of 'JBOD'
> is a bad thing to do, just don't, performance sucks. As for the
> recommended command above, I can't comment because currently I don't
> use it, nor will I need to in the near future... but…

Actually, it's not a good idea to use heavy disk caching when running
ZFS. Its reliability depends on being able to commit metadata to disk,
so I don't care about that caching option. Provided you have enough
RAM, ZFS is very effective at caching data itself.

> If you (O Hartmann) want to use or need to use ZFS with any OS,
> including FreeBSD, don't go with the LSI 92xx series controllers; it's
> just the wrong thing to do.. Pick an HBA that is designed to give you
> direct access to the drives, not one you have to kludge and cajole..
> Including LSI controllers with caches that use the mfi driver, just
> not those that are not designed to work in a non-RAID mode (with or
> without the passthru command/mode above.)

As I said, the problem is that sometimes it's not so easy to find the
right HBA.

> So, moral of the story/choices: don't go with ZFS because people tell
> you it's best, because it isn't; go with ZFS if it suits your hardware
> and application, and if ZFS suits your application, get hardware for
> it.

Indeed, I second this. But really, "hardware for it" covers a rather
broad category ;)

ZFS can even manage to work on hardware _against_ it.

Borja.
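P.S. For reference, the per-disk RAID0 workaround mentioned above looks
roughly like this (a sketch only, from memory; the exact option
spelling varies between MegaCli versions, and since ZFS does its own
caching I would choose write-through, no read-ahead and direct I/O
rather than the write-back policies usually suggested for hardware
RAID):

    MegaCli -CfgEachDskRaid0 WT NORA Direct -aALL   # one single-disk RAID0 per drive,
                                                    # write-through, no read-ahead, direct I/O
    MegaCli -LDGetProp -Cache -LAll -aAll           # verify the cache policy of each LV

Even so, as said above, a proper HBA remains the simpler option.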