From: Christoph Pilka <c.pilka@asconix.com>
To: freebsd-questions@freebsd.org
Subject: Re: 40 cores, 48 NVMe disks, feel free to take over
Date: Sat, 10 Sep 2016 10:57:07 +0200
In-Reply-To: <1473455690.58708.93.camel@pki2.com>

Hi,

the server we got to experiment with is the Supermicro 2028R-NR48N
(https://www.supermicro.nl/products/system/2U/2028/SSG-2028R-NR48N.cfm);
the board itself is an X10DSC+.

//Chris

> On 09 Sep 2016, at 23:14, Dennis Glatting wrote:
>
> On Fri, 2016-09-09 at 22:51 +0200, Christoph Pilka wrote:
>> Hi,
>>
>> we've just been granted a short-term loan of a server from Supermicro
>> with 40 physical cores (plus HTT) and 48 NVMe drives. After a bit of
>> mucking about, we managed to get 11-RC running. A couple of things
>> are preventing the system from being terribly useful:
>>
>> - We have to use hw.nvme.force_intx=1 for the server to boot.
>> If we don't, it panics around the 9th NVMe drive with "panic:
>> couldn't find an APIC vector for IRQ...". Increasing
>> hw.nvme.min_cpus_per_ioq brings it further, but it still panics later
>> in the NVMe enumeration/init. hw.nvme.per_cpu_io_queues=0 causes it
>> to panic later still (I suspect during ixl init - the box has 4x 10Gb
>> Ethernet ports).
>>
>> - zfskern seems to be the limiting factor when doing ~40 parallel
>> "dd if=/dev/zero of= bs=1m" runs on a zpool stripe of all 48 drives.
>> Each drive shows ~30% utilization (gstat); I can do ~14GB/sec write
>> and ~16GB/sec read.
>>
>> - Direct writing to the NVMe devices (dd from /dev/zero) gives about
>> 550MB/sec and ~91% utilization per device.
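For anyone who wants to reproduce the boot workaround above: these are
nvme(4) loader tunables set in /boot/loader.conf. A minimal sketch; the
min_cpus_per_ioq value is only an illustration, not a recommendation:

  # /boot/loader.conf - NVMe interrupt workarounds described above
  hw.nvme.force_intx="1"          # fall back to INTx; this is what lets the box boot
  #hw.nvme.min_cpus_per_ioq="4"   # illustrative value; raising it only delays the panic
  #hw.nvme.per_cpu_io_queues="0"  # also ends in a panic later (apparently during ixl init)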
>> Obviously, the first item is the most troublesome. The rest is based
>> on entirely synthetic testing and may have little or no actual impact
>> on the server's usability or fitness for our purposes.
>>
>> There is nothing but sshd running on the server, and if anyone wants
>> to play around you'll have IPMI access (remote kvm, virtual media,
>> power) and root.
>>
>> Any takers?
>>
>
> I'm curious to know what board you have. I have had FreeBSD, including
> release 11 candidates, running on SM boards without any trouble,
> although some of them are older boards. I haven't looked at ZFS
> performance because mine typically see low disk use. That said, my
> virtual server's (also a SM) IOPS suck, but so do its disks.
>
> I recently found that the Intel RAID chip on one SM isn't real RAID;
> rather, it's pseudo-RAID, though for a few dollars more it could have
> been real RAID. :( It was killing IOPS, so I popped in an old LSI
> board, rerouted the cables from the Intel chip, and the server is now
> a happy camper. I then replaced 11-RC with Ubuntu 16.10 due to a
> specific application, but I am also running RAIDZ2 under Ubuntu on
> three trash 2.5T disks (I didn't do this for any reason other than
> fun).
>
> root@Tuck3r:/opt/bin# zpool status
>   pool: opt
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         opt         ONLINE       0     0     0
>           raidz2-0  ONLINE       0     0     0
>             sda     ONLINE       0     0     0
>             sdb     ONLINE       0     0     0
>             sdc     ONLINE       0     0     0
>
>> Wbr
>> Christoph Pilka
>> Modirum MDpay
>>
>> Sent from my iPhone
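For completeness, the synthetic write test quoted above was of this
general shape. Pool name, file paths, and sizes here are placeholders,
not the actual ones used on the loaner box:

  # stripe all 48 NVMe namespaces into one pool (device names illustrative)
  devs=""
  for i in $(seq 0 47); do devs="$devs /dev/nvd$i"; done
  zpool create tank $devs

  # ~40 parallel streaming writers, bs=1m as in the numbers quoted above
  for i in $(seq 1 40); do
      dd if=/dev/zero of=/tank/writer.$i bs=1m count=8192 &
  done
  wait

  # watch per-drive utilization while it runs
  gstat -f nvd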