Subject: Re: FreeBSD 10.3 - nvme regression
From: Borja Marcos <borjam@sarenet.es>
Date: Mon, 7 Mar 2016 16:48:28 +0100
To: Jim Harris
Cc: FreeBSD-STABLE Mailing List, John Baldwin

> On 07 Mar 2016, at 15:28, Jim Harris wrote:
>
> (Moving to freebsd-stable. NVMe is not associated with the SCSI stack at all.)

Oops, my apologies. I was assuming that, being storage stuff, -scsi was a good list.

> Can you please file a bug report on this?

Sure, I'm doing some simple tests right now and I'll file it.

> Also, can you try setting the following loader variable before install?
>
> hw.nvme.min_cpus_per_ioq=4

It now boots, thanks :)

Note that it's the first time I've used NVMe drives, so bear with me in case I do anything stupid ;)

I have noticed some odd performance problems. I have created a "raidz2" ZFS pool with the 10 drives.

Doing some silly tests with several "Bonnie++" instances, I have noticed that delete commands seem to be very slow. After running several bonnie++ instances in parallel, when deleting the files, the drives are almost stuck for a fairly long time, showing 100% bandwidth usage in "gstat" and indeed being painfully slow.

Disabling the use of BIO_DELETE for ZFS (sysctl vfs.zfs.vdev.bio_delete_disable=1) solves this problem, although, of course, BIO_DELETE is desirable as far as I know.

I observed the same behavior on 10.2. This is not a proper report, I know; I will follow up tomorrow.

Thanks!

Borja.
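
P.S. For anyone following along, roughly the knobs involved. This is only a sketch; the device and pool names below (nvd0..nvd9, "tank") are examples, not necessarily what I used:

    # Workaround for the boot hang: set the loader tunable before booting,
    # either at the loader prompt ("set hw.nvme.min_cpus_per_ioq=4")
    # or persistently in /boot/loader.conf:
    hw.nvme.min_cpus_per_ioq=4

    # Example raidz2 pool over ten NVMe drives (assuming they attach as nvd0..nvd9):
    zpool create tank raidz2 nvd0 nvd1 nvd2 nvd3 nvd4 nvd5 nvd6 nvd7 nvd8 nvd9

    # Stop ZFS from issuing BIO_DELETE (TRIM) requests while testing:
    sysctl vfs.zfs.vdev.bio_delete_disable=1

Setting that sysctl back to 0 re-enables BIO_DELETE afterwards.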