From: Borja Marcos <borjam@sarenet.es>
Subject: Intel NVMe troubles?
Date: Thu, 28 Jul 2016 12:29:58 +0200
To: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>

Hi :)

Still experimenting with NVMe drives and FreeBSD, and I think I have run into problems.

I've got a server with 10 Intel DC P3500 NVMe drives, currently running 11-BETA2.

I have updated the firmware in the drives to the latest version (8DV10174) using the Intel Data Center Tools, and I've formatted them for 4 KB blocks (LBA format #3):

nvmecontrol identify nvme0ns1
Size (in LBAs):              488378646 (465M)
Capacity (in LBAs):          488378646 (465M)
Utilization (in LBAs):       488378646 (465M)
Thin Provisioning:           Not Supported
Number of LBA Formats:       7
Current LBA Format:          LBA Format #03
LBA Format #00: Data Size:   512  Metadata Size:   0
LBA Format #01: Data Size:   512  Metadata Size:   8
LBA Format #02: Data Size:   512  Metadata Size:  16
LBA Format #03: Data Size:  4096  Metadata Size:   0
LBA Format #04: Data Size:  4096  Metadata Size:   8
LBA Format #05: Data Size:  4096  Metadata Size:  64
LBA Format #06: Data Size:  4096  Metadata Size: 128

ZFS properly detects the 4 KB block size and sets the correct ashift (12).
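(For completeness, the sector size ZFS keys its ashift off is also visible at the GEOM level; a quick cross-check, with the output trimmed to the relevant line, and the exact formatting from memory:)

geom disk list nvd0 | grep Sectorsize
   Sectorsize: 4096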
But I've found these error messages generated while I created a pool
(zpool create tank raidz2 /dev/nvd[0-8] spare /dev/nvd9):

Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:63 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:63 cdw0:0
Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:62 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:62 cdw0:0
Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:61 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:61 cdw0:0
Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:60 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:60 cdw0:0
Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:59 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:59 cdw0:0
Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:58 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:58 cdw0:0
Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:57 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:57 cdw0:0
Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:56 nsid:1
Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:56 cdw0:0

The same messages appear for the rest of the drives [0-9].

Should I worry?

Thanks!

Borja.
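P.S. In case it helps narrow this down: DATASET MANAGEMENT is the NVMe deallocate (TRIM) command, and ZFS TRIMs newly added vdevs when a pool is created, so the errors appear to come from that initial TRIM pass rather than from ordinary reads or writes. As a quick test (a sketch only; the tunable name is the one I believe stable/11 uses, unverified on this BETA2 build) I could recreate the pool with TRIM-on-init disabled and see whether the messages go away:

# Disable ZFS's TRIM of newly added vdevs, then recreate the pool.
# vfs.zfs.vdev.trim_on_init is the sysctl I believe controls this.
sysctl vfs.zfs.vdev.trim_on_init=0
zpool destroy tank
zpool create tank raidz2 /dev/nvd[0-8] spare /dev/nvd9

If the LBA OUT OF RANGE errors disappear, the problem would be in how the deallocate ranges are built or handled, not in normal I/O.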