Date:      Fri, 12 Apr 2019 21:22:23 +0200
From:      "Patrick M. Hausen" <hausen@punkt.de>
To:        Warner Losh <imp@bsdimp.com>
Cc:        FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: NVME aborting outstanding i/o and controller resets
Message-ID:  <92DAD65A-9BFE-4294-9066-977F498300A3@punkt.de>
In-Reply-To: <CANCZdfrcnRwqDPXMyT6xNKUZ5nX8x9Fj6DHbCnh+Q4mWzx0vGQ@mail.gmail.com>
References:  <818CF16A-D71C-47C0-8A1B-35C9D8F68F4E@punkt.de> <CF2365AE-23EA-4F18-9520-C998216155D5@punkt.de> <CANCZdfoPZ9ViQzZ2k8GT5pNw5hjso3rzmYxzU=s+3K=ze+LZwg@mail.gmail.com> <58E4FC01-D154-42D4-BA0F-EF9A2C60DBF7@punkt.de> <CANCZdfpeZ-MMKB3Sh=3vhsjJcmFkGG7Jq8nW52D5S45PL3menA@mail.gmail.com> <45D98122-7596-4E8A-8A0D-C33E017C1109@punkt.de> <CANCZdfrcnRwqDPXMyT6xNKUZ5nX8x9Fj6DHbCnh+Q4mWzx0vGQ@mail.gmail.com>

Hi Warner,

thanks for taking the time again …

> OK. This means that whatever I/O workload we've done has caused the
> NVME card to stop responding for 30s, so we reset it.

I figured as much ;-)

> So it's an intel card.

Yes - I already added this info several times. Six of them,
2.5" NVMe "disk drives".

> OK. That suggests Intel has a problem with their firmware.

I came across this one:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211713

Is it more probable that Intel has buggy firmware here than that
"we" are missing interrupts?
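
If it is lost interrupts, one way to narrow that down would be to watch
whether the per-controller interrupt counters keep advancing while the
I/O appears stuck - assuming the controllers show up as nvme0..nvme7,
something like

	vmstat -i | grep nvme

run repeatedly during the test should show whether the counters for the
affected pair stop moving while the others still advance.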

The mainboard is the Supermicro H11SSW-NT. Two NVMe drive bays share
a connector on the mainboard:

	NVMe Ports (NVMe 0~7, 10, 11, 14, 15)

	The H11SSW-iN/NT has twelve (12) NVMe ports (2 ports per 1 Slim SAS
	connector) on the motherboard. These ports provide high-speed,
	low-latency PCI-E 3.0 x4 connections directly from the CPU to NVMe
	Solid State (SSD) drives. This greatly increases SSD data throughput
	performance and significantly reduces PCI-E latency by simplifying
	driver/software requirements resulting from the direct PCI-E
	interface from the CPU to the NVMe SSD drives.

Is this purely mechanical, or do two drives share PCI-E resources? That
would explain why the problems always come in pairs (nvme6 and nvme7,
for example).
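
One way to check that from the OS side - just a sketch, the device names
are from our boxes - would be to compare what the kernel sees for a
failing pair:

	pciconf -lv
	devinfo

pciconf -lv lists the NVMe controllers with their PCI selectors, and
devinfo prints the device tree, so it should show whether nvme6 and
nvme7 hang off the same pcib bridge/root port or each have their own
x4 link.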

This afternoon I set up a system with 4 drives and was not able to
reproduce the problem. (We just got three more machines which happen to
have 4 drives each and no M.2 directly on the mainboard.)
I will change the config to 6 drives, like the two FreeNAS systems in
our data center.

> [… nda(4) ...]
> I doubt that would have any effect. They both throw as much I/O onto
> the card as possible in the default config.

I found out - yes, it behaves just the same.
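
(For reference, switching between the two should just be a matter of
setting the loader tunable

	hw.nvme.use_nvd="0"

in /boot/loader.conf and rebooting, if I read the nda(4)/nvd(4) man
pages right, so the drives attach as nda instead of nvd.)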

> There's been some minor improvements in -current here. Any chance you
> could experimentally try that with this test? You won't get as many
> I/O abort errors (since we don't print those), and we have a few more
> workarounds for the reset path (though honestly, it's still kinda
> stinky).

HEAD, or would RELENG_12 do, too?

Kind regards,
Patrick
-- 
punkt.de GmbH			Internet - Dienstleistungen - Beratung
Kaiserallee 13a			Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe			info@punkt.de	http://punkt.de
AG Mannheim 108285		Gf: Juergen Egeling



