Date:      Fri, 5 Apr 2019 09:08:06 -0600
From:      Warner Losh <imp@bsdimp.com>
To:        "Patrick M. Hausen" <hausen@punkt.de>
Cc:        FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: NVME aborting outstanding i/o
Message-ID:  <CANCZdfpeZ-MMKB3Sh=3vhsjJcmFkGG7Jq8nW52D5S45PL3menA@mail.gmail.com>
In-Reply-To: <58E4FC01-D154-42D4-BA0F-EF9A2C60DBF7@punkt.de>
References:  <818CF16A-D71C-47C0-8A1B-35C9D8F68F4E@punkt.de> <CF2365AE-23EA-4F18-9520-C998216155D5@punkt.de> <CANCZdfoPZ9ViQzZ2k8GT5pNw5hjso3rzmYxzU=s+3K=ze+LZwg@mail.gmail.com> <58E4FC01-D154-42D4-BA0F-EF9A2C60DBF7@punkt.de>

On Fri, Apr 5, 2019 at 1:33 AM Patrick M. Hausen <hausen@punkt.de> wrote:

> Hi all,
>
> > Am 04.04.2019 um 17:11 schrieb Warner Losh <imp@bsdimp.com>:
> > There's a request that was sent down to the drive. It took longer than
> 30s to respond. One of them, at least, was a trim request.
> > […]
>
> Thanks for the explanation.
>
> This further explains why I was seeing a lot more of those and the system
> occasionally froze for a couple of seconds after I increased these:
>
> vfs.zfs.vdev.async_write_max_active: 10
> vfs.zfs.vdev.async_read_max_active: 3
> vfs.zfs.vdev.sync_write_max_active: 10
> vfs.zfs.vdev.sync_read_max_active: 10
>
> as recommended by Allan Jude reasoning that NVME devices could work on
> up to 64 requests in parallel. I have since reverted that change and I am
> running with the defaults.
>
> If I understand correctly, this:
>
> >         hw.nvme.per_cpu_io_queues=0
>
> essentially limits the rate at which the system throws commands at the
> devices. Correct?
>

Yes. It de facto limits the number of commands the system can throw at an
NVMe drive. Some drives have trouble with multiple CPUs submitting things.
Others just have trouble with the volume of commands sometimes. This limits
both.
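For anyone wanting to try this workaround: it is a boot-time loader tunable,
so it goes in /boot/loader.conf rather than being set at runtime (sketch
below; tunable names can vary between FreeBSD releases, so verify on yours):

```shell
# /boot/loader.conf -- tell the nvme(4) driver to use a single shared
# I/O queue pair instead of one pair per CPU; takes effect at next boot.
hw.nvme.per_cpu_io_queues="0"
```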


> So it's not a real fix and there's nothing fundamentally wrong with the
> per-CPU queue or interrupt implementation. I will look into new firmware
> for my Intel devices and try tweaking the vfs.zfs.vdev.trim_max_active
> and related parameters.
>

Correct. It's a workaround.


> Out of curiosity: what happens if I disable TRIM? My knowledge is rather
> superficial and I just filed that under "TRIM is absolutely essential lest
> performance will suffer severely and your devices die - plus bad karma,
> of course …" ;-)
>

TRIMs help the drive optimize its garbage collection by giving it a
larger pool of free blocks to work with. This has the effect of reducing
write amplification. Write amp is the measure of the amount of extra work
the drive has to do for every user write it processes. Ideally, you want
this number to be 1.0. You'll never get to 1.0, but numbers less than 1.5
are common, and most of the models drive makers use to rate the lifetime of
their NAND assume a write amp of about 2.

So, if you eliminate the TRIMs you eliminate this optimization and write
amp will increase. This has two bad effects. First, wear and tear on the
NAND. Second, it takes resources away from the user. In practice, however,
the bad effects are quite limited if you don't have a write intensive
workload. Your drive is rated for so many drive writes per day (or
equivalently total data written over the life of the drive). This will be
on the spec sheet somewhere. If you don't have a write intensive workload
(which I'd say is any sustained write load greater than about 1/10th the
datasheet write limit), then if you think TRIMs are causing issues, you
should disable them. The effects of not trimming are likely to be in the
noise on such systems, and the benefits of having things TRIMed will be
less.
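If you do decide to experiment with disabling TRIM, on the legacy ZFS in
FreeBSD 11/12 this was controlled by a boot-time loader tunable (sketch from
that era; check `sysctl -d` on your release, since later OpenZFS imports
renamed these knobs):

```shell
# /boot/loader.conf -- stop ZFS from issuing TRIM/DELETE to its vdevs
# (legacy FreeBSD ZFS tunable; verify the name on your release).
vfs.zfs.trim.enabled="0"
```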

At work, for a large video streaming company, we enable the TRIMs, even
though we're on the edge of the rule of thumb since we're unsure how long
the machines really need to be in the field and don't want to risk it.
Except for one version of Samsung NVMe drives (the PM963, no longer made) we
got a while ago... those we turn TRIM off on, because UFS's machine-gunning
down of TRIMs and nvd's blind pass-through of TRIMs took down the drive.
UFS now combines TRIMs, and we've moved to using nda since it also combines
TRIMs, so it wouldn't be so bad if we tried again today.
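For reference, the nvd-versus-nda choice mentioned above is itself selectable
with a loader tunable (a sketch; the tunable exists in FreeBSD 12 and later,
so confirm it on your release before relying on it):

```shell
# /boot/loader.conf -- attach NVMe namespaces through CAM's nda(4) driver
# (which coalesces TRIMs) instead of the default nvd(4) driver.
hw.nvme.use_nvd="0"
```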

Drive makers optimize for different things. Enterprise drives handle TRIMs a
lot better than consumer drives. Consumer drives are cheaper (in oh so many
ways), so some care is needed. Intel makes a wide range of drives, from the
super duper awesome (with prices to match) to the somewhat disappointing
(but incredibly cheap and good enough for a lot of applications). Not sure
where on this spectrum your drives fall.

tl;dr: Unless you are writing the snot out of those Intel drives, disabling
TRIM entirely will likely help avoid pushing so many commands that they time
out.

Warner


