Date:      Thu, 22 Sep 2016 13:01:02 +0100
From:      Steven Hartland <killing@multiplay.co.uk>
To:        freebsd-stable@freebsd.org
Subject:   Re: zfs/raidz and creation pause/blocking
Message-ID:  <d5026871-81b7-f7ce-4a07-a40b225d7004@multiplay.co.uk>
In-Reply-To: <57E3C68C.8060200@norma.perm.ru>
References:  <57E3C68C.8060200@norma.perm.ru>

Almost certainly it's TRIMing the drives; try setting the sysctl
vfs.zfs.vdev.trim_on_init=0
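
For reference, a minimal sketch of applying the suggested workaround on a
FreeBSD box (run as root; the sysctl name is the one given above, from the
legacy ZFS of that era):

```shell
# Disable the whole-device TRIM pass ZFS runs when a new vdev is
# initialized -- the likely cause of the long "zpool create" pause.
sysctl vfs.zfs.vdev.trim_on_init=0

# To make the setting survive reboots, persist it in /etc/sysctl.conf:
echo 'vfs.zfs.vdev.trim_on_init=0' >> /etc/sysctl.conf
```

Note the trade-off: skipping the initial TRIM makes pool creation fast, but
on SSDs the drives then start without the firmware knowing the blocks are
free, which can cost some write performance until TRIM catches up in
normal operation.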

On 22/09/2016 12:54, Eugene M. Zheganin wrote:
> Hi.
>
> Recently I spent a lot of time setting up various zfs installations, and
> I got a question.
> Often when creating a raidz on considerably big disks (>~ 1T) I'm seeing
> weird behavior: "zpool create" blocks and waits for several minutes. At
> the same time the system is fully responsive, and I can see in gstat that
> the kernel starts to hammer all the pool candidates sequentially at 100%
> busy with IOPS around zero (in the example below, taken from a live
> system, it's doing something with da11):
>
> (zpool create gamestop raidz da5 da7 da8 da9 da10 da11)
>
> dT: 1.064s  w: 1.000s
>   L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>      0      0      0      0    0.0      0      0    0.0    0.0| da0
>      0      0      0      0    0.0      0      0    0.0    0.0| da1
>      0      0      0      0    0.0      0      0    0.0    0.0| da2
>      0      0      0      0    0.0      0      0    0.0    0.0| da3
>      0      0      0      0    0.0      0      0    0.0    0.0| da4
>      0      0      0      0    0.0      0      0    0.0    0.0| da5
>      0      0      0      0    0.0      0      0    0.0    0.0| da6
>      0      0      0      0    0.0      0      0    0.0    0.0| da7
>      0      0      0      0    0.0      0      0    0.0    0.0| da8
>      0      0      0      0    0.0      0      0    0.0    0.0| da9
>      0      0      0      0    0.0      0      0    0.0    0.0| da10
>    150      3      0      0    0.0      0      0    0.0  112.6| da11
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p1
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p2
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p3
>      0      0      0      0    0.0      0      0    0.0    0.0| da1p1
>      0      0      0      0    0.0      0      0    0.0    0.0| da1p2
>      0      0      0      0    0.0      0      0    0.0    0.0| da1p3
>      0      0      0      0    0.0      0      0    0.0    0.0| da0p4
>      0      0      0      0    0.0      0      0    0.0    0.0| gpt/boot0
>      0      0      0      0    0.0      0      0    0.0    0.0| gptid/22659641-7ee6-11e6-9b56-0cc47aa41194
>      0      0      0      0    0.0      0      0    0.0    0.0| gpt/zroot0
>      0      0      0      0    0.0      0      0    0.0    0.0| gpt/esx0
>      0      0      0      0    0.0      0      0    0.0    0.0| gpt/boot1
>      0      0      0      0    0.0      0      0    0.0    0.0| gptid/23c1fbec-7ee6-11e6-9b56-0cc47aa41194
>      0      0      0      0    0.0      0      0    0.0    0.0| gpt/zroot1
>      0      0      0      0    0.0      0      0    0.0    0.0| mirror/mirror
>      0      0      0      0    0.0      0      0    0.0    0.0| da1p4
>      0      0      0      0    0.0      0      0    0.0    0.0| gpt/esx1
>
> The funniest thing is that da5 and da7-da11 are SSDs, capable of at
> least 30K IOPS.
> So I wonder what is happening during this and why it takes that long,
> because usually pools are created very quickly.
>
> Thanks.
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"



