Date:      Wed, 3 Jul 2019 15:42:58 +0200 (CEST)
From:      Trond Endrestøl <trond.endrestol@ximalas.info>
To:        freebsd-questions@freebsd.org
Subject:   Re: extremely slow disk I/O after updating to 12.0
Message-ID:  <alpine.BSF.2.21.9999.1907031505400.1251@enterprise.ximalas.info>
In-Reply-To: <b76b814f-e440-67a9-5424-4e7c5d03d5ca@malikania.fr>
References:  <b76b814f-e440-67a9-5424-4e7c5d03d5ca@malikania.fr>

On Wed, 3 Jul 2019 13:34+0200, David Demelier wrote:

> zpool status indicates that the block size is wrong and that I may expect
> performance degradation. But a slowdown this severe is surprising. Can someone confirm?
> 
> # zpool status
>   pool: tank
>  state: ONLINE
> status: One or more devices are configured to use a non-native block size.
>         Expect reduced performance.
> action: Replace affected devices with devices that support the
>         configured block size, or migrate data to a properly configured
>         pool.
>   scan: none requested
> config:
> 
>         NAME          STATE     READ WRITE CKSUM
>         tank          ONLINE       0     0     0
>           raidz1-0    ONLINE       0     0     0
>             gpt/zfs0  ONLINE       0     0     0  block size: 512B configured, 4096B native
>             gpt/zfs1  ONLINE       0     0     0  block size: 512B configured, 4096B native
> 
> errors: No known data errors
> 
> 
> 
> According to some googling, I must update those pools to change the block
> size. However, there are not many articles on that, so I'm a bit afraid of
> doing this. The zfs0 and zfs1 are in raidz.
> 
> Any help is very welcome.

If you want to change the block size, I'm afraid you must back up your 
data somewhere, destroy tank, and recreate it after you set:

sysctl vfs.zfs.min_auto_ashift=12

If you only deal with drives that have a 4096-byte native sector size 
(4Kn or 512e), then I suggest you edit /etc/sysctl.conf, adding this 
line for future use:

vfs.zfs.min_auto_ashift=12
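
If you want to double-check before destroying anything, you can inspect 
both the sysctl and the pool's current ashift; the zdb invocation below 
is just one way of doing it (ashift 9 means 512-byte blocks, 12 means 
4096-byte blocks):

sysctl vfs.zfs.min_auto_ashift
zdb -C tank | grep ashift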

Your options include: replicating the data to another computer simply as 
a file (do this twice, saving to a different filename each time); 
receiving and unpacking the zstream into another computer's zpool; or 
migrating to a new pair of disks.

Here's my outline for doing the ZFS transfer:

==

Prepare computer B for receiving the zstream:

nc -l 1234 > some.file.zfs
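
If you capture the stream twice as suggested earlier, comparing checksums 
afterwards is a cheap sanity check; the second filename here is only an 
example:

sha256 some.file.zfs some.file.2.zfs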

Or, still on computer B:

nc -l 1234 | zfs recv -Fduv somepool
# Optional, to be done after the transfer:
zfs destroy -Rv somepool@transfer

In the latter case, existing filesystems beneath the top-level 
filesystem in somepool will be replaced by whatever is in the zstream. 
Filesystems whose pathnames are unique to somepool, i.e. not present in 
the zstream, will be left untouched.
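
Once the receive has finished, a plain recursive listing on computer B is 
an easy way to confirm what arrived:

zfs list -r -t all somepool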

On computer A:

zfs snap tank@transfer
zfs send -RLev tank@transfer | nc -N computer.B.some.domain 1234
zfs destroy -Rv tank@transfer

==

Feel free to replace nc (netcat) with ssh or something else.
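
For example, an ssh-based transfer from computer A could look roughly 
like this (the user and host names are placeholders):

zfs send -RLev tank@transfer | ssh user@computer.B.some.domain zfs recv -Fduv somepool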

==

zfs send and zfs recv can be piped together if the pools are connected 
to the same computer:

zfs send -RLev tank@transfer | zfs recv -Fduv newtank

newtank can be renamed simply by exporting it and then importing it 
again, giving both its current name and the desired new name:

zpool export newtank
zpool import -N newtank tank

Note, this must be done while running FreeBSD from some other media, 
such as a DVD or a memstick.

Take care to ensure the bootfs pool property points to the correct 
boot environment (BE) before rebooting.
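
A quick way to verify is to query the property after the import:

zpool get bootfs tank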

==

To transfer the data back to the new tank pool:

Prepare computer A for receiving the zstream:

nc -l 1234 | zfs recv -Fduv tank
# Do these two commands after the transfer:
zfs destroy -Rv tank@transfer
zpool set bootfs=tank/the/correct/boot/environment tank

On computer B:

nc -N computer.A.some.domain 1234 < some.file.zfs

Or, still on computer B:

zfs snap somepool@transfer # If you removed the previous @transfer snapshot
zfs send -RLev somepool@transfer | nc -N computer.A.some.domain 1234

-- 
Trond.


