Date:      Wed, 3 Jul 2019 16:31:40 +0200
From:      David Demelier <markand@malikania.fr>
To:        freebsd-questions@freebsd.org
Subject:   Re: extremely slow disk I/O after updating to 12.0
Message-ID:  <2b3b70f4-630f-b2bf-99fa-ab237b0610d3@malikania.fr>
In-Reply-To: <9d31fb68-df3e-76d3-195d-0da9749b0b1d@denninger.net>
References:  <b76b814f-e440-67a9-5424-4e7c5d03d5ca@malikania.fr> <alpine.BSF.2.21.9999.1907031505400.1251@enterprise.ximalas.info> <9d31fb68-df3e-76d3-195d-0da9749b0b1d@denninger.net>

On 03/07/2019 at 15:51, Karl Denninger wrote:
> On 7/3/2019 08:42, Trond Endrestøl wrote:
>> On Wed, 3 Jul 2019 13:34+0200, David Demelier wrote:
>>
>>> zpool status indicates that the block size is wrong and that I should
>>> expect reduced performance, but a slowdown this severe is surprising. Can
>>> someone confirm?
>>>
>>> # zpool status
>>>    pool: tank
>>>   state: ONLINE
>>> status: One or more devices are configured to use a non-native block size.
>>>          Expect reduced performance.
>>> action: Replace affected devices with devices that support the
>>>          configured block size, or migrate data to a properly configured
>>>          pool.
>>>    scan: none requested
>>> config:
>>>
>>>          NAME          STATE     READ WRITE CKSUM
>>>          tank          ONLINE       0     0     0
>>>            raidz1-0    ONLINE       0     0     0
>>>              gpt/zfs0  ONLINE       0     0     0  block size: 512B configured, 4096B native
>>>              gpt/zfs1  ONLINE       0     0     0  block size: 512B configured, 4096B native
>>>
>>> errors: No known data errors
>>>
>>>
>>>
>>> According to some googling, I must re-create those pools to change the
>>> block size. However, there are not many articles on the subject, so I'm a
>>> bit afraid of doing this. The zfs0 and zfs1 partitions are in raidz.
>>>
>>> Any help is very welcome.
> 
> ashift=9 on a 4k native block device is going to do horrible things to
> performance.  There's no way to change it on an existing pool, as the
> other respondent noted; you will have to back up the data on the pool,
> destroy the pool and then re-create it.
> 
> Was this pool originally created with 512b disks and then the drives
> were swapped out with a "replace" at some point for advanced-format units?

Thanks for your answers.
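
For what it's worth, I believe the recorded block size can be 
double-checked with zdb, and diskinfo should show what the drives 
themselves report (ada0 below is only an example device name, not 
something I have verified here):

  # ashift stored in the pool configuration; 9 means 512-byte sectors
  zdb -C tank | grep ashift

  # logical sector size vs. physical stripe size reported by the drive
  diskinfo -v /dev/ada0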

To answer your question: no, the pool was created seven years ago, back 
in 2012, using FreeBSD 9. I don't have the shell history for those 
commands, but it was something like

zpool create tank raidz /dev/gpt/zfs0 /dev/gpt/zfs1
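
I guess the drives report 512-byte logical sectors even though they are 
4K internally, which is why ZFS picked ashift=9 at creation time.

If I understand the procedure correctly, the migration would be roughly 
the following. This is only a sketch, not a tested recipe: "backup" is a 
hypothetical second pool with enough free space, and it assumes the 
system does not boot from tank.

  # snapshot everything and replicate it to the scratch pool
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -F backup/tank

  # make sure newly created pools use 4K sectors (ashift=12)
  sysctl vfs.zfs.min_auto_ashift=12

  # destroy and re-create the pool, then restore the data
  zpool destroy tank
  zpool create tank raidz /dev/gpt/zfs0 /dev/gpt/zfs1
  zfs send -R backup/tank@migrate | zfs receive -F tank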

Regards,

-- 
David



