From:      mike tancsa <mike@sentex.net>
To:        Dag-Erling Smørgrav <des@FreeBSD.org>
Cc:        FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: block size: 512B configured, 4096B native all of a sudden
In-Reply-To: <8634rd8l2p.fsf@ltc.des.dev>
References:  <9a593ca1-4975-438a-afec-d8dd5199dbcf@sentex.net> <86bk618lyh.fsf@ltc.des.dev> <d3536b16-fbdb-4a9f-a85c-47b1cc7213db@sentex.net> <867cgp8lgu.fsf@ltc.des.dev> <57a1cc6b-2bd2-4292-9fde-f8799ffa291a@sentex.net> <8634rd8l2p.fsf@ltc.des.dev>

On 4/22/2024 11:37 AM, Dag-Erling Smørgrav wrote:
> mike tancsa <mike@sentex.net> writes:
>> I guess the next question is how can I fix the issue?  It's over 73TB
>> and would take quite a long time to zfs send | zfs recv.  The HDDs are
>> indeed 4K disks.  If I offline the 2 SSD disks that are part of the
>> special device one by one and resilver, will it fix it?
> No, the ashift is a vdev property.  You'll have to remove one disk from
> the pool, create a new pool, send | recv across, then destroy the old
> pool and add the remaining disk to the new pool.  Make sure you have a
> backup before you start.

I was afraid of that. So basically I have to copy the entire pool or 
live with the performance penalty :(  73TB is a lot to copy / move.
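For reference, the procedure Dag-Erling outlines could be sketched roughly as below. The pool and device names (oldpool, newpool, da0, da1) are hypothetical, the detach step assumes the disk being freed up is one half of a mirror, and several of these commands are destructive, so this is only an illustration to adapt with a verified backup in hand:

```shell
# ashift=12 means 2^12 = 4096-byte allocation blocks, matching 4K-native disks:
echo $((1 << 12))   # 4096

# 1. Detach one disk from the old pool (assumes it is part of a mirror):
#    zpool detach oldpool da1

# 2. Create the new pool, forcing the correct ashift at creation time,
#    since ashift is fixed per vdev and cannot be changed afterwards:
#    zpool create -o ashift=12 newpool da1

# 3. Replicate the old pool's datasets across:
#    zfs snapshot -r oldpool@migrate
#    zfs send -R oldpool@migrate | zfs recv -F newpool

# 4. Destroy the old pool and attach its remaining disk as a mirror:
#    zpool destroy oldpool
#    zpool attach newpool da1 da0

# 5. Confirm the new pool's block size:
#    zpool get ashift newpool
```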

     ---Mike



