Date:      Sun, 17 May 2015 11:17:30 +0200
From:      Kai Gallasch <k@free.de>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS RAID 10 capacity expansion and uneven data distribution
Message-ID:  <55585CAA.6080105@free.de>
In-Reply-To: <C46F686C-4765-4B0F-8A7D-F5670936FC62@digsys.bg>
References:  <CABnVG=cc_7UNMO=XUFq4esPDZyZO8wDXhfXnA4tXSu77raK42Q@mail.gmail.com> <C46F686C-4765-4B0F-8A7D-F5670936FC62@digsys.bg>

On 14.05.2015 15:59 Daniel Kalchev wrote:
> Not a total bs, but.. it could be made simpler/safer.
>
> skip 2,3,4 and 5
> 7a. zfs snapshot -r zpool.old@send
> 7b. zfs send -R zpool.old@send | zfs receive -F zpool
> do not skip 8 :)
> 11. zpool attach zpool da1 da2 && zpool attach zpool da3 da4

Quite nifty. I tried this on a test server and found that after the
zpool split it is safer to do the import with e.g. "-o altroot=/mnt",
because if you are doing this on a root pool the import will otherwise
be mounted over the existing root fs and the ZFS installation becomes
unusable at that point. (-> reboot)

Also, the zfs receive should not mount the received filesystems.
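
Put together, the sequence might look roughly like this - only a sketch,
with hypothetical pool/device names (zpool, zpool.old, da1-da4 as in
Daniel's step 11), so adjust it to your own layout:

  # split the second half of each mirror off into a temporary pool
  zpool split zpool zpool.old da2 da4
  # import it under an altroot, so a root pool is not mounted over /
  zpool import -o altroot=/mnt zpool.old
  # destroy the stale datasets on the receiving pool first (see below),
  # then replicate everything back; -u keeps the received fs unmounted
  zfs snapshot -r zpool.old@send
  zfs send -R zpool.old@send | zfs receive -u -F zpool
  # drop the temporary pool and re-attach the disks to rebuild the mirrors
  zpool destroy zpool.old
  zpool attach zpool da1 da2 && zpool attach zpool da3 da4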

> After this operation, you should have the exact same zpool, with evenly
> redistributed data. You could use the chance to change ashift etc.
> Sadly, this works only for mirrors.

In my case this is not true. After completion, the data is still not
evenly distributed across the mirror pairs and each pair has a different
FRAG value. (Before doing the zfs send I destroyed the old ZFS
filesystems on the receiving side.) When accessing the data afterwards,
though, the situation with one mirror pair being overused and the other
almost idle has improved - so this method does mitigate the problem
somewhat.

When I recreate the pool and restore the data the picture looks
different: the data is then spread equally across the mirrors and they
all have the same FRAG value - as expected.
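
(The per-vdev numbers I am comparing here are easiest to see with
something like

  zpool list -v zpool

which prints the allocation, capacity and FRAG columns for the pool and
for each mirror vdev.)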

My conclusion: Expanding a RAID 10 zpool by adding mirrored vdevs is not
really an option if you also want to benefit from the gained IOPS of the
new devices. In that case recreating the pool is the cleanest solution.
If you cannot recreate the pool you can think about this zpool split
hack to redistribute data across all vdevs - although you temporarily
lose your pool redundancy between the zpool split and the end of the
resilvering process (risky).
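
Until the re-attached disks have resilvered you are running without
redundancy, so it is worth watching the progress with e.g.

  zpool status zpool

and only considering the pool safe again once the resilver has completed
and all mirrors are ONLINE.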

Kai.



