Date:      Mon, 28 Jan 2013 20:58:02 +0100
From:      Fabian Keil <freebsd-listen@fabiankeil.de>
To:        Ulrich Spörlein <uqs@FreeBSD.org>
Cc:        Peter Jeremy <peter@rulingia.com>, current@freebsd.org, fs@freebsd.org
Subject:   Re: Zpool surgery
Message-ID:  <20130128205802.1ffab53e@fabiankeil.de>
In-Reply-To: <20130128085820.GR35868@acme.spoerlein.net>
References:  <20130127103612.GB38645@acme.spoerlein.net> <1F0546C4D94D4CCE9F6BB4C8FA19FFF2@multiplay.co.uk> <20130127201140.GD29105@server.rulingia.com> <20130128085820.GR35868@acme.spoerlein.net>


Ulrich Spörlein <uqs@FreeBSD.org> wrote:

> On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
> > On 2013-Jan-27 14:31:56 -0000, Steven Hartland <killing@multiplay.co.uk> wrote:
> > >----- Original Message -----
> > >From: "Ulrich Spörlein" <uqs@FreeBSD.org>
> > >> I want to transplant my old zpool tank from a 1TB drive to a new 2TB
> > >> drive, but *not* use dd(1) or any other cloning mechanism, as the pool
> > >> was very full very often and is surely severely fragmented.
> > >
> > >Can't you just drop the disk into the original machine, set it up as a
> > >mirror, then once the resilver has completed break the mirror and remove
> > >the 1TB disk?
> >
> > That will replicate any fragmentation as well.  "zfs send | zfs recv"
> > is the only (current) way to defragment a ZFS pool.

It's not obvious to me why "zpool replace" (or doing it manually)
would replicate the fragmentation.
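For comparison, the two approaches would look roughly like this (device,
pool and snapshot names below are placeholders, not taken from your setup):

```shell
# Approach 1: attach the new disk as a mirror, wait for the resilver,
# then detach the old disk. The resilver copies the existing blocks
# where they are, so any fragmentation comes along for the ride.
zpool attach tank da0.eli ada1.eli   # hypothetical device names
zpool status tank                    # wait until the resilver completes
zpool detach tank da0.eli

# Approach 2: replicate into a freshly created pool with
# "zfs send | zfs recv", which rewrites all data and therefore
# lays the blocks out anew on the target.
zpool create newtank ada1.eli
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F -d newtank
```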

> But are you then also supposed to be able send incremental snapshots to
> a third pool from the pool that you just cloned?

Yes.
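Roughly like this, assuming the third pool already holds the snapshot
the increment is based on (names below are made up):

```shell
# Send the delta between two snapshots of the cloned pool to a third
# pool; this only works if backuppool already contains @snap1.
zfs send -i tank@snap1 tank@snap2 | zfs recv backuppool/tank
```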

> I did the zpool replace now over night, and it did not remove the old
> device yet, as it found cksum errors on the pool:
>
> root@coyote:~# zpool status -v
>   pool: tank
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>         corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
>         entire pool from backup.
>    see: http://illumos.org/msg/ZFS-8000-8A
>   scan: resilvered 873G in 11h33m with 24 errors on Mon Jan 28 09:45:32 2013
> config:
>
>         NAME           STATE     READ WRITE CKSUM
>         tank           ONLINE       0     0    27
>           replacing-0  ONLINE       0     0    61
>             da0.eli    ONLINE       0     0    61
>             ada1.eli   ONLINE       0     0    61
>
> errors: Permanent errors have been detected in the following files:
>=20
>         tank/src@2013-01-17:/.svn/pristine/8e/8ed35772a38e0fec00bc1cbc2f05480f4fd4759b.svn-base
[...]
>         tank/ncvs@2013-01-17:/ports/textproc/uncrustify/distinfo,v
>
> Interestingly, these only seem to affect snapshots, and I'm now
> wondering whether that is why the backup pool did not accept the
> next incremental snapshot from the new pool.

I doubt that. My expectation would be that it only prevents
the "zfs send" from finishing successfully.

BTW, you could try reading the files to be sure that the checksum
problems are permanent and not just temporary USB issues.
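For example, something like the following, assuming the dataset is
mounted at the default mountpoint (the path is taken from the first
file in your error list; a "zpool scrub tank" would re-verify
everything, but reading just the affected files is quicker):

```shell
# ZFS verifies checksums on every read, so forcing a read of a
# reportedly damaged file will make a persistent error show up
# again in "zpool status -v"; a transient USB hiccup won't.
cat /tank/src/.zfs/snapshot/2013-01-17/.svn/pristine/8e/8ed35772a38e0fec00bc1cbc2f05480f4fd4759b.svn-base > /dev/null
zpool status -v tank
```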

> How does the receiving pool know that it has the correct snapshot to
> store an incremental one anyway? Is there a toplevel checksum, like for
> git commits? How can I display and compare that?

Try zstreamdump; the "fromguid" of the incremental stream has to match
the "toguid" of the snapshot already present on the receiving pool:

fk@r500 ~ $sudo zfs send -i @2013-01-24_20:48 tank/etc@2013-01-26_21:14 | zstreamdump | head -11
BEGIN record
	hdrtype = 1
	features = 4
	magic = 2f5bacbac
	creation_time = 5104392a
	type = 2
	flags = 0x0
	toguid = a1eb3cfe794e675c
	fromguid = 77fb8881b19cb41f
	toname = tank/etc@2013-01-26_21:14
END checksum = 1047a3f2dceb/67c999f5e40ecf9/442237514c1120ed/efd508ab5203c91c

fk@r500 ~ $sudo zfs send lexmark/backup/r500/tank/etc@2013-01-24_20:48 | zstreamdump | head -11
BEGIN record
	hdrtype = 1
	features = 4
	magic = 2f5bacbac
	creation_time = 51018ff4
	type = 2
	flags = 0x0
	toguid = 77fb8881b19cb41f
	fromguid = 0
	toname = lexmark/backup/r500/tank/etc@2013-01-24_20:48
END checksum = 1c262b5ffe935/78d8a68e0eb0c8e7/eb1dde3bd923d153/9e0829103649ae22

Fabian
