From: Ulrich Spörlein <uqs@FreeBSD.org>
To: Peter Jeremy
Cc: Steven Hartland, current@freebsd.org, fs@freebsd.org
Subject: Re: Zpool surgery
Date: Mon, 28 Jan 2013 09:58:20 +0100
Message-ID: <20130128085820.GR35868@acme.spoerlein.net>
In-Reply-To: <20130127201140.GD29105@server.rulingia.com>

On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
> On 2013-Jan-27 14:31:56 -0000, Steven Hartland wrote:
> >----- Original Message -----
> >From: "Ulrich Spörlein"
> >> I want to transplant my old zpool tank from a 1TB drive to a new 2TB
> >> drive, but *not* use dd(1) or any other cloning mechanism, as the pool
> >> was very full very often and is surely severely fragmented.
> >
> >Can't you just drop the disk in the original machine, set it as a mirror,
> >then once the mirror process has completed break the mirror and remove
> >the 1TB disk?
>
> That will replicate any fragmentation as well. "zfs send | zfs recv"
> is the only (current) way to defragment a ZFS pool.

But are you then also supposed to be able to send incremental snapshots
to a third pool from the pool that you just cloned?
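(To be concrete, the kind of sequence I have in mind is roughly the
following; "newtank" and "backup" are only placeholder names for the pool
on the new disk and for the third pool, and the @migrate/@2013-01-28
snapshot names are made up, so treat it as an untested sketch:)

    # one-time full replication onto the new disk; this rewrites every
    # block, so the fragmentation of the old pool is left behind
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -Fdu newtank

    # afterwards, keep feeding incrementals from the *cloned* pool to the
    # existing backup pool, based on a snapshot both sides already have
    zfs snapshot -r newtank@2013-01-28
    zfs send -R -i @2013-01-17 newtank@2013-01-28 | zfs recv -Fdu backup

That is, the incremental stream would now come from newtank, even though
the backup pool originally received its snapshots from the old tank.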
I did the zpool replace now overnight, and it has not removed the old
device yet, as it found cksum errors on the pool:

root@coyote:~# zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 873G in 11h33m with 24 errors on Mon Jan 28 09:45:32 2013
config:

        NAME             STATE     READ WRITE CKSUM
        tank             ONLINE       0     0    27
          replacing-0    ONLINE       0     0    61
            da0.eli      ONLINE       0     0    61
            ada1.eli     ONLINE       0     0    61

errors: Permanent errors have been detected in the following files:

        tank/src@2013-01-17:/.svn/pristine/8e/8ed35772a38e0fec00bc1cbc2f05480f4fd4759b.svn-base
        tank/src@2013-01-17:/.svn/pristine/4f/4febd82f50bd408f958d4412ceea50cef48fe8f7.svn-base
        tank/src@2013-01-17:/sys/dev/mvs/mvs_soc.c
        tank/src@2013-01-17:/secure/usr.bin/openssl/man/pkcs8.1
        tank/src@2013-01-17:/.svn/pristine/ab/ab1efecf2c0a8f67162b2ed760772337017c5a64.svn-base
        tank/src@2013-01-17:/.svn/pristine/90/907580a473b00f09b01815a52251fbdc3e34e8f6.svn-base
        tank/src@2013-01-17:/sys/dev/agp/agpreg.h
        tank/src@2013-01-17:/sys/dev/isci/scil/scic_sds_remote_node_context.h
        tank/src@2013-01-17:/.svn/pristine/a8/a8dfc65edca368c5d2af3d655859f25150795bc5.svn-base
        tank/src@2013-01-17:/contrib/llvm/utils/TableGen/DAGISelMatcher.cpp
        tank/src@2013-01-17:/contrib/tcpdump/print-babel.c
        tank/src@2013-01-17:/.svn/pristine/30/30ef0f53aa09a5185f55f4ecac842dbc13dab8fd.svn-base
        tank/src@2013-01-17:/.svn/pristine/cb/cb32411a6873621a449b24d9127305b2ee6630e9.svn-base
        tank/src@2013-01-17:/.svn/pristine/03/030d211b1e95f703f9a61201eed63efdbb8e41c0.svn-base
        tank/src@2013-01-17:/.svn/pristine/27/27f1181d33434a72308de165c04202b6159d6ac2.svn-base
        tank/src@2013-01-17:/lib/libpam/modules/pam_exec/pam_exec.c
        tank/src@2013-01-17:/contrib/llvm/include/llvm/PassSupport.h
        tank/src@2013-01-17:/.svn/pristine/90/90f818b5f897f26c7b301c1ac2d0ce0d3eaef28d.svn-base
        tank/src@2013-01-17:/sys/vm/vm_pager.c
        tank/src@2013-01-17:/.svn/pristine/5e/5e9331052e8c2e0fa5fd8c74c4edb04058e3b95f.svn-base
        tank/src@2013-01-17:/.svn/pristine/1d/1d5d6e75cfb77e48e4711ddd10148986392c4fae.svn-base
        tank/src@2013-01-17:/.svn/pristine/c5/c55e964c62ed759089c4bf5e49adf6e49eb59108.svn-base
        tank/src@2013-01-17:/crypto/openssl/crypto/cms/cms_lcl.h
        tank/ncvs@2013-01-17:/ports/textproc/uncrustify/distinfo,v

Interestingly, these only seem to affect the snapshot, and I'm now
wondering whether that is the reason the backup pool did not accept the
next incremental snapshot from the new pool.

How does the receiving pool know that it has the correct snapshot to
store an incremental one on, anyway? Is there a top-level checksum, like
for git commits? How can I display and compare that?

Cheers,
Uli
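PS: Regarding that last question, my assumption is that snapshots carry a
read-only "guid" property and that zfs recv matches the incremental
source against it. I'm not sure our zfs(8) already exposes that property;
if it does, something like the following should let me compare the two
sides (the "backup" pool/dataset name is just a placeholder, untested):

    # on the sending pool
    zfs get -H -o value guid tank/src@2013-01-17

    # on the receiving pool; if the backup pool really has the same
    # snapshot, the two values should be identical
    zfs get -H -o value guid backup/src@2013-01-17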