From: Matthias Fechner <idefix@fechner.net>
To: freebsd-questions@freebsd.org
Date: Thu, 19 Jun 2014 14:21:50 +0200
Subject: Re: Change block size on ZFS pool

On 12.05.2014 18:55, Trond Endrestøl wrote:
>>
>> Be very careful!

OK, I tried it now, and the recreation of the pool worked fine. But the rename of the pool failed: after the reboot the pool was mounted twice, and that destroyed everything. The problem is that you will not notice it immediately, only after your next reboot. Luckily I had a backup I could use.
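In hindsight, before that final reboot I would at least double-check that the boot configuration points at the renamed pool. A minimal, untested sketch (pool name and mount point are the ones from my setup, as used further below):

  # which dataset will the pool boot from?
  zpool get bootfs zroot
  # which root will the loader mount? both should name the renamed pool
  grep vfs.root.mountfrom /zroot/boot/loader.conf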
Here is what I did, maybe someone sees the problem.

Adjust sector size to 4k: with the upgrade to FreeBSD 10 I now see this message in zpool status:

  NAME                                          STATE     READ WRITE CKSUM
  zroot                                         ONLINE       0     0     0
    mirror-0                                    ONLINE       0     0     0
      gptid/504acf1f-5487-11e1-b3f1-001b217b3468  ONLINE     0     0     0  block size: 512B configured, 4096B native
      gpt/disk1                                  ONLINE      0     0   330  block size: 512B configured, 4096B native

We would like to align the partitions to 4k sectors and recreate the zpool with 4k block size, without losing data or having to restore it from a backup.

Type "gpart show ada2" to see if the partition alignment is fine. This one is fine:

  =>        34  3907029101  ada2  GPT  (1.8T)
            34           6        - free -  (3.0K)
            40         128     1  freebsd-boot  (64K)
           168     8388608     2  freebsd-swap  (4.0G)
       8388776  3898640352     3  freebsd-zfs  (1.8T)
    3907029128           7        - free -  (3.5K)

Create the partitions as explained above; here we only cover the steps needed to convert the zpool to 4k block size.

Make sure you have a bootable USB stick with mfsbsd. Boot from it, log in as root with the password mfsroot, and try to import your pool:

  zpool import -f -o altroot=/mnt zroot

If it can import your pool and you can see your data in /mnt, you can reboot again and boot up the normal system.

Now make a backup of your pool. If anything goes wrong you will need it. I used rsync to copy all important data to another pool where I had enough space for it.

I had zfs-snapshot-mgmt running, which stopped working with the new ZFS layout in FreeBSD 10, so I first had to remove all auto snapshots, as they would make it impossible to copy the pool (I had over 100000 snapshots on the system):

  zfs list -H -t snapshot -o name | grep auto | xargs -n 1 zfs destroy -r

Detach one of the mirrors:

  zpool detach zroot gptid/504acf1f-5487-11e1-b3f1-001b217b3468

My disk was labeled disk0, but it did not show up as /dev/gpt/disk0, so I had to reboot. As we removed the first disk, you may have to tell your BIOS to boot from the second hard disk.

Clear the ZFS label:

  zpool labelclear /dev/gpt/disk0

Create a gnop(8) device emulating 4k disk blocks:

  gnop create -S 4096 /dev/gpt/disk0

Create a new single-disk zpool named zroot1 using the gnop device as the vdev:

  zpool create zroot1 gpt/disk0.nop

Export zroot1:

  zpool export zroot1

Destroy the gnop device:

  gnop destroy /dev/gpt/disk0.nop

Reimport the zroot1 pool, searching for vdevs in /dev/gpt:

  zpool import -d /dev/gpt zroot1

Create a snapshot:

  zfs snapshot -r zroot@transfer

Transfer the snapshot from zroot to zroot1, preserving every detail, without mounting the destination filesystems:

  zfs send -R zroot@transfer | zfs receive -duv zroot1

Verify that zroot1 has indeed received all datasets:

  zfs list -r -t all zroot1

Now boot mfsbsd from the USB stick again and import your pools:

  zpool import -fN zroot
  zpool import -fN zroot1

Make a second snapshot and copy it incrementally:

  zfs snapshot -r zroot@transfer2
  zfs send -Ri zroot@transfer zroot@transfer2 | zfs receive -Fduv zroot1

Correct the bootfs option:

  zpool set bootfs=zroot1/ROOT/default zroot1

Edit loader.conf:

  mkdir -p /zroot1
  mount -t zfs zroot1/ROOT/default /zroot1
  vi /zroot1/boot/loader.conf
    vfs.root.mountfrom="zfs:zroot1/ROOT/default"

Destroy the old zroot:

  zpool destroy zroot

Reboot again into your new pool and make sure everything is mounted correctly.
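One extra check I would add here: once you are booted into the new pool, confirm that it really uses 4k sectors. zdb prints the ashift of the vdev (12 means 4096-byte sectors, 9 means 512-byte sectors); a quick sketch, assuming the pool name zroot1 from above:

  zdb -C zroot1 | grep ashift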
Attach the second disk to the pool:

  zpool attach zroot1 gpt/disk0 gpt/disk1

I reinstalled the GPT bootloader. This is not strictly necessary, but I wanted to be sure a current version of it is on both disks:

  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

Wait until the newly attached mirror has resilvered completely. You can check the status with:

  zpool status zroot1

(With the old alignment the resilver took me about 7 days; with the 4k alignment it now takes only about 2 hours, at a speed of about 90 MB/s.)

After the resilver has finished, you may want to remove the snapshots:

  zfs destroy -r zroot1@transfer
  zfs destroy -r zroot1@transfer2

!!!!! WARNING: THE RENAME OF THE POOL FAILED FOR ME AND ALL DATA WAS LOST !!!!!

If you want to rename the pool back to zroot, boot again from the USB stick:

  zpool import -fN zroot1 zroot

Edit loader.conf:

  mkdir -p /zroot
  mount -t zfs zroot/ROOT/default /zroot
  vi /zroot/boot/loader.conf
    vfs.root.mountfrom="zfs:zroot/ROOT/default"

Regards,
Matthias

-- 
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the universe is winning." -- Rich Cook