From owner-freebsd-current@FreeBSD.ORG Mon Aug 17 15:25:09 2009
Message-Id: <25E11A9B-9FDE-4C34-8C6F-8A7883E9876A@exscape.org>
From: Thomas Backman <serenity@exscape.org>
To: Pawel Jakub Dawidek
Date: Mon, 17 Aug 2009 17:24:47 +0200
Cc: FreeBSD current <freebsd-current@freebsd.org>
Subject: Re: Bad news re: new (20090817) ZFS patches and send/recv (broken again)

On Aug 17, 2009, at 15:25, Thomas Backman wrote:
> So, I've got myself a source tree almost completely free of patches now
> that today's batch of ZFS patches has been merged - all that remains is
> that I commented out ps -axl in /usr/sbin/crashinfo, since it only
> coredumps anyway, and added CFLAGS+=-DDEBUG=1 to zfs/Makefile.
>
> One of the changes I didn't already have prior to this must have broken
> something, though, because this script worked just fine before the
> merges earlier today.
> The script below is the exact same one I linked in
> http://lists.freebsd.org/pipermail/freebsd-current/2009-July/009174.html
> back in July (URL to the script: http://exscape.org/temp/zfs_clone_panic.sh)
> - I made some local changes, hence the name invoked below.
>
> Now that all the patches are merged, you should need nothing but the
> script, bash, and ~200MB of free space on the partition containing
> /root/ to reproduce this problem.
> (Note that the "no such pool" in the FIRST run of the script is normal;
> it simply tries to clean up something that isn't there, without
> error/sanity checking.)
>
> [root@chaos ~]# bash -x panic-tests/CLONE_CRASH.submitted.sh initial
> + PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
> + MASTERPATH=/root/zfsclonecrash/crashtestmaster.disk
> + SLAVEPATH=/root/zfsclonecrash/crashtestslave.disk
> + LASTBACKUP=/root/zfsclonecrash/last-backup-name
> + MASTERNUM=1482
> + SLAVENUM=1675
> + '[' '!' -z initial ']'
> + case $1 in
> + initial
> ++ dirname /root/zfsclonecrash/crashtestmaster.disk
> + mkdir -p /root/zfsclonecrash
> + rm -rf /crashtestmaster /crashtestslave
> + mount_unmount unmount
> + '[' -z unmount ']'
> + [[ unmount == \m\o\u\n\t ]]
> + [[ unmount == \u\n\m\o\u\n\t ]]
> + zpool export crashtestmaster
> cannot open 'crashtestmaster': no such pool
> + ggatel destroy -u 1482
> + zpool export crashtestslave
> cannot open 'crashtestslave': no such pool
> + ggatel destroy -u 1675
> + echo Creating files and syncing
> Creating files and syncing
> + dd if=/dev/zero of=/root/zfsclonecrash/crashtestmaster.disk bs=1000k count=100
> 100+0 records in
> 100+0 records out
> 102400000 bytes transferred in 0.367217 secs (278854144 bytes/sec)
> + dd if=/dev/zero of=/root/zfsclonecrash/crashtestslave.disk bs=1000k count=100
> 100+0 records in
> 100+0 records out
> 102400000 bytes transferred in 0.286532 secs (357377280 bytes/sec)
> + sync
> + echo Sleeping 5 seconds
> Sleeping 5 seconds
> + sleep 5
> + echo 'Creating GEOM providers (~10 secs)'
> Creating GEOM providers (~10 secs)
> + ggatel create -u 1482 /root/zfsclonecrash/crashtestmaster.disk
> + sleep 5
> + ggatel create -u 1675 /root/zfsclonecrash/crashtestslave.disk
> + sleep 5
> + echo 'Creating pools'
> Creating pools
> + zpool create -f crashtestmaster ggate1482
> + zpool create -f crashtestslave ggate1675
> + echo 'Adding some data to the master pool'
> Adding some data to the master pool
> + zfs create crashtestmaster/test_orig
> + dd if=/dev/random of=/crashtestmaster/test_orig/file1 bs=1000k count=10
> 10+0 records in
> 10+0 records out
> 10240000 bytes transferred in 0.447472 secs (22884109 bytes/sec)
> + dd if=/dev/random of=/crashtestmaster/test_orig/file2 bs=1000k count=10
> 10+0 records in
> 10+0 records out
> 10240000 bytes transferred in 0.626774 secs (16337625 bytes/sec)
> + echo 'Cloning test_base'
> Cloning test_base
> + zfs snapshot crashtestmaster/test_orig@snap
> + zfs clone crashtestmaster/test_orig@snap crashtestmaster/test_cloned
> + zfs promote crashtestmaster/test_cloned
> ++ date +backup-%Y%m%d-%H%M%S
> + NOW=backup-20090817-151412
> + echo 'Creating snapshots'
> Creating snapshots
> + zfs snapshot -r crashtestmaster@backup-20090817-151412
> + echo 'Doing initial clone to slave pool'
> Doing initial clone to slave pool
> + zfs send -R crashtestmaster@backup-20090817-151412
> + zfs recv -vFd crashtestslave
> cannot receive: invalid stream (malformed nvlist)
> warning: cannot send 'crashtestmaster/test_cloned@snap': Broken pipe
> warning: cannot send 'crashtestmaster/test_cloned@backup-20090817-151412': Broken pipe
> + mount_unmount unmount
> + '[' -z unmount ']'
> + [[ unmount == \m\o\u\n\t ]]
> + [[ unmount == \u\n\m\o\u\n\t ]]
> + zpool export crashtestmaster
> + ggatel destroy -u 1482
> + zpool export crashtestslave
> + ggatel destroy -u 1675
> + echo backup-20090817-151412
> + echo 'Done!'
> Done!
> + exit 0
>
>
> I first noticed this after trying to make an incremental backup right
> after the installworld+reboot, as I always do; it didn't find the slave
> zpool to import... (! This may be very bad in real-life cases - I have
> no clue whether ggate is the culprit or whether people will soon start
> reporting lost pools.) So I tried wiping the backup and creating a new
> one from scratch (~15GB over the network), which gave me the very same
> problem, instantly:
>
> [...]
> + zpool create -f -R /slave slave ggate666
> ++ date +backup-%Y%m%d-%H%M
> + NOW=backup-20090817-1522
> + echo 'Creating snapshots'
> Creating snapshots
> + zfs snapshot -r tank@backup-20090817-1522
> + echo 'Cloning pool'
> Cloning pool
> + zfs send -R tank@backup-20090817-1522
> + zfs recv -vFd slave
> cannot receive: invalid stream (malformed nvlist)
> warning: cannot send 'tank@backup-20090817-1522': Broken pipe
>
>
> Regards,
> Thomas

This is perhaps more troubling...
[root@chaos ~]# dd if=/dev/zero of=./zfstestfile bs=1000k count=100 && ggatel create -u 142 ./zfstestfile && zpool create -f testpool142 ggate142
100+0 records in
100+0 records out
102400000 bytes transferred in 0.266194 secs (384681697 bytes/sec)
[root@chaos ~]# zpool list
NAME          SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
tank           66G  9.86G  56.1G   14%  ONLINE  -
testpool142    93M  73.5K  92.9M    0%  ONLINE  -
[root@chaos ~]# zpool export testpool142
[root@chaos ~]# ggatel destroy -u 142
[root@chaos ~]# ggatel create -u 142 ./zfstestfile
[root@chaos ~]# zpool import testpool142
cannot import 'testpool142': no such pool available
[root@chaos ~]# zpool import
[root@chaos ~]# ggatel list
ggate142
[root@chaos ~]#

Worse yet:

[root@chaos ~]# zpool create testpool ad0s1d
[root@chaos ~]# zpool export testpool
[root@chaos ~]# zpool import testpool
cannot import 'testpool': no such pool available

Regards,
Thomas
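Stripped of the ggate setup, the step that fails in both of the earlier runs boils down to a recursive snapshot followed by a replicated send/recv. A minimal sketch (pool names taken from the transcript; the zfs commands are only echoed here, since actually running them needs root, a ZFS-enabled kernel, and the ggate-backed pools):

```shell
#!/bin/sh
# Condensed form of the failing replication step from the bash -x trace.
# Echoed rather than executed: the pools must already exist.
NOW=$(date +backup-%Y%m%d-%H%M%S)   # e.g. backup-20090817-151412
echo "zfs snapshot -r crashtestmaster@${NOW}"
# 'zfs send -R' emits a replicated stream (descendant datasets, snapshots,
# clones, and properties); it is the receiving end of this pipeline that
# dies with "cannot receive: invalid stream (malformed nvlist)".
echo "zfs send -R crashtestmaster@${NOW} | zfs recv -vFd crashtestslave"
```

Dropping -R from the send (a plain, non-replicated stream of a single snapshot) would be one way to narrow down whether the malformed-nvlist error is specific to replicated streams.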