From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 00:27:10 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 44239C5F for ; Mon, 16 Dec 2013 00:27:10 +0000 (UTC) Received: from pi.nmdps.net (pi.nmdps.net [IPv6:2a01:be00:10:201:0:80:0:1]) by mx1.freebsd.org (Postfix) with ESMTP id 851471008 for ; Mon, 16 Dec 2013 00:27:09 +0000 (UTC) Received: from pi.nmdps.net (localhost [127.0.0.1]) (Authenticated sender: krichy@cflinux.hu) by pi.nmdps.net (Postfix) with ESMTPSA id 35A2A136B; Mon, 16 Dec 2013 01:27:06 +0100 (CET) MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="=_2592f2ae183913a1079652aa01cc934b" Date: Mon, 16 Dec 2013 01:27:03 +0100 From: krichy@cflinux.hu To: delphij@delphij.net Subject: Re: Fwd: Re: Re: zfs deadlock In-Reply-To: References: <04fac9b4a2352d97a23470c9da5db029@cflinux.hu> Message-ID: X-Sender: krichy@cflinux.hu User-Agent: Roundcube Webmail/0.9.5 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 00:27:10 -0000 --=_2592f2ae183913a1079652aa01cc934b Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=UTF-8; format=flowed Dear devs, I've managed to fix my issue somehow, please review the attached patch. First, the traverse() call was made to conform to lock order described in kern/vfs_subr.c before vfs_busy(). Also, traverse() will return a locked vnode in the event of success, even when there are no mounted filesystems over the given vnode. And last a deadlock race between zfsctl_snapdir_lookup() and zfsctl_snapshot_inactive() is handled, which may need the most review, as that may be buggy, or implement new bugs. This applies to stable/10 right now. I am waiting on feedback. Regards, 2013-12-11 11:43 időpontban krichy@cflinux.hu ezt írta: > Dear devs, > > I have still have no success fixing these bugs, please help somehow. I > currently dont understand the recursive lock problem, how should it be > avoided. > > Thanks in advance, > > 2013-12-07 15:42 időpontban krichy@cflinux.hu ezt írta: >> Dear Xin, >> >> I dont know if you read the -fs list or not, but there is a possible >> bug in zfs snapshot handling, and unfortunately I cannot fix the >> problem, but at least I could reproduce it. >> Please have a look at it, and if I can help resolving it, i will. >> >> Regards, >> >> -------- Eredeti üzenet -------- >> Tárgy: Re: Re: zfs deadlock >> Dátum: 2013-12-07 14:38 >> Feladó: krichy@cflinux.hu >> Címzett: Steven Hartland >> Másolat: freebsd-fs@freebsd.org >> >> Dear Steven, >> >> A crash is very easily reproducible with the attached script, just >> make an empty dataset, make a snapshot of it, >> and run the script. >> In my virtual machine it crashed in a few seconds, producing the >> attached output. >> >> Regards, >> 2013-12-06 17:28 időpontban krichy@cflinux.hu ezt írta: >>> Dear Steven, >>> >>> using the previously provided scripts, the bug still appears. And I >>> got the attaches traces when the deadlock occured. >>> >>> It seems that one process is in zfs_mount(), while the other is in >>> zfs_unmount_snap(). Look for the 'zfs' and 'ls' commands. >>> >>> Hope it helps. 
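For readers who would rather see code than decode the base64 attachment below, the traverse() part of the change described at the top of this message boils down to the following sketch. It is an illustration of the approach only, not the attached patch: lock the covered vnode up front so a locked vnode is returned even when nothing is mounted over it, and busy each stacked mount before the covered vnode is dropped, matching the ordering documented in kern/vfs_subr.c. The vnode_t/vfs_t typedefs and vn_mountedvfs() come from the opensolaris compat headers; error handling is trimmed.

/*
 * Sketch of the traverse() idea (sys/cddl/compat/opensolaris/kern/
 * opensolaris_lookup.c); simplified, not the attached diff.
 */
static int
traverse_sketch(vnode_t **cvpp, int lktype)
{
	vnode_t *cvp = *cvpp;
	vnode_t *tvp;
	vfs_t *vfsp;
	int error;

	vn_lock(cvp, lktype);		/* hold the vnode lock from the start */

	for (;;) {
		vfsp = vn_mountedvfs(cvp);
		if (vfsp == NULL)
			break;		/* end of the mount chain */

		error = vfs_busy(vfsp, 0);	/* busy the mount first ... */
		vput(cvp);			/* ... then drop lock and ref */
		if (error)
			return (error);

		/* Descend into the root of the mounted file system. */
		error = VFS_ROOT(vfsp, lktype, &tvp);
		vfs_unbusy(vfsp);
		if (error)
			return (error);
		cvp = tvp;
	}

	*cvpp = cvp;			/* returned locked on success */
	return (0);
}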
>>> >>> Regards, >>> 2013-12-06 16:59 időpontban krichy@cflinux.hu ezt írta: >>>> So maybe the force flag is too strict. Under linux the snapshots >>>> remains mounted after a send. >>>> >>>> 2013-12-06 16:54 időpontban krichy@cflinux.hu ezt írta: >>>>> Dear Steven, >>>>> >>>>> Of course. But I got further now. You mentioned that is normal that >>>>> zfs send umounts snapshots. I dont know, but this indeed causes a >>>>> problem: >>>>> >>>>> It is also reproducible without zfs send. >>>>> 1. Have a large directory structure (just to make sure find runs >>>>> long >>>>> enough), make a snapshot of it. >>>>> # cd /mnt/pool/set/.zfs/snapshot/snap >>>>> # find . >>>>> >>>>> meanwhile, on another console >>>>> # umount -f /mnt/pool/set/.zfs/snapshot/snap >>>>> >>>>> will cause a panic, or such. >>>>> >>>>> So effectively a regular user on a system can cause a crash. >>>>> >>>>> Regards, >>>>> >>>>> 2013-12-06 16:50 időpontban Steven Hartland ezt írta: >>>>>> kernel compiled, installed and rebooted? >>>>>> ----- Original Message ----- From: >>>>>> To: >>>>>> Sent: Friday, December 06, 2013 12:17 PM >>>>>> Subject: Fwd: Re: zfs deadlock >>>>>> >>>>>> >>>>>>> Dear shm, >>>>>>> >>>>>>> I've applied r258294 on top fo releng/9.2, but my test seems to >>>>>>> trigger >>>>>>> the deadlock again. >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> -------- Eredeti üzenet -------- >>>>>>> Tárgy: Re: zfs deadlock >>>>>>> Dátum: 2013-12-06 13:17 >>>>>>> Feladó: krichy@cflinux.hu >>>>>>> Címzett: freebsd-fs@freebsd.org >>>>>>> >>>>>>> I've applied r258294 on top of releng/9.2, and using the attached >>>>>>> scripts parallel, the system got into a deadlock again. >>>>>>> >>>>>>> 2013-12-06 11:35 időpontban Steven Hartland ezt írta: >>>>>>>> Thats correct it unmounts the mounted snapshot. >>>>>>>> >>>>>>>> Regards >>>>>>>> Steve >>>>>>>> >>>>>>>> ----- Original Message ----- From: >>>>>>>> To: "Steven Hartland" >>>>>>>> Cc: >>>>>>>> Sent: Friday, December 06, 2013 8:50 AM >>>>>>>> Subject: Re: zfs deadlock >>>>>>>> >>>>>>>> >>>>>>>>> What is strange also, when a zfs send finishes, the paralell >>>>>>>>> running >>>>>>>>> find command issues errors: >>>>>>>>> >>>>>>>>> find: ./e/Chuje: No such file or directory >>>>>>>>> find: ./e/singe: No such file or directory >>>>>>>>> find: ./e/joree: No such file or directory >>>>>>>>> find: ./e/fore: No such file or directory >>>>>>>>> find: fts_read: No such file or directory >>>>>>>>> Fri Dec 6 09:46:04 CET 2013 2 >>>>>>>>> >>>>>>>>> Seems if the filesystem got unmounted meanwhile. But the script >>>>>>>>> is >>>>>>>>> changed its working directory to the snapshot dir. >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> >>>>>>>>> 2013-12-06 09:03 időpontban krichy@cflinux.hu ezt írta: >>>>>>>>>> Dear Steven, >>>>>>>>>> >>>>>>>>>> While I was playig with zfs, trying to reproduce the previous >>>>>>>>>> bug, >>>>>>>>>> accidentaly hit another one, which caused a trace I attached. >>>>>>>>>> >>>>>>>>>> The snapshot contains directories in 2 depth, which contain >>>>>>>>>> files. It >>>>>>>>>> was to simulate a vmail setup, with domain/user hierarchy. >>>>>>>>>> >>>>>>>>>> I hope it is useful for someone. >>>>>>>>>> >>>>>>>>>> I used the attached two scripts to reproduce the ZFS bug. >>>>>>>>>> >>>>>>>>>> It definetly crashes the system, in the last 10 minutes it is >>>>>>>>>> the 3rd >>>>>>>>>> time. >>>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> 2013-12-05 20:26 időpontban krichy@cflinux.hu ezt írta: >>>>>>>>>>> Dear Steven, >>>>>>>>>>> >>>>>>>>>>> Thanks for your reply. 
Do you know how to reproduce the bug? >>>>>>>>>>> Because >>>>>>>>>>> simply sending a snapshot which is mounted does not >>>>>>>>>>> automatically >>>>>>>>>>> trigger the deadlock. Some special cases needed, or what? >>>>>>>>>>> How to prove that the patch fixes this? >>>>>>>>>>> >>>>>>>>>>> Regards, >>>>>>>>>>> 2013-12-05 19:39 időpontban Steven Hartland ezt írta: >>>>>>>>>>>> Known issue you want: >>>>>>>>>>>> http://svnweb.freebsd.org/changeset/base/258595 >>>>>>>>>>>> >>>>>>>>>>>> Regards >>>>>>>>>>>> Steve >>>>>>>>>>>> >>>>>>>>>>>> ----- Original Message ----- From: "Richard Kojedzinszky" >>>>>>>>>>>> >>>>>>>>>>>> To: >>>>>>>>>>>> Sent: Thursday, December 05, 2013 2:56 PM >>>>>>>>>>>> Subject: zfs deadlock >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Dear fs devs, >>>>>>>>>>>>> >>>>>>>>>>>>> We have a freenas server, which is basicaly a freebsd. I >>>>>>>>>>>>> was >>>>>>>>>>>>> trying to look at snapshots using ls .zfs/snapshot/. >>>>>>>>>>>>> >>>>>>>>>>>>> When I issued it, the system entered a deadlock. An NFSD >>>>>>>>>>>>> was >>>>>>>>>>>>> running, a zfs send was running when I issued the command. >>>>>>>>>>>>> >>>>>>>>>>>>> I attached to command outputs while the system was in a >>>>>>>>>>>>> deadlock >>>>>>>>>>>>> state. I tried to issue >>>>>>>>>>>>> # reboot -q >>>>>>>>>>>>> But that did not restart the system. After a while (5-10 >>>>>>>>>>>>> minutes) >>>>>>>>>>>>> the system rebooted, I dont know if the deadman caused >>>>>>>>>>>>> that. >>>>>>>>>>>>> >>>>>>>>>>>>> Now the system is up and running. >>>>>>>>>>>>> >>>>>>>>>>>>> It is basically a freebsd 9.2 kernel. >>>>>>>>>>>>> >>>>>>>>>>>>> Do someone has a clue? >>>>>>>>>>>>> >>>>>>>>>>>>> Kojedzinszky Richard >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -------------------------------------------------------------------------------- >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> freebsd-fs@freebsd.org mailing list >>>>>>>>>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>>>>>>>>>> To unsubscribe, send any mail to >>>>>>>>>>>>> "freebsd-fs-unsubscribe@freebsd.org" >>>>>>>>>>>> >>>>>>>>>>>> ================================================ >>>>>>>>>>>> This e.mail is private and confidential between Multiplay >>>>>>>>>>>> (UK) Ltd. >>>>>>>>>>>> and the person or entity to whom it is addressed. In the >>>>>>>>>>>> event of >>>>>>>>>>>> misdirection, the recipient is prohibited from using, >>>>>>>>>>>> copying, >>>>>>>>>>>> printing or otherwise disseminating it or any information >>>>>>>>>>>> contained >>>>>>>>>>>> in >>>>>>>>>>>> it. >>>>>>>>>>>> >>>>>>>>>>>> In the event of misdirection, illegible or incomplete >>>>>>>>>>>> transmission >>>>>>>>>>>> please telephone +44 845 868 1337 >>>>>>>>>>>> or return the E.mail to postmaster@multiplay.co.uk. >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ================================================ >>>>>>>> This e.mail is private and confidential between Multiplay (UK) >>>>>>>> Ltd. >>>>>>>> and the person or entity to whom it is addressed. In the event >>>>>>>> of >>>>>>>> misdirection, the recipient is prohibited from using, copying, >>>>>>>> printing or otherwise disseminating it or any information >>>>>>>> contained in >>>>>>>> it. >>>>>>>> >>>>>>>> In the event of misdirection, illegible or incomplete >>>>>>>> transmission >>>>>>>> please telephone +44 845 868 1337 >>>>>>>> or return the E.mail to postmaster@multiplay.co.uk. 
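As a concrete illustration of the userspace symptom quoted above (find reporting "fts_read: No such file or directory" once a snapshot is unmounted underneath it, whether by umount -f or by zfs send), a minimal fts(3) walker shows the same behaviour. The snapshot path is an example only; the program just demonstrates how a tree walk starts seeing ENOENT when the file system disappears beneath it. Run it against a mounted snapshot and force the unmount from another terminal while it is walking.

#include <sys/types.h>
#include <sys/stat.h>
#include <err.h>
#include <errno.h>
#include <fts.h>
#include <stdio.h>
#include <string.h>

/*
 * Walk a directory tree (e.g. a mounted .zfs/snapshot/<name>) and report
 * entries that disappear mid-walk, the same condition find(1) reports as
 * "fts_read: No such file or directory".
 */
int
main(int argc, char *argv[])
{
	char *def[] = { "/mnt/pool/set/.zfs/snapshot/snap", NULL };
	char **paths = (argc > 1) ? &argv[1] : def;
	FTS *fts;
	FTSENT *e;

	fts = fts_open(paths, FTS_PHYSICAL, NULL);
	if (fts == NULL)
		err(1, "fts_open");

	while ((e = fts_read(fts)) != NULL) {
		if (e->fts_info == FTS_NS || e->fts_info == FTS_DNR ||
		    e->fts_info == FTS_ERR)
			warnx("%s: %s", e->fts_path, strerror(e->fts_errno));
	}
	if (errno != 0)
		warn("fts_read");	/* e.g. ENOENT after the unmount */

	fts_close(fts);
	return (0);
}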
>>>>>> >>>>>> >>>>>> ================================================ >>>>>> This e.mail is private and confidential between Multiplay (UK) >>>>>> Ltd. >>>>>> and the person or entity to whom it is addressed. In the event of >>>>>> misdirection, the recipient is prohibited from using, copying, >>>>>> printing or otherwise disseminating it or any information >>>>>> contained in >>>>>> it. >>>>>> >>>>>> In the event of misdirection, illegible or incomplete transmission >>>>>> please telephone +44 845 868 1337 >>>>>> or return the E.mail to postmaster@multiplay.co.uk. --=_2592f2ae183913a1079652aa01cc934b Content-Transfer-Encoding: base64 Content-Type: text/x-diff; name=zfs-deadlock-1.patch Content-Disposition: attachment; filename=zfs-deadlock-1.patch; size=3563 ZGlmZiAtLWdpdCBhL3N5cy9jZGRsL2NvbXBhdC9vcGVuc29sYXJpcy9rZXJuL29wZW5zb2xhcmlz X2xvb2t1cC5jIGIvc3lzL2NkZGwvY29tcGF0L29wZW5zb2xhcmlzL2tlcm4vb3BlbnNvbGFyaXNf bG9va3VwLmMKaW5kZXggOTQzODNkNi4uMjI1NTIxYSAxMDA2NDQKLS0tIGEvc3lzL2NkZGwvY29t cGF0L29wZW5zb2xhcmlzL2tlcm4vb3BlbnNvbGFyaXNfbG9va3VwLmMKKysrIGIvc3lzL2NkZGwv Y29tcGF0L29wZW5zb2xhcmlzL2tlcm4vb3BlbnNvbGFyaXNfbG9va3VwLmMKQEAgLTgxLDYgKzgx LDggQEAgdHJhdmVyc2Uodm5vZGVfdCAqKmN2cHAsIGludCBsa3R5cGUpCiAJICogcHJvZ3Jlc3Mg b24gdGhpcyB2bm9kZS4KIAkgKi8KIAorCXZuX2xvY2soY3ZwLCBsa3R5cGUpOworCiAJZm9yICg7 OykgewogCQkvKgogCQkgKiBSZWFjaGVkIHRoZSBlbmQgb2YgdGhlIG1vdW50IGNoYWluPwpAQCAt ODksMTMgKzkxLDcgQEAgdHJhdmVyc2Uodm5vZGVfdCAqKmN2cHAsIGludCBsa3R5cGUpCiAJCWlm ICh2ZnNwID09IE5VTEwpCiAJCQlicmVhazsKIAkJZXJyb3IgPSB2ZnNfYnVzeSh2ZnNwLCAwKTsK LQkJLyoKLQkJICogdHZwIGlzIE5VTEwgZm9yICpjdnBwIHZub2RlLCB3aGljaCB3ZSBjYW4ndCB1 bmxvY2suCi0JCSAqLwotCQlpZiAodHZwICE9IE5VTEwpCi0JCQl2cHV0KGN2cCk7Ci0JCWVsc2UK LQkJCXZyZWxlKGN2cCk7CisJCXZwdXQoY3ZwKTsKIAkJaWYgKGVycm9yKQogCQkJcmV0dXJuIChl cnJvcik7CiAKZGlmZiAtLWdpdCBhL3N5cy9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2Nv bW1vbi9mcy9nZnMuYyBiL3N5cy9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9m cy9nZnMuYwppbmRleCA1OTk0NGExLi5jZTQzZmZmIDEwMDY0NAotLS0gYS9zeXMvY2RkbC9jb250 cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvZ2ZzLmMKKysrIGIvc3lzL2NkZGwvY29udHJp Yi9vcGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL2dmcy5jCkBAIC00NDgsNyArNDQ4LDcgQEAgZ2Zz X2xvb2t1cF9kb3Qodm5vZGVfdCAqKnZwcCwgdm5vZGVfdCAqZHZwLCB2bm9kZV90ICpwdnAsIGNv bnN0IGNoYXIgKm5tKQogCQkJVk5fSE9MRChwdnApOwogCQkJKnZwcCA9IHB2cDsKIAkJfQotCQl2 bl9sb2NrKCp2cHAsIExLX0VYQ0xVU0lWRSB8IExLX1JFVFJZKTsKKwkJdm5fbG9jaygqdnBwLCBM S19FWENMVVNJVkUgfCBMS19SRVRSWSB8IExLX0NBTlJFQ1VSU0UpOwogCQlyZXR1cm4gKDApOwog CX0KIApkaWZmIC0tZ2l0IGEvc3lzL2NkZGwvY29udHJpYi9vcGVuc29sYXJpcy91dHMvY29tbW9u L2ZzL3pmcy96ZnNfY3RsZGlyLmMgYi9zeXMvY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9j b21tb24vZnMvemZzL3pmc19jdGxkaXIuYwppbmRleCAyOGFiMWZhLi5iMzgyMGRjIDEwMDY0NAot LS0gYS9zeXMvY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZzL3pmc19j dGxkaXIuYworKysgYi9zeXMvY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMv emZzL3pmc19jdGxkaXIuYwpAQCAtMTAxMiw3ICsxMDEyLDE1IEBAIHpmc2N0bF9zbmFwZGlyX2xv b2t1cChhcCkKIAkJCS8qCiAJCQkgKiBUaGUgc25hcHNob3Qgd2FzIHVubW91bnRlZCBiZWhpbmQg b3VyIGJhY2tzLAogCQkJICogdHJ5IHRvIHJlbW91bnQgaXQuCisJCQkgKiBDb25jdXJyZW50IHpm c2N0bF9zbmFwc2hvdF9pbmFjdGl2ZSgpIHdvdWxkIHJlbW92ZSBvdXIgZW50cnkKKwkJCSAqIHNv IGRvIHRoaXMgb3Vyc2VsdmVzLCBhbmQgbWFrZSBhIGZyZXNoIG5ldyBtb3VudC4KIAkJCSAqLwor CQkJYXZsX3JlbW92ZSgmc2RwLT5zZF9zbmFwcywgc2VwKTsKKwkJCWttZW1fZnJlZShzZXAtPnNl X25hbWUsIHN0cmxlbihzZXAtPnNlX25hbWUpICsgMSk7CisJCQlrbWVtX2ZyZWUoc2VwLCBzaXpl b2YgKHpmc19zbmFwZW50cnlfdCkpOworCQkJdnB1dCgqdnBwKTsKKwkJCS8qIGZpbmQgbmV3IHBs YWNlIGZvciBzZXAgZW50cnkgKi8KKwkJCWF2bF9maW5kKCZzZHAtPnNkX3NuYXBzLCAmc2VhcmNo 
LCAmd2hlcmUpOwogCQkJVkVSSUZZKHpmc2N0bF9zbmFwc2hvdF96bmFtZShkdnAsIG5tLCBNQVhO QU1FTEVOLCBzbmFwbmFtZSkgPT0gMCk7CiAJCQlnb3RvIGRvbW91bnQ7CiAJCX0gZWxzZSB7CkBA IC0xMDI4LDYgKzEwMzYsNyBAQCB6ZnNjdGxfc25hcGRpcl9sb29rdXAoYXApCiAJCXJldHVybiAo ZXJyKTsKIAl9CiAKK2RvbW91bnQ6CiAJLyoKIAkgKiBUaGUgcmVxdWVzdGVkIHNuYXBzaG90IGlz IG5vdCBjdXJyZW50bHkgbW91bnRlZCwgbG9vayBpdCB1cC4KIAkgKi8KQEAgLTEwNjgsNyArMTA3 Nyw2IEBAIHpmc2N0bF9zbmFwZGlyX2xvb2t1cChhcCkKIAlhdmxfaW5zZXJ0KCZzZHAtPnNkX3Nu YXBzLCBzZXAsIHdoZXJlKTsKIAogCWRtdV9vYmpzZXRfcmVsZShzbmFwLCBGVEFHKTsKLWRvbW91 bnQ6CiAJbW91bnRwb2ludF9sZW4gPSBzdHJsZW4oZHZwLT52X3Zmc3AtPm1udF9zdGF0LmZfbW50 b25uYW1lKSArCiAJICAgIHN0cmxlbigiLyIgWkZTX0NUTERJUl9OQU1FICIvc25hcHNob3QvIikg KyBzdHJsZW4obm0pICsgMTsKIAltb3VudHBvaW50ID0ga21lbV9hbGxvYyhtb3VudHBvaW50X2xl biwgS01fU0xFRVApOwpAQCAtMTQ2MywxMSArMTQ3MSwxOCBAQCB6ZnNjdGxfc25hcHNob3RfaW5h Y3RpdmUoYXApCiAJemZzX3NuYXBlbnRyeV90ICpzZXAsICpuZXh0OwogCWludCBsb2NrZWQ7CiAJ dm5vZGVfdCAqZHZwOworCWdmc19kaXJfdCAqZHA7CiAKLQlpZiAodnAtPnZfY291bnQgPiAwKQot CQlnb3RvIGVuZDsKLQotCVZFUklGWShnZnNfZGlyX2xvb2t1cCh2cCwgIi4uIiwgJmR2cCwgY3Is IDAsIE5VTEwsIE5VTEwpID09IDApOworCS8qIFRoaXMgaXMgZm9yIGFjY2Vzc2luZyB0aGUgcmVh bCBwYXJlbnQgZGlyZWN0bHksIHdpdGhvdXQgYSBwb3NzaWJsZSBkZWFkbG9jaworCSAqIHdpdGgg emZzY3RsX3NuYXBkaXJfbG9va3VwKCkuIFRoZSByZWxlYXNlIG9mIGxvY2sgb24gdnAgYW5kIGxv Y2sgb24gZHZwIHByb3ZpZGVzCisJICogdGhlIHNhbWUgbG9jayBvcmRlciBhcyBpbiB6ZnNjdGxf c25hcHNob3RfbG9va3VwKCkuCisJICovCisJZHAgPSB2cC0+dl9kYXRhOworCWR2cCA9IGRwLT5n ZnNkX2ZpbGUuZ2ZzX3BhcmVudDsKKwlWTl9IT0xEKGR2cCk7CisJVk9QX1VOTE9DSyh2cCwgMCk7 CisJdm5fbG9jayhkdnAsIExLX1NIQVJFRCB8IExLX1JFVFJZIHwgTEtfQ0FOUkVDVVJTRSk7CisJ dm5fbG9jayh2cCwgTEtfRVhDTFVTSVZFIHwgTEtfUkVUUlkpOwogCXNkcCA9IGR2cC0+dl9kYXRh OwogCVZPUF9VTkxPQ0soZHZwLCAwKTsKIApAQCAtMTQ5NCw3ICsxNTA5LDYgQEAgemZzY3RsX3Nu YXBzaG90X2luYWN0aXZlKGFwKQogCQltdXRleF9leGl0KCZzZHAtPnNkX2xvY2spOwogCVZOX1JF TEUoZHZwKTsKIAotZW5kOgogCS8qCiAJICogRGlzcG9zZSBvZiB0aGUgdm5vZGUgZm9yIHRoZSBz bmFwc2hvdCBtb3VudCBwb2ludC4KIAkgKiBUaGlzIGlzIHNhZmUgdG8gZG8gYmVjYXVzZSBvbmNl IHRoaXMgZW50cnkgaGFzIGJlZW4gcmVtb3ZlZAo= --=_2592f2ae183913a1079652aa01cc934b-- From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 00:57:17 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 866082CD for ; Mon, 16 Dec 2013 00:57:17 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 3390D11B4 for ; Mon, 16 Dec 2013 00:57:16 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqIEAJxPrlKDaFve/2dsb2JhbABZg0JVgwO1YYExdIIlAQEFI1YbDgoCAg0FFAJZBogXDa8sl2IXgSmNBQkRARw0BxIMghAPMYFIBIlDkAOQZINIHoE1OQ X-IronPort-AV: E=Sophos;i="4.95,491,1384318800"; d="scan'208";a="78985798" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 15 Dec 2013 19:57:07 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id D72BAB416F; Sun, 15 Dec 2013 19:57:07 -0500 (EST) Date: Sun, 15 Dec 2013 19:57:07 -0500 (EST) From: Rick Macklem To: Jason Keltz Message-ID: <1442671360.31094174.1387155427873.JavaMail.root@uoguelph.ca> In-Reply-To: <52A7E53D.8000002@cse.yorku.ca> Subject: Re: mount ZFS snapshot on Linux system MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit 
X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Steve Dickson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 00:57:17 -0000 Jason Keltz wrote: > On 10/12/2013 7:21 PM, Rick Macklem wrote: > > Jason Keltz wrote: > >> I'm running FreeBSD 9.2 with various ZFS datasets. > >> I export a dataset to a Linux system (RHEL64), and mount it. It > >> works > >> fine... > >> When I try to access the ZFS snapshot directory on the Linux NFS > >> client, > >> things go weird. > >> > >> With NFSv4: > >> > >> [jas@archive /]# cd /mnt/.zfs/snapshot > >> [jas@archive snapshot]# ls > >> 20131203 20131205 20131206 20131207 20131208 20131209 > >> 20131210 > >> [jas@archive snapshot]# cd 20131210 > >> 20131210: Not a directory. > >> > >> huh? > >> > >> [jas@archive snapshot]# ls -al > >> total 77 > >> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> drwxr-xr-x 380 root root 380 Dec 2 15:56 20131203 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 > >> [jas@archive snapshot]# stat * > >> [jas@archive snapshot]# ls -al > >> total 292 > >> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >> -rw-r--r-- 1 uax guest 865 Jul 31 2009 20131205 > >> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131206 > >> -rw-r--r-- 1 uax guest 771 Jul 31 2009 20131207 > >> -rw-r--r-- 1 uax guest 778 Jul 31 2009 20131208 > >> -rw-r--r-- 1 uax guest 5281 Jul 31 2009 20131209 > >> -rw------- 1 btx faculty 893 Jul 13 20:21 20131210 > >> > >> But it gets even more fun.. > >> Just to let everyone know, Jason sent me a packet capture and it does appear that the FreeBSD NFSv4 server generates bogus attributes (the ones listed just above) in a Readdir reply when the .zfs/snapshot directory is read. I have sent him a simple patch which makes the server use VOP_LOOKUP() unconditionally instead of switching from VFS_VGET() to VOP_LOOKUP() upon a EOPNOTSUPP reply from VFS_VGET(). { It seems that zfs_vget() returns vnodes which VOP_GETATTR() gets the bogus attributes from. } Hopefully he will be able to test the patch, but I'm not sure at this point. I still don't know if VOP_LOOKUP() will return a vnode with v_mountedhere != NULL when it does a lookup of a snapshot in .zfs/snapshot, but I should find out the answer to that if/when he tests the patch. If someone else knows what zfs_lookup() will return when doing a lookup of a snapshot in .zfs/snapshot or is willing to test the patch to find out, please email. rick > >> # ls -ali > >> total 205 > >> 2 dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> 1 dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. 
> >> 863 -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 > >> > >> This is not a user id mapping issue because all the files in /mnt > >> have > >> the proper owner/groups, and I can access them there fine. > >> > >> I also tried explicitly exporting .zfs/snapshot. The result isn't > >> any > >> different. > >> > >> If I use nfs v3 it "works", but I'm seeing a whole lot of errors > >> like > >> these in syslog: > >> > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131203: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131209: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131210: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131207: Invalid argument > >> > >> It's not clear to me why this doesn't just "work". > >> > >> Can anyone provide any advice on debugging this? > >> > > As I think you already know, I know nothing about ZFS and never > > use it. > Yup! :) > > Having said that, I suspect that there are filenos (i-node #s) > > that are the same in the snapshot as in the parent file system > > tree. > > > > The basic assumptions are: > > - within a file system, all i-node# are unique (represent one file > > object only) and all file objects have the same fsid > > - when the fsid changes, that indicates a file system boundary and > > fileno (i-node#s) can be reused in the subtree with a different > > fsid > > > > For NFSv3, the server should export single volumes only (all > > objects > > have the same fsid and the filenos are unique). This is indicated > > to > > the VFS by the use of the NOCROSSMOUNT flag on VOP_LOOKUP() and > > friends. > > > > For NFSv4, the server does export multiple volumes and the boundary > > is indicated by a change in fsid value. > > > > I suspect ZFS snaphots don't obey the above in some way, but that > > is > > just a hunch. > > > > Now, how to narrow this down... > > - Do the above tests (both NFSv4 and NFSv3) and capture the > > packets, > > then look at them in wireshark. In particular, look at the > > fileid numbers > > and fsid values for the various directories under .zfs. > > I gave this a shot, but I haven't used wireshark to capture NFS > traffic > before, so if I need to provide additional details, let me know.. > > NFSv4: > > For /mnt/.zfs/snapshot/20131203: > fileid=4 > fsid4.major=1446349656 > fsid4.minor=222 > > For /mnt/.zfs/snapshot/20131205: > fileid=4 > fsid4.major=1845998066 > fsid4.minor=222 > > For /mnt/jas: > fileid=144 > fsid4.major=597946950 > fsid4.minor=222 > > For /mnt/jas1: > fileid=338 > fsid4.major=597946950 > fsid4.minor=222 > > So fsid is the same for all the different "data" directories, which > is > what I would expect given what you said. I guess each snapshot is > seen > as a unique filesystem... but then a repeating inode in different > filesystems shouldn't be a problem... 
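Rick's suggested server-side change quoted earlier in this message, using VOP_LOOKUP() unconditionally instead of falling back to it from VFS_VGET() on EOPNOTSUPP when filling in readdir attributes, amounts to something like the sketch below. The function and argument names are illustrative rather than the actual nfsd code; only VFS_VGET(), EOPNOTSUPP and VOP_LOOKUP() come from the discussion.

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/lock.h>
#include <sys/mount.h>
#include <sys/namei.h>
#include <sys/vnode.h>

/*
 * Illustrative sketch (not the nfsd patch): resolve a directory entry to
 * a vnode for attribute fetching.  The old behaviour tries VFS_VGET()
 * first and falls back to VOP_LOOKUP() only on EOPNOTSUPP; the suggested
 * change is to pass use_vget == 0 for ZFS so the lookup path is taken
 * unconditionally, avoiding the bogus attributes reported by vnodes that
 * zfs_vget() hands back for snapshot entries.
 */
static int
readdir_entry_vnode(struct vnode *dvp, struct mount *mp, ino_t fileno,
    struct componentname *cnp, int use_vget, struct vnode **vpp)
{
	int error;

	if (use_vget) {
		error = VFS_VGET(mp, fileno, LK_EXCLUSIVE, vpp);
		if (error != EOPNOTSUPP)
			return (error);
		/* File system cannot map fileno to a vnode; fall back. */
	}

	/*
	 * Name lookup relative to the directory being read.  For an entry
	 * under .zfs/snapshot this should return a vnode with sane
	 * attributes (and, once the snapshot is mounted, v_mountedhere).
	 */
	vn_lock(dvp, LK_EXCLUSIVE | LK_RETRY);
	error = VOP_LOOKUP(dvp, vpp, cnp);
	VOP_UNLOCK(dvp, 0);
	return (error);
}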
> > NFSv3: > > For /mnt/.zfs/snapshot/20131203: > fileid=4 > fsid=0x0000000056358b58 > > For /mnt/.zfs/snapshot/20131205: > fileid=4 > fsid=0x000000006e07b1f2 > > For /mnt/jas > fileid=144 > fsid=0x0000000023a3f246 > > For /mnt/jas1: > fileid=338 > fsid=0x0000000023a3f246 > > Here, it seems it's the same, even though it's NFSv3... hmm. > > > > - Try mounting the individual snapshot directory, like > > .zfs/snapshot/20131209 and see if that works (for both NFSv3 > > and NFSv4). > > Hmm .. I tried this: > > /local/backup/home9/.zfs/snapshot/20131203 -ro > archive-mrpriv.cs.yorku.ca > V4: / > > ... but syslog reports: > > Dec 10 22:28:22 jungle mountd[85405]: can't export > /local/backup/home9/.zfs/snapshot/20131203 > > ... and of course I can't mount from either v3/v4. > > On the other hand, I kept it as: > > /local/backup/home9 -ro archive-mrpriv.cs.yorku.ca > V4:/ > > ... and was able to NFSv4 mount > /local/backup/home9/.zfs/snapshot/20131203, and this does indeed > work. > > > - Try doing the mounts with a FreeBSD client and see if you get the > > same > > behaviour? > I found this: > http://forums.freenas.org/threads/mounting-snapshot-directory-using-nfs-from-linux-broken.6060/ > .. implies it will work from FreeBSD/Nexenta, just not Linux. > Found this as well: > https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/lKyfYsjPMNM > > Jason. > > From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 09:38:20 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 090D4FD6 for ; Mon, 16 Dec 2013 09:38:20 +0000 (UTC) Received: from mx.tetcu.info (mx.tetcu.info [217.19.15.179]) by mx1.freebsd.org (Postfix) with ESMTP id 7A1771B20 for ; Mon, 16 Dec 2013 09:38:19 +0000 (UTC) Received: from F2 (f1e.forteasig.com [81.181.146.226]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx.tetcu.info (Postfix) with ESMTPSA id 83F103A2D9E for ; Mon, 16 Dec 2013 11:38:18 +0200 (EET) Date: Mon, 16 Dec 2013 11:38:18 +0200 From: Ion-Mihai Tetcu To: freebsd-fs@freebsd.org Subject: GTP ZFS boot failed after upgrading to 9.2-STABLE (can't read MOS) Message-Id: <20131216113818.b108196769e1fd1dd3b7e67d@FreeBSD.org> X-Mailer: Sylpheed 3.3.0 (GTK+ 2.10.14; i686-pc-mingw32) Mime-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 09:38:20 -0000 Hi, After upgrading to 9.2-STABLE #8 r259396: Sun Dec 15 01:20:44 EET 2013 GENERIC amd64 the system in question didn't come up: ZFS: i/o error - all block copies unavailable ZFS: can't read MOS of pool zroot gptszfsboot: failed to mount default pool zroot GPTZFSBoot setup like in the wiki, except I didn't bothered with gnop. At the time of the upgrade the boot disk was ada0. Luckly I can boot without problems from any of the other 2 disks. Pool history: 1. started as a mirror over partitions on a 500GB HDD and a 1TB HDD (still present in the system - ada1 bellow, can boot from it) 2. the 500GB was replaced by ada0, the system was able to boot from the new disk; the pool extended via zfs online -e 3. 
ada1 (1TB) was replaced by ada2, the pool not extended yet --- At this point I could boot from any of the disks. 4. zfs scrub the mirror without any error 5. upgrade (svn up, buildworld, buildkernel, installkernel, mergemaster -p, installworld, mergemaster -iU, delete-old-libs, update ports .ko modules, reboot) 6. The error above. 7. Boot from ada2, gpart bootcode ... ada0, still the same error. I have snapshots of the pool from before the upgrade, so I could try to rollback and see if it makes any difference. (The machine is in production so I can't do it on the spot). # zpool list -v NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT zroot 896G 608G 288G 67% 1.00x ONLINE - mirror 896G 608G 288G 901G gpt/z_ES3_2T - - - - gpt/z_wd2T - - - - # gpart show -l -p => 34 3907029101 ada0 GPT (1.8T) 34 6 - free - (3.0k) 40 216 ada0p1 boot_wd2T (108k) 256 67108864 ada0p2 swap_wd2T (32G) 67109120 3774873600 ada0p3 z_wd2T (1.8T) 3841982720 65046415 - free - (31G) => 34 1953525101 ada1 GPT (931G) 34 6 - free - (3.0k) 40 216 ada1p1 boot1 (108k) 256 67108864 ada1p2 swap1 (32G) 67109120 1885339648 ada1p3 disk1 (899G) 1952448768 1076367 - free - (525M) => 34 3907029101 ada2 GPT (1.8T) 34 216 ada2p1 boot_ES3_2T (108k) 250 67108864 ada2p2 swap_ES3_2T (32G) 67109114 3837788160 ada2p3 z_ES3_2T (1.8T) 3904897274 2131861 - free - (1.0G) # zfs list -t snapshot | wc -l 16025 /dev/ada0: Device Model: WDC WD2000F9YZ-09N20L0 Serial Number: WD-WCC1P0590651 LU WWN Device Id: 5 0014ee 25e1b7330 Firmware Version: 01.01A01 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical /dev/ada1: Model Family: Western Digital RE4 Device Model: WDC WD1003FBYX-01Y7B1 Serial Number: WD-WCAW35154447 LU WWN Device Id: 5 0014ee 2b2a4da9a Firmware Version: 01.01V02 User Capacity: 1,000,204,886,016 bytes [1.00 TB] /dev/ada2: Model Family: Seagate Constellation ES.3 Device Model: ST2000NM0033-9ZM175 Serial Number: Z1X0W9SP LU WWN Device Id: 5 000c50 064bc4572 Firmware Version: SN03 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Similar to this, it seems: From: Łukasz Wąsikowski To: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: ZFS: can't read MOS of pool Date: Mon, 22 Jul 2013 18:18:49 +0200 Any help is greatly appreciated. 
-- Ion-Mihai Tetcu From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 10:08:38 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4845872A for ; Mon, 16 Dec 2013 10:08:38 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id 07DDD1E51 for ; Mon, 16 Dec 2013 10:08:37 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id B3E365E0E; Mon, 16 Dec 2013 11:08:11 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id ADB555E0D for ; Mon, 16 Dec 2013 11:08:11 +0100 (CET) Date: Mon, 16 Dec 2013 11:08:11 +0100 (CET) From: krichy@tvnetwork.hu To: freebsd-fs@freebsd.org Subject: kern/184677 Message-ID: User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="1030603365-1718570176-1387188491=:7004" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 10:08:38 -0000 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. --1030603365-1718570176-1387188491=:7004 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII Dear devs, I've attached a patch, which makes the recursive lockmgr disappear, and makes the reported bug to disappear. I dont know if I followed any guidelines well, or not, but at least it works for me. Please some ZFS/FreeBSD fs expert review it, and fix it where it needed. But unfortunately, my original problem is still not solved, maybe the same as Ryan's: http://lists.freebsd.org/pipermail/freebsd-fs/2013-December/018707.html Tracing the problem down is that zfsctl_snapdir_lookup() tries to acquire spa_namespace_lock while when finishing a zfs send -R does a zfsdev_close(), and that also holds the same mutex. And this causes a deadlock scenario. I looked at illumos's code, and for some reason they use another mutex on zfsdev_close(), which therefore may not deadlock with zfsctl_snapdir_lookup(). But I am still investigating the problem. I would like to help making ZFS more stable on freebsd also with its whole functionality. I would be very thankful if some expert would give some advice, how to solve these bugs. PJD, Steven, Xin? Thanks in advance, Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. 
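The scenario described above, zfsctl_snapdir_lookup() waiting for spa_namespace_lock while the zfsdev_close() triggered at the end of zfs send -R already holds it and in turn waits for something the lookup path owns, is the classic ABBA lock-order inversion. The second resource is not spelled out in the message, so the following is only a minimal userspace analogy with pthreads, with mutex names chosen to mirror the two sides; it is not ZFS code. Built with cc abba.c -lpthread, neither thread ever finishes.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for spa_namespace_lock and the snapdir-side lock. */
static pthread_mutex_t namespace_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t snapdir_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the lookup side: snapdir lock first, then the namespace lock. */
static void *
lookup_side(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&snapdir_lock);
	sleep(1);				/* widen the race window */
	pthread_mutex_lock(&namespace_lock);	/* blocks forever */
	puts("lookup finished");		/* never reached */
	pthread_mutex_unlock(&namespace_lock);
	pthread_mutex_unlock(&snapdir_lock);
	return (NULL);
}

/* Models the close side: namespace lock first, then the snapdir lock. */
static void *
close_side(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&namespace_lock);
	sleep(1);
	pthread_mutex_lock(&snapdir_lock);	/* blocks forever */
	puts("close finished");			/* never reached */
	pthread_mutex_unlock(&snapdir_lock);
	pthread_mutex_unlock(&namespace_lock);
	return (NULL);
}

int
main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, lookup_side, NULL);
	pthread_create(&b, NULL, close_side, NULL);
	pthread_join(a, NULL);			/* never returns: deadlock */
	pthread_join(b, NULL);
	return (0);
}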
--1030603365-1718570176-1387188491=:7004 Content-Type: TEXT/x-diff; name=184677.patch Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=184677.patch Y29tbWl0IDc5NjVlMDFmY2E3OTE4N2Q4MTViOGE4NjU3MGY1MDU0N2QwMGU1 MzENCkF1dGhvcjogUmljaGFyZCBLb2plZHppbnN6a3kgPGtyaWNoeUBjZmxp bnV4Lmh1Pg0KRGF0ZTogICBNb24gRGVjIDE2IDA5OjU5OjU3IDIwMTMgKzAx MDANCg0KICAgIFpGUyBsb2NrIG9yZGVyaW5nIGZpeA0KDQpkaWZmIC0tZ2l0 IGEvc3lzL2NkZGwvY29tcGF0L29wZW5zb2xhcmlzL2tlcm4vb3BlbnNvbGFy aXNfbG9va3VwLmMgYi9zeXMvY2RkbC9jb21wYXQvb3BlbnNvbGFyaXMva2Vy bi9vcGVuc29sYXJpc19sb29rdXAuYw0KaW5kZXggOTQzODNkNi4uMjI1NTIx YSAxMDA2NDQNCi0tLSBhL3N5cy9jZGRsL2NvbXBhdC9vcGVuc29sYXJpcy9r ZXJuL29wZW5zb2xhcmlzX2xvb2t1cC5jDQorKysgYi9zeXMvY2RkbC9jb21w YXQvb3BlbnNvbGFyaXMva2Vybi9vcGVuc29sYXJpc19sb29rdXAuYw0KQEAg LTgxLDYgKzgxLDggQEAgdHJhdmVyc2Uodm5vZGVfdCAqKmN2cHAsIGludCBs a3R5cGUpDQogCSAqIHByb2dyZXNzIG9uIHRoaXMgdm5vZGUuDQogCSAqLw0K IA0KKwl2bl9sb2NrKGN2cCwgbGt0eXBlKTsNCisNCiAJZm9yICg7Oykgew0K IAkJLyoNCiAJCSAqIFJlYWNoZWQgdGhlIGVuZCBvZiB0aGUgbW91bnQgY2hh aW4/DQpAQCAtODksMTMgKzkxLDcgQEAgdHJhdmVyc2Uodm5vZGVfdCAqKmN2 cHAsIGludCBsa3R5cGUpDQogCQlpZiAodmZzcCA9PSBOVUxMKQ0KIAkJCWJy ZWFrOw0KIAkJZXJyb3IgPSB2ZnNfYnVzeSh2ZnNwLCAwKTsNCi0JCS8qDQot CQkgKiB0dnAgaXMgTlVMTCBmb3IgKmN2cHAgdm5vZGUsIHdoaWNoIHdlIGNh bid0IHVubG9jay4NCi0JCSAqLw0KLQkJaWYgKHR2cCAhPSBOVUxMKQ0KLQkJ CXZwdXQoY3ZwKTsNCi0JCWVsc2UNCi0JCQl2cmVsZShjdnApOw0KKwkJdnB1 dChjdnApOw0KIAkJaWYgKGVycm9yKQ0KIAkJCXJldHVybiAoZXJyb3IpOw0K IA0KZGlmZiAtLWdpdCBhL3N5cy9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMv dXRzL2NvbW1vbi9mcy9nZnMuYyBiL3N5cy9jZGRsL2NvbnRyaWIvb3BlbnNv bGFyaXMvdXRzL2NvbW1vbi9mcy9nZnMuYw0KaW5kZXggNTk5NDRhMS4uY2U0 M2ZmZiAxMDA2NDQNCi0tLSBhL3N5cy9jZGRsL2NvbnRyaWIvb3BlbnNvbGFy aXMvdXRzL2NvbW1vbi9mcy9nZnMuYw0KKysrIGIvc3lzL2NkZGwvY29udHJp Yi9vcGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL2dmcy5jDQpAQCAtNDQ4LDcg KzQ0OCw3IEBAIGdmc19sb29rdXBfZG90KHZub2RlX3QgKip2cHAsIHZub2Rl X3QgKmR2cCwgdm5vZGVfdCAqcHZwLCBjb25zdCBjaGFyICpubSkNCiAJCQlW Tl9IT0xEKHB2cCk7DQogCQkJKnZwcCA9IHB2cDsNCiAJCX0NCi0JCXZuX2xv Y2soKnZwcCwgTEtfRVhDTFVTSVZFIHwgTEtfUkVUUlkpOw0KKwkJdm5fbG9j aygqdnBwLCBMS19FWENMVVNJVkUgfCBMS19SRVRSWSB8IExLX0NBTlJFQ1VS U0UpOw0KIAkJcmV0dXJuICgwKTsNCiAJfQ0KIA0KZGlmZiAtLWdpdCBhL3N5 cy9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRzL2NvbW1vbi9mcy96ZnMv emZzX2N0bGRpci5jIGIvc3lzL2NkZGwvY29udHJpYi9vcGVuc29sYXJpcy91 dHMvY29tbW9uL2ZzL3pmcy96ZnNfY3RsZGlyLmMNCmluZGV4IDI4YWIxZmEu LjlhMDU5NzYgMTAwNjQ0DQotLS0gYS9zeXMvY2RkbC9jb250cmliL29wZW5z b2xhcmlzL3V0cy9jb21tb24vZnMvemZzL3pmc19jdGxkaXIuYw0KKysrIGIv c3lzL2NkZGwvY29udHJpYi9vcGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL3pm cy96ZnNfY3RsZGlyLmMNCkBAIC0xMTIsNiArMTEyLDMwIEBAIHNuYXBlbnRy eV9jb21wYXJlKGNvbnN0IHZvaWQgKmEsIGNvbnN0IHZvaWQgKmIpDQogCQly ZXR1cm4gKDApOw0KIH0NCiANCitzdGF0aWMgdm9pZA0KK3NuYXBkaXJfZW50 cnlfcmVtb3ZlX2ZyZWUoemZzY3RsX3NuYXBkaXJfdCAqc2RwLCB6ZnNfc25h cGVudHJ5X3QgKnNlcCkNCit7DQorCWF2bF9yZW1vdmUoJnNkcC0+c2Rfc25h cHMsIHNlcCk7DQorCWttZW1fZnJlZShzZXAtPnNlX25hbWUsIHN0cmxlbihz ZXAtPnNlX25hbWUpICsgMSk7DQorCWttZW1fZnJlZShzZXAsIHNpemVvZiAo emZzX3NuYXBlbnRyeV90KSk7DQorfQ0KKw0KK3N0YXRpYyB6ZnNjdGxfc25h cGRpcl90Kg0KK3NuYXBzaG90X2dldF9zbmFwZGlyKHZub2RlX3QgKnZwLCB2 bm9kZV90ICoqZHZwcCkNCit7DQorCWdmc19kaXJfdCAqZHAgPSB2cC0+dl9k YXRhOw0KKwkqZHZwcCA9IGRwLT5nZnNkX2ZpbGUuZ2ZzX3BhcmVudDsNCisJ emZzY3RsX3NuYXBkaXJfdCAqc2RwOw0KKw0KKwlWTl9IT0xEKCpkdnBwKTsN CisJVk9QX1VOTE9DSyh2cCwgMCk7DQorCXZuX2xvY2soKmR2cHAsIExLX1NI QVJFRCB8IExLX1JFVFJZIHwgTEtfQ0FOUkVDVVJTRSk7DQorCXNkcCA9ICgq ZHZwcCktPnZfZGF0YTsNCisJVk9QX1VOTE9DSygqZHZwcCwgMCk7DQorDQor 
CXJldHVybiAoc2RwKTsNCit9DQorDQogI2lmZGVmIHN1bg0KIHZub2Rlb3Bz X3QgKnpmc2N0bF9vcHNfcm9vdDsNCiB2bm9kZW9wc190ICp6ZnNjdGxfb3Bz X3NuYXBkaXI7DQpAQCAtMTAxMiw3ICsxMDM2LDEzIEBAIHpmc2N0bF9zbmFw ZGlyX2xvb2t1cChhcCkNCiAJCQkvKg0KIAkJCSAqIFRoZSBzbmFwc2hvdCB3 YXMgdW5tb3VudGVkIGJlaGluZCBvdXIgYmFja3MsDQogCQkJICogdHJ5IHRv IHJlbW91bnQgaXQuDQorCQkJICogQ29uY3VycmVudCB6ZnNjdGxfc25hcHNo b3RfaW5hY3RpdmUoKSB3b3VsZCByZW1vdmUgb3VyIGVudHJ5DQorCQkJICog c28gZG8gdGhpcyBvdXJzZWx2ZXMsIGFuZCBtYWtlIGEgZnJlc2ggbmV3IG1v dW50Lg0KIAkJCSAqLw0KKwkJCXNuYXBkaXJfZW50cnlfcmVtb3ZlX2ZyZWUo c2RwLCBzZXApOw0KKwkJCXZwdXQoKnZwcCk7DQorCQkJLyogZmluZCBuZXcg cGxhY2UgZm9yIHNlcCBlbnRyeSAqLw0KKwkJCWF2bF9maW5kKCZzZHAtPnNk X3NuYXBzLCAmc2VhcmNoLCAmd2hlcmUpOw0KIAkJCVZFUklGWSh6ZnNjdGxf c25hcHNob3Rfem5hbWUoZHZwLCBubSwgTUFYTkFNRUxFTiwgc25hcG5hbWUp ID09IDApOw0KIAkJCWdvdG8gZG9tb3VudDsNCiAJCX0gZWxzZSB7DQpAQCAt MTAyOCw2ICsxMDU4LDcgQEAgemZzY3RsX3NuYXBkaXJfbG9va3VwKGFwKQ0K IAkJcmV0dXJuIChlcnIpOw0KIAl9DQogDQorZG9tb3VudDoNCiAJLyoNCiAJ ICogVGhlIHJlcXVlc3RlZCBzbmFwc2hvdCBpcyBub3QgY3VycmVudGx5IG1v dW50ZWQsIGxvb2sgaXQgdXAuDQogCSAqLw0KQEAgLTEwNjgsNyArMTA5OSw2 IEBAIHpmc2N0bF9zbmFwZGlyX2xvb2t1cChhcCkNCiAJYXZsX2luc2VydCgm c2RwLT5zZF9zbmFwcywgc2VwLCB3aGVyZSk7DQogDQogCWRtdV9vYmpzZXRf cmVsZShzbmFwLCBGVEFHKTsNCi1kb21vdW50Og0KIAltb3VudHBvaW50X2xl biA9IHN0cmxlbihkdnAtPnZfdmZzcC0+bW50X3N0YXQuZl9tbnRvbm5hbWUp ICsNCiAJICAgIHN0cmxlbigiLyIgWkZTX0NUTERJUl9OQU1FICIvc25hcHNo b3QvIikgKyBzdHJsZW4obm0pICsgMTsNCiAJbW91bnRwb2ludCA9IGttZW1f YWxsb2MobW91bnRwb2ludF9sZW4sIEtNX1NMRUVQKTsNCkBAIC0xMzUwLDkg KzEzODAsNyBAQCB6ZnNjdGxfc25hcGRpcl9pbmFjdGl2ZShhcCkNCiAJICov DQogCW11dGV4X2VudGVyKCZzZHAtPnNkX2xvY2spOw0KIAl3aGlsZSAoKHNl cCA9IGF2bF9maXJzdCgmc2RwLT5zZF9zbmFwcykpICE9IE5VTEwpIHsNCi0J CWF2bF9yZW1vdmUoJnNkcC0+c2Rfc25hcHMsIHNlcCk7DQotCQlrbWVtX2Zy ZWUoc2VwLT5zZV9uYW1lLCBzdHJsZW4oc2VwLT5zZV9uYW1lKSArIDEpOw0K LQkJa21lbV9mcmVlKHNlcCwgc2l6ZW9mICh6ZnNfc25hcGVudHJ5X3QpKTsN CisJCXNuYXBkaXJfZW50cnlfcmVtb3ZlX2ZyZWUoc2RwLCBzZXApOw0KIAl9 DQogCW11dGV4X2V4aXQoJnNkcC0+c2RfbG9jayk7DQogCWdmc19kaXJfaW5h Y3RpdmUodnApOw0KQEAgLTE0NjMsMTcgKzE0OTEsMTkgQEAgemZzY3RsX3Nu YXBzaG90X2luYWN0aXZlKGFwKQ0KIAl6ZnNfc25hcGVudHJ5X3QgKnNlcCwg Km5leHQ7DQogCWludCBsb2NrZWQ7DQogCXZub2RlX3QgKmR2cDsNCisJZ2Zz X2Rpcl90ICpkcDsNCiANCi0JaWYgKHZwLT52X2NvdW50ID4gMCkNCi0JCWdv dG8gZW5kOw0KLQ0KLQlWRVJJRlkoZ2ZzX2Rpcl9sb29rdXAodnAsICIuLiIs ICZkdnAsIGNyLCAwLCBOVUxMLCBOVUxMKSA9PSAwKTsNCi0Jc2RwID0gZHZw LT52X2RhdGE7DQotCVZPUF9VTkxPQ0soZHZwLCAwKTsNCisJLyogVGhpcyBp cyBmb3IgYWNjZXNzaW5nIHRoZSByZWFsIHBhcmVudCBkaXJlY3RseSwgd2l0 aG91dCBhIHBvc3NpYmxlIGRlYWRsb2NrDQorCSAqIHdpdGggemZzY3RsX3Nu YXBkaXJfbG9va3VwKCkuIFRoZSByZWxlYXNlIG9mIGxvY2sgb24gdnAgYW5k IGxvY2sgb24gZHZwIHByb3ZpZGVzDQorCSAqIHRoZSBzYW1lIGxvY2sgb3Jk ZXIgYXMgaW4gemZzY3RsX3NuYXBzaG90X2xvb2t1cCgpLg0KKwkgKi8NCisJ c2RwID0gc25hcHNob3RfZ2V0X3NuYXBkaXIodnAsICZkdnApOw0KIA0KIAlp ZiAoIShsb2NrZWQgPSBNVVRFWF9IRUxEKCZzZHAtPnNkX2xvY2spKSkNCiAJ CW11dGV4X2VudGVyKCZzZHAtPnNkX2xvY2spOw0KIA0KKwl2bl9sb2NrKHZw LCBMS19FWENMVVNJVkUgfCBMS19SRVRSWSk7DQorDQogCUFTU0VSVCghdm5f aXNtbnRwdCh2cCkpOw0KIA0KIAlzZXAgPSBhdmxfZmlyc3QoJnNkcC0+c2Rf c25hcHMpOw0KQEAgLTE0ODEsOSArMTUxMSw3IEBAIHpmc2N0bF9zbmFwc2hv dF9pbmFjdGl2ZShhcCkNCiAJCW5leHQgPSBBVkxfTkVYVCgmc2RwLT5zZF9z bmFwcywgc2VwKTsNCiANCiAJCWlmIChzZXAtPnNlX3Jvb3QgPT0gdnApIHsN Ci0JCQlhdmxfcmVtb3ZlKCZzZHAtPnNkX3NuYXBzLCBzZXApOw0KLQkJCWtt ZW1fZnJlZShzZXAtPnNlX25hbWUsIHN0cmxlbihzZXAtPnNlX25hbWUpICsg MSk7DQotCQkJa21lbV9mcmVlKHNlcCwgc2l6ZW9mICh6ZnNfc25hcGVudHJ5 X3QpKTsNCisJCQlzbmFwZGlyX2VudHJ5X3JlbW92ZV9mcmVlKHNkcCwgc2Vw KTsNCiAJCQlicmVhazsNCiAJCX0NCiAJCXNlcCA9IG5leHQ7DQpAQCAtMTQ5 
NCw3ICsxNTIyLDYgQEAgemZzY3RsX3NuYXBzaG90X2luYWN0aXZlKGFwKQ0K IAkJbXV0ZXhfZXhpdCgmc2RwLT5zZF9sb2NrKTsNCiAJVk5fUkVMRShkdnAp Ow0KIA0KLWVuZDoNCiAJLyoNCiAJICogRGlzcG9zZSBvZiB0aGUgdm5vZGUg Zm9yIHRoZSBzbmFwc2hvdCBtb3VudCBwb2ludC4NCiAJICogVGhpcyBpcyBz YWZlIHRvIGRvIGJlY2F1c2Ugb25jZSB0aGlzIGVudHJ5IGhhcyBiZWVuIHJl bW92ZWQNCkBAIC0xNTk1LDIwICsxNjIyLDE4IEBAIHpmc2N0bF9zbmFwc2hv dF9sb29rdXAoYXApDQogc3RhdGljIGludA0KIHpmc2N0bF9zbmFwc2hvdF92 cHRvY25wKHN0cnVjdCB2b3BfdnB0b2NucF9hcmdzICphcCkNCiB7DQotCXpm c3Zmc190ICp6ZnN2ZnMgPSBhcC0+YV92cC0+dl92ZnNwLT52ZnNfZGF0YTsN Ci0Jdm5vZGVfdCAqZHZwLCAqdnA7DQorCXZub2RlX3QgKnZwID0gYXAtPmFf dnA7DQorCXZub2RlX3QgKmR2cDsNCiAJemZzY3RsX3NuYXBkaXJfdCAqc2Rw Ow0KIAl6ZnNfc25hcGVudHJ5X3QgKnNlcDsNCiAJaW50IGVycm9yOw0KIA0K LQlBU1NFUlQoemZzdmZzLT56X2N0bGRpciAhPSBOVUxMKTsNCi0JZXJyb3Ig PSB6ZnNjdGxfcm9vdF9sb29rdXAoemZzdmZzLT56X2N0bGRpciwgInNuYXBz aG90IiwgJmR2cCwNCi0JICAgIE5VTEwsIDAsIE5VTEwsIGtjcmVkLCBOVUxM LCBOVUxMLCBOVUxMKTsNCi0JaWYgKGVycm9yICE9IDApDQotCQlyZXR1cm4g KGVycm9yKTsNCi0Jc2RwID0gZHZwLT52X2RhdGE7DQorCXNkcCA9IHNuYXBz aG90X2dldF9zbmFwZGlyKHZwLCAmZHZwKTsNCiANCiAJbXV0ZXhfZW50ZXIo JnNkcC0+c2RfbG9jayk7DQorDQorCXZuX2xvY2sodnAsIExLX0VYQ0xVU0lW RSB8IExLX1JFVFJZKTsNCisNCiAJc2VwID0gYXZsX2ZpcnN0KCZzZHAtPnNk X3NuYXBzKTsNCiAJd2hpbGUgKHNlcCAhPSBOVUxMKSB7DQogCQl2cCA9IHNl cC0+c2Vfcm9vdDsNCg== --1030603365-1718570176-1387188491=:7004-- From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 11:06:47 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6F297E9E for ; Mon, 16 Dec 2013 11:06:47 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 51017133A for ; Mon, 16 Dec 2013 11:06:47 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id rBGB6lUT019342 for ; Mon, 16 Dec 2013 11:06:47 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.7/8.14.7/Submit) id rBGB6kAD019340 for freebsd-fs@FreeBSD.org; Mon, 16 Dec 2013 11:06:46 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 16 Dec 2013 11:06:46 GMT Message-Id: <201312161106.rBGB6kAD019340@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 11:06:47 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182570 fs [zfs] [patch] ZFS panic in receive o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o 
kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o 
kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o 
kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o kern/121385 fs [unionfs] unionfs cross mount -> kernel panic o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. 
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 337 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 11:49:20 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E1D3A85D; Mon, 16 Dec 2013 11:49:19 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 1C0021828; Mon, 16 Dec 2013 11:49:18 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id NAA20891; Mon, 16 Dec 2013 13:49:16 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VsWfk-000CXm-It; Mon, 16 Dec 2013 13:49:16 +0200 Message-ID: <52AEE884.5000307@FreeBSD.org> Date: Mon, 16 Dec 2013 13:48:20 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Ion-Mihai Tetcu , freebsd-fs@FreeBSD.org Subject: Re: GTP ZFS boot failed after upgrading to 9.2-STABLE (can't read MOS) References: <20131216113818.b108196769e1fd1dd3b7e67d@FreeBSD.org> In-Reply-To: <20131216113818.b108196769e1fd1dd3b7e67d@FreeBSD.org> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 11:49:20 -0000 on 16/12/2013 11:38 Ion-Mihai Tetcu said the following: > Hi, > > > After upgrading to > 9.2-STABLE #8 r259396: Sun Dec 15 01:20:44 EET 2013 GENERIC amd64 > the system in question didn't come up: > ZFS: i/o error - all block copies unavailable > ZFS: can't read MOS of pool zroot > gptszfsboot: failed to mount default pool zroot > > GPTZFSBoot setup like in the wiki, except I didn't bothered with gnop. Could you please build zfsboottest utility in tools/tools/zfsboottest and then run it like this? 
zfsboottest /dev/gpt/z_ES3_2T /dev/gpt/z_wd2T - /boot/zfsloader Thanks! > At the time of the upgrade the boot disk was ada0. > Luckly I can boot without problems from any of the other 2 disks. > > Pool history: > 1. started as a mirror over partitions on a 500GB HDD and a 1TB HDD > (still present in the system - ada1 bellow, can boot from it) > 2. the 500GB was replaced by ada0, the system was able to boot from the > new disk; the pool extended via zfs online -e > 3. ada1 (1TB) was replaced by ada2, the pool not extended yet > --- At this point I could boot from any of the disks. > 4. zfs scrub the mirror without any error > 5. upgrade > (svn up, buildworld, buildkernel, installkernel, mergemaster -p, > installworld, mergemaster -iU, delete-old-libs, update ports .ko > modules, reboot) > 6. The error above. > 7. Boot from ada2, gpart bootcode ... ada0, still the same error. > > I have snapshots of the pool from before the upgrade, so I could try to > rollback and see if it makes any difference. (The machine is in > production so I can't do it on the spot). > > # zpool list -v > NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > zroot 896G 608G 288G 67% 1.00x ONLINE - > mirror 896G 608G 288G 901G > gpt/z_ES3_2T - - - - > gpt/z_wd2T - - - - > > > # gpart show -l -p > => 34 3907029101 ada0 GPT (1.8T) > 34 6 - free - (3.0k) > 40 216 ada0p1 boot_wd2T (108k) > 256 67108864 ada0p2 swap_wd2T (32G) > 67109120 3774873600 ada0p3 z_wd2T (1.8T) > 3841982720 65046415 - free - (31G) > > => 34 1953525101 ada1 GPT (931G) > 34 6 - free - (3.0k) > 40 216 ada1p1 boot1 (108k) > 256 67108864 ada1p2 swap1 (32G) > 67109120 1885339648 ada1p3 disk1 (899G) > 1952448768 1076367 - free - (525M) > > => 34 3907029101 ada2 GPT (1.8T) > 34 216 ada2p1 boot_ES3_2T (108k) > 250 67108864 ada2p2 swap_ES3_2T (32G) > 67109114 3837788160 ada2p3 z_ES3_2T (1.8T) > 3904897274 2131861 - free - (1.0G) > > > # zfs list -t snapshot | wc -l > 16025 > > /dev/ada0: > Device Model: WDC WD2000F9YZ-09N20L0 > Serial Number: WD-WCC1P0590651 > LU WWN Device Id: 5 0014ee 25e1b7330 > Firmware Version: 01.01A01 > User Capacity: 2,000,398,934,016 bytes [2.00 TB] > Sector Sizes: 512 bytes logical, 4096 bytes physical > /dev/ada1: > Model Family: Western Digital RE4 > Device Model: WDC WD1003FBYX-01Y7B1 > Serial Number: WD-WCAW35154447 > LU WWN Device Id: 5 0014ee 2b2a4da9a > Firmware Version: 01.01V02 > User Capacity: 1,000,204,886,016 bytes [1.00 TB] > /dev/ada2: > Model Family: Seagate Constellation ES.3 > Device Model: ST2000NM0033-9ZM175 > Serial Number: Z1X0W9SP > LU WWN Device Id: 5 000c50 064bc4572 > Firmware Version: SN03 > User Capacity: 2,000,398,934,016 bytes [2.00 TB] > > > Similar to this, it seems: > From: Łukasz Wąsikowski > To: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org > Subject: ZFS: can't read MOS of pool > Date: Mon, 22 Jul 2013 18:18:49 +0200 > > > Any help is greatly appreciated. 
> -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 11:55:48 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0BB50B35; Mon, 16 Dec 2013 11:55:48 +0000 (UTC) Received: from mx.tetcu.info (mx.tetcu.info [217.19.15.179]) by mx1.freebsd.org (Postfix) with ESMTP id B9D5718AD; Mon, 16 Dec 2013 11:55:47 +0000 (UTC) Received: from F2 (f1e.forteasig.com [81.181.146.226]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx.tetcu.info (Postfix) with ESMTPSA id 7A22739E068; Mon, 16 Dec 2013 13:55:46 +0200 (EET) Date: Mon, 16 Dec 2013 13:55:46 +0200 From: Ion-Mihai Tetcu To: Andriy Gapon Subject: Re: GTP ZFS boot failed after upgrading to 9.2-STABLE (can't read MOS) Message-Id: <20131216135546.7ceb65c5991344d32303b64b@FreeBSD.org> In-Reply-To: <52AEE884.5000307@FreeBSD.org> References: <20131216113818.b108196769e1fd1dd3b7e67d@FreeBSD.org> <52AEE884.5000307@FreeBSD.org> X-Mailer: Sylpheed 3.3.0 (GTK+ 2.10.14; i686-pc-mingw32) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 11:55:48 -0000 On Mon, 16 Dec 2013 13:48:20 +0200 Andriy Gapon wrote: > on 16/12/2013 11:38 Ion-Mihai Tetcu said the following: > > Hi, > > > > > > After upgrading to > > 9.2-STABLE #8 r259396: Sun Dec 15 01:20:44 EET 2013 GENERIC amd64 > > the system in question didn't come up: > > ZFS: i/o error - all block copies unavailable > > ZFS: can't read MOS of pool zroot > > gptszfsboot: failed to mount default pool zroot > > > > GPTZFSBoot setup like in the wiki, except I didn't bothered with > > gnop. > > Could you please build zfsboottest utility in tools/tools/zfsboottest > and then run it like this? 
> zfsboottest /dev/gpt/z_ES3_2T /dev/gpt/z_wd2T - /boot/zfsloader # /root/bin/zfsboottest /dev/gpt/z_ES3_2T /dev/gpt/z_wd2T - /boot/zfsloader pool: zroot bootfs: zroot/ROOT/default config: NAME STATE zroot ONLINE mirror ONLINE gpt/z_ES3_2T ONLINE gpt/z_wd2T ONLINE 809b79a8e78d637dddc618d992b37004 /boot/zfsloader -- Ion-Mihai Tetcu From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 12:21:36 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CE2BF3F3; Mon, 16 Dec 2013 12:21:36 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id E4F6E1AE8; Mon, 16 Dec 2013 12:21:35 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA21509; Mon, 16 Dec 2013 14:21:34 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VsXB0-000CZm-3K; Mon, 16 Dec 2013 14:21:34 +0200 Message-ID: <52AEF02A.6020108@FreeBSD.org> Date: Mon, 16 Dec 2013 14:20:58 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Ion-Mihai Tetcu Subject: Re: GTP ZFS boot failed after upgrading to 9.2-STABLE (can't read MOS) References: <20131216113818.b108196769e1fd1dd3b7e67d@FreeBSD.org> <52AEE884.5000307@FreeBSD.org> <20131216135546.7ceb65c5991344d32303b64b@FreeBSD.org> In-Reply-To: <20131216135546.7ceb65c5991344d32303b64b@FreeBSD.org> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 12:21:36 -0000 on 16/12/2013 13:55 Ion-Mihai Tetcu said the following: > On Mon, 16 Dec 2013 13:48:20 +0200 > Andriy Gapon wrote: > >> on 16/12/2013 11:38 Ion-Mihai Tetcu said the following: >>> Hi, >>> >>> >>> After upgrading to >>> 9.2-STABLE #8 r259396: Sun Dec 15 01:20:44 EET 2013 GENERIC amd64 >>> the system in question didn't come up: >>> ZFS: i/o error - all block copies unavailable >>> ZFS: can't read MOS of pool zroot >>> gptszfsboot: failed to mount default pool zroot >>> >>> GPTZFSBoot setup like in the wiki, except I didn't bothered with >>> gnop. >> >> Could you please build zfsboottest utility in tools/tools/zfsboottest >> and then run it like this? >> zfsboottest /dev/gpt/z_ES3_2T /dev/gpt/z_wd2T - /boot/zfsloader > > # /root/bin/zfsboottest /dev/gpt/z_ES3_2T /dev/gpt/z_wd2T - /boot/zfsloader > pool: zroot > bootfs: zroot/ROOT/default > config: > > NAME STATE > zroot ONLINE > mirror ONLINE > gpt/z_ES3_2T ONLINE > gpt/z_wd2T ONLINE > > 809b79a8e78d637dddc618d992b37004 /boot/zfsloader > > Okay, so ZFS boot code is able to read the pool in general. Could you please also do the following as well? 
zdb -l /dev/gpt/z_wd2T zdb -l /dev/gpt/z_ES3_2T zdb -dddd zroot 1 -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 14:23:27 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E5E956DA for ; Mon, 16 Dec 2013 14:23:27 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id A537E13D3 for ; Mon, 16 Dec 2013 14:23:27 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id D27D35FCD; Mon, 16 Dec 2013 15:23:06 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id D096F5FCC for ; Mon, 16 Dec 2013 15:23:06 +0100 (CET) Date: Mon, 16 Dec 2013 15:23:06 +0100 (CET) From: krichy@tvnetwork.hu To: freebsd-fs@freebsd.org Subject: Re: kern/184677 In-Reply-To: Message-ID: References: User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 14:23:28 -0000 It seems that pjd made a change that eliminated the zfsdev_state_lock on Fri Aug 12 07:04:16 2011 +0000, which might have introduced a new deadlock situation. Any comments on this? Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: > Date: Mon, 16 Dec 2013 11:08:11 +0100 (CET) > From: krichy@tvnetwork.hu > To: freebsd-fs@freebsd.org > Subject: kern/184677 > > Dear devs, > > I've attached a patch, which makes the recursive lockmgr disappear, and makes > the reported bug to disappear. I dont know if I followed any guidelines well, > or not, but at least it works for me. Please some ZFS/FreeBSD fs expert > review it, and fix it where it needed. > > But unfortunately, my original problem is still not solved, maybe the same as > Ryan's: > http://lists.freebsd.org/pipermail/freebsd-fs/2013-December/018707.html > > Tracing the problem down is that zfsctl_snapdir_lookup() tries to acquire > spa_namespace_lock while when finishing a zfs send -R does a zfsdev_close(), > and that also holds the same mutex. And this causes a deadlock scenario. I > looked at illumos's code, and for some reason they use another mutex on > zfsdev_close(), which therefore may not deadlock with > zfsctl_snapdir_lookup(). But I am still investigating the problem. > > I would like to help making ZFS more stable on freebsd also with its whole > functionality. I would be very thankful if some expert would give some > advice, how to solve these bugs. PJD, Steven, Xin? > > Thanks in advance, > > > Kojedzinszky Richard > Euronet Magyarorszag Informatikai Zrt.
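As a rough illustration of the interaction described in the quoted report, here is a sketch of the two paths; this is illustrative pseudocode only, not the actual zfs_ioctl.c or zfs_ctldir.c code, and mutex_enter()/mutex_exit() simply stand in for whatever primitives the real code uses:

    /* Thread A: zfsdev_close(), run when the 'zfs send -R' descriptor is closed */
    mutex_enter(&spa_namespace_lock);   /* A holds the namespace lock for close-time teardown */
    /* ... teardown work done while the lock is held ... */
    mutex_exit(&spa_namespace_lock);

    /* Thread B: zfsctl_snapdir_lookup(), entered with the snapdir vnode locked */
    mutex_enter(&spa_namespace_lock);   /* blocks for as long as A holds the lock */
    /* ... locate or mount the snapshot ... */
    mutex_exit(&spa_namespace_lock);

With the lookup path blocked here and, per the report, the close path unable to finish either, the system deadlocks; the illumos behaviour mentioned above, taking a different mutex in zfsdev_close(), keeps the close path off spa_namespace_lock entirely.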
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 14:37:01 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0D9B6BF9; Mon, 16 Dec 2013 14:37:01 +0000 (UTC) Received: from mx.tetcu.info (mx.tetcu.info [217.19.15.179]) by mx1.freebsd.org (Postfix) with ESMTP id 334FD14FB; Mon, 16 Dec 2013 14:36:59 +0000 (UTC) Received: from F2 (f1e.forteasig.com [81.181.146.226]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx.tetcu.info (Postfix) with ESMTPSA id E85BE3A2878; Mon, 16 Dec 2013 16:36:58 +0200 (EET) Date: Mon, 16 Dec 2013 16:36:59 +0200 From: Ion-Mihai Tetcu To: Andriy Gapon Subject: Re: GTP ZFS boot failed after upgrading to 9.2-STABLE (can't read MOS) Message-Id: <20131216163659.fdcd9cad35bb479f40122aeb@FreeBSD.org> In-Reply-To: <52AEF02A.6020108@FreeBSD.org> References: <20131216113818.b108196769e1fd1dd3b7e67d@FreeBSD.org> <52AEE884.5000307@FreeBSD.org> <20131216135546.7ceb65c5991344d32303b64b@FreeBSD.org> <52AEF02A.6020108@FreeBSD.org> X-Mailer: Sylpheed 3.3.0 (GTK+ 2.10.14; i686-pc-mingw32) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 14:37:01 -0000 On Mon, 16 Dec 2013 14:20:58 +0200 Andriy Gapon wrote: > on 16/12/2013 13:55 Ion-Mihai Tetcu said the following: > > On Mon, 16 Dec 2013 13:48:20 +0200 > > Andriy Gapon wrote: > > > >> on 16/12/2013 11:38 Ion-Mihai Tetcu said the following: > >>> Hi, > >>> > >>> > >>> After upgrading to > >>> 9.2-STABLE #8 r259396: Sun Dec 15 01:20:44 EET 2013 GENERIC amd64 > >>> the system in question didn't come up: > >>> ZFS: i/o error - all block copies unavailable > >>> ZFS: can't read MOS of pool zroot > >>> gptszfsboot: failed to mount default pool zroot > >>> > >>> GPTZFSBoot setup like in the wiki, except I didn't bothered with > >>> gnop. > >> > >> Could you please build zfsboottest utility in > >> tools/tools/zfsboottest and then run it like this? > >> zfsboottest /dev/gpt/z_ES3_2T /dev/gpt/z_wd2T - /boot/zfsloader > > > > # /root/bin/zfsboottest /dev/gpt/z_ES3_2T /dev/gpt/z_wd2T > > # - /boot/zfsloader > > pool: zroot > > bootfs: zroot/ROOT/default > > config: > > > > NAME STATE > > zroot ONLINE > > mirror ONLINE > > gpt/z_ES3_2T ONLINE > > gpt/z_wd2T ONLINE > > > > 809b79a8e78d637dddc618d992b37004 /boot/zfsloader > > > > > > Okay, so ZFS boot code is able to read the pool in general. > Could you please also do the following as well? 
> > zdb -l /dev/gpt/z_wd2T # zdb -l /dev/gpt/z_wd2T -------------------------------------------- LABEL 0 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: '' top_guid: 859717930543015389 guid: 15465645118630601473 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: -------------------------------------------- LABEL 1 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: 'f1.c.forteasig.com' top_guid: 859717930543015389 guid: 15465645118630601473 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: -------------------------------------------- LABEL 2 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: '' top_guid: 859717930543015389 guid: 15465645118630601473 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: -------------------------------------------- LABEL 3 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: 'f1.c.forteasig.com' top_guid: 859717930543015389 guid: 15465645118630601473 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: root@f1:/usr/src/tools/tools > zdb -l /dev/gpt/z_ES3_2T # zdb -l /dev/gpt/z_ES3_2T -------------------------------------------- LABEL 0 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: '' top_guid: 859717930543015389 guid: 12917042815582263639 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 
metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: -------------------------------------------- LABEL 1 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: 'f1.c.forteasig.com' top_guid: 859717930543015389 guid: 12917042815582263639 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: -------------------------------------------- LABEL 2 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: '' top_guid: 859717930543015389 guid: 12917042815582263639 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: -------------------------------------------- LABEL 3 -------------------------------------------- version: 5000 name: 'zroot' state: 0 txg: 321793 pool_guid: 6514863746620611513 hostid: 2982265512 hostname: 'f1.c.forteasig.com' top_guid: 859717930543015389 guid: 12917042815582263639 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 859717930543015389 metaslab_array: 33 metaslab_shift: 32 ashift: 9 asize: 965289181184 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12917042815582263639 path: '/dev/gpt/z_ES3_2T' phys_path: '/dev/gpt/z_ES3_2T' whole_disk: 1 DTL: 66201 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 15465645118630601473 path: '/dev/gpt/z_wd2T' phys_path: '/dev/gpt/z_wd2T' whole_disk: 1 DTL: 1032 create_txg: 4 features_for_read: > zdb -dddd zroot 1 # zdb -dddd zroot 1 Dataset mos [META], ID 0, cr_txg 4, 350M, 46112 objects, rootbp DVA[0]=<0:a868cb3200:200> DVA[1]=<0:5972804200:200> DVA[2]=<0:1709271a00:200> [L0 DMU objset] fletcher4 lzjb LE contiguous uni que triple size=800L/200P birth=340049L/340049P fill=46112 cksum=10f34cf819:6a938097d4b:156f1629bb5f4:2f0fcf1ec9220d Object lvl iblk dblk dsize lsize %full type 1 1 16K 16K 12.0K 32K 100.00 object directory dnode flags: USED_BYTES dnode maxblkid: 1 Fat ZAP stats: Pointer table: 1024 elements zt_blk: 0 zt_numblks: 0 zt_shift: 10 zt_blks_copied: 0 zt_nextblk: 0 ZAP entries: 15 Leaf blocks: 1 Total blocks: 2 zap_block_type: 0x8000000000000001 zap_magic: 0x2f52ab2ab zap_salt: 0xbf9403 Leafs with 2^n pointers: 9: 1 * Blocks with n*5 entries: 3: 1 * Blocks n/10 full: 2: 1 * Entries with n chunks: 3: 14 ************** 4: 
0 5: 0 6: 0 7: 0 8: 0 9: 1 * Buckets with n entries: 0: 497 **************************************** 1: 15 ** history = 32 scan = 2 2 0 3 293592 294507 293592 1386927167 1386933313 609892873216 609933173760 0 609933173760 0 1 3 0 0 0 0 0 0 0 0 pool_props = 158 root_dataset = 2 errlog_last = 0 errlog_scrub = 0 features_for_write = 29 config = 27 empty_bpobj = 43 sync_bplist = 31 free_bpobj = 11 feature_descriptions = 30 features_for_read = 28 creation_version = 5000 deflate = 1 -- Ion-Mihai Tetcu From owner-freebsd-fs@FreeBSD.ORG Mon Dec 16 15:52:36 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D8409903; Mon, 16 Dec 2013 15:52:36 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id 9A9631C90; Mon, 16 Dec 2013 15:52:36 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id 74237615C; Mon, 16 Dec 2013 16:52:16 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id 7243A615B; Mon, 16 Dec 2013 16:52:16 +0100 (CET) Date: Mon, 16 Dec 2013 16:52:16 +0100 (CET) From: krichy@tvnetwork.hu To: pjd@freebsd.org Subject: Re: kern/184677 (fwd) Message-ID: User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Dec 2013 15:52:36 -0000 Dear PJD, I am a happy FreeBSD user, I am sure you've read my previous posts regarding some issues in ZFS. Please give some advice for me, where to look for solutions, or how could I help to resolve those issues. Regards, Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. ---------- Forwarded message ---------- Date: Mon, 16 Dec 2013 15:23:06 +0100 (CET) From: krichy@tvnetwork.hu To: freebsd-fs@freebsd.org Subject: Re: kern/184677 Seems that pjd did a change which eliminated the zfsdev_state_lock on Fri Aug 12 07:04:16 2011 +0000, which might introduced a new deadlock situation. Any comments on this? Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: > Date: Mon, 16 Dec 2013 11:08:11 +0100 (CET) > From: krichy@tvnetwork.hu > To: freebsd-fs@freebsd.org > Subject: kern/184677 > > Dear devs, > > I've attached a patch, which makes the recursive lockmgr disappear, and makes > the reported bug to disappear. I dont know if I followed any guidelines well, > or not, but at least it works for me. Please some ZFS/FreeBSD fs expert > review it, and fix it where it needed. > > But unfortunately, my original problem is still not solved, maybe the same as > Ryan's: > http://lists.freebsd.org/pipermail/freebsd-fs/2013-December/018707.html > > Tracing the problem down is that zfsctl_snapdir_lookup() tries to acquire > spa_namespace_lock while when finishing a zfs send -R does a zfsdev_close(), > and that also holds the same mutex. And this causes a deadlock scenario. I > looked at illumos's code, and for some reason they use another mutex on > zfsdev_close(), which therefore may not deadlock with > zfsctl_snapdir_lookup(). But I am still investigating the problem. 
> > I would like to help making ZFS more stable on freebsd also with its whole > functionality. I would be very thankful if some expert would give some > advice, how to solve these bugs. PJD, Steven, Xin? > > Thanks in advance, > > > Kojedzinszky Richard > Euronet Magyarorszag Informatikai Zrt. From owner-freebsd-fs@FreeBSD.ORG Tue Dec 17 13:50:44 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AA987895; Tue, 17 Dec 2013 13:50:44 +0000 (UTC) Received: from krichy.tvnetwork.hu (krichy.tvnetwork.hu [109.61.101.194]) by mx1.freebsd.org (Postfix) with ESMTP id 653A3140F; Tue, 17 Dec 2013 13:50:43 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id 8F11E313C; Tue, 17 Dec 2013 14:50:16 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id 8D6EB313B; Tue, 17 Dec 2013 14:50:16 +0100 (CET) Date: Tue, 17 Dec 2013 14:50:16 +0100 (CET) From: krichy@tvnetwork.hu To: pjd@freebsd.org Subject: Re: kern/184677 (fwd) In-Reply-To: Message-ID: References: User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Dec 2013 13:50:44 -0000 Dear devs, I will sum up my experience regarding the issue: The symptom is that a concurrent 'zfs send -R' and some activity on the snapshot dir (or in the snapshot) may cause a deadlock. After investigating the problem, I found that zfs send unmounts the snapshots, and that causes the deadlock, so later I tested only with a concurrent umount and the "activity". Later I found that listing the snapshots in .zfs/snapshot/ and unmounting them can trigger the same deadlock, so I used that for the tests. But to my surprise, instead of a deadlock, a recursive lock panic arose. The vnode for the ".zfs/snapshot/" directory contains ZFS's zfsctl_snapdir_t structure (sdp). This contains a tree of mounted snapshots, and each entry (sep) contains the vnode on top of which the snapshot is mounted (se_root). The strange thing is that the se_root member does not hold a reference to the vnode, just a plain pointer to it. Upon entry lookup (zfsctl_snapdir_lookup()) the "snapshot" vnode is locked, then the zfsctl_snapdir_t's tree is locked and searched to see whether the mount already exists. If no entry is found, the lookup performs the mount. If an entry is found, its se_root member contains the vnode the snapshot is mounted on; a reference is taken on it, and the traverse() call resolves to the real root vnode of the mounted snapshot, returning it locked. (Examining the traverse() code I found that it does not follow FreeBSD's lock-order recommendation described in sys/kern/vfs_subr.c.) On the other path, when an umount is issued, the se_root vnode loses its last reference (as only the mountpoint holds one for it) and goes through the vinactive() path to zfsctl_snapshot_inactive(). In FreeBSD this is called with a locked vnode, so this sets up a deadlock race.
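A compressed sketch of those two paths, using the sdp/se_root naming from the description above; this is illustrative pseudocode, the helper functions are made up, and the real zfs_ctldir.c differs in detail:

    /* Path 1: zfsctl_snapdir_lookup() finding an already mounted snapshot */
    mutex_enter(&sdp->sd_lock);              /* lock the snapdir tree */
    sep = find_snapshot_entry(sdp, name);    /* made-up helper */
    VN_HOLD(sep->se_root);                   /* reference only, no vnode lock yet */
    traverse(&sep->se_root);                 /* wants the vnode lock on se_root */
    mutex_exit(&sdp->sd_lock);

    /* Path 2: zfsctl_snapshot_inactive(), called by the VFS with se_root */
    /* already locked once its last reference goes away */
    mutex_enter(&sdp->sd_lock);              /* wants the lock that path 1 holds */
    remove_snapshot_entry(sdp, sep);         /* made-up helper */
    mutex_exit(&sdp->sd_lock);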
While zfsctl_snapdir_lookup() holds the mutex for the sdp tree and traverse() tries to acquire se_root, zfsctl_snapshot_inactive() holds the lock on se_root while it tries to acquire the sdp lock. zfsctl_snapshot_inactive() has an if statement checking v_usecount, which is incremented in zfsctl_snapdir_lookup(), but in that context it is not covered by VI_LOCK. And it seems to me that FreeBSD's vinactive() path assumes that the vnode remains inactive (as opposed to illumos, at least as I read the code). So zfsctl_snapshot_inactive() must free resources while in a locked state. I was a bit confused, and that is probably why the previously posted patch is the way it is. If I had some clues about the right direction for this problem, I could work out a nicer, shorter solution. Could someone please comment on my post? Regards, Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: > Date: Mon, 16 Dec 2013 16:52:16 +0100 (CET) > From: krichy@tvnetwork.hu > To: pjd@freebsd.org > Cc: freebsd-fs@freebsd.org > Subject: Re: kern/184677 (fwd) > > Dear PJD, > > I am a happy FreeBSD user, I am sure you've read my previous posts regarding > some issues in ZFS. Please give some advice for me, where to look for > solutions, or how could I help to resolve those issues. > > Regards, > Kojedzinszky Richard > Euronet Magyarorszag Informatikai Zrt. > > ---------- Forwarded message ---------- > Date: Mon, 16 Dec 2013 15:23:06 +0100 (CET) > From: krichy@tvnetwork.hu > To: freebsd-fs@freebsd.org > Subject: Re: kern/184677 > > > Seems that pjd did a change which eliminated the zfsdev_state_lock on Fri Aug > 12 07:04:16 2011 +0000, which might introduced a new deadlock situation. Any > comments on this? > > > Kojedzinszky Richard > Euronet Magyarorszag Informatikai Zrt. > > On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: > >> Date: Mon, 16 Dec 2013 11:08:11 +0100 (CET) >> From: krichy@tvnetwork.hu >> To: freebsd-fs@freebsd.org >> Subject: kern/184677 >> >> Dear devs, >> >> I've attached a patch, which makes the recursive lockmgr disappear, and >> makes the reported bug to disappear. I dont know if I followed any >> guidelines well, or not, but at least it works for me. Please some >> ZFS/FreeBSD fs expert review it, and fix it where it needed. >> >> But unfortunately, my original problem is still not solved, maybe the same >> as Ryan's: >> http://lists.freebsd.org/pipermail/freebsd-fs/2013-December/018707.html >> >> Tracing the problem down is that zfsctl_snapdir_lookup() tries to acquire >> spa_namespace_lock while when finishing a zfs send -R does a >> zfsdev_close(), and that also holds the same mutex. And this causes a >> deadlock scenario. I looked at illumos's code, and for some reason they use >> another mutex on zfsdev_close(), which therefore may not deadlock with >> zfsctl_snapdir_lookup(). But I am still investigating the problem. >> >> I would like to help making ZFS more stable on freebsd also with its whole >> functionality. I would be very thankful if some expert would give some >> advice, how to solve these bugs. PJD, Steven, Xin? >> >> Thanks in advance, >> >> >> Kojedzinszky Richard >> Euronet Magyarorszag Informatikai Zrt.
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Wed Dec 18 09:19:23 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A0D1179C; Wed, 18 Dec 2013 09:19:23 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id A681E18ED; Wed, 18 Dec 2013 09:19:22 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA08730; Wed, 18 Dec 2013 11:19:20 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VtDHk-000LQk-7s; Wed, 18 Dec 2013 11:19:20 +0200 Message-ID: <52B16847.8090905@FreeBSD.org> Date: Wed, 18 Dec 2013 11:17:59 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: freebsd-fs Subject: namecache: numneg > 0 but ncneg is empty X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=X-VIET-VPS Content-Transfer-Encoding: 7bit Cc: Konstantin Belousov X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Dec 2013 09:19:23 -0000 I've been running a test that exercises vfs, fs and namecache code quite a lot and I have run into the following panic: #2 0xffffffff808e9b43 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:637 #3 0xffffffff80ce57dd in trap_fatal (frame=0xc, eva=18446744071578770679) at /usr/src/sys/amd64/amd64/trap.c:879 #4 0xffffffff80ce58a6 in trap_pfault (frame=0xffffff9de1875260, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:700 #5 0xffffffff80ce60d7 in trap (frame=0xffffff9de1875260) at /usr/src/sys/amd64/amd64/trap.c:463 #6 0xffffffff80cce853 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:232 #7 0xffffffff8097b46d in cache_zap (ncp=0x0) at /usr/src/sys/kern/vfs_cache.c:417 #8 0xffffffff8097c22f in cache_enter_time (dvp=0xfffffe031c7215f8, vp=0xfffffe0a684f05f8, cnp=0xffffff9de1875858, tsp=0x0, dtsp=0x0) at /usr/src/sys/kern/vfs_cache.c:902 #9 0xffffffff81b9b26c in zfs_lookup (dvp=0xfffffe031c7215f8, nm=0xffffff9de1875460 "5", vpp=0xffffff9de1875830, cnp=0xffffff9de1875858, nameiop=1, cr=0xfffffe0a8f937800, td=0xfffffe04a80c2490, flags=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:1555 #10 0xffffffff81b9b338 in zfs_freebsd_lookup (ap=0xffffff9de18755d0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:5946 #11 0xffffffff80d8f403 in VOP_CACHEDLOOKUP_APV (vop=0xffffffff81c26ee0, a=0xffffff9de18755d0) at vnode_if.c:193 #12 0xffffffff8097cef3 in vfs_cache_lookup (ap=) at vnode_if.h:80 #13 0xffffffff80d8f623 in VOP_LOOKUP_APV (vop=0xffffffff81c26ee0, a=0xffffff9de18756b0) at vnode_if.c:126 #14 0xffffffff80984bed in lookup (ndp=0xffffff9de18757f0) at vnode_if.h:54 #15 0xffffffff80985a43 in namei (ndp=0xffffff9de18757f0) at /usr/src/sys/kern/vfs_lookup.c:294 #16 0xffffffff809981a2 in 
kern_mkdirat (td=0xfffffe04a80c2490, fd=-100, path=0x801c19110
, segflg=UIO_USERSPACE, mode=511) at /usr/src/sys/kern/vfs_syscalls.c:3830 #17 0xffffffff809983f6 in kern_mkdir (td=, path=, segflg=, mode=) at /usr/src/sys/kern/vfs_syscalls.c:3810 #18 0xffffffff80998414 in sys_mkdir (td=, uap=) at /usr/src/sys/kern/vfs_syscalls.c:3789 (kgdb) fr 8 #8 0xffffffff8097c22f in cache_enter_time (dvp=0xfffffe031c7215f8, vp=0xfffffe0a684f05f8, cnp=0xffffff9de1875858, tsp=0x0, dtsp=0x0) at /usr/src/sys/kern/vfs_cache.c:902 902 cache_zap(ncp); (kgdb) list 897 zap = 1; 898 } 899 if (hold) 900 vhold(dvp); 901 if (zap) 902 cache_zap(ncp); 903 CACHE_WUNLOCK(); 904 } 905 906 /* (kgdb) i loc ncp = (struct namecache *) 0x0 n2 = (struct namecache *) 0xffffffff8178a740 ncpp = (struct nchashhead *) 0xffffff8ccde4e9b0 hash = flag = 0 hold = 1 zap = 1 len = (kgdb) p numneg $4 = 437 (kgdb) p ncp $7 = (struct namecache *) 0x0 (kgdb) p ncneg $8 = {tqh_first = 0x0, tqh_last = 0xffffffff8178a710} I am not sure that there is a bug in namecache, but if there is one, then the only suspicious place I could find is ".." handling in cache_enter_time(). -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Dec 18 10:48:36 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9A350CC0 for ; Wed, 18 Dec 2013 10:48:36 +0000 (UTC) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 466FE1145 for ; Wed, 18 Dec 2013 10:48:35 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id 27C60123793A; Wed, 18 Dec 2013 11:40:52 +0100 (CET) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000005, version=1.2.3 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MF-ACE0E1EA [pR: 6.8856] X-CRM114-CacheID: sfid-20131218_11404_FF782A04 X-CRM114-Status: Good ( pR: 6.8856 ) X-DSPAM-Result: Whitelisted X-DSPAM-Processed: Wed Dec 18 11:40:52 2013 X-DSPAM-Confidence: 0.6566 X-DSPAM-Probability: 0.0000 X-DSPAM-Signature: 52b17bb4853007850015789 X-DSPAM-Factors: 27, From*Attila Nagy , 0.00010, NULL, 0.00182, disks, 0.00376, ZFS, 0.00439, From*Attila, 0.00439, To*FreeBSD.org, 0.00670, Received*online.co.hu+[195.228.243.99]), 0.00873, Received*[195.228.243.99]), 0.00873, Received*online.co.hu, 0.00873, From*Attila+Nagy, 0.00873, Received*(japan.t, 0.00873, From*Nagy+; Wed, 18 Dec 2013 11:40:48 +0100 (CET) Message-ID: <52B17BB0.2070209@fsn.hu> Date: Wed, 18 Dec 2013 11:40:48 +0100 From: Attila Nagy MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org Subject: freeing up NULL-led space in tmpfs Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Dec 2013 10:48:36 -0000 Hi, ZFS with compression enabled has a great feature: if you write NULLs (\0) to an already existing file, it will make it sparse and free up the NULL-ed space. I regularly depend on this feature, but now I would need this on machines with no disks. Creating zpools on md is not too convenient; having this feature in tmpfs would be best. Any chance that somebody is working on it, or finds this a good candidate to work on?
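What is being asked for is, roughly, the write-side check that ZFS gets as a side effect of compression: when a block being written turns out to be entirely zero, release the backing storage instead of keeping it. A minimal sketch of such a check in plain C follows; the tmpfs_* names in the trailing comment are made-up placeholders, not existing FreeBSD functions:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* True when the whole block is zero, so the caller could release the
     * backing page instead of storing the data. */
    static bool
    block_is_all_zero(const char *buf, size_t len)
    {
            /* first byte is zero and every byte equals the one that follows it */
            return (len == 0 ||
                (buf[0] == 0 && memcmp(buf, buf + 1, len - 1) == 0));
    }

    /* hypothetical use in a write path:
     *     if (block_is_all_zero(buf, blksize))
     *             tmpfs_release_block(node, blkno);       // made-up name
     *     else
     *             tmpfs_store_block(node, blkno, buf);    // made-up name
     */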
Thanks, From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 00:57:11 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A4DD6907; Thu, 19 Dec 2013 00:57:11 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 2B9101877; Thu, 19 Dec 2013 00:57:10 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: X-IronPort-AV: E=Sophos;i="4.95,510,1384318800"; d="scan'208";a="80939377" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 18 Dec 2013 19:57:03 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id CE9FBB4051; Wed, 18 Dec 2013 19:57:03 -0500 (EST) Date: Wed, 18 Dec 2013 19:57:03 -0500 (EST) From: Rick Macklem To: Jason Keltz Message-ID: <461272120.32852470.1387414623832.JavaMail.root@uoguelph.ca> In-Reply-To: <52A7E53D.8000002@cse.yorku.ca> Subject: Re: mount ZFS snapshot on Linux system MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Steve Dickson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 00:57:11 -0000 Jason Keltz wrote: > On 10/12/2013 7:21 PM, Rick Macklem wrote: > > Jason Keltz wrote: > >> I'm running FreeBSD 9.2 with various ZFS datasets. > >> I export a dataset to a Linux system (RHEL64), and mount it. It > >> works > >> fine... > >> When I try to access the ZFS snapshot directory on the Linux NFS > >> client, > >> things go weird. > >> Ok, thanks to Jason's help testing, I've been chasing this down. (I also bumped into the comments in zfs_ctldir.c which are interesting. They include: * File systems mounted ontop of the GFS nodes '.zfs/snapshot/' * (ie: snapshots) are ZFS nodes and have their own unique vfs_t. * However, vnodes within these mounted on file systems have their v_vfsp * fields set to the head filesystem to make NFS happy (see * zfsctl_snapdir_lookup()). We VFS_HOLD the head filesystem's vfs_t * so that it cannot be freed until all snapshots have been unmounted. Is this comment from upstream code or a part of the FreeBSD port? The "make NFS happy" part seems questionable. It appears that it pretends that the automounts of the snapshots are a part of the same file system as .zfs/snapshot. The problem is that the i-node#s (or filenos, if you prefer) are duplicated (the root of each snapshot is 4, for example). This will cause a variety of problems for NFS clients, since filenos are assumed to refer to one and only one file object within a file system. I have a patch that I think does correctly return attributes to a Readdir etc to clients, so that NFSv4 clients see them as separate file systems (different fsids for each snapshot, and mounted_on_fileno != fileno for the snapshot fake mounts). 
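The invariant the clients depend on can be stated compactly: a file object is identified by the pair (fsid, fileno), so snapshot roots that all report fileno 4 are only distinguishable if each snapshot reports its own fsid. A small illustrative C fragment of that keying (a model of the assumption, not code from any NFS client or server):

    #include <stdbool.h>
    #include <stdint.h>

    struct file_key {
            uint64_t fsid;   /* filesystem identity reported by the server */
            uint64_t fileno; /* i-node number within that filesystem */
    };

    /* Two handles name the same object only if both fields match; if every
     * snapshot root shows fileno 4 under the same fsid, they all collapse
     * into one object from the client's point of view. */
    static bool
    same_object(const struct file_key *a, const struct file_key *b)
    {
            return (a->fsid == b->fsid && a->fileno == b->fileno);
    }

With the patch each snapshot reports its own fsid, so the repeated fileno 4 entries no longer alias each other.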
The patch also expands the cases where Readdirplus in the NFS server switches from VFS_VGET() to VOP_LOOKUP() to include Readdir of .zfs/snapshot, so it doesn't get attributes for the fake mounted on vnode. The current patch is at: http://people.freebsd.org/~rmacklem/nfsv4-zfs-snapshot.patch Now, I have no idea what to do with NFSv3. Since NFSv3 can't cross server mount points and expects a mount point to exhibit the fileno only represents one file object property, NFSv3 shouldn't "see" anything in the snapshot directories when .zfs/snapshot is mounted. (ie. .zfs/snapshot/20131209 would just be an empty dir.) To get the contents of .zfs/snapshot/20131209 it would have to mount .zfs/snapshot/20131209. I'm not exactly sure what actually happens, but it isn't the above. Any opinions on what is the correct handling of these for NFS? (Or people willing to test the patch.) Thanks, rick ps: Pawel, I've added you as a cc, since you did the original switch from VFS_VGET()->VOP_LOOKUP() patch. > >> With NFSv4: > >> > >> [jas@archive /]# cd /mnt/.zfs/snapshot > >> [jas@archive snapshot]# ls > >> 20131203 20131205 20131206 20131207 20131208 20131209 > >> 20131210 > >> [jas@archive snapshot]# cd 20131210 > >> 20131210: Not a directory. > >> > >> huh? > >> > >> [jas@archive snapshot]# ls -al > >> total 77 > >> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> drwxr-xr-x 380 root root 380 Dec 2 15:56 20131203 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 > >> [jas@archive snapshot]# stat * > >> [jas@archive snapshot]# ls -al > >> total 292 > >> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >> -rw-r--r-- 1 uax guest 865 Jul 31 2009 20131205 > >> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131206 > >> -rw-r--r-- 1 uax guest 771 Jul 31 2009 20131207 > >> -rw-r--r-- 1 uax guest 778 Jul 31 2009 20131208 > >> -rw-r--r-- 1 uax guest 5281 Jul 31 2009 20131209 > >> -rw------- 1 btx faculty 893 Jul 13 20:21 20131210 > >> > >> But it gets even more fun.. > >> > >> # ls -ali > >> total 205 > >> 2 dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> 1 dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> 863 -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 > >> > >> This is not a user id mapping issue because all the files in /mnt > >> have > >> the proper owner/groups, and I can access them there fine. > >> > >> I also tried explicitly exporting .zfs/snapshot. The result isn't > >> any > >> different. 
> >> > >> If I use nfs v3 it "works", but I'm seeing a whole lot of errors > >> like > >> these in syslog: > >> > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131203: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131209: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131210: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131207: Invalid argument > >> > >> It's not clear to me why this doesn't just "work". > >> > >> Can anyone provide any advice on debugging this? > >> > > As I think you already know, I know nothing about ZFS and never > > use it. > Yup! :) > > Having said that, I suspect that there are filenos (i-node #s) > > that are the same in the snapshot as in the parent file system > > tree. > > > > The basic assumptions are: > > - within a file system, all i-node# are unique (represent one file > > object only) and all file objects have the same fsid > > - when the fsid changes, that indicates a file system boundary and > > fileno (i-node#s) can be reused in the subtree with a different > > fsid > > > > For NFSv3, the server should export single volumes only (all > > objects > > have the same fsid and the filenos are unique). This is indicated > > to > > the VFS by the use of the NOCROSSMOUNT flag on VOP_LOOKUP() and > > friends. > > > > For NFSv4, the server does export multiple volumes and the boundary > > is indicated by a change in fsid value. > > > > I suspect ZFS snaphots don't obey the above in some way, but that > > is > > just a hunch. > > > > Now, how to narrow this down... > > - Do the above tests (both NFSv4 and NFSv3) and capture the > > packets, > > then look at them in wireshark. In particular, look at the > > fileid numbers > > and fsid values for the various directories under .zfs. > > I gave this a shot, but I haven't used wireshark to capture NFS > traffic > before, so if I need to provide additional details, let me know.. > > NFSv4: > > For /mnt/.zfs/snapshot/20131203: > fileid=4 > fsid4.major=1446349656 > fsid4.minor=222 > > For /mnt/.zfs/snapshot/20131205: > fileid=4 > fsid4.major=1845998066 > fsid4.minor=222 > > For /mnt/jas: > fileid=144 > fsid4.major=597946950 > fsid4.minor=222 > > For /mnt/jas1: > fileid=338 > fsid4.major=597946950 > fsid4.minor=222 > > So fsid is the same for all the different "data" directories, which > is > what I would expect given what you said. I guess each snapshot is > seen > as a unique filesystem... but then a repeating inode in different > filesystems shouldn't be a problem... > > NFSv3: > > For /mnt/.zfs/snapshot/20131203: > fileid=4 > fsid=0x0000000056358b58 > > For /mnt/.zfs/snapshot/20131205: > fileid=4 > fsid=0x000000006e07b1f2 > > For /mnt/jas > fileid=144 > fsid=0x0000000023a3f246 > > For /mnt/jas1: > fileid=338 > fsid=0x0000000023a3f246 > > Here, it seems it's the same, even though it's NFSv3... hmm. > > > > - Try mounting the individual snapshot directory, like > > .zfs/snapshot/20131209 and see if that works (for both NFSv3 > > and NFSv4). > > Hmm .. I tried this: > > /local/backup/home9/.zfs/snapshot/20131203 -ro > archive-mrpriv.cs.yorku.ca > V4: / > > ... but syslog reports: > > Dec 10 22:28:22 jungle mountd[85405]: can't export > /local/backup/home9/.zfs/snapshot/20131203 > > ... 
and of course I can't mount from either v3/v4. > > On the other hand, I kept it as: > > /local/backup/home9 -ro archive-mrpriv.cs.yorku.ca > V4:/ > > ... and was able to NFSv4 mount > /local/backup/home9/.zfs/snapshot/20131203, and this does indeed > work. > > > - Try doing the mounts with a FreeBSD client and see if you get the > > same > > behaviour? > I found this: > http://forums.freenas.org/threads/mounting-snapshot-directory-using-nfs-from-linux-broken.6060/ > .. implies it will work from FreeBSD/Nexenta, just not Linux. > Found this as well: > https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/lKyfYsjPMNM > > Jason. > > From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 07:03:56 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9C9418F; Thu, 19 Dec 2013 07:03:56 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 23D431168; Thu, 19 Dec 2013 07:03:55 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.7/8.14.7) with ESMTP id rBJ73p6c068540; Thu, 19 Dec 2013 09:03:51 +0200 (EET) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua rBJ73p6c068540 Received: (from kostik@localhost) by tom.home (8.14.7/8.14.7/Submit) id rBJ73odS068539; Thu, 19 Dec 2013 09:03:50 +0200 (EET) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Thu, 19 Dec 2013 09:03:50 +0200 From: Konstantin Belousov To: Andriy Gapon Subject: Re: namecache: numneg > 0 but ncneg is empty Message-ID: <20131219070350.GM59496@kib.kiev.ua> References: <52B16847.8090905@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="XAj4pGex3+LgE3R4" Content-Disposition: inline In-Reply-To: <52B16847.8090905@FreeBSD.org> User-Agent: Mutt/1.5.22 (2013-10-16) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on tom.home Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 07:03:56 -0000 --XAj4pGex3+LgE3R4 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Wed, Dec 18, 2013 at 11:17:59AM +0200, Andriy Gapon wrote: >=20 > I've been running a test that exercises vfs, fs and namecache code quite = a lot > and I have run into the following panic: >=20 > #2 0xffffffff808e9b43 in panic (fmt=3D) at > /usr/src/sys/kern/kern_shutdown.c:637 > #3 0xffffffff80ce57dd in trap_fatal (frame=3D0xc, eva=3D1844674407157877= 0679) at > /usr/src/sys/amd64/amd64/trap.c:879 > #4 0xffffffff80ce58a6 in trap_pfault (frame=3D0xffffff9de1875260, usermo= de=3D0) at > /usr/src/sys/amd64/amd64/trap.c:700 > #5 0xffffffff80ce60d7 in trap (frame=3D0xffffff9de1875260) at > /usr/src/sys/amd64/amd64/trap.c:463 > #6 0xffffffff80cce853 in calltrap () at /usr/src/sys/amd64/amd64/excepti= on.S:232 > #7 
0xffffffff8097b46d in cache_zap (ncp=0x0) at /usr/src/sys/kern/vfs_cache.c:417 > #8 0xffffffff8097c22f in cache_enter_time (dvp=0xfffffe031c7215f8, > vp=0xfffffe0a684f05f8, cnp=0xffffff9de1875858, tsp=0x0, dtsp=0x0) at > /usr/src/sys/kern/vfs_cache.c:902 > #9 0xffffffff81b9b26c in zfs_lookup (dvp=0xfffffe031c7215f8, > nm=0xffffff9de1875460 "5", vpp=0xffffff9de1875830, cnp=0xffffff9de1875858, > nameiop=1, cr=0xfffffe0a8f937800, td=0xfffffe04a80c2490, flags=0) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:1555 > #10 0xffffffff81b9b338 in zfs_freebsd_lookup (ap=0xffffff9de18755d0) at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:5946 > #11 0xffffffff80d8f403 in VOP_CACHEDLOOKUP_APV (vop=0xffffffff81c26ee0, > a=0xffffff9de18755d0) at vnode_if.c:193 > #12 0xffffffff8097cef3 in vfs_cache_lookup (ap=) at > vnode_if.h:80 > #13 0xffffffff80d8f623 in VOP_LOOKUP_APV (vop=0xffffffff81c26ee0, > a=0xffffff9de18756b0) at vnode_if.c:126 > #14 0xffffffff80984bed in lookup (ndp=0xffffff9de18757f0) at vnode_if.h:54 > #15 0xffffffff80985a43 in namei (ndp=0xffffff9de18757f0) at > /usr/src/sys/kern/vfs_lookup.c:294 > #16 0xffffffff809981a2 in kern_mkdirat (td=0xfffffe04a80c2490, fd=-100, > path=0x801c19110
, segflg=UIO_USERSPACE, > mode=511) at /usr/src/sys/kern/vfs_syscalls.c:3830 > #17 0xffffffff809983f6 in kern_mkdir (td=, path= optimized out>, segflg=, mode=) at > /usr/src/sys/kern/vfs_syscalls.c:3810 > #18 0xffffffff80998414 in sys_mkdir (td=, uap= optimized out>) at /usr/src/sys/kern/vfs_syscalls.c:3789 > > (kgdb) fr 8 > #8 0xffffffff8097c22f in cache_enter_time (dvp=0xfffffe031c7215f8, > vp=0xfffffe0a684f05f8, cnp=0xffffff9de1875858, tsp=0x0, dtsp=0x0) at > /usr/src/sys/kern/vfs_cache.c:902 > 902 cache_zap(ncp); > (kgdb) list > 897 zap = 1; > 898 } > 899 if (hold) > 900 vhold(dvp); > 901 if (zap) > 902 cache_zap(ncp); > 903 CACHE_WUNLOCK(); > 904 } > 905 > 906 /* > (kgdb) i loc > ncp = (struct namecache *) 0x0 > n2 = (struct namecache *) 0xffffffff8178a740 > ncpp = (struct nchashhead *) 0xffffff8ccde4e9b0 > hash = > flag = 0 > hold = 1 > zap = 1 > len = > > (kgdb) p numneg > $4 = 437 > (kgdb) p ncp > $7 = (struct namecache *) 0x0 > (kgdb) p ncneg > $8 = {tqh_first = 0x0, tqh_last = 0xffffffff8178a710} > > > I am not sure that there is a bug in namecache, but if there is one, then the > only suspicious place I could find is ".." handling in cache_enter_time(). > Do you mean that numneg accounting is wrong for the case when the existing ncp retargeted for dd ? This is the only issue I see there, but it looks as the real case for the failure. Testcase would be lot of lookups down the long directory hierarchy, and than walking back through the ".." entries. Even if the thing does not panic, the resulting length of the ncneg tailq should be strictly less than the numneg. diff --git a/sys/kern/vfs_cache.c b/sys/kern/vfs_cache.c index d46ba3d..33f5cce 100644 --- a/sys/kern/vfs_cache.c +++ b/sys/kern/vfs_cache.c @@ -748,16 +748,20 @@ cache_enter_time(dvp, vp, cnp, tsp, dtsp) ncp->nc_flag & NCF_ISDOTDOT) { KASSERT(ncp->nc_dvp == dvp, ("wrong isdotdot parent")); - if (ncp->nc_vp != NULL) + if (ncp->nc_vp != NULL) { TAILQ_REMOVE(&ncp->nc_vp->v_cache_dst, ncp, nc_dst); - else + } else { TAILQ_REMOVE(&ncneg, ncp, nc_dst); - if (vp != NULL) + numneg--; + } + if (vp != NULL) { TAILQ_INSERT_HEAD(&vp->v_cache_dst, ncp, nc_dst); - else + } else { TAILQ_INSERT_TAIL(&ncneg, ncp, nc_dst); + numneg++; + } ncp->nc_vp = vp; CACHE_WUNLOCK(); return; @@ -893,6 +897,8 @@ cache_enter_time(dvp, vp, cnp, tsp, dtsp) } if (numneg * ncnegfactor > numcache) { ncp = TAILQ_FIRST(&ncneg); + KASSERT(ncp->nc_vp == NULL, ("ncp %p vp %p on ncneg", + ncp, ncp->nc_vp)); zap = 1; } if (hold) --XAj4pGex3+LgE3R4 Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (FreeBSD) iQIcBAEBAgAGBQJSsppVAAoJEJDCuSvBvK1Bqk0P/R11cIkQ/5QwZNusRSpwXd7l Q511uJTqSLcu7OlbCRLD1vO5WhCGNbPA6QcjpXd0Ey2JnKoT4ZJfOewxnKpzuqMn Q+39X+WrJJLy7134muhXO+RiOZjf6p3fJeMdtSeMviLGwJUHpe/TKbz6DVq7zoJe w/0Hse2himY62sDu8dMECyzReLaG2g3E9ah4fI9cXZJafmQU2FaxiLpGbJzBwEoq ns65pySBmxZ9PQ6u02xfLZer6Ry0DYKbJ2Z65BgMdFiEKkgOiUYHezvYXXQKwRGC pZ3u913h/IPXsjHrIp6mPh0mLU5rq+PgYoHAj+WpHPaAXagUVV0j8xVq5K1C6jgY ygMt56wiNkVDnprvRsfS11LfJ0Jt2UOl0qeSCZAHGUDu+uRvylJkjtS66jgzvN8a 9l92rPnTFJ+Hp+Vl1vkPAG+Tkf5miuQ7Hwws6tlDJdgGVBBK/SLhBKwBrD0CudNi aXAiDCZ34Jg0fVpVoQaD594UrsMSfN6oMF1SY+gsb4LGgEkKp8pGmZpVZqCJz7Vx xI6YjkT1q39wf59WCfZ7/AeoTLBp/+NWTCLmB2JHApTi0o/6j91SeGNYQM4E3HdY jZzwqcLAsmXkiyCc92YtKDDv8fQOYyV0vPoc9nQLm9b43aK3EdK4nrEYxJw6P0EY N5Nhgi021G3v3Qn+YhZB =Yf3U -----END PGP SIGNATURE----- --XAj4pGex3+LgE3R4-- From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19
07:57:19 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E6CAC5B6 for ; Thu, 19 Dec 2013 07:57:19 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 04E5015F5 for ; Thu, 19 Dec 2013 07:57:18 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id JAA04275; Thu, 19 Dec 2013 09:57:07 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VtYTi-000PtQ-P7; Thu, 19 Dec 2013 09:57:06 +0200 Message-ID: <52B2A6AC.3070902@FreeBSD.org> Date: Thu, 19 Dec 2013 09:56:28 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Konstantin Belousov , peter@holm.cc Subject: Re: namecache: numneg > 0 but ncneg is empty References: <52B16847.8090905@FreeBSD.org> <20131219070350.GM59496@kib.kiev.ua> In-Reply-To: <20131219070350.GM59496@kib.kiev.ua> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 07:57:20 -0000 on 19/12/2013 09:03 Konstantin Belousov said the following: > On Wed, Dec 18, 2013 at 11:17:59AM +0200, Andriy Gapon wrote: >> >> I've been running a test that exercises vfs, fs and namecache code quite a lot >> and I have run into the following panic: [snip] >> (kgdb) fr 8 >> #8 0xffffffff8097c22f in cache_enter_time (dvp=0xfffffe031c7215f8, >> vp=0xfffffe0a684f05f8, cnp=0xffffff9de1875858, tsp=0x0, dtsp=0x0) at >> /usr/src/sys/kern/vfs_cache.c:902 >> 902 cache_zap(ncp); >> (kgdb) list >> 897 zap = 1; >> 898 } >> 899 if (hold) >> 900 vhold(dvp); >> 901 if (zap) >> 902 cache_zap(ncp); >> 903 CACHE_WUNLOCK(); >> 904 } >> 905 >> 906 /* >> (kgdb) i loc >> ncp = (struct namecache *) 0x0 >> n2 = (struct namecache *) 0xffffffff8178a740 >> ncpp = (struct nchashhead *) 0xffffff8ccde4e9b0 >> hash = >> flag = 0 >> hold = 1 >> zap = 1 >> len = >> >> (kgdb) p numneg >> $4 = 437 >> (kgdb) p ncp >> $7 = (struct namecache *) 0x0 >> (kgdb) p ncneg >> $8 = {tqh_first = 0x0, tqh_last = 0xffffffff8178a710} >> >> >> I am not sure that there is a bug in namecache, but if there is one, then the >> only suspicious place I could find is ".." handling in cache_enter_time(). >> > > Do you mean that numneg accounting is wrong for the case when the > existing ncp retargeted for dd ? This is the only issue I see there, but > it looks as the real case for the failure. Yes, this was the case that I suspected. > Testcase would be lot of lookups down the long directory hierarchy, and > than walking back through the ".." entries. Even if the thing does not > panic, the resulting length of the ncneg tailq should be strictly less > than the numneg. Kostik, thank you for the patch! I will test it in my environment. Peter, I am curious about what ideology is behind vfs testing in stress2. I know that I can just look at the code myself, but hope that asking you could be faster. Does stress2 exercise a certain set of scenarios? 
Or does it have an element of randomness? The reason I am asking is that I have found fsstress (xfsstress) insufficient for finding all the corner cases. I wrote a really simple script that just performs random operations like creating, unlinking, renaming, etc a file or directory using randomly generated paths (with certain constraints). Running a hundred instances of that script on the same hierarchy is surprisingly effective at uncovering bugs that are very hard to reproduce otherwise. So, I am wondering if I've just duplicated what you already had. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 08:19:02 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 33694DC7 for ; Thu, 19 Dec 2013 08:19:02 +0000 (UTC) Received: from relay03.pair.com (relay03.pair.com [209.68.5.17]) by mx1.freebsd.org (Postfix) with SMTP id BEF39178B for ; Thu, 19 Dec 2013 08:19:01 +0000 (UTC) Received: (qmail 5382 invoked from network); 19 Dec 2013 08:12:19 -0000 Received: from 87.58.146.155 (HELO x2.osted.lan) (87.58.146.155) by relay03.pair.com with SMTP; 19 Dec 2013 08:12:19 -0000 X-pair-Authenticated: 87.58.146.155 Received: from x2.osted.lan (localhost [127.0.0.1]) by x2.osted.lan (8.14.5/8.14.5) with ESMTP id rBJ8CI9e012886; Thu, 19 Dec 2013 09:12:18 +0100 (CET) (envelope-from pho@x2.osted.lan) Received: (from pho@localhost) by x2.osted.lan (8.14.5/8.14.5/Submit) id rBJ8CIfs012885; Thu, 19 Dec 2013 09:12:18 +0100 (CET) (envelope-from pho) Date: Thu, 19 Dec 2013 09:12:18 +0100 From: Peter Holm To: Andriy Gapon Subject: Re: namecache: numneg > 0 but ncneg is empty Message-ID: <20131219081218.GA12747@x2.osted.lan> References: <52B16847.8090905@FreeBSD.org> <20131219070350.GM59496@kib.kiev.ua> <52B2A6AC.3070902@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <52B2A6AC.3070902@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 08:19:02 -0000 On Thu, Dec 19, 2013 at 09:56:28AM +0200, Andriy Gapon wrote: > on 19/12/2013 09:03 Konstantin Belousov said the following: > > On Wed, Dec 18, 2013 at 11:17:59AM +0200, Andriy Gapon wrote: > >> > >> I've been running a test that exercises vfs, fs and namecache code quite a lot > >> and I have run into the following panic: > [snip] > >> (kgdb) fr 8 > >> #8 0xffffffff8097c22f in cache_enter_time (dvp=0xfffffe031c7215f8, > >> vp=0xfffffe0a684f05f8, cnp=0xffffff9de1875858, tsp=0x0, dtsp=0x0) at > >> /usr/src/sys/kern/vfs_cache.c:902 > >> 902 cache_zap(ncp); > >> (kgdb) list > >> 897 zap = 1; > >> 898 } > >> 899 if (hold) > >> 900 vhold(dvp); > >> 901 if (zap) > >> 902 cache_zap(ncp); > >> 903 CACHE_WUNLOCK(); > >> 904 } > >> 905 > >> 906 /* > >> (kgdb) i loc > >> ncp = (struct namecache *) 0x0 > >> n2 = (struct namecache *) 0xffffffff8178a740 > >> ncpp = (struct nchashhead *) 0xffffff8ccde4e9b0 > >> hash = > >> flag = 0 > >> hold = 1 > >> zap = 1 > >> len = > >> > >> (kgdb) p numneg > >> $4 = 437 > >> (kgdb) p ncp > >> $7 = (struct namecache *) 0x0 > >> (kgdb) p ncneg > >> $8 = {tqh_first = 0x0, tqh_last = 0xffffffff8178a710} > >> > >> > >> I am not sure that there is a bug in 
namecache, but if there is one, then the > >> only suspicious place I could find is ".." handling in cache_enter_time(). > >> > > > > Do you mean that numneg accounting is wrong for the case when the > > existing ncp retargeted for dd ? This is the only issue I see there, but > > it looks as the real case for the failure. > > Yes, this was the case that I suspected. > > > Testcase would be lot of lookups down the long directory hierarchy, and > > than walking back through the ".." entries. Even if the thing does not > > panic, the resulting length of the ncneg tailq should be strictly less > > than the numneg. > > Kostik, > > thank you for the patch! I will test it in my environment. > > Peter, > > I am curious about what ideology is behind vfs testing in stress2. I know that > I can just look at the code myself, but hope that asking you could be faster. > Does stress2 exercise a certain set of scenarios? Or does it have an element of > randomness? > The tests found in stress2/testcases does everything in a random fashion. Test found in stress2/misc are for the most part scenarios that has been used for finding specific problems. > The reason I am asking is that I have found fsstress (xfsstress) insufficient > for finding all the corner cases. I wrote a really simple script that just > performs random operations like creating, unlinking, renaming, etc a file or > directory using randomly generated paths (with certain constraints). Running a > hundred instances of that script on the same hierarchy is surprisingly effective > at uncovering bugs that are very hard to reproduce otherwise. > So, I am wondering if I've just duplicated what you already had. > > -- > Andriy Gapon -- Peter From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 10:04:39 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 435AC922 for ; Thu, 19 Dec 2013 10:04:39 +0000 (UTC) Received: from umail.aei.mpg.de (umail.aei.mpg.de [194.94.224.6]) by mx1.freebsd.org (Postfix) with ESMTP id DE960106A for ; Thu, 19 Dec 2013 10:04:38 +0000 (UTC) Received: from mailgate.aei.mpg.de (mailgate.aei.mpg.de [194.94.224.5]) by umail.aei.mpg.de (Postfix) with ESMTP id 18DA4200B13; Thu, 19 Dec 2013 10:55:14 +0100 (CET) Received: from mailgate.aei.mpg.de (localhost [127.0.0.1]) by localhost (Postfix) with SMTP id 8973E405889; Thu, 19 Dec 2013 10:55:15 +0100 (CET) Received: from intranet.aei.uni-hannover.de (ahin1.aei.uni-hannover.de [130.75.117.40]) by mailgate.aei.mpg.de (Postfix) with ESMTP id 3B586406AF1; Thu, 19 Dec 2013 10:55:15 +0100 (CET) Received: from cascade.aei.uni-hannover.de ([10.117.15.111]) by intranet.aei.uni-hannover.de (Lotus Domino Release 8.5.3) with ESMTP id 2013121910550277-46729 ; Thu, 19 Dec 2013 10:55:02 +0100 Date: Thu, 19 Dec 2013 10:55:03 +0100 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: "Steven Hartland" Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-Id: <20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> In-Reply-To: <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> Organization: Max Planck Gesellschaft X-Mailer: Sylpheed 3.1.3 (GTK+ 2.24.19; amd64-portbld-freebsd8.2) Mime-Version: 1.0 X-MIMETrack: Itemize by SMTP Server on 
intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/19/2013 10:55:02, Serialize by Router on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/19/2013 10:55:13, Serialize complete at 12/19/2013 10:55:13 Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=US-ASCII X-PMX-Version: 6.0.2.2308539, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2013.12.19.94815 X-PerlMx-Spam: Gauge=IIIIIIIII, Probability=9%, Report=' MULTIPLE_RCPTS 0.1, HTML_00_01 0.05, HTML_00_10 0.05, MIME_LOWER_CASE 0.05, BODY_SIZE_5000_5999 0, BODY_SIZE_7000_LESS 0, __ANY_URI 0, __BOUNCE_CHALLENGE_SUBJ 0, __BOUNCE_NDR_SUBJ_EXEMPT 0, __C230066_P5 0, __CANPHARM_UNSUB_LINK 0, __CP_NAME_BODY 0, __CP_URI_IN_BODY 0, __CT 0, __CTE 0, __CT_TEXT_PLAIN 0, __HAS_FROM 0, __HAS_MSGID 0, __HAS_X_MAILER 0, __IN_REP_TO 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __MULTIPLE_RCPTS_CC_X2 0, __SANE_MSGID 0, __SUBJ_ALPHA_NEGATE 0, __TO_MALFORMED_2 0, __URI_NS ' Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 10:04:39 -0000 On Thu, 14 Nov 2013 19:30:33 -0000 "Steven Hartland" wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: Hi, Is there already a solution for this available? I think I am seeing the same issue here (also with 9.2): --- root@shapeshifter:~ # ll /tank/git/.zfs/snapshot/ ls: daily.3: Device busy ls: daily.6: Device busy total 30 drwxr-xr-x 12 211 211 24 Dec 16 00:00 daily.0/ drwxr-xr-x 12 211 211 24 Dec 18 00:00 daily.1/ drwxr-xr-x 12 211 211 24 Dec 17 00:00 daily.2/ drwxr-xr-x 12 211 211 24 Dec 14 00:00 daily.4/ drwxr-xr-x 12 211 211 24 Dec 13 00:00 daily.5/ drwxr-xr-x 12 211 211 24 Dec 15 00:00 weekly.0/ drwxr-xr-x 12 211 211 24 Dec 8 00:00 weekly.1/ drwxr-xr-x 12 211 211 24 Dec 1 00:00 weekly.2/ drwxr-xr-x 12 211 211 24 Nov 17 00:00 weekly.3/ drwxr-xr-x 12 211 211 24 Nov 10 00:00 weekly.4/ drwxr-xr-x 2 root wheel 3 Oct 20 00:00 weekly.5/ drwxr-xr-x 2 root wheel 3 Oct 6 00:00 weekly.6/ --- cu Gerrit SH> This could also be due to the following issue which there SH> will be a fix soon: SH> https://www.illumos.org/issues/4322 SH> SH> Regards SH> Steve SH> SH> ----- Original Message ----- SH> From: "Steven Hartland" SH> To: "Matt Magoffin" ; SH> Sent: Thursday, November 14, 2013 6:43 PM SH> Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 SH> SH> SH> > Sounds like you may have an automatic hold still present. SH> > SH> > What does the following report: SH> > zfs holds -r SH> > SH> > Regards SH> > Steve SH> > ----- Original Message ----- SH> > From: "Matt Magoffin" SH> > To: SH> > Sent: Thursday, November 14, 2013 6:35 PM SH> > Subject: ZFS snapshot renames failing after upgrade to 9.2 SH> > SH> > SH> > Hello, SH> > SH> > I have a system that had been running FreeBSD 9.1 for some time, and SH> > I recently upgraded to 9.2. I've been using this simple script to SH> > create daily, rotating ZFS snapshots via cron: SH> > SH> > http://andyleonard.com/2010/04/07/automatic-zfs-snapshot-rotation-on-freebsd/ SH> > SH> > Essentially the snapshots are renamed and then a new snapshot is SH> > created with the same name as the most recently created snapshot. SH> > Since the upgrade to 9.2, however, the snapshots aren't able to be SH> > renamed. 
I end up with an error like this: SH> > SH> > cannot rename 'zdata/home': a child dataset already has a snapshot SH> > with the new name cannot create snapshot 'zdata/home@daily.0': SH> > dataset already exists SH> > SH> > Once that happens, zfs will show two snapshots: SH> > SH> > # zfs list -t snapshot -o name,creation,used,referenced |grep SH> > # zdata/home SH> > zdata/home@daily.1 Wed Nov 13 0:00 SH> > 2013 50.5K 91.0G zdata/home@daily.0 SH> > Thu Nov 14 0:00 2013 0 91.0G SH> > SH> > However, trying to list the snapshots results in this error: SH> > SH> > # ls .zfs/snapshot/ SH> > ls: daily.1: Device busy SH> > daily.0 SH> > SH> > I can destroy the daily.1 snapshot: SH> > SH> > # zfs destroy zdata/home@daily.1 SH> > # zfs list -t snapshot -o name,creation,used,referenced |grep SH> > # zdata/home SH> > zdata/home@daily.0 Thu Nov 14 0:00 SH> > 2013 0 91.0G SH> > # ls .zfs/snapshot/ SH> > daily.0 SH> > SH> > Then if I try to rename it like the script would, I end up in the SH> > same "Device busy" state: SH> > SH> > # zfs rename zdata/home@daily.0 zdata/home@daily.1 SH> > # zfs list -t snapshot -o name,creation,used,referenced |grep SH> > # zdata/home SH> > zdata/home@daily.1 Thu Nov 14 0:00 SH> > 2013 0 91.0G SH> > # ls .zfs/snapshot/ SH> > ls: daily.1: Device busy SH> > SH> > Does anyone have any ideas how to get the renames working? SH> > SH> > -- m@ SH> > SH> > SH> > SH> > ================================================ SH> > This e.mail is private and confidential between Multiplay (UK) Ltd. SH> > and the person or entity to whom it is addressed. In the event of SH> > misdirection, the recipient is prohibited from using, copying, SH> > printing or otherwise disseminating it or any information contained SH> > in it. In the event of misdirection, illegible or incomplete SH> > transmission please telephone +44 845 868 1337 or return the E.mail SH> > to postmaster@multiplay.co.uk. SH> > SH> > _______________________________________________ SH> > freebsd-fs@freebsd.org mailing list SH> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs SH> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" SH> > SH> SH> SH> ================================================ SH> This e.mail is private and confidential between Multiplay (UK) Ltd. SH> and the person or entity to whom it is addressed. In the event of SH> misdirection, the recipient is prohibited from using, copying, SH> printing or otherwise disseminating it or any information contained in SH> it. SH> SH> In the event of misdirection, illegible or incomplete transmission SH> please telephone +44 845 868 1337 or return the E.mail to SH> postmaster@multiplay.co.uk. 
SH> SH> _______________________________________________ SH> freebsd-fs@freebsd.org mailing list SH> http://lists.freebsd.org/mailman/listinfo/freebsd-fs SH> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 11:31:34 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B429AB12; Thu, 19 Dec 2013 11:31:34 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id D57071628; Thu, 19 Dec 2013 11:31:33 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id NAA09033; Thu, 19 Dec 2013 13:31:32 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VtbpD-00007h-OC; Thu, 19 Dec 2013 13:31:31 +0200 Message-ID: <52B2D8D6.8090306@FreeBSD.org> Date: Thu, 19 Dec 2013 13:30:30 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: freebsd-fs Subject: l2arc_feed_thread cpu utlization X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=X-VIET-VPS Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 11:31:34 -0000 This is just a heads up, no patch yet. l2arc_feed_thread periodically wakes up and scans certain amount of ARC buffers and writes eligible buffers to a cache device. Number of scanned buffers is limited by a threshold on the amount of data in the buffers seen. The threshold is applied on a per buffer list basis. In upstream there are 4 relevant lists: (data, metadata) X (MFU, MRU). In FreeBSD each of the lists was subdivided into 16 lists. This was done to reduce contention on the locks that protect the lists. But as a side effect l2arc_feed_thread can scan 16 times more data (~ buffers). So, if you have a rather large ARC and L2ARC and your buffers tend to be sufficiently small, then you could observe l2arc_feed_thread burning a noticeable amount of CPU. On some of our systems I observed it using up to 40% of a single core. Scaling back the threshold by factor of 16 makes CPU utilization go down by the same factor. I plan to commit this change to FreeBSD ZFS code. Any comments are welcome. 
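The arithmetic is simple enough to sketch; the identifiers below are made up for illustration and this is not the actual ARC code:

#include <stdint.h>

/*
 * Illustrative sketch only.  Upstream applies a scan threshold per ARC
 * list; FreeBSD splits each list into 16 sublists to reduce lock
 * contention, so applying the unchanged per-list threshold to every
 * sublist lets one feed cycle scan up to 16 times as much data.
 * Dividing the threshold by the sublist count restores the intended
 * bound per cycle.
 */
#define ARC_LIST_SPLIT  16      /* FreeBSD-specific subdivision */

static uint64_t
l2arc_sublist_headroom(uint64_t list_headroom)
{
    return (list_headroom / ARC_LIST_SPLIT);
}

With the threshold scaled back like this, each wakeup of l2arc_feed_thread inspects roughly the amount of data the upstream code intended, which is consistent with the CPU utilization dropping by the same factor of 16.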
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 13:47:21 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E0F0339F for ; Thu, 19 Dec 2013 13:47:21 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 3BE26121A for ; Thu, 19 Dec 2013 13:47:20 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id PAA12311; Thu, 19 Dec 2013 15:47:17 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Vtdwa-0000IC-PR; Thu, 19 Dec 2013 15:47:16 +0200 Message-ID: <52B2F8BF.9050504@FreeBSD.org> Date: Thu, 19 Dec 2013 15:46:39 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Peter Holm Subject: Re: namecache: numneg > 0 but ncneg is empty References: <52B16847.8090905@FreeBSD.org> <20131219070350.GM59496@kib.kiev.ua> <52B2A6AC.3070902@FreeBSD.org> <20131219081218.GA12747@x2.osted.lan> In-Reply-To: <20131219081218.GA12747@x2.osted.lan> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 13:47:21 -0000 on 19/12/2013 10:12 Peter Holm said the following: > On Thu, Dec 19, 2013 at 09:56:28AM +0200, Andriy Gapon wrote: >> Peter, >> >> I am curious about what ideology is behind vfs testing in stress2. I know that >> I can just look at the code myself, but hope that asking you could be faster. >> Does stress2 exercise a certain set of scenarios? Or does it have an element of >> randomness? >> > > The tests found in stress2/testcases does everything in a random > fashion. Could you please add a few words about what kind of randomness is that? E.g. I looked at testcases/rename and it seems to do pretty predictable and linear renaming of files within the same directory. Also, it seems that the test would be aborted should a rename operation fail. But that would be a valid outcome in a truly random / chaotic testing. > Test found in stress2/misc are for the most part scenarios that has > been used for finding specific problems. 
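For concreteness, the kind of chaotic test meant above looks roughly like the sketch below (illustrative only -- not stress2 code and not the actual script; the path shapes, limits and operation mix are arbitrary). The point is that a failing operation is just another outcome, not a reason to abort:

#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/*
 * Chaotic vfs exercise (sketch): perform random operations on randomly
 * generated shallow paths and deliberately ignore errors.  Running many
 * instances concurrently in one directory hierarchy provokes races.
 */
static void
rndpath(char *buf, size_t len)
{
    if (random() % 2)
        snprintf(buf, len, "d%ld", random() % 5);
    else
        snprintf(buf, len, "d%ld/e%ld", random() % 5, random() % 5);
}

int
main(void)
{
    char a[64], b[64];
    long i;

    srandom(getpid() ^ time(NULL));
    for (i = 0; i < 100000; i++) {
        rndpath(a, sizeof(a));
        rndpath(b, sizeof(b));
        switch (random() % 5) {
        case 0: (void)mkdir(a, 0755); break;
        case 1: (void)close(open(a, O_CREAT | O_RDWR, 0644)); break;
        case 2: (void)rename(a, b); break;
        case 3: (void)unlink(a); break;
        case 4: (void)rmdir(a); break;
        }
    }
    return (0);
}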
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 14:25:46 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 38E76633 for ; Thu, 19 Dec 2013 14:25:46 +0000 (UTC) Received: from relay00.pair.com (relay00.pair.com [209.68.5.9]) by mx1.freebsd.org (Postfix) with SMTP id E28D5157A for ; Thu, 19 Dec 2013 14:25:45 +0000 (UTC) Received: (qmail 30046 invoked from network); 19 Dec 2013 14:19:02 -0000 Received: from 87.58.146.155 (HELO x2.osted.lan) (87.58.146.155) by relay00.pair.com with SMTP; 19 Dec 2013 14:19:02 -0000 X-pair-Authenticated: 87.58.146.155 Received: from x2.osted.lan (localhost [127.0.0.1]) by x2.osted.lan (8.14.5/8.14.5) with ESMTP id rBJEJ1Wj019706; Thu, 19 Dec 2013 15:19:02 +0100 (CET) (envelope-from pho@x2.osted.lan) Received: (from pho@localhost) by x2.osted.lan (8.14.5/8.14.5/Submit) id rBJEJ1vK019705; Thu, 19 Dec 2013 15:19:01 +0100 (CET) (envelope-from pho) Date: Thu, 19 Dec 2013 15:19:01 +0100 From: Peter Holm To: Andriy Gapon Subject: Re: namecache: numneg > 0 but ncneg is empty Message-ID: <20131219141901.GA19520@x2.osted.lan> References: <52B16847.8090905@FreeBSD.org> <20131219070350.GM59496@kib.kiev.ua> <52B2A6AC.3070902@FreeBSD.org> <20131219081218.GA12747@x2.osted.lan> <52B2F8BF.9050504@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <52B2F8BF.9050504@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 14:25:46 -0000 On Thu, Dec 19, 2013 at 03:46:39PM +0200, Andriy Gapon wrote: > on 19/12/2013 10:12 Peter Holm said the following: > > On Thu, Dec 19, 2013 at 09:56:28AM +0200, Andriy Gapon wrote: > >> Peter, > >> > >> I am curious about what ideology is behind vfs testing in stress2. I know that > >> I can just look at the code myself, but hope that asking you could be faster. > >> Does stress2 exercise a certain set of scenarios? Or does it have an element of > >> randomness? > >> > > > > The tests found in stress2/testcases does everything in a random > > fashion. > > Could you please add a few words about what kind of randomness is that? > E.g. I looked at testcases/rename and it seems to do pretty predictable and > linear renaming of files within the same directory. Also, it seems that the > test would be aborted should a rename operation fail. But that would be a valid > outcome in a truly random / chaotic testing. > > > Test found in stress2/misc are for the most part scenarios that has > > been used for finding specific problems. > > > -- > Andriy Gapon For testcases/rename the number of files to rename is controlled by the random number of invocations of this test. Two new rename scenarios was added recently by jmg@ to address specific SU+J issues. 
More rename scenarios can be found in stress2/misc/rename* -- Peter From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 14:54:16 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AA16BD36 for ; Thu, 19 Dec 2013 14:54:16 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 047C9175C for ; Thu, 19 Dec 2013 14:54:15 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id QAA13717; Thu, 19 Dec 2013 16:54:12 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VtezM-0000Lw-IL; Thu, 19 Dec 2013 16:54:12 +0200 Message-ID: <52B3085C.6080202@FreeBSD.org> Date: Thu, 19 Dec 2013 16:53:16 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Peter Holm Subject: Re: namecache: numneg > 0 but ncneg is empty References: <52B16847.8090905@FreeBSD.org> <20131219070350.GM59496@kib.kiev.ua> <52B2A6AC.3070902@FreeBSD.org> <20131219081218.GA12747@x2.osted.lan> <52B2F8BF.9050504@FreeBSD.org> <20131219141901.GA19520@x2.osted.lan> In-Reply-To: <20131219141901.GA19520@x2.osted.lan> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 14:54:16 -0000 on 19/12/2013 16:19 Peter Holm said the following: > For testcases/rename the number of files to rename is controlled by > the random number of invocations of this test. Two new rename > scenarios was added recently by jmg@ to address specific SU+J issues. > More rename scenarios can be found in stress2/misc/rename* Thank you for the explanation. Would you be interested in a more chaotic kind of vfs testing? 
:-) -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 15:05:54 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8F512317 for ; Thu, 19 Dec 2013 15:05:54 +0000 (UTC) Received: from relay00.pair.com (relay00.pair.com [209.68.5.9]) by mx1.freebsd.org (Postfix) with SMTP id 2B7781864 for ; Thu, 19 Dec 2013 15:05:53 +0000 (UTC) Received: (qmail 46088 invoked from network); 19 Dec 2013 15:05:52 -0000 Received: from 87.58.146.155 (HELO x2.osted.lan) (87.58.146.155) by relay00.pair.com with SMTP; 19 Dec 2013 15:05:52 -0000 X-pair-Authenticated: 87.58.146.155 Received: from x2.osted.lan (localhost [127.0.0.1]) by x2.osted.lan (8.14.5/8.14.5) with ESMTP id rBJF5qcR020551; Thu, 19 Dec 2013 16:05:52 +0100 (CET) (envelope-from pho@x2.osted.lan) Received: (from pho@localhost) by x2.osted.lan (8.14.5/8.14.5/Submit) id rBJF5qm8020550; Thu, 19 Dec 2013 16:05:52 +0100 (CET) (envelope-from pho) Date: Thu, 19 Dec 2013 16:05:52 +0100 From: Peter Holm To: Andriy Gapon Subject: Re: namecache: numneg > 0 but ncneg is empty Message-ID: <20131219150552.GA20522@x2.osted.lan> References: <52B16847.8090905@FreeBSD.org> <20131219070350.GM59496@kib.kiev.ua> <52B2A6AC.3070902@FreeBSD.org> <20131219081218.GA12747@x2.osted.lan> <52B2F8BF.9050504@FreeBSD.org> <20131219141901.GA19520@x2.osted.lan> <52B3085C.6080202@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <52B3085C.6080202@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 15:05:54 -0000 On Thu, Dec 19, 2013 at 04:53:16PM +0200, Andriy Gapon wrote: > on 19/12/2013 16:19 Peter Holm said the following: > > For testcases/rename the number of files to rename is controlled by > > the random number of invocations of this test. Two new rename > > scenarios was added recently by jmg@ to address specific SU+J issues. > > More rename scenarios can be found in stress2/misc/rename* > > Thank you for the explanation. > Would you be interested in a more chaotic kind of vfs testing? :-) > > -- > Andriy Gapon Yes please. I collect test scenarios; one can never get enough of those. 
:) -- Peter From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 15:46:33 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 25DC0DCA for ; Thu, 19 Dec 2013 15:46:33 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id 8B7EA1D2F for ; Thu, 19 Dec 2013 15:46:32 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id 7147F7917; Thu, 19 Dec 2013 16:46:11 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id 6E4447916; Thu, 19 Dec 2013 16:46:11 +0100 (CET) Date: Thu, 19 Dec 2013 16:46:11 +0100 (CET) From: krichy@tvnetwork.hu To: freebsd-fs@freebsd.org Subject: Re: kern/184677 / ZFS snapshot handling deadlocks In-Reply-To: Message-ID: References: User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="1030603365-686922855-1387467971=:4344" X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: pawel@dawidek.net X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 15:46:33 -0000 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. --1030603365-686922855-1387467971=:4344 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Dear devs, I am attaching a clearer patch/fix for my snapshot handling issues (0002), but I would be happy if some ZFS expert would comment on it. I have been trying to solve this for at least two weeks now, and an ACK or a NACK from someone would be nice. A commit is also reverted, since it too caused deadlocks. I've read its comments, which say it also eliminates deadlocks, but I did not find any reference to how that deadlock can be reproduced. In my view reverting it makes my issues disappear, but I don't know what new cases it will raise. I've rewritten traverse() to be more like upstream, and added two extra VN_HOLD()s to snapdir_lookup() for the case where traverse() returns the same vnode that was passed to it (I don't even know why those two extra holds are needed on the vnode when a snapshot vnode is created). Unfortunately, because FreeBSD calls vop_inactive callbacks with the vnode locked, that could also cause deadlocks, so zfsctl_snapshot_inactive() and zfsctl_snapshot_vptocnp() have been rewritten to work around that. After this, one may still get a deadlock when a simple access calls into zfsctl_snapshot_lookup(). The documentation says that those vnodes should always be covered, but under some stress tests we sometimes hit that call, and that can again cause deadlocks. In our environment I've just uncommented that callback; it returns ENODIR on some calls, but at least it does not deadlock. The attached script can be used to reproduce my cases (would someone confirm that?), and after the patches are applied they disappear (can someone confirm?). Thanks, Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt.
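To make the lock order inversion between zfsctl_snapdir_lookup() and zfsctl_snapshot_inactive() easier to see, here is a tiny userland analogy (pthread mutexes standing in for the snapdir tree lock and the se_root vnode lock -- an illustration of the ordering problem only, not ZFS code). Run both threads concurrently and they usually block each other forever, which is the ABBA pattern described in the analysis quoted below:

/* cc -o abba abba.c -lpthread */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t sd_lock = PTHREAD_MUTEX_INITIALIZER;   /* "snapdir tree lock" */
static pthread_mutex_t root_lock = PTHREAD_MUTEX_INITIALIZER; /* "se_root vnode lock" */

static void *
snapdir_lookup_side(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&sd_lock);   /* lookup holds the tree lock... */
    sleep(1);
    pthread_mutex_lock(&root_lock); /* ...then wants the root vnode lock */
    pthread_mutex_unlock(&root_lock);
    pthread_mutex_unlock(&sd_lock);
    return (NULL);
}

static void *
snapshot_inactive_side(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&root_lock); /* inactive runs with the vnode locked... */
    sleep(1);
    pthread_mutex_lock(&sd_lock);   /* ...then wants the tree lock */
    pthread_mutex_unlock(&sd_lock);
    pthread_mutex_unlock(&root_lock);
    return (NULL);
}

int
main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, snapdir_lookup_side, NULL);
    pthread_create(&t2, NULL, snapshot_inactive_side, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return (0);
}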
On Tue, 17 Dec 2013, krichy@tvnetwork.hu wrote: > Date: Tue, 17 Dec 2013 14:50:16 +0100 (CET) > From: krichy@tvnetwork.hu > To: pjd@freebsd.org > Cc: freebsd-fs@freebsd.org > Subject: Re: kern/184677 (fwd) > > Dear devs, > > I will sum up my experience regarding the issue: > > The sympton is that a concurrent 'zfs send -R' and some activity on the > snapshot dir (or in the snapshot) may cause a deadlock. > > After investigating the problem, I found that zfs send umounts the snapshots, > and that causes the deadlock, so later I tested only with concurrent umount > and the "activity". More later I found that listing the snapshots in > .zfs/snapshot/ and unounting them can cause the found deadlock, so I used > them for the tests. But for my surprise, instead of a deadlock, a recursive > lock panic has arised. > > The vnode for the ".zfs/snapshot/" directory contains zfs's zfsctl_snapdir_t > structure (sdp). This contains a tree of mounted snapshots, and each entry > (sep) contains the vnode of entry on which the snapshot is mounted on top > (se_root). The strange is that the se_root member does not hold a reference > for the vnode, just a simple pointer to it. > > Upon entry lookup (zfsctl_snapdir_lookup()) the "snapshot" vnode is locked, > the zfsctl_snapdir_t's tree is locked, and searched for the mount if it > exists already. If it founds no entry, does the mount. In the case of an > entry was found, the se_root member contains the vnode which the snapshot is > mounted on. Thus, a reference is taken for it, and the traverse() call will > resolve to the real root vnode of the mounted snapshot, returning it as > locked. (Examining the traverse() code I've found that it did not follow > FreeBSD's lock order recommendation described in sys/kern/vfs_subr.c.) > > On the other way, when an umount is issued, the se_root vnode looses its last > reference (as only the mountpoint holds one for it), it goes through the > vinactive() path, to zfsctl_snapshot_inactive(). In FreeBSD this is called > with a locked vnode, so this is a deadlock race condition. While > zfsctl_snapdir_lookup() holds the mutex for the sdp tree, and traverse() > tries to acquire the se_root, zfsctl_snapshot_inactive() holds the lock on > se_root while tries to access the sdp lock. > > The zfsctl_snapshot_inactive() has an if statement checking the v_usecount, > which is incremented in zfsctl_snapdir_lookup(), but in that context it is > not covered by VI_LOCK. And it seems to me that FreeBSD's vinactive() path > assumes that the vnode remains inactive (as opposed to illumos, at least how > i read the code). So zfsctl_snapshot_inactive() must free resources while in > a locked state. I was a bit confused, and probably that is why the previously > posted patch is as is. > > Maybe if I had some clues on the directions of this problem, I could have > worked more for a nicer, shorter solution. > > Please someone comment on my post. > > Regards, > > > > Kojedzinszky Richard > Euronet Magyarorszag Informatikai Zrt. > > On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: > >> Date: Mon, 16 Dec 2013 16:52:16 +0100 (CET) >> From: krichy@tvnetwork.hu >> To: pjd@freebsd.org >> Cc: freebsd-fs@freebsd.org >> Subject: Re: kern/184677 (fwd) >> >> Dear PJD, >> >> I am a happy FreeBSD user, I am sure you've read my previous posts >> regarding some issues in ZFS. Please give some advice for me, where to look >> for solutions, or how could I help to resolve those issues. >> >> Regards, >> Kojedzinszky Richard >> Euronet Magyarorszag Informatikai Zrt. 
>> >> ---------- Forwarded message ---------- >> Date: Mon, 16 Dec 2013 15:23:06 +0100 (CET) >> From: krichy@tvnetwork.hu >> To: freebsd-fs@freebsd.org >> Subject: Re: kern/184677 >> >> >> Seems that pjd did a change which eliminated the zfsdev_state_lock on Fri >> Aug 12 07:04:16 2011 +0000, which might introduced a new deadlock >> situation. Any comments on this? >> >> >> Kojedzinszky Richard >> Euronet Magyarorszag Informatikai Zrt. >> >> On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: >> >>> Date: Mon, 16 Dec 2013 11:08:11 +0100 (CET) >>> From: krichy@tvnetwork.hu >>> To: freebsd-fs@freebsd.org >>> Subject: kern/184677 >>> >>> Dear devs, >>> >>> I've attached a patch, which makes the recursive lockmgr disappear, and >>> makes the reported bug to disappear. I dont know if I followed any >>> guidelines well, or not, but at least it works for me. Please some >>> ZFS/FreeBSD fs expert review it, and fix it where it needed. >>> >>> But unfortunately, my original problem is still not solved, maybe the same >>> as Ryan's: >>> http://lists.freebsd.org/pipermail/freebsd-fs/2013-December/018707.html >>> >>> Tracing the problem down is that zfsctl_snapdir_lookup() tries to acquire >>> spa_namespace_lock while when finishing a zfs send -R does a >>> zfsdev_close(), and that also holds the same mutex. And this causes a >>> deadlock scenario. I looked at illumos's code, and for some reason they >>> use another mutex on zfsdev_close(), which therefore may not deadlock with >>> zfsctl_snapdir_lookup(). But I am still investigating the problem. >>> >>> I would like to help making ZFS more stable on freebsd also with its whole >>> functionality. I would be very thankful if some expert would give some >>> advice, how to solve these bugs. PJD, Steven, Xin? >>> >>> Thanks in advance, >>> >>> >>> Kojedzinszky Richard >>> Euronet Magyarorszag Informatikai Zrt. 
>> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > --1030603365-686922855-1387467971=:4344 Content-Type: TEXT/x-diff; name=0001-Revert-Eliminate-the-zfsdev_state_lock-entirely-and-.patch Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=0001-Revert-Eliminate-the-zfsdev_state_lock-entirely-and-.patch [base64-encoded patch attachment omitted]
KTsNCiB9DQogDQpAQCAtMTY1OCwxMiArMTYzMywxMiBAQCB6dm9sX2lvY3Rs KGRldl90IGRldiwgaW50IGNtZCwgaW50cHRyX3QgYXJnLCBpbnQgZmxhZywg Y3JlZF90ICpjciwgaW50ICpydmFscCkNCiAJaW50IGVycm9yID0gMDsNCiAJ cmxfdCAqcmw7DQogDQotCW11dGV4X2VudGVyKCZzcGFfbmFtZXNwYWNlX2xv Y2spOw0KKwltdXRleF9lbnRlcigmemZzZGV2X3N0YXRlX2xvY2spOw0KIA0K IAl6diA9IHpmc2Rldl9nZXRfc29mdF9zdGF0ZShnZXRtaW5vcihkZXYpLCBa U1NUX1pWT0wpOw0KIA0KIAlpZiAoenYgPT0gTlVMTCkgew0KLQkJbXV0ZXhf ZXhpdCgmc3BhX25hbWVzcGFjZV9sb2NrKTsNCisJCW11dGV4X2V4aXQoJnpm c2Rldl9zdGF0ZV9sb2NrKTsNCiAJCXJldHVybiAoU0VUX0VSUk9SKEVOWElP KSk7DQogCX0NCiAJQVNTRVJUKHp2LT56dl90b3RhbF9vcGVucyA+IDApOw0K QEAgLTE2NzcsNyArMTY1Miw3IEBAIHp2b2xfaW9jdGwoZGV2X3QgZGV2LCBp bnQgY21kLCBpbnRwdHJfdCBhcmcsIGludCBmbGFnLCBjcmVkX3QgKmNyLCBp bnQgKnJ2YWxwKQ0KIAkJZGtpLmRraV9jdHlwZSA9IERLQ19VTktOT1dOOw0K IAkJZGtpLmRraV91bml0ID0gZ2V0bWlub3IoZGV2KTsNCiAJCWRraS5ka2lf bWF4dHJhbnNmZXIgPSAxIDw8IChTUEFfTUFYQkxPQ0tTSElGVCAtIHp2LT56 dl9taW5fYnMpOw0KLQkJbXV0ZXhfZXhpdCgmc3BhX25hbWVzcGFjZV9sb2Nr KTsNCisJCW11dGV4X2V4aXQoJnpmc2Rldl9zdGF0ZV9sb2NrKTsNCiAJCWlm IChkZGlfY29weW91dCgmZGtpLCAodm9pZCAqKWFyZywgc2l6ZW9mIChka2kp LCBmbGFnKSkNCiAJCQllcnJvciA9IFNFVF9FUlJPUihFRkFVTFQpOw0KIAkJ cmV0dXJuIChlcnJvcik7DQpAQCAtMTY4Nyw3ICsxNjYyLDcgQEAgenZvbF9p b2N0bChkZXZfdCBkZXYsIGludCBjbWQsIGludHB0cl90IGFyZywgaW50IGZs YWcsIGNyZWRfdCAqY3IsIGludCAqcnZhbHApDQogCQlka20uZGtpX2xic2l6 ZSA9IDFVIDw8IHp2LT56dl9taW5fYnM7DQogCQlka20uZGtpX2NhcGFjaXR5 ID0genYtPnp2X3ZvbHNpemUgPj4genYtPnp2X21pbl9iczsNCiAJCWRrbS5k a2lfbWVkaWFfdHlwZSA9IERLX1VOS05PV047DQotCQltdXRleF9leGl0KCZz cGFfbmFtZXNwYWNlX2xvY2spOw0KKwkJbXV0ZXhfZXhpdCgmemZzZGV2X3N0 YXRlX2xvY2spOw0KIAkJaWYgKGRkaV9jb3B5b3V0KCZka20sICh2b2lkICop YXJnLCBzaXplb2YgKGRrbSksIGZsYWcpKQ0KIAkJCWVycm9yID0gU0VUX0VS Uk9SKEVGQVVMVCk7DQogCQlyZXR1cm4gKGVycm9yKTsNCkBAIC0xNjk3LDE0 ICsxNjcyLDE0IEBAIHp2b2xfaW9jdGwoZGV2X3QgZGV2LCBpbnQgY21kLCBp bnRwdHJfdCBhcmcsIGludCBmbGFnLCBjcmVkX3QgKmNyLCBpbnQgKnJ2YWxw KQ0KIAkJCXVpbnQ2NF90IHZzID0genYtPnp2X3ZvbHNpemU7DQogCQkJdWlu dDhfdCBicyA9IHp2LT56dl9taW5fYnM7DQogDQotCQkJbXV0ZXhfZXhpdCgm c3BhX25hbWVzcGFjZV9sb2NrKTsNCisJCQltdXRleF9leGl0KCZ6ZnNkZXZf c3RhdGVfbG9jayk7DQogCQkJZXJyb3IgPSB6dm9sX2dldGVmaSgodm9pZCAq KWFyZywgZmxhZywgdnMsIGJzKTsNCiAJCQlyZXR1cm4gKGVycm9yKTsNCiAJ CX0NCiANCiAJY2FzZSBES0lPQ0ZMVVNIV1JJVEVDQUNIRToNCiAJCWRrYyA9 IChzdHJ1Y3QgZGtfY2FsbGJhY2sgKilhcmc7DQotCQltdXRleF9leGl0KCZz cGFfbmFtZXNwYWNlX2xvY2spOw0KKwkJbXV0ZXhfZXhpdCgmemZzZGV2X3N0 YXRlX2xvY2spOw0KIAkJemlsX2NvbW1pdCh6di0+enZfemlsb2csIFpWT0xf T0JKKTsNCiAJCWlmICgoZmxhZyAmIEZLSU9DVEwpICYmIGRrYyAhPSBOVUxM ICYmIGRrYy0+ZGtjX2NhbGxiYWNrKSB7DQogCQkJKCpka2MtPmRrY19jYWxs YmFjaykoZGtjLT5ka2NfY29va2llLCBlcnJvcik7DQpAQCAtMTczMCwxMCAr MTcwNSwxMCBAQCB6dm9sX2lvY3RsKGRldl90IGRldiwgaW50IGNtZCwgaW50 cHRyX3QgYXJnLCBpbnQgZmxhZywgY3JlZF90ICpjciwgaW50ICpydmFscCkN CiAJCQl9DQogCQkJaWYgKHdjZSkgew0KIAkJCQl6di0+enZfZmxhZ3MgfD0g WlZPTF9XQ0U7DQotCQkJCW11dGV4X2V4aXQoJnNwYV9uYW1lc3BhY2VfbG9j ayk7DQorCQkJCW11dGV4X2V4aXQoJnpmc2Rldl9zdGF0ZV9sb2NrKTsNCiAJ CQl9IGVsc2Ugew0KIAkJCQl6di0+enZfZmxhZ3MgJj0gflpWT0xfV0NFOw0K LQkJCQltdXRleF9leGl0KCZzcGFfbmFtZXNwYWNlX2xvY2spOw0KKwkJCQlt dXRleF9leGl0KCZ6ZnNkZXZfc3RhdGVfbG9jayk7DQogCQkJCXppbF9jb21t aXQoenYtPnp2X3ppbG9nLCBaVk9MX09CSik7DQogCQkJfQ0KIAkJCXJldHVy biAoMCk7DQpAQCAtMTgyOCw3ICsxODAzLDcgQEAgenZvbF9pb2N0bChkZXZf dCBkZXYsIGludCBjbWQsIGludHB0cl90IGFyZywgaW50IGZsYWcsIGNyZWRf dCAqY3IsIGludCAqcnZhbHApDQogCQlicmVhazsNCiANCiAJfQ0KLQltdXRl eF9leGl0KCZzcGFfbmFtZXNwYWNlX2xvY2spOw0KKwltdXRleF9leGl0KCZ6 ZnNkZXZfc3RhdGVfbG9jayk7DQogCXJldHVybiAoZXJyb3IpOw0KIH0NCiAj ZW5kaWYJLyogc3VuICovDQpAQCAtMTg0NCwxMiArMTgxOSwxNCBAQCB6dm9s 
X2luaXQodm9pZCkNCiB7DQogCVZFUklGWShkZGlfc29mdF9zdGF0ZV9pbml0 KCZ6ZnNkZXZfc3RhdGUsIHNpemVvZiAoemZzX3NvZnRfc3RhdGVfdCksDQog CSAgICAxKSA9PSAwKTsNCisJbXV0ZXhfaW5pdCgmemZzZGV2X3N0YXRlX2xv Y2ssIE5VTEwsIE1VVEVYX0RFRkFVTFQsIE5VTEwpOw0KIAlaRlNfTE9HKDEs ICJaVk9MIEluaXRpYWxpemVkLiIpOw0KIH0NCiANCiB2b2lkDQogenZvbF9m aW5pKHZvaWQpDQogew0KKwltdXRleF9kZXN0cm95KCZ6ZnNkZXZfc3RhdGVf bG9jayk7DQogCWRkaV9zb2Z0X3N0YXRlX2ZpbmkoJnpmc2Rldl9zdGF0ZSk7 DQogCVpGU19MT0coMSwgIlpWT0wgRGVpbml0aWFsaXplZC4iKTsNCiB9DQpA QCAtMTg4OSw3ICsxODY2LDcgQEAgenZvbF9kdW1wX2luaXQoenZvbF9zdGF0 ZV90ICp6diwgYm9vbGVhbl90IHJlc2l6ZSkNCiAJdWludDY0X3QgdmVyc2lv biA9IHNwYV92ZXJzaW9uKHNwYSk7DQogCWVudW0gemlvX2NoZWNrc3VtIGNo ZWNrc3VtOw0KIA0KLQlBU1NFUlQoTVVURVhfSEVMRCgmc3BhX25hbWVzcGFj ZV9sb2NrKSk7DQorCUFTU0VSVChNVVRFWF9IRUxEKCZ6ZnNkZXZfc3RhdGVf bG9jaykpOw0KIAlBU1NFUlQodmQtPnZkZXZfb3BzID09ICZ2ZGV2X3Jvb3Rf b3BzKTsNCiANCiAJZXJyb3IgPSBkbXVfZnJlZV9sb25nX3JhbmdlKHp2LT56 dl9vYmpzZXQsIFpWT0xfT0JKLCAwLA0KQEAgLTI0MzcsNyArMjQxNCw3IEBA IHp2b2xfcmVuYW1lX21pbm9yKHN0cnVjdCBnX2dlb20gKmdwLCBjb25zdCBj aGFyICpuZXduYW1lKQ0KIAlzdHJ1Y3QgZ19wcm92aWRlciAqcHA7DQogCXp2 b2xfc3RhdGVfdCAqenY7DQogDQotCUFTU0VSVChNVVRFWF9IRUxEKCZzcGFf bmFtZXNwYWNlX2xvY2spKTsNCisJQVNTRVJUKE1VVEVYX0hFTEQoJnpmc2Rl dl9zdGF0ZV9sb2NrKSk7DQogCWdfdG9wb2xvZ3lfYXNzZXJ0KCk7DQogDQog CXBwID0gTElTVF9GSVJTVCgmZ3AtPnByb3ZpZGVyKTsNCkBAIC0yNDcxLDcg KzI0NDgsNyBAQCB6dm9sX3JlbmFtZV9taW5vcnMoY29uc3QgY2hhciAqb2xk bmFtZSwgY29uc3QgY2hhciAqbmV3bmFtZSkNCiAJbmV3bmFtZWxlbiA9IHN0 cmxlbihuZXduYW1lKTsNCiANCiAJRFJPUF9HSUFOVCgpOw0KLQltdXRleF9l bnRlcigmc3BhX25hbWVzcGFjZV9sb2NrKTsNCisJbXV0ZXhfZW50ZXIoJnpm c2Rldl9zdGF0ZV9sb2NrKTsNCiAJZ190b3BvbG9neV9sb2NrKCk7DQogDQog CUxJU1RfRk9SRUFDSChncCwgJnpmc196dm9sX2NsYXNzLmdlb20sIGdlb20p IHsNCkBAIC0yNDk0LDYgKzI0NzEsNiBAQCB6dm9sX3JlbmFtZV9taW5vcnMo Y29uc3QgY2hhciAqb2xkbmFtZSwgY29uc3QgY2hhciAqbmV3bmFtZSkNCiAJ fQ0KIA0KIAlnX3RvcG9sb2d5X3VubG9jaygpOw0KLQltdXRleF9leGl0KCZz cGFfbmFtZXNwYWNlX2xvY2spOw0KKwltdXRleF9leGl0KCZ6ZnNkZXZfc3Rh dGVfbG9jayk7DQogCVBJQ0tVUF9HSUFOVCgpOw0KIH0NCi0tIA0KMS44LjQu Mg0KDQo= --1030603365-686922855-1387467971=:4344 Content-Type: TEXT/x-diff; name=0002-ZFS-snapshot-handling-fix.patch Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=0002-ZFS-snapshot-handling-fix.patch RnJvbSA1N2Q1YTM4M2I1ODVjMzJjNzdhZjU0ZThjZGFjYWRkZjhjZTc1ODRm IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQ0KRnJvbTogUmljaGFyZCBLb2pl ZHppbnN6a3kgPGtyaWNoeUBjZmxpbnV4Lmh1Pg0KRGF0ZTogV2VkLCAxOCBE ZWMgMjAxMyAyMjoxMToyMSArMDEwMA0KU3ViamVjdDogW1BBVENIIDIvMl0g WkZTIHNuYXBzaG90IGhhbmRsaW5nIGZpeA0KDQotLS0NCiAuLi4vY29tcGF0 L29wZW5zb2xhcmlzL2tlcm4vb3BlbnNvbGFyaXNfbG9va3VwLmMgICB8IDEz ICsrKy0tLQ0KIC4uLi9vcGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL3pmcy96 ZnNfY3RsZGlyLmMgICAgIHwgNTMgKysrKysrKysrKysrKysrLS0tLS0tLQ0K IDIgZmlsZXMgY2hhbmdlZCwgNDIgaW5zZXJ0aW9ucygrKSwgMjQgZGVsZXRp b25zKC0pDQoNCmRpZmYgLS1naXQgYS9zeXMvY2RkbC9jb21wYXQvb3BlbnNv bGFyaXMva2Vybi9vcGVuc29sYXJpc19sb29rdXAuYyBiL3N5cy9jZGRsL2Nv bXBhdC9vcGVuc29sYXJpcy9rZXJuL29wZW5zb2xhcmlzX2xvb2t1cC5jDQpp bmRleCA5NDM4M2Q2Li40Y2FjMDUzIDEwMDY0NA0KLS0tIGEvc3lzL2NkZGwv Y29tcGF0L29wZW5zb2xhcmlzL2tlcm4vb3BlbnNvbGFyaXNfbG9va3VwLmMN CisrKyBiL3N5cy9jZGRsL2NvbXBhdC9vcGVuc29sYXJpcy9rZXJuL29wZW5z b2xhcmlzX2xvb2t1cC5jDQpAQCAtODEsNiArODEsOCBAQCB0cmF2ZXJzZSh2 bm9kZV90ICoqY3ZwcCwgaW50IGxrdHlwZSkNCiAJICogcHJvZ3Jlc3Mgb24g dGhpcyB2bm9kZS4NCiAJICovDQogDQorCXZuX2xvY2soY3ZwLCBsa3R5cGUp Ow0KKw0KIAlmb3IgKDs7KSB7DQogCQkvKg0KIAkJICogUmVhY2hlZCB0aGUg ZW5kIG9mIHRoZSBtb3VudCBjaGFpbj8NCkBAIC04OSwxMyArOTEsNyBAQCB0 
cmF2ZXJzZSh2bm9kZV90ICoqY3ZwcCwgaW50IGxrdHlwZSkNCiAJCWlmICh2 ZnNwID09IE5VTEwpDQogCQkJYnJlYWs7DQogCQllcnJvciA9IHZmc19idXN5 KHZmc3AsIDApOw0KLQkJLyoNCi0JCSAqIHR2cCBpcyBOVUxMIGZvciAqY3Zw cCB2bm9kZSwgd2hpY2ggd2UgY2FuJ3QgdW5sb2NrLg0KLQkJICovDQotCQlp ZiAodHZwICE9IE5VTEwpDQotCQkJdnB1dChjdnApOw0KLQkJZWxzZQ0KLQkJ CXZyZWxlKGN2cCk7DQorCQlWT1BfVU5MT0NLKGN2cCwgMCk7DQogCQlpZiAo ZXJyb3IpDQogCQkJcmV0dXJuIChlcnJvcik7DQogDQpAQCAtMTA3LDYgKzEw Myw5IEBAIHRyYXZlcnNlKHZub2RlX3QgKipjdnBwLCBpbnQgbGt0eXBlKQ0K IAkJdmZzX3VuYnVzeSh2ZnNwKTsNCiAJCWlmIChlcnJvciAhPSAwKQ0KIAkJ CXJldHVybiAoZXJyb3IpOw0KKw0KKwkJVk5fUkVMRShjdnApOw0KKw0KIAkJ Y3ZwID0gdHZwOw0KIAl9DQogDQpkaWZmIC0tZ2l0IGEvc3lzL2NkZGwvY29u dHJpYi9vcGVuc29sYXJpcy91dHMvY29tbW9uL2ZzL3pmcy96ZnNfY3RsZGly LmMgYi9zeXMvY2RkbC9jb250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24v ZnMvemZzL3pmc19jdGxkaXIuYw0KaW5kZXggMjhhYjFmYS4uZDM0NjRiNCAx MDA2NDQNCi0tLSBhL3N5cy9jZGRsL2NvbnRyaWIvb3BlbnNvbGFyaXMvdXRz L2NvbW1vbi9mcy96ZnMvemZzX2N0bGRpci5jDQorKysgYi9zeXMvY2RkbC9j b250cmliL29wZW5zb2xhcmlzL3V0cy9jb21tb24vZnMvemZzL3pmc19jdGxk aXIuYw0KQEAgLTExMiw2ICsxMTIsMjUgQEAgc25hcGVudHJ5X2NvbXBhcmUo Y29uc3Qgdm9pZCAqYSwgY29uc3Qgdm9pZCAqYikNCiAJCXJldHVybiAoMCk7 DQogfQ0KIA0KKy8qIFJldHVybiB0aGUgemZzY3RsX3NuYXBkaXJfdCBvYmpl Y3QgZnJvbSBjdXJyZW50IHZub2RlLCBmb2xsb3dpbmcNCisgKiB0aGUgbG9j ayBvcmRlcnMgaW4gemZzY3RsX3NuYXBkaXJfbG9va3VwKCkgdG8gYXZvaWQg ZGVhZGxvY2tzLg0KKyAqIE9uIHJldHVybiB0aGUgcGFzc2VkIGluIHZwIGlz IHVubG9ja2VkICovDQorc3RhdGljIHpmc2N0bF9zbmFwZGlyX3QgKg0KK3pm c2N0bF9zbmFwc2hvdF9nZXRfc25hcGRpcih2bm9kZV90ICp2cCwgdm5vZGVf dCAqKmR2cHApDQorew0KKwlnZnNfZGlyX3QgKmRwID0gdnAtPnZfZGF0YTsN CisJKmR2cHAgPSBkcC0+Z2ZzZF9maWxlLmdmc19wYXJlbnQ7DQorCXpmc2N0 bF9zbmFwZGlyX3QgKnNkcDsNCisNCisJVk5fSE9MRCgqZHZwcCk7DQorCVZP UF9VTkxPQ0sodnAsIDApOw0KKwl2bl9sb2NrKCpkdnBwLCBMS19TSEFSRUQg fCBMS19SRVRSWSB8IExLX0NBTlJFQ1VSU0UpOw0KKwlzZHAgPSAoKmR2cHAp LT52X2RhdGE7DQorCVZPUF9VTkxPQ0soKmR2cHAsIDApOw0KKw0KKwlyZXR1 cm4gKHNkcCk7DQorfQ0KKw0KICNpZmRlZiBzdW4NCiB2bm9kZW9wc190ICp6 ZnNjdGxfb3BzX3Jvb3Q7DQogdm5vZGVvcHNfdCAqemZzY3RsX29wc19zbmFw ZGlyOw0KQEAgLTEwMTMsNiArMTAzMiw4IEBAIHpmc2N0bF9zbmFwZGlyX2xv b2t1cChhcCkNCiAJCQkgKiBUaGUgc25hcHNob3Qgd2FzIHVubW91bnRlZCBi ZWhpbmQgb3VyIGJhY2tzLA0KIAkJCSAqIHRyeSB0byByZW1vdW50IGl0Lg0K IAkJCSAqLw0KKwkJCVZPUF9VTkxPQ0soKnZwcCwgMCk7DQorCQkJVk5fSE9M RCgqdnBwKTsNCiAJCQlWRVJJRlkoemZzY3RsX3NuYXBzaG90X3puYW1lKGR2 cCwgbm0sIE1BWE5BTUVMRU4sIHNuYXBuYW1lKSA9PSAwKTsNCiAJCQlnb3Rv IGRvbW91bnQ7DQogCQl9IGVsc2Ugew0KQEAgLTEwNjQsNyArMTA4NSw2IEBA IHpmc2N0bF9zbmFwZGlyX2xvb2t1cChhcCkNCiAJc2VwLT5zZV9uYW1lID0g a21lbV9hbGxvYyhzdHJsZW4obm0pICsgMSwgS01fU0xFRVApOw0KIAkodm9p ZCkgc3RyY3B5KHNlcC0+c2VfbmFtZSwgbm0pOw0KIAkqdnBwID0gc2VwLT5z ZV9yb290ID0gemZzY3RsX3NuYXBzaG90X21rbm9kZShkdnAsIGRtdV9vYmpz ZXRfaWQoc25hcCkpOw0KLQlWTl9IT0xEKCp2cHApOw0KIAlhdmxfaW5zZXJ0 KCZzZHAtPnNkX3NuYXBzLCBzZXAsIHdoZXJlKTsNCiANCiAJZG11X29ianNl dF9yZWxlKHNuYXAsIEZUQUcpOw0KQEAgLTEwNzUsNiArMTA5NSw3IEBAIGRv bW91bnQ6DQogCSh2b2lkKSBzbnByaW50Zihtb3VudHBvaW50LCBtb3VudHBv aW50X2xlbiwNCiAJICAgICIlcy8iIFpGU19DVExESVJfTkFNRSAiL3NuYXBz aG90LyVzIiwNCiAJICAgIGR2cC0+dl92ZnNwLT5tbnRfc3RhdC5mX21udG9u bmFtZSwgbm0pOw0KKwlWTl9IT0xEKCp2cHApOw0KIAllcnIgPSBtb3VudF9z bmFwc2hvdChjdXJ0aHJlYWQsIHZwcCwgInpmcyIsIG1vdW50cG9pbnQsIHNu YXBuYW1lLCAwKTsNCiAJa21lbV9mcmVlKG1vdW50cG9pbnQsIG1vdW50cG9p bnRfbGVuKTsNCiAJaWYgKGVyciA9PSAwKSB7DQpAQCAtMTQ2NCwxNiArMTQ4 NSwxOCBAQCB6ZnNjdGxfc25hcHNob3RfaW5hY3RpdmUoYXApDQogCWludCBs b2NrZWQ7DQogCXZub2RlX3QgKmR2cDsNCiANCi0JaWYgKHZwLT52X2NvdW50 ID4gMCkNCi0JCWdvdG8gZW5kOw0KLQ0KLQlWRVJJRlkoZ2ZzX2Rpcl9sb29r dXAodnAsICIuLiIsICZkdnAsIGNyLCAwLCBOVUxMLCBOVUxMKSA9PSAwKTsN 
Ci0Jc2RwID0gZHZwLT52X2RhdGE7DQotCVZPUF9VTkxPQ0soZHZwLCAwKTsN CisJc2RwID0gemZzY3RsX3NuYXBzaG90X2dldF9zbmFwZGlyKHZwLCAmZHZw KTsNCiANCiAJaWYgKCEobG9ja2VkID0gTVVURVhfSEVMRCgmc2RwLT5zZF9s b2NrKSkpDQogCQltdXRleF9lbnRlcigmc2RwLT5zZF9sb2NrKTsNCiANCisJ dm5fbG9jayh2cCwgTEtfRVhDTFVTSVZFIHwgTEtfUkVUUlkpOw0KKw0KKwlp ZiAodnAtPnZfY291bnQgPiAwKSB7DQorCQltdXRleF9leGl0KCZzZHAtPnNk X2xvY2spOw0KKwkJcmV0dXJuICgwKTsNCisJfQ0KKw0KIAlBU1NFUlQoIXZu X2lzbW50cHQodnApKTsNCiANCiAJc2VwID0gYXZsX2ZpcnN0KCZzZHAtPnNk X3NuYXBzKTsNCkBAIC0xNDk0LDcgKzE1MTcsNiBAQCB6ZnNjdGxfc25hcHNo b3RfaW5hY3RpdmUoYXApDQogCQltdXRleF9leGl0KCZzZHAtPnNkX2xvY2sp Ow0KIAlWTl9SRUxFKGR2cCk7DQogDQotZW5kOg0KIAkvKg0KIAkgKiBEaXNw b3NlIG9mIHRoZSB2bm9kZSBmb3IgdGhlIHNuYXBzaG90IG1vdW50IHBvaW50 Lg0KIAkgKiBUaGlzIGlzIHNhZmUgdG8gZG8gYmVjYXVzZSBvbmNlIHRoaXMg ZW50cnkgaGFzIGJlZW4gcmVtb3ZlZA0KQEAgLTE1OTUsMjAgKzE2MTcsMTcg QEAgemZzY3RsX3NuYXBzaG90X2xvb2t1cChhcCkNCiBzdGF0aWMgaW50DQog emZzY3RsX3NuYXBzaG90X3ZwdG9jbnAoc3RydWN0IHZvcF92cHRvY25wX2Fy Z3MgKmFwKQ0KIHsNCi0JemZzdmZzX3QgKnpmc3ZmcyA9IGFwLT5hX3ZwLT52 X3Zmc3AtPnZmc19kYXRhOw0KLQl2bm9kZV90ICpkdnAsICp2cDsNCisJdm5v ZGVfdCAqZHZwLCAqdnAgPSBhcC0+YV92cDsNCiAJemZzY3RsX3NuYXBkaXJf dCAqc2RwOw0KIAl6ZnNfc25hcGVudHJ5X3QgKnNlcDsNCi0JaW50IGVycm9y Ow0KKwlpbnQgZXJyb3IgPSAwOw0KIA0KLQlBU1NFUlQoemZzdmZzLT56X2N0 bGRpciAhPSBOVUxMKTsNCi0JZXJyb3IgPSB6ZnNjdGxfcm9vdF9sb29rdXAo emZzdmZzLT56X2N0bGRpciwgInNuYXBzaG90IiwgJmR2cCwNCi0JICAgIE5V TEwsIDAsIE5VTEwsIGtjcmVkLCBOVUxMLCBOVUxMLCBOVUxMKTsNCi0JaWYg KGVycm9yICE9IDApDQotCQlyZXR1cm4gKGVycm9yKTsNCi0Jc2RwID0gZHZw LT52X2RhdGE7DQorCXNkcCA9IHpmc2N0bF9zbmFwc2hvdF9nZXRfc25hcGRp cih2cCwgJmR2cCk7DQogDQogCW11dGV4X2VudGVyKCZzZHAtPnNkX2xvY2sp Ow0KKw0KKwl2bl9sb2NrKHZwLCBMS19TSEFSRUQgfCBMS19SRVRSWSk7DQor DQogCXNlcCA9IGF2bF9maXJzdCgmc2RwLT5zZF9zbmFwcyk7DQogCXdoaWxl IChzZXAgIT0gTlVMTCkgew0KIAkJdnAgPSBzZXAtPnNlX3Jvb3Q7DQotLSAN CjEuOC40LjINCg0K --1030603365-686922855-1387467971=:4344-- From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 15:56:02 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3473526E for ; Thu, 19 Dec 2013 15:56:02 +0000 (UTC) Received: from umail.aei.mpg.de (umail.aei.mpg.de [194.94.224.6]) by mx1.freebsd.org (Postfix) with ESMTP id D59E31E04 for ; Thu, 19 Dec 2013 15:56:01 +0000 (UTC) Received: from mailgate.aei.mpg.de (mailgate.aei.mpg.de [194.94.224.5]) by umail.aei.mpg.de (Postfix) with ESMTP id 528882003AC; Thu, 19 Dec 2013 16:56:00 +0100 (CET) Received: from mailgate.aei.mpg.de (localhost [127.0.0.1]) by localhost (Postfix) with SMTP id 1DD68405889; Thu, 19 Dec 2013 16:56:02 +0100 (CET) Received: from intranet.aei.uni-hannover.de (ahin1.aei.uni-hannover.de [130.75.117.40]) by mailgate.aei.mpg.de (Postfix) with ESMTP id F0F42406AF1; Thu, 19 Dec 2013 16:56:01 +0100 (CET) Received: from cascade.aei.uni-hannover.de ([10.117.15.111]) by intranet.aei.uni-hannover.de (Lotus Domino Release 8.5.3) with ESMTP id 2013121916554975-47727 ; Thu, 19 Dec 2013 16:55:49 +0100 Date: Thu, 19 Dec 2013 16:55:49 +0100 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: Gerrit =?ISO-8859-1?Q?K=FChn?= Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-Id: <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> In-Reply-To: <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> 
<333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> Organization: Max Planck Gesellschaft X-Mailer: Sylpheed 3.1.3 (GTK+ 2.24.19; amd64-portbld-freebsd8.2) Mime-Version: 1.0 X-MIMETrack: Itemize by SMTP Server on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/19/2013 16:55:49, Serialize by Router on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/19/2013 16:55:59, Serialize complete at 12/19/2013 16:55:59 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=ISO-8859-1 X-PMX-Version: 6.0.2.2308539, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2013.12.19.154814 X-PerlMx-Spam: Gauge=IIIIIIIII, Probability=9%, Report=' MULTIPLE_RCPTS 0.1, FROM_SAME_AS_TO 0.05, HTML_00_01 0.05, HTML_00_10 0.05, MIME_LOWER_CASE 0.05, BODYTEXTP_SIZE_3000_LESS 0, BODY_SIZE_1000_LESS 0, BODY_SIZE_2000_LESS 0, BODY_SIZE_5000_LESS 0, BODY_SIZE_500_599 0, BODY_SIZE_7000_LESS 0, __ANY_URI 0, __BOUNCE_CHALLENGE_SUBJ 0, __BOUNCE_NDR_SUBJ_EXEMPT 0, __CP_URI_IN_BODY 0, __CT 0, __CTE 0, __CT_TEXT_PLAIN 0, __FROM_SAME_AS_TO2 0, __HAS_FROM 0, __HAS_MSGID 0, __HAS_X_MAILER 0, __IN_REP_TO 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __MULTIPLE_RCPTS_CC_X2 0, __SANE_MSGID 0, __SUBJ_ALPHA_NEGATE 0, __TO_MALFORMED_2 0, __URI_NO_WWW 0, __URI_NS ' Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 15:56:02 -0000 On Thu, 19 Dec 2013 10:55:03 +0100 Gerrit K=FChn wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: Sorry for replaying to myself... GK> Is there already a solution for this available? I think I am seeing the GK> same issue here (also with 9.2): It looks like this is a fix for this: This has been mfc'ed to 10-stable from current in November. Can I use this for 9-stable, too? Or do I have to upgrade to 10 to get rid of the issue? 
cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 16:21:32 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E36FAC76 for ; Thu, 19 Dec 2013 16:21:32 +0000 (UTC) Received: from krichy.tvnetwork.hu (krichy.tvnetwork.hu [109.61.101.194]) by mx1.freebsd.org (Postfix) with ESMTP id 986521076 for ; Thu, 19 Dec 2013 16:21:32 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id AB76B7988; Thu, 19 Dec 2013 17:21:04 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id A73BD7987; Thu, 19 Dec 2013 17:21:04 +0100 (CET) Date: Thu, 19 Dec 2013 17:21:04 +0100 (CET) From: krichy@tvnetwork.hu To: =?ISO-8859-15?Q?Gerrit_K=FChn?= Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 In-Reply-To: <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> Message-ID: References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=iso-8859-1; format=flowed Content-Transfer-Encoding: 8BIT X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 16:21:33 -0000 Dear Gerrit, I have some similar issues with snapshot handling, and I was told to use that patch, but unfortunately that did not solve my issues. As I tracked down things, my issues have nothing to do with snapshot sending. Regards, Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. On Thu, 19 Dec 2013, Gerrit Khn wrote: > Date: Thu, 19 Dec 2013 16:55:49 +0100 > From: Gerrit Khn > To: Gerrit Khn > Cc: freebsd-fs@freebsd.org > Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 > > On Thu, 19 Dec 2013 10:55:03 +0100 Gerrit Khn > wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: > > > Sorry for replaying to myself... > > GK> Is there already a solution for this available? I think I am seeing the > GK> same issue here (also with 9.2): > > It looks like this is a fix for this: > > This has been mfc'ed to 10-stable from current in November. Can I use this > for 9-stable, too? Or do I have to upgrade to 10 to get rid of the issue? 
> > > cu > Gerrit > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 16:41:06 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 02B21FF9 for ; Thu, 19 Dec 2013 16:41:06 +0000 (UTC) Received: from umail.aei.mpg.de (umail.aei.mpg.de [194.94.224.6]) by mx1.freebsd.org (Postfix) with ESMTP id A3C8411D1 for ; Thu, 19 Dec 2013 16:41:05 +0000 (UTC) Received: from mailgate.aei.mpg.de (mailgate.aei.mpg.de [194.94.224.5]) by umail.aei.mpg.de (Postfix) with ESMTP id B05B8200902; Thu, 19 Dec 2013 17:41:04 +0100 (CET) Received: from mailgate.aei.mpg.de (localhost [127.0.0.1]) by localhost (Postfix) with SMTP id 8BCF3405889; Thu, 19 Dec 2013 17:41:06 +0100 (CET) Received: from intranet.aei.uni-hannover.de (ahin1.aei.uni-hannover.de [130.75.117.40]) by mailgate.aei.mpg.de (Postfix) with ESMTP id 5ED6C406AF1; Thu, 19 Dec 2013 17:41:06 +0100 (CET) Received: from cascade.aei.uni-hannover.de ([10.117.15.111]) by intranet.aei.uni-hannover.de (Lotus Domino Release 8.5.3) with ESMTP id 2013121917405365-47780 ; Thu, 19 Dec 2013 17:40:53 +0100 Date: Thu, 19 Dec 2013 17:40:54 +0100 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: krichy@tvnetwork.hu Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-Id: <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> In-Reply-To: References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> Organization: Max Planck Gesellschaft X-Mailer: Sylpheed 3.1.3 (GTK+ 2.24.19; amd64-portbld-freebsd8.2) Mime-Version: 1.0 X-MIMETrack: Itemize by SMTP Server on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/19/2013 17:40:53, Serialize by Router on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/19/2013 17:41:03, Serialize complete at 12/19/2013 17:41:03 Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=US-ASCII X-PMX-Version: 6.0.2.2308539, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2013.12.19.163315 X-PerlMx-Spam: Gauge=IIIIIIII, Probability=8%, Report=' HTML_00_01 0.05, HTML_00_10 0.05, MIME_LOWER_CASE 0.05, BODYTEXTP_SIZE_3000_LESS 0, BODY_SIZE_1000_LESS 0, BODY_SIZE_2000_LESS 0, BODY_SIZE_5000_LESS 0, BODY_SIZE_7000_LESS 0, BODY_SIZE_800_899 0, __ANY_URI 0, __BOUNCE_CHALLENGE_SUBJ 0, __BOUNCE_NDR_SUBJ_EXEMPT 0, __CT 0, __CTE 0, __CT_TEXT_PLAIN 0, __HAS_FROM 0, __HAS_MSGID 0, __HAS_X_MAILER 0, __IN_REP_TO 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __SANE_MSGID 0, __SUBJ_ALPHA_NEGATE 0, __TO_MALFORMED_2 0, __TO_NO_NAME 0, __URI_NO_PATH 0, __URI_NO_WWW 0, __URI_NS ' Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 16:41:06 -0000 On Thu, 19 Dec 2013 17:21:04 +0100 (CET) krichy@tvnetwork.hu wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: Dear Richard, KH> I have some 
similar issues with snapshot handling, and I was told to KH> use that patch, but unfortunately that did not solve my issues. As I KH> tracked down things, my issues have nothing to do with snapshot KH> sending. That sounds not too promising. I do not send snapshots, either, I just need to rotate (rename) them like to original poster. I rebooted the machine which made the issue go away for now. Snapshots and subsequent backups will be running this night, I'm curious how it looks tomorrow morning. Unusable snapshot-renaming would be very bad, my backups and several other things rely on that. Do you still see the issue on your system? Are you using 9.2 or 10? cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 19:08:44 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DBD2F3F6 for ; Thu, 19 Dec 2013 19:08:44 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id 946401EBA for ; Thu, 19 Dec 2013 19:08:44 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id B78C97A4A; Thu, 19 Dec 2013 20:08:22 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id B4D857A49; Thu, 19 Dec 2013 20:08:22 +0100 (CET) Date: Thu, 19 Dec 2013 20:08:22 +0100 (CET) From: krichy@tvnetwork.hu To: =?ISO-8859-15?Q?Gerrit_K=FChn?= Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 In-Reply-To: <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> Message-ID: References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 8BIT X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 19:08:44 -0000 Dear Gerrit, I am testing my issues on 10, but will apply them for 9.2, as my stable system is using that. So a simple renaming can cause your system to hang? Regards, Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. On Thu, 19 Dec 2013, Gerrit Khn wrote: > Date: Thu, 19 Dec 2013 17:40:54 +0100 > From: Gerrit Khn > To: krichy@tvnetwork.hu > Cc: freebsd-fs@freebsd.org > Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 > > On Thu, 19 Dec 2013 17:21:04 +0100 (CET) krichy@tvnetwork.hu wrote about > Re: ZFS snapshot renames failing after upgrade to 9.2: > > Dear Richard, > > KH> I have some similar issues with snapshot handling, and I was told to > KH> use that patch, but unfortunately that did not solve my issues. As I > KH> tracked down things, my issues have nothing to do with snapshot > KH> sending. > > That sounds not too promising. I do not send snapshots, either, I just > need to rotate (rename) them like to original poster. > I rebooted the machine which made the issue go away for now. 
Snapshots and > subsequent backups will be running this night, I'm curious how it looks > tomorrow morning. > Unusable snapshot-renaming would be very bad, my backups and several other > things rely on that. Do you still see the issue on your system? Are you > using 9.2 or 10? > > > cu > Gerrit > From owner-freebsd-fs@FreeBSD.ORG Thu Dec 19 23:49:06 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 57B9D69 for ; Thu, 19 Dec 2013 23:49:06 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id E2B711509 for ; Thu, 19 Dec 2013 23:49:05 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: X-IronPort-AV: E=Sophos;i="4.95,516,1384318800"; d="scan'208";a="81382145" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 19 Dec 2013 18:49:04 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 1FC6DB4026; Thu, 19 Dec 2013 18:49:04 -0500 (EST) Date: Thu, 19 Dec 2013 18:49:04 -0500 (EST) From: Rick Macklem To: Jason Keltz Message-ID: <1717165737.33441446.1387496944120.JavaMail.root@uoguelph.ca> In-Reply-To: <52A7E53D.8000002@cse.yorku.ca> Subject: Re: mount ZFS snapshot on Linux system MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Steve Dickson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Dec 2013 23:49:06 -0000 Jason Keltz wrote: > On 10/12/2013 7:21 PM, Rick Macklem wrote: > > Jason Keltz wrote: > >> I'm running FreeBSD 9.2 with various ZFS datasets. > >> I export a dataset to a Linux system (RHEL64), and mount it. It > >> works > >> fine... > >> When I try to access the ZFS snapshot directory on the Linux NFS > >> client, > >> things go weird. > >> For NFSv4, I found two problems w.r.t. handling ZFS snapshots. 1 - Since a NFSv4 Readdir skips "." and "..", the check for VFS_VGET() returning ENOTSUPP wasn't happening, so it wasn't switching to use VOP_LOOKUP(). { As I understand it, that means that VFS_VGET() gets bogus stuff if the auto pseudo-mount of the snapshot hasn't happened. } 2 - Since the pseudo-mount of a snapshot doesn't set v_mountedhere in the mounted on vnode, I needed to add a check for a different vp->v_mount to recognize the "mount point" crossing. I think this patch fixes both problems: http://people.freebsd.org/~rmacklem/nfsv4-zfs-snapshot.patch Thanks goes to Jason for helping with testing this. W.r.t. NFSv3, access to the snapshots is somewhat bogus, in that an NFSv3 is never supposed to cross mount point boundaries. However, I don't know how an auto pseudo-mount could be safely exported and mounted as a separate volume, so all I can think of doing is documenting "in man nfsd(8)?" that it doesn't quite work. The breakage will depend on how the NFSv3 client handles st_dev. 
{ The FreeBSD client sets st_dev to the client NFS mount's fsid and doesn't use the fsid returned by the server, so it doesn't change. As such, for FreeBSD, it will see one file system, but with duplicated filenos. For example, fts(3) might complain about a loop in the directory structure. } I hope to commit the above patch to head soon, once I get it reviewed and tested, rick > >> With NFSv4: > >> > >> [jas@archive /]# cd /mnt/.zfs/snapshot > >> [jas@archive snapshot]# ls > >> 20131203 20131205 20131206 20131207 20131208 20131209 > >> 20131210 > >> [jas@archive snapshot]# cd 20131210 > >> 20131210: Not a directory. > >> > >> huh? > >> > >> [jas@archive snapshot]# ls -al > >> total 77 > >> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> drwxr-xr-x 380 root root 380 Dec 2 15:56 20131203 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 > >> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 > >> [jas@archive snapshot]# stat * > >> [jas@archive snapshot]# ls -al > >> total 292 > >> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >> -rw-r--r-- 1 uax guest 865 Jul 31 2009 20131205 > >> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131206 > >> -rw-r--r-- 1 uax guest 771 Jul 31 2009 20131207 > >> -rw-r--r-- 1 uax guest 778 Jul 31 2009 20131208 > >> -rw-r--r-- 1 uax guest 5281 Jul 31 2009 20131209 > >> -rw------- 1 btx faculty 893 Jul 13 20:21 20131210 > >> > >> But it gets even more fun.. > >> > >> # ls -ali > >> total 205 > >> 2 dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >> 1 dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >> 863 -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 > >> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 > >> > >> This is not a user id mapping issue because all the files in /mnt > >> have > >> the proper owner/groups, and I can access them there fine. > >> > >> I also tried explicitly exporting .zfs/snapshot. The result isn't > >> any > >> different. > >> > >> If I use nfs v3 it "works", but I'm seeing a whole lot of errors > >> like > >> these in syslog: > >> > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131203: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131209: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131210: Invalid argument > >> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >> /local/backup/home9/.zfs/snapshot/20131207: Invalid argument > >> > >> It's not clear to me why this doesn't just "work". > >> > >> Can anyone provide any advice on debugging this? > >> > > As I think you already know, I know nothing about ZFS and never > > use it. > Yup! 
:) > > Having said that, I suspect that there are filenos (i-node #s) > > that are the same in the snapshot as in the parent file system > > tree. > > > > The basic assumptions are: > > - within a file system, all i-node# are unique (represent one file > > object only) and all file objects have the same fsid > > - when the fsid changes, that indicates a file system boundary and > > fileno (i-node#s) can be reused in the subtree with a different > > fsid > > > > For NFSv3, the server should export single volumes only (all > > objects > > have the same fsid and the filenos are unique). This is indicated > > to > > the VFS by the use of the NOCROSSMOUNT flag on VOP_LOOKUP() and > > friends. > > > > For NFSv4, the server does export multiple volumes and the boundary > > is indicated by a change in fsid value. > > > > I suspect ZFS snaphots don't obey the above in some way, but that > > is > > just a hunch. > > > > Now, how to narrow this down... > > - Do the above tests (both NFSv4 and NFSv3) and capture the > > packets, > > then look at them in wireshark. In particular, look at the > > fileid numbers > > and fsid values for the various directories under .zfs. > > I gave this a shot, but I haven't used wireshark to capture NFS > traffic > before, so if I need to provide additional details, let me know.. > > NFSv4: > > For /mnt/.zfs/snapshot/20131203: > fileid=4 > fsid4.major=1446349656 > fsid4.minor=222 > > For /mnt/.zfs/snapshot/20131205: > fileid=4 > fsid4.major=1845998066 > fsid4.minor=222 > > For /mnt/jas: > fileid=144 > fsid4.major=597946950 > fsid4.minor=222 > > For /mnt/jas1: > fileid=338 > fsid4.major=597946950 > fsid4.minor=222 > > So fsid is the same for all the different "data" directories, which > is > what I would expect given what you said. I guess each snapshot is > seen > as a unique filesystem... but then a repeating inode in different > filesystems shouldn't be a problem... > > NFSv3: > > For /mnt/.zfs/snapshot/20131203: > fileid=4 > fsid=0x0000000056358b58 > > For /mnt/.zfs/snapshot/20131205: > fileid=4 > fsid=0x000000006e07b1f2 > > For /mnt/jas > fileid=144 > fsid=0x0000000023a3f246 > > For /mnt/jas1: > fileid=338 > fsid=0x0000000023a3f246 > > Here, it seems it's the same, even though it's NFSv3... hmm. > > > > - Try mounting the individual snapshot directory, like > > .zfs/snapshot/20131209 and see if that works (for both NFSv3 > > and NFSv4). > > Hmm .. I tried this: > > /local/backup/home9/.zfs/snapshot/20131203 -ro > archive-mrpriv.cs.yorku.ca > V4: / > > ... but syslog reports: > > Dec 10 22:28:22 jungle mountd[85405]: can't export > /local/backup/home9/.zfs/snapshot/20131203 > > ... and of course I can't mount from either v3/v4. > > On the other hand, I kept it as: > > /local/backup/home9 -ro archive-mrpriv.cs.yorku.ca > V4:/ > > ... and was able to NFSv4 mount > /local/backup/home9/.zfs/snapshot/20131203, and this does indeed > work. > > > - Try doing the mounts with a FreeBSD client and see if you get the > > same > > behaviour? > I found this: > http://forums.freenas.org/threads/mounting-snapshot-directory-using-nfs-from-linux-broken.6060/ > .. implies it will work from FreeBSD/Nexenta, just not Linux. > Found this as well: > https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/lKyfYsjPMNM > > Jason. 
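A quick way to cross-check the fsid/fileid pairs discussed above without a packet capture is stat(1) on the client itself. This is only a sketch using the example paths from this thread; the format flags differ between the GNU stat on the RHEL6 client and FreeBSD's stat, and, as noted above, a FreeBSD client is expected to report an unchanged st_dev because it substitutes its own mount's fsid:

  # GNU coreutils stat on the Linux client: %d = st_dev (fsid), %i = st_ino (fileid)
  stat -c 'dev=%d ino=%i %n' /mnt/jas /mnt/.zfs/snapshot/20131203 /mnt/.zfs/snapshot/20131205

  # FreeBSD stat(1) equivalent on a FreeBSD client
  stat -f 'dev=%d ino=%i %N' /mnt/jas /mnt/.zfs/snapshot/20131203 /mnt/.zfs/snapshot/20131205

A change in dev= between the parent mount and a snapshot directory marks the same file system boundary that shows up as a new fsid in the capture.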
> > From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 09:05:36 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 20574DB2 for ; Fri, 20 Dec 2013 09:05:36 +0000 (UTC) Received: from umail.aei.mpg.de (umail.aei.mpg.de [194.94.224.6]) by mx1.freebsd.org (Postfix) with ESMTP id A2D8D1E73 for ; Fri, 20 Dec 2013 09:05:35 +0000 (UTC) Received: from mailgate.aei.mpg.de (mailgate.aei.mpg.de [194.94.224.5]) by umail.aei.mpg.de (Postfix) with ESMTP id E6E51200C15; Fri, 20 Dec 2013 10:05:33 +0100 (CET) Received: from mailgate.aei.mpg.de (localhost [127.0.0.1]) by localhost (Postfix) with SMTP id B96FC405889; Fri, 20 Dec 2013 10:05:36 +0100 (CET) Received: from intranet.aei.uni-hannover.de (ahin1.aei.uni-hannover.de [130.75.117.40]) by mailgate.aei.mpg.de (Postfix) with ESMTP id 7412B406AF1; Fri, 20 Dec 2013 10:05:36 +0100 (CET) Received: from cascade.aei.uni-hannover.de ([10.117.15.111]) by intranet.aei.uni-hannover.de (Lotus Domino Release 8.5.3) with ESMTP id 2013122010052276-48615 ; Fri, 20 Dec 2013 10:05:22 +0100 Date: Fri, 20 Dec 2013 10:05:22 +0100 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: krichy@tvnetwork.hu Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-Id: <20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> In-Reply-To: References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> Organization: Max Planck Gesellschaft X-Mailer: Sylpheed 3.1.3 (GTK+ 2.24.19; amd64-portbld-freebsd8.2) Mime-Version: 1.0 X-MIMETrack: Itemize by SMTP Server on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 10:05:22, Serialize by Router on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 10:05:32, Serialize complete at 12/20/2013 10:05:32 Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=US-ASCII X-PMX-Version: 6.0.2.2308539, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2013.12.20.90016 X-PerlMx-Spam: Gauge=IIIIIIII, Probability=8%, Report=' HTML_00_01 0.05, HTML_00_10 0.05, MIME_LOWER_CASE 0.05, BODYTEXTP_SIZE_3000_LESS 0, BODY_SIZE_2000_2999 0, BODY_SIZE_5000_LESS 0, BODY_SIZE_7000_LESS 0, __ANY_URI 0, __BOUNCE_CHALLENGE_SUBJ 0, __BOUNCE_NDR_SUBJ_EXEMPT 0, __CT 0, __CTE 0, __CT_TEXT_PLAIN 0, __HAS_FROM 0, __HAS_MSGID 0, __HAS_X_MAILER 0, __IN_REP_TO 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __RUS_OBFU_PHONE 0, __SANE_MSGID 0, __SUBJ_ALPHA_NEGATE 0, __TO_MALFORMED_2 0, __TO_NO_NAME 0, __URI_NO_PATH 0, __URI_NO_WWW 0, __URI_NS ' Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Dec 2013 09:05:36 -0000 On Thu, 19 Dec 2013 20:08:22 +0100 (CET) krichy@tvnetwork.hu wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: KH> So a simple renaming can cause your system to hang? No, it does not hang completely. Just the snapshots become unusable. 
This night, it happened again:

---
root@shapeshifter:~ # ll /tank/git/.zfs/snapshot/
ls: daily.6: Device busy
total 33
drwxr-xr-x  12 211   211    25 Dec 19 09:18 daily.0/
drwxr-xr-x  12 211   211    25 Dec 19 00:00 daily.1/
drwxr-xr-x  12 211   211    24 Dec 18 00:00 daily.2/
drwxr-xr-x  12 211   211    24 Dec 17 00:00 daily.3/
drwxr-xr-x  12 211   211    24 Dec 16 00:00 daily.4/
drwxr-xr-x  12 211   211    24 Dec 14 00:00 daily.5/
drwxr-xr-x  12 211   211    24 Dec 15 00:00 weekly.0/
drwxr-xr-x  12 211   211    24 Dec  8 00:00 weekly.1/
drwxr-xr-x  12 211   211    24 Dec  1 00:00 weekly.2/
drwxr-xr-x  12 211   211    24 Nov 17 00:00 weekly.3/
drwxr-xr-x  12 211   211    24 Nov 10 00:00 weekly.4/
drwxr-xr-x   2 root  wheel   3 Oct 20 00:00 weekly.5/
drwxr-xr-x   2 root  wheel   3 Oct  6 00:00 weekly.6/
---

root@shapeshifter:~ # zfs list -r -t snapshot -o name,creation,used,referenced tank/git
NAME               CREATION               USED   REFER
tank/git@weekly.6  Sun Oct  6  0:00 2013  42.6K  62.8K
tank/git@weekly.5  Sun Oct 20  0:00 2013  42.6K  62.8K
tank/git@weekly.4  Sun Nov 10  0:00 2013  29.5M  146G
tank/git@weekly.3  Sun Nov 17  0:00 2013  27.1M  146G
tank/git@weekly.2  Sun Dec  1  0:00 2013  26.3M  146G
tank/git@weekly.1  Sun Dec  8  0:00 2013  27.3M  146G
tank/git@daily.6   Sat Dec 14  0:00 2013  26.5M  147G
tank/git@weekly.0  Sun Dec 15  0:00 2013  25.2M  147G
tank/git@daily.5   Mon Dec 16  0:00 2013  24.7M  147G
tank/git@daily.4   Tue Dec 17  0:00 2013  24.9M  147G
tank/git@daily.3   Wed Dec 18  0:00 2013  25.7M  147G
tank/git@daily.2   Thu Dec 19  0:00 2013  25.8M  147G
tank/git@daily.1   Thu Dec 19  9:19 2013  25.0M  147G
tank/git@daily.0   Fri Dec 20  0:00 2013  26.8M  147G
---

As you can see, the snapshot rotating got stuck somewhere. What is displayed under .zfs/snapshot does not reflect what zfs is really seeing: daily.6 is inaccessible, and the rotation that happened so far is not reflected under .zfs/snapshot, either.
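For reference, this kind of rotation boils down to a chain of zfs rename calls; a minimal sketch under the naming scheme shown above (dataset and retention are taken from the listing, the commands of the actual rotation tool may differ):

  zfs destroy tank/git@daily.6
  for i in 5 4 3 2 1 0; do
          zfs rename tank/git@daily.$i tank/git@daily.$((i + 1))
  done
  zfs snapshot tank/git@daily.0

If one of the snapshots stays busy, as daily.6 does above, the chain can stop part way, and the zfs list view and the .zfs/snapshot directory drift apart, which is what the two listings show.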
cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 09:14:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8F1149C for ; Fri, 20 Dec 2013 09:14:07 +0000 (UTC) Received: from umail.aei.mpg.de (umail.aei.mpg.de [194.94.224.6]) by mx1.freebsd.org (Postfix) with ESMTP id 1F1EE1F40 for ; Fri, 20 Dec 2013 09:14:05 +0000 (UTC) Received: from mailgate.aei.mpg.de (mailgate.aei.mpg.de [194.94.224.5]) by umail.aei.mpg.de (Postfix) with ESMTP id 2D6E0200E23; Fri, 20 Dec 2013 10:14:05 +0100 (CET) Received: from mailgate.aei.mpg.de (localhost [127.0.0.1]) by localhost (Postfix) with SMTP id EEEC4405889; Fri, 20 Dec 2013 10:14:07 +0100 (CET) Received: from intranet.aei.uni-hannover.de (ahin1.aei.uni-hannover.de [130.75.117.40]) by mailgate.aei.mpg.de (Postfix) with ESMTP id C7AA7406AF1; Fri, 20 Dec 2013 10:14:07 +0100 (CET) Received: from cascade.aei.uni-hannover.de ([10.117.15.111]) by intranet.aei.uni-hannover.de (Lotus Domino Release 8.5.3) with ESMTP id 2013122010135471-48626 ; Fri, 20 Dec 2013 10:13:54 +0100 Date: Fri, 20 Dec 2013 10:13:54 +0100 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: krichy@tvnetwork.hu Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-Id: <20131220101354.5ed7b282.gerrit.kuehn@aei.mpg.de> In-Reply-To: <24389_1387530351_52B4086F_24389_271_1_20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> <24389_1387530351_52B4086F_24389_271_1_20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> Organization: Max Planck Gesellschaft X-Mailer: Sylpheed 3.1.3 (GTK+ 2.24.19; amd64-portbld-freebsd8.2) Mime-Version: 1.0 X-MIMETrack: Itemize by SMTP Server on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 10:13:54, Serialize by Router on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 10:14:04, Serialize complete at 12/20/2013 10:14:04 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=ISO-8859-1 X-PMX-Version: 6.0.2.2308539, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2013.12.20.90614 X-PerlMx-Spam: Gauge=IIIIIIII, Probability=8%, Report=' HTML_00_01 0.05, HTML_00_10 0.05, MIME_LOWER_CASE 0.05, BODYTEXTP_SIZE_3000_LESS 0, BODY_SIZE_1600_1699 0, BODY_SIZE_2000_LESS 0, BODY_SIZE_5000_LESS 0, BODY_SIZE_7000_LESS 0, __ANY_URI 0, __BOUNCE_CHALLENGE_SUBJ 0, __BOUNCE_NDR_SUBJ_EXEMPT 0, __CT 0, __CTE 0, __CT_TEXT_PLAIN 0, __HAS_FROM 0, __HAS_MSGID 0, __HAS_X_MAILER 0, __IN_REP_TO 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __SANE_MSGID 0, __SUBJ_ALPHA_NEGATE 0, __TO_MALFORMED_2 0, __TO_NO_NAME 0, __URI_NO_PATH 0, __URI_NO_WWW 0, __URI_NS ' Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Dec 2013 09:14:07 -0000 On Fri, 20 Dec 2013 10:05:22 +0100 Gerrit K=FChn wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: GK> As you can see, the snapshot rotating got stuck somewhere. 
What is GK> displayed under .zfs/snapshot does not reflect what zfs is really GK> seeing: daily.6 is inaccessible, and the rotation that happened so far GK> is not reflected under .zfs/snapshot, either. And just as a sidenote: Reboot is not necessary to fix this, exporting and importing the zpool does that, too:

---
root@shapeshifter:~ # ll /tank/git/.zfs/snapshot/
total 35
drwxr-xr-x  12 211   211    25 Dec 20 00:00 daily.0/
drwxr-xr-x  12 211   211    25 Dec 19 09:18 daily.1/
drwxr-xr-x  12 211   211    25 Dec 19 00:00 daily.2/
drwxr-xr-x  12 211   211    24 Dec 18 00:00 daily.3/
drwxr-xr-x  12 211   211    24 Dec 17 00:00 daily.4/
drwxr-xr-x  12 211   211    24 Dec 16 00:00 daily.5/
drwxr-xr-x  12 211   211    24 Dec 14 00:00 daily.6/
drwxr-xr-x  12 211   211    24 Dec 15 00:00 weekly.0/
drwxr-xr-x  12 211   211    24 Dec  8 00:00 weekly.1/
drwxr-xr-x  12 211   211    24 Dec  1 00:00 weekly.2/
drwxr-xr-x  12 211   211    24 Nov 17 00:00 weekly.3/
drwxr-xr-x  12 211   211    24 Nov 10 00:00 weekly.4/
drwxr-xr-x   2 root  wheel   3 Oct 20 00:00 weekly.5/
drwxr-xr-x   2 root  wheel   3 Oct  6 00:00 weekly.6/
---

This has worked flawlessly for me for years. I would really appreciate any help in fixing this. Trouble started with 9.2 as far as I can tell (and the original poster said the same). I have systems with 9.1 that do not show this behaviour. cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 09:19:38 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1E0B82D3 for ; Fri, 20 Dec 2013 09:19:38 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id C75951FBF for ; Fri, 20 Dec 2013 09:19:37 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id E2FF27C78; Fri, 20 Dec 2013 10:19:15 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id DFE1C7C74; Fri, 20 Dec 2013 10:19:15 +0100 (CET) Date: Fri, 20 Dec 2013 10:19:15 +0100 (CET) From: krichy@tvnetwork.hu To: =?ISO-8859-15?Q?Gerrit_K=FChn?= Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 In-Reply-To: <20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> Message-ID: References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> <20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="1030603365-785960615-1387531155=:12244" Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Dec 2013 09:19:38 -0000 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. --1030603365-785960615-1387531155=:12244 Content-Type: TEXT/PLAIN; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 8BIT Dear Gerrit, It is not a solution, but I use different snapshot handling mechanisms.
I wrote a simple script which handles that, and the snapshots are named by its creation timestamp. I think it is more usable to see that when that snapshot was exactly taken, and the script thus only does snapshot creation, and deletion, no renames. That scripts only limitation is that it is planned to run hourly, creating hourly snapshots, and when run again, it queries the existing one's list, and decides which to keep or remove. Thus you have to run it hourly in cron like: # crontab -l 0 * * * * /usr/local/sbin/zfs-snapshot Regards, Kojedzinszky Richard Euronet Magyarorszag Informatikai Zrt. On Fri, 20 Dec 2013, Gerrit Khn wrote: > Date: Fri, 20 Dec 2013 10:05:22 +0100 > From: Gerrit Khn > To: krichy@tvnetwork.hu > Cc: freebsd-fs@freebsd.org > Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 > > On Thu, 19 Dec 2013 20:08:22 +0100 (CET) krichy@tvnetwork.hu wrote about > Re: ZFS snapshot renames failing after upgrade to 9.2: > > KH> So a simple renaming can cause your system to hang? > > No, it does not hang completely. > Just the snapshots become unusable. This night, it happened again: > > --- > root@shapeshifter:~ # ll /tank/git/.zfs/snapshot/ > ls: daily.6: Device busy > total 33 > drwxr-xr-x 12 211 211 25 Dec 19 09:18 daily.0/ > drwxr-xr-x 12 211 211 25 Dec 19 00:00 daily.1/ > drwxr-xr-x 12 211 211 24 Dec 18 00:00 daily.2/ > drwxr-xr-x 12 211 211 24 Dec 17 00:00 daily.3/ > drwxr-xr-x 12 211 211 24 Dec 16 00:00 daily.4/ > drwxr-xr-x 12 211 211 24 Dec 14 00:00 daily.5/ > drwxr-xr-x 12 211 211 24 Dec 15 00:00 weekly.0/ > drwxr-xr-x 12 211 211 24 Dec 8 00:00 weekly.1/ > drwxr-xr-x 12 211 211 24 Dec 1 00:00 weekly.2/ > drwxr-xr-x 12 211 211 24 Nov 17 00:00 weekly.3/ > drwxr-xr-x 12 211 211 24 Nov 10 00:00 weekly.4/ > drwxr-xr-x 2 root wheel 3 Oct 20 00:00 weekly.5/ > drwxr-xr-x 2 root wheel 3 Oct 6 00:00 weekly.6/ > --- > > root@shapeshifter:~ # zfs list -r -t snapshot -o > name,creation,used,referenced tank/git NAME > CREATION USED REFER tank/git@weekly.6 Sun Oct 6 0:00 > 2013 42.6K 62.8K tank/git@weekly.5 Sun Oct 20 0:00 2013 42.6K 62.8K > tank/git@weekly.4 Sun Nov 10 0:00 2013 29.5M 146G > tank/git@weekly.3 Sun Nov 17 0:00 2013 27.1M 146G > tank/git@weekly.2 Sun Dec 1 0:00 2013 26.3M 146G > tank/git@weekly.1 Sun Dec 8 0:00 2013 27.3M 146G > tank/git@daily.6 Sat Dec 14 0:00 2013 26.5M 147G > tank/git@weekly.0 Sun Dec 15 0:00 2013 25.2M 147G > tank/git@daily.5 Mon Dec 16 0:00 2013 24.7M 147G > tank/git@daily.4 Tue Dec 17 0:00 2013 24.9M 147G > tank/git@daily.3 Wed Dec 18 0:00 2013 25.7M 147G > tank/git@daily.2 Thu Dec 19 0:00 2013 25.8M 147G > tank/git@daily.1 Thu Dec 19 9:19 2013 25.0M 147G > tank/git@daily.0 Fri Dec 20 0:00 2013 26.8M 147G > --- > > > As you can see, the snapshot rotating got stuck somewhere. What is > displayed under .zfs/snapshot does not reflect what zfs is really seeing: > daily.6 is inaccessible, and the rotation that happened so far is not > reflected under .zfs/snapshot, either. 
> > > cu > Gerrit > --1030603365-785960615-1387531155=:12244 Content-Type: TEXT/PLAIN; charset=US-ASCII; name=zfs-snapshot Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: attachment; filename=zfs-snapshot IyEvYmluL3NoDQoNCnVuc2V0IExBTkcNCmV4cG9ydCBQQVRIPS9iaW46L3Ni aW46L3Vzci9iaW46L3Vzci9zYmluDQoNCiMgY29uZmlndXJhdGlvbg0KDQpk YXRhc2V0cz0icm9vdC9yb290IHBvb2wvaG9tZSBwb29sL3VzciINCg0KaG91 cmx5X2tlZXA9MTY4DQpkYWlseV9rZWVwPTkwDQp3ZWVrbHlfa2VlcD0xMDYN Cg0KIyBvdmVycmlkZSBkYXRhc2V0LXNwZWNpZmljIGtlZXAgdGltZXMNCnJv b3Rfcm9vdF9ob3VybHlfa2VlcD00OA0Kcm9vdF9yb290X2RhaWx5X2tlZXA9 MzANCnJvb3Rfcm9vdF93ZWVrbHlfa2VlcD04DQoNCiMgY29uZmlndXJhdGlv biBlbmRzDQoNCnN0YW1wPSQoZGF0ZSArJVklbSVkJUgwMCkNCg0KZWUgKCkN CnsNCglsb2dnZXIgLXQgInpmcy1zbmFwc2hvdCIgImV4ZWN1dGluZyAkQCIN CglldmFsICIkQCINCn0NCg0KZ2V0X2tlZXAgKCkNCnsNCglsb2NhbCBkc2V0 PSIkMSINCglsb2NhbCBwZXJpb2Q9IiQyIg0KCWxvY2FsIHINCg0KCWV2YWwg InI9XCQkKGVjaG8gIiRkc2V0IiB8IHNlZCAtZSAicyNbLy1dI18jZyIpXyR7 cGVyaW9kfV9rZWVwIg0KCWlmIFsgLXogIiRyIiBdOyB0aGVuDQoJCWV2YWwg InI9XCQke3BlcmlvZH1fa2VlcCINCglmaQ0KDQoJZWNobyAiJHIiDQp9DQoN CmZvciBzZXQgaW4gJGRhdGFzZXRzIDsgZG8NCglzbj0iJHNldEBhdXRvLSRz dGFtcCINCgllZSB6ZnMgc25hcHNob3QgIiRzbiINCmRvbmUNCg0KcHJldj0N CnpmcyBsaXN0IC10IHNuYXBzaG90IC1IIC1vIG5hbWUgLVMgbmFtZSB8IGdy ZXAgLUUgIkBhdXRvLVswLTldezEyfSQiIHwgd2hpbGUgcmVhZCBuYW1lIDsg ZG8NCglkc2V0PSR7bmFtZSVAKn0NCglzdGFtcD0ke25hbWUjKmF1dG8tfQ0K DQoJaWYgWyAiJGRzZXQiICE9ICIkcHJldiIgXSA7IHRoZW4NCgkJaD0kKGdl dF9rZWVwICIkZHNldCIgaG91cmx5KQ0KCQlkPSQoZ2V0X2tlZXAgIiRkc2V0 IiBkYWlseSkNCgkJdz0kKGdldF9rZWVwICIkZHNldCIgd2Vla2x5KQ0KCQlw cmV2PSIkZHNldCINCglmaQ0KDQoJa2VlcD0wDQoNCglpZiBbICQoZGF0ZSAt aiAkc3RhbXAgKyV3JUglTSkgPSAwMDAwMCBdIDsgdGhlbg0KCQlpZiBbICR3 IC1ndCAwIF0gOyB0aGVuDQoJCQl3PSQoKHcgLSAxKSkNCgkJCWtlZXA9MQ0K CQlmaQ0KCWZpDQoNCglpZiBbICQoZGF0ZSAtaiAkc3RhbXAgKyVIJU0pID0g MDAwMCBdIDsgdGhlbg0KCQlpZiBbICRkIC1ndCAwIF0gOyB0aGVuDQoJCQlk PSQoKGQgLSAxKSkNCgkJCWtlZXA9MQ0KCQlmaQ0KCWZpDQoNCglpZiBbICRo IC1ndCAwIF0gOyB0aGVuDQoJCWg9JCgoaCAtIDEpKQ0KCQlrZWVwPTENCglm aQ0KDQoJaWYgWyAka2VlcCAtZXEgMCBdOyB0aGVuDQoJCWVlIHpmcyBkZXN0 cm95ICIkbmFtZSINCglmaQ0KZG9uZQ0K --1030603365-785960615-1387531155=:12244-- From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 09:30:46 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2CE97602 for ; Fri, 20 Dec 2013 09:30:46 +0000 (UTC) Received: from umail.aei.mpg.de (umail.aei.mpg.de [194.94.224.6]) by mx1.freebsd.org (Postfix) with ESMTP id CAC3810DA for ; Fri, 20 Dec 2013 09:30:45 +0000 (UTC) Received: from mailgate.aei.mpg.de (mailgate.aei.mpg.de [194.94.224.5]) by umail.aei.mpg.de (Postfix) with ESMTP id 11788200E59; Fri, 20 Dec 2013 10:30:45 +0100 (CET) Received: from mailgate.aei.mpg.de (localhost [127.0.0.1]) by localhost (Postfix) with SMTP id DB972406AF1; Fri, 20 Dec 2013 10:30:47 +0100 (CET) Received: from intranet.aei.uni-hannover.de (ahin1.aei.uni-hannover.de [130.75.117.40]) by mailgate.aei.mpg.de (Postfix) with ESMTP id 6A189405889; Fri, 20 Dec 2013 10:30:46 +0100 (CET) Received: from cascade.aei.uni-hannover.de ([10.117.15.111]) by intranet.aei.uni-hannover.de (Lotus Domino Release 8.5.3) with ESMTP id 2013122010304141-48643 ; Fri, 20 Dec 2013 10:30:41 +0100 Date: Fri, 20 Dec 2013 10:30:41 +0100 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: krichy@tvnetwork.hu Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-Id: 
<20131220103041.0ab3d02a.gerrit.kuehn@aei.mpg.de> In-Reply-To: References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> <20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> Organization: Max Planck Gesellschaft X-Mailer: Sylpheed 3.1.3 (GTK+ 2.24.19; amd64-portbld-freebsd8.2) Mime-Version: 1.0 X-MIMETrack: Itemize by SMTP Server on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 10:30:41, Serialize by Router on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 10:30:43, Serialize complete at 12/20/2013 10:30:43 Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=US-ASCII X-PMX-Version: 6.0.2.2308539, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2013.12.20.91514 X-PerlMx-Spam: Gauge=IIIIIIII, Probability=8%, Report=' HTML_00_01 0.05, HTML_00_10 0.05, MIME_LOWER_CASE 0.05, BODYTEXTP_SIZE_3000_LESS 0, BODY_SIZE_1100_1199 0, BODY_SIZE_2000_LESS 0, BODY_SIZE_5000_LESS 0, BODY_SIZE_7000_LESS 0, __ANY_URI 0, __BOUNCE_CHALLENGE_SUBJ 0, __BOUNCE_NDR_SUBJ_EXEMPT 0, __CT 0, __CTE 0, __CT_TEXT_PLAIN 0, __HAS_FROM 0, __HAS_MSGID 0, __HAS_X_MAILER 0, __IN_REP_TO 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __SANE_MSGID 0, __SUBJ_ALPHA_NEGATE 0, __TO_MALFORMED_2 0, __TO_NO_NAME 0, __URI_NO_PATH 0, __URI_NO_WWW 0, __URI_NS ' Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Dec 2013 09:30:46 -0000 On Fri, 20 Dec 2013 10:19:15 +0100 (CET) krichy@tvnetwork.hu wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: Dear Richard, KH> It is not a solution, but I use different snapshot handling KH> mechanisms. I wrote a simple script which handles that, and the KH> snapshots are named by its creation timestamp. I think it is more KH> usable to see that when that snapshot was exactly taken, and the KH> script thus only does snapshot creation, and deletion, no renames. I see. I am using sysutils/freebsd-snapshot which does renaming for rotation. Well, if there is not fix for this in sight, I will probably have to look into a different snapshotting strategy and adopt backup scripts and stuff accordingly. KH> That scripts only limitation is that it is planned to run hourly, KH> creating hourly snapshots, and when run again, it queries the existing KH> one's list, and decides which to keep or remove. Thus you have to run KH> it hourly in cron like: KH> # crontab -l KH> 0 * * * * /usr/local/sbin/zfs-snapshot I think freebsd-snapshot does the same, so this would not change anything for me. Thanks for the hint. 
cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 10:05:41 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6A830EB8 for ; Fri, 20 Dec 2013 10:05:41 +0000 (UTC) Received: from thyme.infocus-llc.com (server.infocus-llc.com [206.156.254.44]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 3977513A2 for ; Fri, 20 Dec 2013 10:05:40 +0000 (UTC) Received: from draco.over-yonder.net (c-75-65-60-66.hsd1.ms.comcast.net [75.65.60.66]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by thyme.infocus-llc.com (Postfix) with ESMTPSA id 0883437B4A0; Fri, 20 Dec 2013 03:58:57 -0600 (CST) Received: by draco.over-yonder.net (Postfix, from userid 100) id 3dm55D0dD7z2Jd; Fri, 20 Dec 2013 03:58:56 -0600 (CST) Date: Fri, 20 Dec 2013 03:58:56 -0600 From: "Matthew D. Fuller" To: Gerrit =?iso-8859-1?Q?K=FChn?= Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-ID: <20131220095856.GF86097@over-yonder.net> References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> <24389_1387530351_52B4086F_24389_271_1_20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> <20131220101354.5ed7b282.gerrit.kuehn@aei.mpg.de> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <20131220101354.5ed7b282.gerrit.kuehn@aei.mpg.de> X-Editor: vi X-OS: FreeBSD User-Agent: Mutt/1.5.22 (2013-10-16) X-Virus-Scanned: clamav-milter 0.98 at thyme.infocus-llc.com X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Dec 2013 10:05:41 -0000 On Fri, Dec 20, 2013 at 10:13:54AM +0100 I heard the voice of Gerrit Khn, and lo! it spake thus: > > This has worked flawlessly for me for years. I would really > appreciate any help in fixing this. Trouble started with 9.2 as far > as I can tell (and the original poster said the same). I have > systems with 9.1 that do not show this behaviour. As a data point, I'm renaming snapshots hourly on a system running 9.2-PRERELEASE #0 r254474: Sat Aug 17 22:49:00 CDT 2013 -- Matthew Fuller (MF4839) | fullermd@over-yonder.net Systems/Network Administrator | http://www.over-yonder.net/~fullermd/ On the Internet, nobody can hear you scream. 
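The zfs-snapshot script referred to earlier in this thread is only included as a base64 attachment. As a rough illustration of the scheme it implements (timestamp-named snapshots, creation and pruning only, no renames), here is a minimal sketch; the dataset name tank/test and the flat keep count of 168 are placeholders rather than values from the thread, and the attached script appears to additionally handle several datasets and separate hourly/daily/weekly keep counts:

---
#!/bin/sh
# Minimal sketch of an hourly, timestamp-named snapshot rotation.
# Placeholders: dataset name and keep count.
dataset="tank/test"
keep=168    # number of hourly snapshots to retain

stamp=$(date +%Y%m%d%H00)
zfs snapshot "${dataset}@auto-${stamp}"

# Timestamped names sort chronologically, so a descending sort by name
# lists the newest snapshots first; destroy everything beyond $keep.
zfs list -H -t snapshot -o name -S name -r "${dataset}" |
    grep -E "^${dataset}@auto-[0-9]{12}\$" |
    tail -n +"$((keep + 1))" |
    while read -r snap; do
        zfs destroy "${snap}"
    done
---

Run hourly from cron (0 * * * * /usr/local/sbin/zfs-snapshot), this keeps roughly one week of hourly snapshots without ever renaming one.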
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 10:57:54 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AA955A61 for ; Fri, 20 Dec 2013 10:57:54 +0000 (UTC) Received: from umail.aei.mpg.de (umail.aei.mpg.de [194.94.224.6]) by mx1.freebsd.org (Postfix) with ESMTP id 52F561781 for ; Fri, 20 Dec 2013 10:57:53 +0000 (UTC) Received: from mailgate.aei.mpg.de (mailgate.aei.mpg.de [194.94.224.5]) by umail.aei.mpg.de (Postfix) with ESMTP id 2088D200E7C; Fri, 20 Dec 2013 11:57:47 +0100 (CET) Received: from mailgate.aei.mpg.de (localhost [127.0.0.1]) by localhost (Postfix) with SMTP id 0EA5F405889; Fri, 20 Dec 2013 11:57:50 +0100 (CET) Received: from intranet.aei.uni-hannover.de (ahin1.aei.uni-hannover.de [130.75.117.40]) by mailgate.aei.mpg.de (Postfix) with ESMTP id E1617406AF1; Fri, 20 Dec 2013 11:57:49 +0100 (CET) Received: from cascade.aei.uni-hannover.de ([10.117.15.111]) by intranet.aei.uni-hannover.de (Lotus Domino Release 8.5.3) with ESMTP id 2013122011573629-48709 ; Fri, 20 Dec 2013 11:57:36 +0100 Date: Fri, 20 Dec 2013 11:57:36 +0100 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: "Matthew D. Fuller" Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 Message-Id: <20131220115736.d7f727d1.gerrit.kuehn@aei.mpg.de> In-Reply-To: <20131220095856.GF86097@over-yonder.net> References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> <24389_1387530351_52B4086F_24389_271_1_20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de> <20131220101354.5ed7b282.gerrit.kuehn@aei.mpg.de> <20131220095856.GF86097@over-yonder.net> Organization: Max Planck Gesellschaft X-Mailer: Sylpheed 3.1.3 (GTK+ 2.24.19; amd64-portbld-freebsd8.2) Mime-Version: 1.0 X-MIMETrack: Itemize by SMTP Server on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 11:57:36, Serialize by Router on intranet/aei-hannover(Release 8.5.3|September 15, 2011) at 12/20/2013 11:57:46, Serialize complete at 12/20/2013 11:57:46 Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset=US-ASCII X-PMX-Version: 6.0.2.2308539, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2013.12.20.104815 X-PerlMx-Spam: Gauge=IIIIIIIII, Probability=9%, Report=' MULTIPLE_RCPTS 0.1, HTML_00_01 0.05, HTML_00_10 0.05, MIME_LOWER_CASE 0.05, BODYTEXTP_SIZE_3000_LESS 0, BODY_SIZE_1000_LESS 0, BODY_SIZE_2000_LESS 0, BODY_SIZE_5000_LESS 0, BODY_SIZE_600_699 0, BODY_SIZE_7000_LESS 0, __ANY_URI 0, __BOUNCE_CHALLENGE_SUBJ 0, __BOUNCE_NDR_SUBJ_EXEMPT 0, __CT 0, __CTE 0, __CT_TEXT_PLAIN 0, __FORWARDED_MSG 0, __HAS_FROM 0, __HAS_MSGID 0, __HAS_X_MAILER 0, __IN_REP_TO 0, __MIME_TEXT_ONLY 0, __MIME_VERSION 0, __MULTIPLE_RCPTS_CC_X2 0, __SANE_MSGID 0, __SUBJ_ALPHA_NEGATE 0, __TO_MALFORMED_2 0, __URI_NO_PATH 0, __URI_NO_WWW 0, __URI_NS ' Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Dec 2013 10:57:54 -0000 On Fri, 20 Dec 2013 03:58:56 -0600 "Matthew D. 
Fuller" wrote about Re: ZFS snapshot renames failing after upgrade to 9.2: > > This has worked flawlessly for me for years. I would really > > appreciate any help in fixing this. Trouble started with 9.2 as far > > as I can tell (and the original poster said the same). I have > > systems with 9.1 that do not show this behaviour. MDF> As a data point, I'm renaming snapshots hourly on a system running MDF> 9.2-PRERELEASE #0 r254474: Sat Aug 17 22:49:00 CDT 2013 And you are not seeing this issue, I guess? I have 9.2-STABLE FreeBSD 9.2-STABLE #4: Thu Oct 24 15:19:54 UTC 2013 cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 13:43:26 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 92DD4EFB for ; Fri, 20 Dec 2013 13:43:26 +0000 (UTC) Received: from mail.dawidek.net (garage.dawidek.net [91.121.88.72]) by mx1.freebsd.org (Postfix) with ESMTP id B054015EF for ; Fri, 20 Dec 2013 13:43:25 +0000 (UTC) Received: from localhost (58.wheelsystems.com [83.12.187.58]) by mail.dawidek.net (Postfix) with ESMTPSA id 0BA8969D; Fri, 20 Dec 2013 14:36:38 +0100 (CET) Date: Fri, 20 Dec 2013 14:44:05 +0100 From: Pawel Jakub Dawidek To: krichy@tvnetwork.hu Subject: Re: kern/184677 / ZFS snapshot handling deadlocks Message-ID: <20131220134405.GE1658@garage.freebsd.pl> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="LKTjZJSUETSlgu2t" Content-Disposition: inline In-Reply-To: X-OS: FreeBSD 11.0-CURRENT amd64 User-Agent: Mutt/1.5.22 (2013-10-16) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Dec 2013 13:43:26 -0000 --LKTjZJSUETSlgu2t Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Dec 19, 2013 at 04:46:11PM +0100, krichy@tvnetwork.hu wrote: > Dear devs, >=20 > I am attaching a more clear patch/fix for my snapshot handling issues=20 > (0002), but I would be happy if some ZFS expert would comment it. I am=20 > trying to solve it at least for two weeks now, and an ACK or a NACK would= =20 > be nice from someone. Also a commit is reverted since it also caused=20 > deadlocks. I've read its comments, which also eliminates deadlocks, but I= =20 > did not find any reference how to produce that deadlock. In my view=20 > reverting that makes my issues disappear, but I dont know what new cases= =20 > will it rise. Richard, I won't be able to analyse it myself anytime soon, because of other obligations, but I forwarded you e-mail to the zfs-devel@ mailing list (it is closed, but gathers FreeBSD ZFS devs). Hopefully someone from there will be able to help you. > I've rewritten traverse() to make more like upstream, added two extra=20 > VN_HOLD()s to snapdir_lookup() when traverse returned the same vnode what= =20 > was passed to it (I dont know even that upon creating a snapshot vnode wh= y=20 > is that extra two holds needed for the vnode.) And unfortunately, due to= =20 > FreeBSD calls vop_inactive callbacks with vnodes locked, that could also= =20 > cause deadlocks, so zfsctl_snapshot_inactive() and=20 > zfsctl_snapshot_vptocnp() has been rewritten to work that around. 
>=20 > After this, one may also get a deadlock, when a simple access would call= =20 > into zfsctl_snapshot_lookup(). The documentation says, that those vnodes= =20 > should always be covered, but after some stress test, sometimes we hit=20 > that call, and that can cause again deadlocks. In our environment I've=20 > just uncommented that callback, which returns ENODIR on some calls, but a= t=20 > least not a deadlock. >=20 > The attached script can be used to reproduce my cases (would one confirm= =20 > that?), and after the patches applied, they disappear (confirm?). >=20 > Thanks, >=20 >=20 > Kojedzinszky Richard > Euronet Magyarorszag Informatikai Zrt. >=20 > On Tue, 17 Dec 2013, krichy@tvnetwork.hu wrote: >=20 > > Date: Tue, 17 Dec 2013 14:50:16 +0100 (CET) > > From: krichy@tvnetwork.hu > > To: pjd@freebsd.org > > Cc: freebsd-fs@freebsd.org > > Subject: Re: kern/184677 (fwd) > >=20 > > Dear devs, > > > > I will sum up my experience regarding the issue: > > > > The sympton is that a concurrent 'zfs send -R' and some activity on the= =20 > > snapshot dir (or in the snapshot) may cause a deadlock. > > > > After investigating the problem, I found that zfs send umounts the snap= shots,=20 > > and that causes the deadlock, so later I tested only with concurrent um= ount=20 > > and the "activity". More later I found that listing the snapshots in=20 > > .zfs/snapshot/ and unounting them can cause the found deadlock, so I us= ed=20 > > them for the tests. But for my surprise, instead of a deadlock, a recur= sive=20 > > lock panic has arised. > > > > The vnode for the ".zfs/snapshot/" directory contains zfs's zfsctl_snap= dir_t=20 > > structure (sdp). This contains a tree of mounted snapshots, and each en= try=20 > > (sep) contains the vnode of entry on which the snapshot is mounted on t= op=20 > > (se_root). The strange is that the se_root member does not hold a refer= ence=20 > > for the vnode, just a simple pointer to it. > > > > Upon entry lookup (zfsctl_snapdir_lookup()) the "snapshot" vnode is loc= ked,=20 > > the zfsctl_snapdir_t's tree is locked, and searched for the mount if it= =20 > > exists already. If it founds no entry, does the mount. In the case of a= n=20 > > entry was found, the se_root member contains the vnode which the snapsh= ot is=20 > > mounted on. Thus, a reference is taken for it, and the traverse() call = will=20 > > resolve to the real root vnode of the mounted snapshot, returning it as= =20 > > locked. (Examining the traverse() code I've found that it did not follo= w=20 > > FreeBSD's lock order recommendation described in sys/kern/vfs_subr.c.) > > > > On the other way, when an umount is issued, the se_root vnode looses it= s last=20 > > reference (as only the mountpoint holds one for it), it goes through th= e=20 > > vinactive() path, to zfsctl_snapshot_inactive(). In FreeBSD this is cal= led=20 > > with a locked vnode, so this is a deadlock race condition. While=20 > > zfsctl_snapdir_lookup() holds the mutex for the sdp tree, and traverse(= )=20 > > tries to acquire the se_root, zfsctl_snapshot_inactive() holds the lock= on=20 > > se_root while tries to access the sdp lock. > > > > The zfsctl_snapshot_inactive() has an if statement checking the v_useco= unt,=20 > > which is incremented in zfsctl_snapdir_lookup(), but in that context it= is=20 > > not covered by VI_LOCK. And it seems to me that FreeBSD's vinactive() p= ath=20 > > assumes that the vnode remains inactive (as opposed to illumos, at leas= t how=20 > > i read the code). 
So zfsctl_snapshot_inactive() must free resources whi= le in=20 > > a locked state. I was a bit confused, and probably that is why the prev= iously=20 > > posted patch is as is. > > > > Maybe if I had some clues on the directions of this problem, I could ha= ve=20 > > worked more for a nicer, shorter solution. > > > > Please someone comment on my post. > > > > Regards, > > > > > > > > Kojedzinszky Richard > > Euronet Magyarorszag Informatikai Zrt. > > > > On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: > > > >> Date: Mon, 16 Dec 2013 16:52:16 +0100 (CET) > >> From: krichy@tvnetwork.hu > >> To: pjd@freebsd.org > >> Cc: freebsd-fs@freebsd.org > >> Subject: Re: kern/184677 (fwd) > >>=20 > >> Dear PJD, > >>=20 > >> I am a happy FreeBSD user, I am sure you've read my previous posts=20 > >> regarding some issues in ZFS. Please give some advice for me, where to= look=20 > >> for solutions, or how could I help to resolve those issues. > >>=20 > >> Regards, > >> Kojedzinszky Richard > >> Euronet Magyarorszag Informatikai Zrt. > >>=20 > >> ---------- Forwarded message ---------- > >> Date: Mon, 16 Dec 2013 15:23:06 +0100 (CET) > >> From: krichy@tvnetwork.hu > >> To: freebsd-fs@freebsd.org > >> Subject: Re: kern/184677 > >>=20 > >>=20 > >> Seems that pjd did a change which eliminated the zfsdev_state_lock on = Fri=20 > >> Aug 12 07:04:16 2011 +0000, which might introduced a new deadlock=20 > >> situation. Any comments on this? > >>=20 > >>=20 > >> Kojedzinszky Richard > >> Euronet Magyarorszag Informatikai Zrt. > >>=20 > >> On Mon, 16 Dec 2013, krichy@tvnetwork.hu wrote: > >>=20 > >>> Date: Mon, 16 Dec 2013 11:08:11 +0100 (CET) > >>> From: krichy@tvnetwork.hu > >>> To: freebsd-fs@freebsd.org > >>> Subject: kern/184677 > >>>=20 > >>> Dear devs, > >>>=20 > >>> I've attached a patch, which makes the recursive lockmgr disappear, a= nd=20 > >>> makes the reported bug to disappear. I dont know if I followed any=20 > >>> guidelines well, or not, but at least it works for me. Please some=20 > >>> ZFS/FreeBSD fs expert review it, and fix it where it needed. > >>>=20 > >>> But unfortunately, my original problem is still not solved, maybe the= same=20 > >>> as Ryan's:=20 > >>> http://lists.freebsd.org/pipermail/freebsd-fs/2013-December/018707.ht= ml > >>>=20 > >>> Tracing the problem down is that zfsctl_snapdir_lookup() tries to acq= uire=20 > >>> spa_namespace_lock while when finishing a zfs send -R does a=20 > >>> zfsdev_close(), and that also holds the same mutex. And this causes a= =20 > >>> deadlock scenario. I looked at illumos's code, and for some reason th= ey=20 > >>> use another mutex on zfsdev_close(), which therefore may not deadlock= with=20 > >>> zfsctl_snapdir_lookup(). But I am still investigating the problem. > >>>=20 > >>> I would like to help making ZFS more stable on freebsd also with its = whole=20 > >>> functionality. I would be very thankful if some expert would give som= e=20 > >>> advice, how to solve these bugs. PJD, Steven, Xin? > >>>=20 > >>> Thanks in advance, > >>>=20 > >>>=20 > >>> Kojedzinszky Richard > >>> Euronet Magyarorszag Informatikai Zrt. 
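The reproduction scripts mentioned above were attachments and are not shown inline. A rough sketch of the kind of stress being described, one process repeatedly looking up a snapshot under .zfs/snapshot (which mounts it) while another repeatedly unmounts it, might look like the following; tank/test and the snapshot name are placeholders, the default mountpoint is assumed, and on an unpatched kernel this is expected to end in a deadlock or panic, so it should only be run on a disposable test machine:

---
#!/bin/sh
# Sketch of the concurrent snapshot lookup vs. unmount race described above.
# Placeholders: dataset tank/test, snapshot "snap", default mountpoint.
fs="tank/test"
snapdir="/${fs}/.zfs/snapshot/snap"

zfs snapshot "${fs}@snap" 2>/dev/null

# Looking the snapshot up via the control directory (re)mounts it ...
while :; do
        ls "${snapdir}" >/dev/null 2>&1
done &

# ... while this loop keeps unmounting it again, much as "zfs send" does.
while :; do
        umount "${snapdir}" 2>/dev/null
done &

wait
---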
> >> _______________________________________________ > >> freebsd-fs@freebsd.org mailing list > >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > >>=20 > > > From 39298da838d006ad225e41529d7b7f240fccfe73 Mon Sep 17 00:00:00 2001 > From: Richard Kojedzinszky > Date: Mon, 16 Dec 2013 15:39:11 +0100 > Subject: [PATCH 1/2] Revert "Eliminate the zfsdev_state_lock entirely and > replace it with the" >=20 > This reverts commit 1d8972b3f353f986eb5b85bc108b1c0d946d3218. >=20 > Conflicts: > sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c > --- > .../opensolaris/uts/common/fs/zfs/sys/zfs_ioctl.h | 1 + > .../opensolaris/uts/common/fs/zfs/vdev_geom.c | 14 ++- > .../opensolaris/uts/common/fs/zfs/zfs_ioctl.c | 16 +-- > .../contrib/opensolaris/uts/common/fs/zfs/zvol.c | 119 +++++++++------= ------ > 4 files changed, 70 insertions(+), 80 deletions(-) >=20 > diff --git a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ioctl= =2Eh b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ioctl.h > index af2def2..8272c4d 100644 > --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ioctl.h > +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ioctl.h > @@ -383,6 +383,7 @@ extern void *zfsdev_get_soft_state(minor_t minor, > extern minor_t zfsdev_minor_alloc(void); > =20 > extern void *zfsdev_state; > +extern kmutex_t zfsdev_state_lock; > =20 > #endif /* _KERNEL */ > =20 > diff --git a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c b= /sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c > index 15685a5..5c3e9f3 100644 > --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c > +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c > @@ -581,7 +581,7 @@ vdev_geom_open(vdev_t *vd, uint64_t *psize, uint64_t = *max_psize, > struct g_provider *pp; > struct g_consumer *cp; > size_t bufsize; > - int error; > + int error, lock; > =20 > /* > * We must have a pathname, and it must be absolute. 
> @@ -593,6 +593,12 @@ vdev_geom_open(vdev_t *vd, uint64_t *psize, uint64_t= *max_psize, > =20 > vd->vdev_tsd =3D NULL; > =20 > + if (mutex_owned(&spa_namespace_lock)) { > + mutex_exit(&spa_namespace_lock); > + lock =3D 1; > + } else { > + lock =3D 0; > + } > DROP_GIANT(); > g_topology_lock(); > error =3D 0; > @@ -624,7 +630,11 @@ vdev_geom_open(vdev_t *vd, uint64_t *psize, uint64_t= *max_psize, > !ISP2(cp->provider->sectorsize)) { > ZFS_LOG(1, "Provider %s has unsupported sectorsize.", > vd->vdev_path); > + > + g_topology_lock(); > vdev_geom_detach(cp, 0); > + g_topology_unlock(); > + > error =3D EINVAL; > cp =3D NULL; > } else if (cp->acw =3D=3D 0 && (spa_mode(vd->vdev_spa) & FWRITE) !=3D 0= ) { > @@ -647,6 +657,8 @@ vdev_geom_open(vdev_t *vd, uint64_t *psize, uint64_t = *max_psize, > } > g_topology_unlock(); > PICKUP_GIANT(); > + if (lock) > + mutex_enter(&spa_namespace_lock); > if (cp =3D=3D NULL) { > vd->vdev_stat.vs_aux =3D VDEV_AUX_OPEN_FAILED; > return (error); > diff --git a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c b= /sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c > index e9fba26..91becde 100644 > --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c > +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c > @@ -5635,7 +5635,7 @@ zfsdev_minor_alloc(void) > static minor_t last_minor; > minor_t m; > =20 > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > =20 > for (m =3D last_minor + 1; m !=3D last_minor; m++) { > if (m > ZFSDEV_MAX_MINOR) > @@ -5655,7 +5655,7 @@ zfs_ctldev_init(struct cdev *devp) > minor_t minor; > zfs_soft_state_t *zs; > =20 > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > =20 > minor =3D zfsdev_minor_alloc(); > if (minor =3D=3D 0) > @@ -5676,7 +5676,7 @@ zfs_ctldev_init(struct cdev *devp) > static void > zfs_ctldev_destroy(zfs_onexit_t *zo, minor_t minor) > { > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > =20 > zfs_onexit_destroy(zo); > ddi_soft_state_free(zfsdev_state, minor); > @@ -5706,9 +5706,9 @@ zfsdev_open(struct cdev *devp, int flag, int mode, = struct thread *td) > =20 > /* This is the control device. Allocate a new minor if requested. */ > if (flag & FEXCL) { > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > error =3D zfs_ctldev_init(devp); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > } > =20 > return (error); > @@ -5723,14 +5723,14 @@ zfsdev_close(void *data) > if (minor =3D=3D 0) > return; > =20 > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > zo =3D zfsdev_get_soft_state(minor, ZSST_CTLDEV); > if (zo =3D=3D NULL) { > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return; > } > zfs_ctldev_destroy(zo, minor); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > } > =20 > static int > diff --git a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c b/sys/= cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c > index 72d4502..aec5219 100644 > --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c > +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c > @@ -104,11 +104,12 @@ static char *zvol_tag =3D "zvol_tag"; > #define ZVOL_DUMPSIZE "dumpsize" > =20 > /* > - * The spa_namespace_lock protects the zfsdev_state structure from being > - * modified while it's being used, e.g. an open that comes in before a > - * create finishes. 
It also protects temporary opens of the dataset so = that, > + * This lock protects the zfsdev_state structure from being modified > + * while it's being used, e.g. an open that comes in before a create > + * finishes. It also protects temporary opens of the dataset so that, > * e.g., an open doesn't get a spurious EBUSY. > */ > +kmutex_t zfsdev_state_lock; > static uint32_t zvol_minors; > =20 > typedef struct zvol_extent { > @@ -249,7 +250,7 @@ zvol_minor_lookup(const char *name) > struct g_geom *gp; > zvol_state_t *zv =3D NULL; > =20 > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > =20 > g_topology_lock(); > LIST_FOREACH(gp, &zfs_zvol_class.geom, geom) { > @@ -465,11 +466,11 @@ zvol_name2minor(const char *name, minor_t *minor) > { > zvol_state_t *zv; > =20 > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > zv =3D zvol_minor_lookup(name); > if (minor && zv) > *minor =3D zv->zv_minor; > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (zv ? 0 : -1); > } > #endif /* sun */ > @@ -489,10 +490,10 @@ zvol_create_minor(const char *name) > =20 > ZFS_LOG(1, "Creating ZVOL %s...", name); > =20 > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > =20 > if (zvol_minor_lookup(name) !=3D NULL) { > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(EEXIST)); > } > =20 > @@ -500,20 +501,20 @@ zvol_create_minor(const char *name) > error =3D dmu_objset_own(name, DMU_OST_ZVOL, B_TRUE, FTAG, &os); > =20 > if (error) { > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (error); > } > =20 > #ifdef sun > if ((minor =3D zfsdev_minor_alloc()) =3D=3D 0) { > dmu_objset_disown(os, FTAG); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(ENXIO)); > } > =20 > if (ddi_soft_state_zalloc(zfsdev_state, minor) !=3D DDI_SUCCESS) { > dmu_objset_disown(os, FTAG); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(EAGAIN)); > } > (void) ddi_prop_update_string(minor, zfs_dip, ZVOL_PROP_NAME, > @@ -525,7 +526,7 @@ zvol_create_minor(const char *name) > minor, DDI_PSEUDO, 0) =3D=3D DDI_FAILURE) { > ddi_soft_state_free(zfsdev_state, minor); > dmu_objset_disown(os, FTAG); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(EAGAIN)); > } > =20 > @@ -536,7 +537,7 @@ zvol_create_minor(const char *name) > ddi_remove_minor_node(zfs_dip, chrbuf); > ddi_soft_state_free(zfsdev_state, minor); > dmu_objset_disown(os, FTAG); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(EAGAIN)); > } > =20 > @@ -587,7 +588,7 @@ zvol_create_minor(const char *name) > =20 > zvol_minors++; > =20 > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > =20 > zvol_geom_run(zv); > =20 > @@ -609,7 +610,7 @@ zvol_remove_zv(zvol_state_t *zv) > minor_t minor =3D zv->zv_minor; > #endif > =20 > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > if (zv->zv_total_opens !=3D 0) > return (SET_ERROR(EBUSY)); > =20 > @@ -635,15 +636,15 @@ zvol_remove_minor(const char *name) > zvol_state_t *zv; > int rc; > =20 > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > if ((zv =3D zvol_minor_lookup(name)) =3D=3D NULL) { > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(ENXIO)); > } > g_topology_lock(); > 
rc =3D zvol_remove_zv(zv); > g_topology_unlock(); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (rc); > } > =20 > @@ -755,7 +756,7 @@ zvol_update_volsize(objset_t *os, uint64_t volsize) > dmu_tx_t *tx; > int error; > =20 > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > =20 > tx =3D dmu_tx_create(os); > dmu_tx_hold_zap(tx, ZVOL_ZAP_OBJ, TRUE, NULL); > @@ -786,7 +787,7 @@ zvol_remove_minors(const char *name) > namelen =3D strlen(name); > =20 > DROP_GIANT(); > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > g_topology_lock(); > =20 > LIST_FOREACH_SAFE(gp, &zfs_zvol_class.geom, geom, gptmp) { > @@ -804,7 +805,7 @@ zvol_remove_minors(const char *name) > } > =20 > g_topology_unlock(); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > PICKUP_GIANT(); > } > =20 > @@ -818,10 +819,10 @@ zvol_set_volsize(const char *name, major_t maj, uin= t64_t volsize) > uint64_t old_volsize =3D 0ULL; > uint64_t readonly; > =20 > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > zv =3D zvol_minor_lookup(name); > if ((error =3D dmu_objset_hold(name, FTAG, &os)) !=3D 0) { > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (error); > } > =20 > @@ -888,7 +889,7 @@ zvol_set_volsize(const char *name, major_t maj, uint6= 4_t volsize) > out: > dmu_objset_rele(os, FTAG); > =20 > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > =20 > return (error); > } > @@ -899,36 +900,19 @@ zvol_open(struct g_provider *pp, int flag, int coun= t) > { > zvol_state_t *zv; > int err =3D 0; > - boolean_t locked =3D B_FALSE; > =20 > - /* > - * Protect against recursively entering spa_namespace_lock > - * when spa_open() is used for a pool on a (local) ZVOL(s). > - * This is needed since we replaced upstream zfsdev_state_lock > - * with spa_namespace_lock in the ZVOL code. > - * We are using the same trick as spa_open(). > - * Note that calls in zvol_first_open which need to resolve > - * pool name to a spa object will enter spa_open() > - * recursively, but that function already has all the > - * necessary protection. > - */ > - if (!MUTEX_HELD(&spa_namespace_lock)) { > - mutex_enter(&spa_namespace_lock); > - locked =3D B_TRUE; > - } > + mutex_enter(&zfsdev_state_lock); > =20 > zv =3D pp->private; > if (zv =3D=3D NULL) { > - if (locked) > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(ENXIO)); > } > =20 > if (zv->zv_total_opens =3D=3D 0) > err =3D zvol_first_open(zv); > if (err) { > - if (locked) > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (err); > } > if ((flag & FWRITE) && (zv->zv_flags & ZVOL_RDONLY)) { > @@ -950,15 +934,13 @@ zvol_open(struct g_provider *pp, int flag, int coun= t) > #endif > =20 > zv->zv_total_opens +=3D count; > - if (locked) > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > =20 > return (err); > out: > if (zv->zv_total_opens =3D=3D 0) > zvol_last_close(zv); > - if (locked) > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (err); > } > =20 > @@ -968,18 +950,12 @@ zvol_close(struct g_provider *pp, int flag, int cou= nt) > { > zvol_state_t *zv; > int error =3D 0; > - boolean_t locked =3D B_FALSE; > =20 > - /* See comment in zvol_open(). 
*/ > - if (!MUTEX_HELD(&spa_namespace_lock)) { > - mutex_enter(&spa_namespace_lock); > - locked =3D B_TRUE; > - } > + mutex_enter(&zfsdev_state_lock); > =20 > zv =3D pp->private; > if (zv =3D=3D NULL) { > - if (locked) > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(ENXIO)); > } > =20 > @@ -1002,8 +978,7 @@ zvol_close(struct g_provider *pp, int flag, int coun= t) > if (zv->zv_total_opens =3D=3D 0) > zvol_last_close(zv); > =20 > - if (locked) > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (error); > } > =20 > @@ -1658,12 +1633,12 @@ zvol_ioctl(dev_t dev, int cmd, intptr_t arg, int = flag, cred_t *cr, int *rvalp) > int error =3D 0; > rl_t *rl; > =20 > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > =20 > zv =3D zfsdev_get_soft_state(getminor(dev), ZSST_ZVOL); > =20 > if (zv =3D=3D NULL) { > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (SET_ERROR(ENXIO)); > } > ASSERT(zv->zv_total_opens > 0); > @@ -1677,7 +1652,7 @@ zvol_ioctl(dev_t dev, int cmd, intptr_t arg, int fl= ag, cred_t *cr, int *rvalp) > dki.dki_ctype =3D DKC_UNKNOWN; > dki.dki_unit =3D getminor(dev); > dki.dki_maxtransfer =3D 1 << (SPA_MAXBLOCKSHIFT - zv->zv_min_bs); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > if (ddi_copyout(&dki, (void *)arg, sizeof (dki), flag)) > error =3D SET_ERROR(EFAULT); > return (error); > @@ -1687,7 +1662,7 @@ zvol_ioctl(dev_t dev, int cmd, intptr_t arg, int fl= ag, cred_t *cr, int *rvalp) > dkm.dki_lbsize =3D 1U << zv->zv_min_bs; > dkm.dki_capacity =3D zv->zv_volsize >> zv->zv_min_bs; > dkm.dki_media_type =3D DK_UNKNOWN; > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > if (ddi_copyout(&dkm, (void *)arg, sizeof (dkm), flag)) > error =3D SET_ERROR(EFAULT); > return (error); > @@ -1697,14 +1672,14 @@ zvol_ioctl(dev_t dev, int cmd, intptr_t arg, int = flag, cred_t *cr, int *rvalp) > uint64_t vs =3D zv->zv_volsize; > uint8_t bs =3D zv->zv_min_bs; > =20 > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > error =3D zvol_getefi((void *)arg, flag, vs, bs); > return (error); > } > =20 > case DKIOCFLUSHWRITECACHE: > dkc =3D (struct dk_callback *)arg; > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > zil_commit(zv->zv_zilog, ZVOL_OBJ); > if ((flag & FKIOCTL) && dkc !=3D NULL && dkc->dkc_callback) { > (*dkc->dkc_callback)(dkc->dkc_cookie, error); > @@ -1730,10 +1705,10 @@ zvol_ioctl(dev_t dev, int cmd, intptr_t arg, int = flag, cred_t *cr, int *rvalp) > } > if (wce) { > zv->zv_flags |=3D ZVOL_WCE; > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > } else { > zv->zv_flags &=3D ~ZVOL_WCE; > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > zil_commit(zv->zv_zilog, ZVOL_OBJ); > } > return (0); > @@ -1828,7 +1803,7 @@ zvol_ioctl(dev_t dev, int cmd, intptr_t arg, int fl= ag, cred_t *cr, int *rvalp) > break; > =20 > } > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > return (error); > } > #endif /* sun */ > @@ -1844,12 +1819,14 @@ zvol_init(void) > { > VERIFY(ddi_soft_state_init(&zfsdev_state, sizeof (zfs_soft_state_t), > 1) =3D=3D 0); > + mutex_init(&zfsdev_state_lock, NULL, MUTEX_DEFAULT, NULL); > ZFS_LOG(1, "ZVOL Initialized."); > } > =20 > void > zvol_fini(void) > { > + mutex_destroy(&zfsdev_state_lock); > ddi_soft_state_fini(&zfsdev_state); > ZFS_LOG(1, "ZVOL Deinitialized."); > } > @@ -1889,7 +1866,7 @@ 
zvol_dump_init(zvol_state_t *zv, boolean_t resize) > uint64_t version =3D spa_version(spa); > enum zio_checksum checksum; > =20 > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > ASSERT(vd->vdev_ops =3D=3D &vdev_root_ops); > =20 > error =3D dmu_free_long_range(zv->zv_objset, ZVOL_OBJ, 0, > @@ -2437,7 +2414,7 @@ zvol_rename_minor(struct g_geom *gp, const char *ne= wname) > struct g_provider *pp; > zvol_state_t *zv; > =20 > - ASSERT(MUTEX_HELD(&spa_namespace_lock)); > + ASSERT(MUTEX_HELD(&zfsdev_state_lock)); > g_topology_assert(); > =20 > pp =3D LIST_FIRST(&gp->provider); > @@ -2471,7 +2448,7 @@ zvol_rename_minors(const char *oldname, const char = *newname) > newnamelen =3D strlen(newname); > =20 > DROP_GIANT(); > - mutex_enter(&spa_namespace_lock); > + mutex_enter(&zfsdev_state_lock); > g_topology_lock(); > =20 > LIST_FOREACH(gp, &zfs_zvol_class.geom, geom) { > @@ -2494,6 +2471,6 @@ zvol_rename_minors(const char *oldname, const char = *newname) > } > =20 > g_topology_unlock(); > - mutex_exit(&spa_namespace_lock); > + mutex_exit(&zfsdev_state_lock); > PICKUP_GIANT(); > } > --=20 > 1.8.4.2 >=20 > From 57d5a383b585c32c77af54e8cdacaddf8ce7584f Mon Sep 17 00:00:00 2001 > From: Richard Kojedzinszky > Date: Wed, 18 Dec 2013 22:11:21 +0100 > Subject: [PATCH 2/2] ZFS snapshot handling fix >=20 > --- > .../compat/opensolaris/kern/opensolaris_lookup.c | 13 +++--- > .../opensolaris/uts/common/fs/zfs/zfs_ctldir.c | 53 +++++++++++++++-= ------ > 2 files changed, 42 insertions(+), 24 deletions(-) >=20 > diff --git a/sys/cddl/compat/opensolaris/kern/opensolaris_lookup.c b/sys/= cddl/compat/opensolaris/kern/opensolaris_lookup.c > index 94383d6..4cac053 100644 > --- a/sys/cddl/compat/opensolaris/kern/opensolaris_lookup.c > +++ b/sys/cddl/compat/opensolaris/kern/opensolaris_lookup.c > @@ -81,6 +81,8 @@ traverse(vnode_t **cvpp, int lktype) > * progress on this vnode. > */ > =20 > + vn_lock(cvp, lktype); > + > for (;;) { > /* > * Reached the end of the mount chain? > @@ -89,13 +91,7 @@ traverse(vnode_t **cvpp, int lktype) > if (vfsp =3D=3D NULL) > break; > error =3D vfs_busy(vfsp, 0); > - /* > - * tvp is NULL for *cvpp vnode, which we can't unlock. > - */ > - if (tvp !=3D NULL) > - vput(cvp); > - else > - vrele(cvp); > + VOP_UNLOCK(cvp, 0); > if (error) > return (error); > =20 > @@ -107,6 +103,9 @@ traverse(vnode_t **cvpp, int lktype) > vfs_unbusy(vfsp); > if (error !=3D 0) > return (error); > + > + VN_RELE(cvp); > + > cvp =3D tvp; > } > =20 > diff --git a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c = b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c > index 28ab1fa..d3464b4 100644 > --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c > +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c > @@ -112,6 +112,25 @@ snapentry_compare(const void *a, const void *b) > return (0); > } > =20 > +/* Return the zfsctl_snapdir_t object from current vnode, following > + * the lock orders in zfsctl_snapdir_lookup() to avoid deadlocks. 
> + * On return the passed in vp is unlocked */ > +static zfsctl_snapdir_t * > +zfsctl_snapshot_get_snapdir(vnode_t *vp, vnode_t **dvpp) > +{ > + gfs_dir_t *dp =3D vp->v_data; > + *dvpp =3D dp->gfsd_file.gfs_parent; > + zfsctl_snapdir_t *sdp; > + > + VN_HOLD(*dvpp); > + VOP_UNLOCK(vp, 0); > + vn_lock(*dvpp, LK_SHARED | LK_RETRY | LK_CANRECURSE); > + sdp =3D (*dvpp)->v_data; > + VOP_UNLOCK(*dvpp, 0); > + > + return (sdp); > +} > + > #ifdef sun > vnodeops_t *zfsctl_ops_root; > vnodeops_t *zfsctl_ops_snapdir; > @@ -1013,6 +1032,8 @@ zfsctl_snapdir_lookup(ap) > * The snapshot was unmounted behind our backs, > * try to remount it. > */ > + VOP_UNLOCK(*vpp, 0); > + VN_HOLD(*vpp); > VERIFY(zfsctl_snapshot_zname(dvp, nm, MAXNAMELEN, snapname) =3D=3D 0); > goto domount; > } else { > @@ -1064,7 +1085,6 @@ zfsctl_snapdir_lookup(ap) > sep->se_name =3D kmem_alloc(strlen(nm) + 1, KM_SLEEP); > (void) strcpy(sep->se_name, nm); > *vpp =3D sep->se_root =3D zfsctl_snapshot_mknode(dvp, dmu_objset_id(sna= p)); > - VN_HOLD(*vpp); > avl_insert(&sdp->sd_snaps, sep, where); > =20 > dmu_objset_rele(snap, FTAG); > @@ -1075,6 +1095,7 @@ domount: > (void) snprintf(mountpoint, mountpoint_len, > "%s/" ZFS_CTLDIR_NAME "/snapshot/%s", > dvp->v_vfsp->mnt_stat.f_mntonname, nm); > + VN_HOLD(*vpp); > err =3D mount_snapshot(curthread, vpp, "zfs", mountpoint, snapname, 0); > kmem_free(mountpoint, mountpoint_len); > if (err =3D=3D 0) { > @@ -1464,16 +1485,18 @@ zfsctl_snapshot_inactive(ap) > int locked; > vnode_t *dvp; > =20 > - if (vp->v_count > 0) > - goto end; > - > - VERIFY(gfs_dir_lookup(vp, "..", &dvp, cr, 0, NULL, NULL) =3D=3D 0); > - sdp =3D dvp->v_data; > - VOP_UNLOCK(dvp, 0); > + sdp =3D zfsctl_snapshot_get_snapdir(vp, &dvp); > =20 > if (!(locked =3D MUTEX_HELD(&sdp->sd_lock))) > mutex_enter(&sdp->sd_lock); > =20 > + vn_lock(vp, LK_EXCLUSIVE | LK_RETRY); > + > + if (vp->v_count > 0) { > + mutex_exit(&sdp->sd_lock); > + return (0); > + } > + > ASSERT(!vn_ismntpt(vp)); > =20 > sep =3D avl_first(&sdp->sd_snaps); > @@ -1494,7 +1517,6 @@ zfsctl_snapshot_inactive(ap) > mutex_exit(&sdp->sd_lock); > VN_RELE(dvp); > =20 > -end: > /* > * Dispose of the vnode for the snapshot mount point. > * This is safe to do because once this entry has been removed > @@ -1595,20 +1617,17 @@ zfsctl_snapshot_lookup(ap) > static int > zfsctl_snapshot_vptocnp(struct vop_vptocnp_args *ap) > { > - zfsvfs_t *zfsvfs =3D ap->a_vp->v_vfsp->vfs_data; > - vnode_t *dvp, *vp; > + vnode_t *dvp, *vp =3D ap->a_vp; > zfsctl_snapdir_t *sdp; > zfs_snapentry_t *sep; > - int error; > + int error =3D 0; > =20 > - ASSERT(zfsvfs->z_ctldir !=3D NULL); > - error =3D zfsctl_root_lookup(zfsvfs->z_ctldir, "snapshot", &dvp, > - NULL, 0, NULL, kcred, NULL, NULL, NULL); > - if (error !=3D 0) > - return (error); > - sdp =3D dvp->v_data; > + sdp =3D zfsctl_snapshot_get_snapdir(vp, &dvp); > =20 > mutex_enter(&sdp->sd_lock); > + > + vn_lock(vp, LK_SHARED | LK_RETRY); > + > sep =3D avl_first(&sdp->sd_snaps); > while (sep !=3D NULL) { > vp =3D sep->se_root; > --=20 > 1.8.4.2 >=20 --=20 Pawel Jakub Dawidek http://www.wheelsystems.com FreeBSD committer http://www.FreeBSD.org Am I Evil? Yes, I Am! 
http://mobter.com

--LKTjZJSUETSlgu2t
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (FreeBSD)

iEYEARECAAYFAlK0SaUACgkQForvXbEpPzSzLACfddI+1gydBrna/vXLdDwR4+DW
M2EAnROvev3FqMsIlPHznalQ1EyeeVXw
=v+Ou
-----END PGP SIGNATURE-----

--LKTjZJSUETSlgu2t--

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 20 16:18:40 2013
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D1601C78 for ; Fri, 20 Dec 2013 16:18:40 +0000 (UTC)
Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id 882BF11C0 for ; Fri, 20 Dec 2013 16:18:40 +0000 (UTC)
Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id 686397F48; Fri, 20 Dec 2013 17:18:18 +0100 (CET)
Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id 638D37F47; Fri, 20 Dec 2013 17:18:18 +0100 (CET)
Date: Fri, 20 Dec 2013 17:18:18 +0100 (CET)
From: krichy@tvnetwork.hu
To: =?ISO-8859-15?Q?Gerrit_K=FChn?=
Subject: Re: ZFS snapshot renames failing after upgrade to 9.2
In-Reply-To: 
Message-ID: 
References: <0C9FD4E1-0549-4849-BFC5-D8C5D4A34D64@msqr.us> <54D3B3C002184A52BEC9B1543854B87F@multiplay.co.uk> <333D57C6A4544067880D9CFC04F02312@multiplay.co.uk> <26053_1387447492_52B2C4C4_26053_331_1_20131219105503.3a8d1df3.gerrit.kuehn@aei.mpg.de> <20131219165549.9f2ca709.gerrit.kuehn@aei.mpg.de> <20131219174054.91ac617a.gerrit.kuehn@aei.mpg.de> <20131220100522.382a39ac.gerrit.kuehn@aei.mpg.de>
User-Agent: Alpine 2.10 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: 8BIT
X-Content-Filtered-By: Mailman/MimeDel 2.1.17
Cc: freebsd-fs@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 20 Dec 2013 16:18:40 -0000

Dear Gerrit,

The problem is that when snapshots are renamed, an already-mounted snapshot ends up being mounted again under the new name. Perhaps the rename code does not remount the snapshots at their proper locations; that may be the cause. I will see whether I can confirm or fix it. Did this work in 9.1?

Regards,

Kojedzinszky Richard
Euronet Magyarorszag Informatikai Zrt.

On Fri, 20 Dec 2013, krichy@tvnetwork.hu wrote:

> Date: Fri, 20 Dec 2013 10:19:15 +0100 (CET)
> From: krichy@tvnetwork.hu
> To: Gerrit Kühn
> Cc: freebsd-fs@freebsd.org
> Subject: Re: ZFS snapshot renames failing after upgrade to 9.2
>
> Dear Gerrit,
>
> It is not a solution, but I use different snapshot handling mechanisms. I
> wrote a simple script which handles that, and the snapshots are named by
> their creation timestamps. I think it is more useful to see exactly when a
> snapshot was taken, and the script thus only does snapshot creation and
> deletion, no renames.
>
> That script's only limitation is that it is meant to run hourly, creating
> hourly snapshots; when run again, it queries the list of existing snapshots
> and decides which to keep or remove. Thus you have to run it hourly in
> cron, like:
> # crontab -l
> 0 * * * * /usr/local/sbin/zfs-snapshot
>
> Regards,
>
>
> Kojedzinszky Richard
> Euronet Magyarorszag Informatikai Zrt.
> > On Fri, 20 Dec 2013, Gerrit Khn wrote: > >> Date: Fri, 20 Dec 2013 10:05:22 +0100 >> From: Gerrit Khn >> To: krichy@tvnetwork.hu >> Cc: freebsd-fs@freebsd.org >> Subject: Re: ZFS snapshot renames failing after upgrade to 9.2 >> >> On Thu, 19 Dec 2013 20:08:22 +0100 (CET) krichy@tvnetwork.hu wrote about >> Re: ZFS snapshot renames failing after upgrade to 9.2: >> >> KH> So a simple renaming can cause your system to hang? >> >> No, it does not hang completely. >> Just the snapshots become unusable. This night, it happened again: >> >> --- >> root@shapeshifter:~ # ll /tank/git/.zfs/snapshot/ >> ls: daily.6: Device busy >> total 33 >> drwxr-xr-x 12 211 211 25 Dec 19 09:18 daily.0/ >> drwxr-xr-x 12 211 211 25 Dec 19 00:00 daily.1/ >> drwxr-xr-x 12 211 211 24 Dec 18 00:00 daily.2/ >> drwxr-xr-x 12 211 211 24 Dec 17 00:00 daily.3/ >> drwxr-xr-x 12 211 211 24 Dec 16 00:00 daily.4/ >> drwxr-xr-x 12 211 211 24 Dec 14 00:00 daily.5/ >> drwxr-xr-x 12 211 211 24 Dec 15 00:00 weekly.0/ >> drwxr-xr-x 12 211 211 24 Dec 8 00:00 weekly.1/ >> drwxr-xr-x 12 211 211 24 Dec 1 00:00 weekly.2/ >> drwxr-xr-x 12 211 211 24 Nov 17 00:00 weekly.3/ >> drwxr-xr-x 12 211 211 24 Nov 10 00:00 weekly.4/ >> drwxr-xr-x 2 root wheel 3 Oct 20 00:00 weekly.5/ >> drwxr-xr-x 2 root wheel 3 Oct 6 00:00 weekly.6/ >> --- >> >> root@shapeshifter:~ # zfs list -r -t snapshot -o >> name,creation,used,referenced tank/git NAME >> CREATION USED REFER tank/git@weekly.6 Sun Oct 6 0:00 >> 2013 42.6K 62.8K tank/git@weekly.5 Sun Oct 20 0:00 2013 42.6K 62.8K >> tank/git@weekly.4 Sun Nov 10 0:00 2013 29.5M 146G >> tank/git@weekly.3 Sun Nov 17 0:00 2013 27.1M 146G >> tank/git@weekly.2 Sun Dec 1 0:00 2013 26.3M 146G >> tank/git@weekly.1 Sun Dec 8 0:00 2013 27.3M 146G >> tank/git@daily.6 Sat Dec 14 0:00 2013 26.5M 147G >> tank/git@weekly.0 Sun Dec 15 0:00 2013 25.2M 147G >> tank/git@daily.5 Mon Dec 16 0:00 2013 24.7M 147G >> tank/git@daily.4 Tue Dec 17 0:00 2013 24.9M 147G >> tank/git@daily.3 Wed Dec 18 0:00 2013 25.7M 147G >> tank/git@daily.2 Thu Dec 19 0:00 2013 25.8M 147G >> tank/git@daily.1 Thu Dec 19 9:19 2013 25.0M 147G >> tank/git@daily.0 Fri Dec 20 0:00 2013 26.8M 147G >> --- >> >> >> As you can see, the snapshot rotating got stuck somewhere. What is >> displayed under .zfs/snapshot does not reflect what zfs is really seeing: >> daily.6 is inaccessible, and the rotation that happened so far is not >> reflected under .zfs/snapshot, either. >> >> >> cu >> Gerrit >
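To make the failure mode concrete, here is a rough sketch of the rename-while-mounted scenario krichy describes above; tank/test is a placeholder dataset, the daily.N names mirror the rotation used in this thread, and the final listings are only expected to diverge on an affected 9.2 system:

---
#!/bin/sh
# Sketch of rotating snapshots by rename while one of them is mounted.
# Placeholder dataset: tank/test (default mountpoint assumed).
fs="tank/test"

zfs snapshot "${fs}@daily.0"

# Access the snapshot through the control directory so it gets mounted.
ls "/${fs}/.zfs/snapshot/daily.0" >/dev/null

# Rotate by renaming, as sysutils/freebsd-snapshot does.
zfs rename "${fs}@daily.0" "${fs}@daily.1"

# On an affected system the .zfs/snapshot view may now be stale or report
# "Device busy", while zfs list shows the renamed snapshot as expected.
ls -l "/${fs}/.zfs/snapshot/"
zfs list -r -t snapshot -o name,creation "${fs}"
---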