From owner-freebsd-fs@FreeBSD.ORG Sun Oct 5 17:26:08 2008
From: vwe@FreeBSD.org
To: wgodfrey@ena.com, vwe@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org
Date: Sun, 5 Oct 2008 17:26:08 GMT
Subject: Re: kern/125149: [nfs][panic] changing into .zfs dir from nfs client causes panic

Old Synopsis: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop
New Synopsis: [nfs][panic] changing into .zfs dir from nfs client causes panic

State-Changed-From-To: feedback->open
State-Changed-By: vwe
State-Changed-When: Sun Oct 5 17:24:30 UTC 2008
State-Changed-Why: Over to maintainer(s).

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: vwe
Responsible-Changed-When: Sun Oct 5 17:24:30 UTC 2008
Responsible-Changed-Why: Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=125149

From owner-freebsd-fs@FreeBSD.ORG Sun Oct 5 19:39:48 2008
From: "Dimitar Vasilev" <dimitar.vassilev@gmail.com>
To: freebsd-fs@freebsd.org
Date: Sun, 5 Oct 2008 22:10:17 +0300
Subject: zfs as layer distributor

Hi all,

Does anyone use ZFS as a layer distributor on top of hardware RAID
(RAID10, RAID6, etc.)? Could you give feedback on the benefits and
downsides?

Thanks in advance!
From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 00:02:09 2008
From: Andrew Snow <andrew@modulus.org>
To: Dimitar Vasilev
Cc: freebsd-fs@freebsd.org
Date: Mon, 06 Oct 2008 11:01:48 +1100
Subject: Re: zfs as layer distributor

Dimitar Vasilev wrote:
> Hi all,
> Does anyone use ZFS as a layer distributor on top of hardware RAID
> (RAID10, RAID6, etc.)?

I've found ZFS works faster when given more than one disk device. The
reason is that it is smart about writing journal logs and metadata
copies to different devices, resulting in higher performance by using
idle disks. It also provides more "channels" for write clustering, and
therefore higher throughput on write-heavy loads.

Secondly, if you use ZFS to provide RAID1 or RAID5, its checksumming
lets it be smarter about which copy of the data it chooses in the
event of a checksum failure. Hardware RAID can only do this with
RAID6.

Finally, when ZFS issues a "flush cache" command to the disk for
metadata and journal logs, there is less data to flush when you give
it multiple smaller devices. If you have a single monolithic RAID
device with a large (e.g. 256MB) cache, it can ruin performance while
the RAID card flushes its entire cache. (This can be disabled with a
sysctl.)

- Andrew
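
For reference, a minimal sketch of the two things Andrew mentions. The
device names are made up, and the tunable name is assumed from the
FreeBSD 7.x ZFS port, so verify it on your system before relying on it:

  # give ZFS several smaller devices instead of one monolithic volume
  zpool create tank mirror da0 da1 mirror da2 da3

  # /boot/loader.conf: assumed tunable that disables the SYNCHRONIZE
  # CACHE commands ZFS sends to the disks (only safe with a
  # battery-backed cache)
  vfs.zfs.cache_flush_disable="1"
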
From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 05:25:48 2008
From: "Dimitar Vasilev" <dimitar.vassilev@gmail.com>
To: "Andrew Snow"
Cc: freebsd-fs@freebsd.org
Date: Mon, 6 Oct 2008 08:25:47 +0300
Subject: Re: zfs as layer distributor

2008/10/6 Andrew Snow:
> I've found ZFS works faster when given more than one disk device.
> [...]

Thanks Andrew,

I have an Areca 1120 with RAID-6 and ZFS on top of it as a layer
distributor. So far I can say the following:

1) It works nicely and fast.
2) It can be a pain in the rear if your controller spits out one of
   the disks due to a power surge etc.
3) ZFS snapshots caused some crashes and bad descriptors on a
   7.0-STABLE from about 3 months back, but that's somewhat expected.

I'm thinking of raidz2 and setting the disks as pass-through. I'd love
to hear whether someone has tested hardware RAID6 with ZFS over it.

Best regards,
Dimitar

From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 06:40:02 2008
From: Nate Eldredge
To: bug-followup@FreeBSD.org, citrin@citrin.ru
Date: Sun, 5 Oct 2008 23:20:42 -0700 (PDT)
Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption

The following reply was made to PR kern/127213; it has been noted by GNATS.

Hi,

I investigated this a bit. First, note that this bug has some security
implications, because it appears that the garbage written by sendfile
is kernel memory contents, which could contain something sensitive. It
is sufficient for an attacker to have read access to a file on a
mounted tmpfs. So it should really get fixed.

I'm not terribly familiar with vfs or vm internals, but it appears
that sendfile causes VOP_READ to be called with the IO_VMIO flag and a
dummy uio. tmpfs_read (in sys/fs/tmpfs/tmpfs_vnops.c) doesn't handle
this correctly; it always just copies the data to the supplied uio,
which in this case does nothing. It looks like the data is supposed to
make it into vn->v_object, and tmpfs_read doesn't do that. (If I
understand it correctly, on a normal filesystem this is taken care of
by bread().)

I am not sure what the correct semantics of IO_VMIO are supposed to
be, so I don't know what the correct fix would be. However, a quick
fix is to not have a v_object at all; remove the call to
vnode_create_vobject in tmpfs_open. This seems to be legal since
procfs, etc., work that way. It does however mean that sendfile
doesn't work at all.

I am curious what the point of having a v_object was in the first
place, since the data is already in virtual memory. Unless the goal
was just to make sendfile work, which evidently wasn't successful.

Incidentally, to the initial reporter: what application do you have
that requires sendfile? In my experience, most things will fall back
to a read/write loop if sendfile fails, since sendfile isn't available
on all systems or under all circumstances. So if you apply the quick
fix so that sendfile always fails, it might at least get your
application working again.

-- 
Nate Eldredge
neldredge@math.ucsd.edu

From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 07:39:17 2008
From: Andrew Snow <andrew@modulus.org>
To: Dimitar Vasilev
Cc: freebsd-fs@freebsd.org
Date: Mon, 06 Oct 2008 18:38:52 +1100
Subject: Re: zfs as layer distributor

Dimitar Vasilev wrote:
> I'd love to hear whether someone has tested hardware RAID6 with ZFS
> over it.

Yes, I am using 3ware RAID6 over 16 disks as a single volume, because
we also had UFS partitions that we wanted to keep.

The performance is more than adequate, but not anywhere near what you
would get using them as single disks. Personally - based on prior
experience with certain hardware - I'd trust ZFS software RAID over
Areca hardware :-)

How many disks do you have? If you can split up your disk pack into
groups of between 5 and 10 disks, that is the optimal range for ZFS
performance.

- Andrew
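
A sketch of the kind of split Andrew describes, assuming sixteen disks
exposed individually to the OS (device names are illustrative only):

  # two 8-disk raidz2 vdevs instead of one 16-disk hardware volume
  zpool create tank \
      raidz2 da0 da1 da2  da3  da4  da5  da6  da7 \
      raidz2 da8 da9 da10 da11 da12 da13 da14 da15
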
From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 07:40:04 2008
From: Maxim Konovalov
To: Nate Eldredge
Cc: bug-followup@freebsd.org
Date: Mon, 6 Oct 2008 11:36:38 +0400 (MSD)
Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption

The following reply was made to PR kern/127213; it has been noted by GNATS.

Hello,

On Mon, 6 Oct 2008, 06:40-0000, Nate Eldredge wrote:

[...]
> Incidentally, to the initial reporter, what application do you have
> that requires sendfile? In my experience, most things will fall
> back to a read/write loop if sendfile fails, since sendfile isn't
> available on all systems or under all circumstances. So if you
> apply the quick fix so that sendfile always fails, it might at
> least get your application working again.

As stated in the PR, Andrey used nginx (ports/www/nginx). But I could
easily reproduce the bug with the stock ftpd(8) with the ftp root on
tmpfs.

-- 
Maxim Konovalov
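
A minimal sketch of the ftpd(8) reproduction Maxim describes, assuming
ftpd is running with anonymous access and its root on the tmpfs mount
(paths and file names here are made up for illustration):

  mount -t tmpfs tmpfs /var/ftp/pub
  dd if=/dev/random of=/var/ftp/pub/data bs=64k count=16
  # ftpd uses sendfile(2) to serve file data, so with the bug present
  # the fetched copy should differ from the original
  fetch -o /tmp/data.out ftp://127.0.0.1/pub/data
  cmp /var/ftp/pub/data /tmp/data.out
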
From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 08:30:43 2008
From: "Dimitar Vasilev" <dimitar.vassilev@gmail.com>
To: "Andrew Snow"
Cc: freebsd-fs@freebsd.org
Date: Mon, 6 Oct 2008 11:30:42 +0300
Subject: Re: zfs as layer distributor

2008/10/6 Andrew Snow:
> Yes, I am using 3ware RAID6 over 16 disks as a single volume, because
> we also had UFS partitions that we wanted to keep. [...]
> How many disks do you have? If you can split up your disk pack into
> groups of between 5 and 10 disks, that is the optimal range for ZFS
> performance.

I've got 8 disks out of 12 possible. As the local Areca reps told me,
RAID6 is good over 12 disks. So next time I think I will go with
raidz2 and pass-through disks.

Best regards,
Dimitar
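
For what such a setup might look like, a sketch only, assuming the
controller exports each disk as a pass-through (JBOD) device; the
device names are illustrative:

  # 8 pass-through disks in a single double-parity raidz2 vdev
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
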
From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 11:06:54 2008
From: FreeBSD bugmaster <owner-bugmaster@FreeBSD.org>
To: freebsd-fs@FreeBSD.org
Date: Mon, 6 Oct 2008 11:06:54 GMT
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD
users. These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/127420  fs    [gjournal] [panic] Journal overflow on gmirrored gjour
o kern/127213  fs    [tmpfs] sendfile on tmpfs data corruption
o kern/127029  fs    [panic] mount(8): trying to mount a write protected zi
o kern/126287  fs    [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125536  fs    [ext2fs] ext 2 mounts cleanly but fails on commands li
o kern/125149  fs    [nfs][panic] changing into .zfs dir from nfs client ca
o kern/124621  fs    [ext3] Cannot mount ext2fs partition
o kern/122888  fs    [zfs] zfs hang w/ prefetch on, zil off while running t
o bin/122172   fs    [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121072   fs    [smbfs] mount_smbfs(8) cannot normally convert the cha
o bin/118249   fs    mv(1): moving a directory changes its mtime
o kern/116170  fs    [panic] Kernel panic when mounting /tmp
o kern/114955  fs    [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847  fs    [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676  fs    [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468   fs    [patch] [request] add -d option to umount(8) to detach
o bin/113838   fs    [patch] [request] mount(8): add support for relative p
o bin/113049   fs    [patch] [request] make quot(8) use getopt(3) and show
o kern/112658  fs    [smbfs] [patch] smbfs and caching problems (resolves b
o kern/93942   fs    [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D

20 problems total.
From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 21:40:04 2008
From: Anton Yuzhaninov
To: bug-followup@FreeBSD.org
Date: Tue, 07 Oct 2008 01:38:50 +0400
Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption

The following reply was made to PR kern/127213; it has been noted by GNATS.

> Incidentally, to the initial reporter, what application do you have
> that requires sendfile?

We want to use tmpfs with our homegrown application, which can work
only using sendfile(). Currently we use md+ufs, but with md the data
is present in memory twice - in md and in the VM cache.

-- 
Anton Yuzhaninov

From owner-freebsd-fs@FreeBSD.ORG Mon Oct 6 22:30:04 2008
Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption

The following reply was made to PR kern/127213; it has been noted by GNATS.
From: Nate Eldredge
To: Maxim Konovalov
Cc: bug-followup@freebsd.org, JH
Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption
Date: Mon, 6 Oct 2008 15:22:57 -0700 (PDT)

On Mon, 6 Oct 2008, Maxim Konovalov wrote:
> As stated in the PR Andrey used nginx (ports/www/nginx). But I could
> easily reproduce the bug with the stock ftpd(8) with the ftp root on
> tmpfs.

To simplify matters further, here is the testcase I used when testing
this, which uses sendfile to send some data over a unix domain socket.
Do:

  ./server /tmpfs/data mysocket &
  ./client mysocket >data.out
  cmp /tmpfs/data data.out

If things work right, data and data.out should be identical. But if
data is a file on a tmpfs, data.out contains apparently random kernel
memory contents.

# This is a shell archive.  Save it in a file, remove anything before
# this line, and then unpack it by entering "sh file".  Note, it may
# create directories; files and directories will be owned by you and
# have default permissions.
#
# This archive contains:
#
#	Makefile
#	client.c
#	server.c
#	util.c
#	util.h
#
echo x - Makefile
sed 's/^X//' >Makefile << 'END-of-Makefile'
XCC = gcc
XCFLAGS = -Wall -W -g
X
Xall : server client
X
Xserver : server.o util.o
X	$(CC) -o $@ $>
X
Xclient : client.o util.o
X	$(CC) -o $@ $>
X
Xserver.o client.o util.o : util.h
X
Xclean :
X	rm -f server client *.o
END-of-Makefile
echo x - client.c
sed 's/^X//' >client.c << 'END-of-client.c'
X#include <stdio.h>
X#include <stdlib.h>
X#include <unistd.h>
X#include <fcntl.h>
X#include "util.h"
X
Xint main(int argc, char *argv[]) {
X  int s;
X  if (argc < 2) {
X    fprintf(stderr, "Usage: %s socketname\n", argv[0]);
X    exit(1);
X  }
X  if ((s = connect_unix_socket(argv[1])) < 0) {
X    exit(1);
X  }
X  fake_sendfile(s, 1);
X  return 0;
X}
END-of-client.c
echo x - server.c
sed 's/^X//' >server.c << 'END-of-server.c'
X#include <stdio.h>
X#include <stdlib.h>
X#include <unistd.h>
X#include <fcntl.h>
X#include "util.h"
X
Xint main(int argc, char *argv[]) {
X  int f, listener, connection;
X  if (argc < 3) {
X    fprintf(stderr, "Usage: %s filename socketname\n", argv[0]);
X    exit(1);
X  }
X  if ((f = open(argv[1], O_RDONLY)) < 0) {
X    perror(argv[1]);
X    exit(1);
X  }
X  if ((listener = listen_unix_socket(argv[2])) < 0) {
X    exit(1);
X  }
X  if ((connection = accept_unix_socket(listener)) >= 0) {
X    real_sendfile(f, connection);
X  }
X  return 0;
X}
END-of-server.c
echo x - util.c
sed 's/^X//' >util.c << 'END-of-util.c'
X/* send data from file to unix domain socket */
X
X#include <sys/types.h>
X#include <sys/socket.h>
X#include <sys/uio.h>
X#include <sys/un.h>
X#include <sys/stat.h>
X#include <stdio.h>
X#include <stdlib.h>
X#include <string.h>
X#include <errno.h>
X#include <unistd.h>
X#include <fcntl.h>
X
Xint create_unix_socket(void) {
X  int fd;
X  if ((fd = socket(PF_LOCAL, SOCK_STREAM, 0)) < 0) {
X    perror("socket");
X    return -1;
X  }
X  return fd;
X}
X
Xint make_unix_sockaddr(const char *pathname, struct sockaddr_un *sa) {
X  memset(sa, 0, sizeof(*sa));
X  sa->sun_family = PF_LOCAL;
X  if (strlen(pathname) + 1 > sizeof(sa->sun_path)) {
X    fprintf(stderr, "%s: pathname too long (max %lu)\n",
X            pathname, sizeof(sa->sun_path));
X    errno = ENAMETOOLONG;
X    return -1;
X  }
X  strcpy(sa->sun_path, pathname);
X  return 0;
X}
X
Xstatic char *sockname;
Xvoid delete_socket(void)
X{
X  unlink(sockname);
X}
X
Xint listen_unix_socket(const char *path) {
X  int fd;
X  struct sockaddr_un sa;
X  if (make_unix_sockaddr(path, &sa) < 0)
X    return -1;
X  if ((fd = create_unix_socket()) < 0)
X    return -1;
X  if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
X    perror("bind");
X    close(fd);
X    return -1;
X  }
X  sockname = strdup(path);
X  atexit(delete_socket);
X
X  if (listen(fd, 5) < 0) {
X    perror("listen");
X    close(fd);
X    return -1;
X  }
X  return fd;
X}
X
Xint accept_unix_socket(int fd) {
X  int s;
X  if ((s = accept(fd, NULL, 0)) < 0) {
X    perror("accept");
X    return -1;
X  }
X  return s;
X}
X
Xint connect_unix_socket(const char *path) {
X  int fd;
X  struct sockaddr_un sa;
X  if (make_unix_sockaddr(path, &sa) < 0)
X    return -1;
X  if ((fd = create_unix_socket()) < 0)
X    return -1;
X  if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
X    perror("connect");
X    return -1;
X  }
X  return fd;
X}
X
X#define BUFSIZE 65536
X
Xint fake_sendfile(int from, int to) {
X  char buf[BUFSIZE];
X  int v;
X  int sent = 0;
X  while ((v = read(from, buf, BUFSIZE)) > 0) {
X    int d = 0;
X    while (d < v) {
X      int w = write(to, buf + d, v - d);
X      if (w <= 0) {
X        perror("write");
X        return -1;
X      }
X      d += w;
X      sent += w;
X    }
X  }
X  if (v != 0) {
X    perror("read");
X    return -1;
X  }
X  return sent;
X}
X
Xint real_sendfile(int from, int to) {
X  int v;
X  v = sendfile(from, to, 0, 0, NULL, NULL, 0);
X  if (v < 0) {
X    perror("sendfile");
X  }
X  return v;
X}
END-of-util.c
echo x - util.h
sed 's/^X//' >util.h << 'END-of-util.h'
X/* send data from file to unix domain socket */
X
X#include <sys/types.h>
X#include <sys/socket.h>
X#include <sys/uio.h>
X#include <sys/un.h>
X#include <stdio.h>
X#include <stdlib.h>
X#include <unistd.h>
X
Xint create_unix_socket(void);
Xint make_unix_sockaddr(const char *pathname, struct sockaddr_un *sa);
Xint listen_unix_socket(const char *path);
Xint accept_unix_socket(int fd);
Xint connect_unix_socket(const char *path);
Xint fake_sendfile(int from, int to);
Xint real_sendfile(int from, int to);
END-of-util.h
exit

-- 
Nate Eldredge
neldredge@math.ucsd.edu

From owner-freebsd-fs@FreeBSD.ORG Tue Oct 7 03:50:05 2008
Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption

The following reply was made to PR kern/127213; it has been noted by GNATS.
From: Maxim Konovalov
To: Nate Eldredge
Cc: bug-followup@freebsd.org, JH
Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption
Date: Tue, 7 Oct 2008 07:43:17 +0400 (MSD)

On Mon, 6 Oct 2008, 15:22-0700, Nate Eldredge wrote:
> To simplify matters further, here is the testcase I used when
> testing this, which uses sendfile to send some data over a unix
> domain socket. [...]
>
> If things work right, data and data.out should be identical. But if
> data is a file on a tmpfs, data.out contains apparently random
> kernel memory contents.

Hi Nate,

It'd be really nice if you could extend the
src/tools/regression/sockets/sendfile regression test for this bug.
Right now it doesn't detect this case.

-- 
Maxim Konovalov

From owner-freebsd-fs@FreeBSD.ORG Tue Oct 7 15:40:05 2008
Subject: Re: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop

The following reply was made to PR kern/125149; it has been noted by GNATS.
From: Jaakko Heinonen
To: Volker Werth
Cc: Weldon Godfrey, bug-followup@FreeBSD.org
Subject: Re: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop
Date: Tue, 7 Oct 2008 18:36:30 +0300

Hi,

On 2008-10-02, Volker Werth wrote:
> > #8  0xffffffff804f06fa in vput (vp=0x0) at atomic.h:142
> > #9  0xffffffff8060670d in nfsrv_readdirplus (nfsd=0xffffff000584f100,
> >     slp=0xffffff0005725900,
> >     td=0xffffff00059a0340, mrq=0xffffffffdf761af0) at
> >     /usr/src/sys/nfsserver/nfs_serv.c:3613
> > #10 0xffffffff80615a5d in nfssvc (td=Variable "td" is not available.
> > ) at /usr/src/sys/nfsserver/nfs_syscalls.c:461
> > #11 0xffffffff8072f377 in syscall (frame=0xffffffffdf761c70) at
> >     /usr/src/sys/amd64/amd64/trap.c:852
> > #12 0xffffffff807158bb in Xfast_syscall () at
> >     /usr/src/sys/amd64/amd64/exception.S:290
> > #13 0x000000080068746c in ?? ()
> > Previous frame inner to this frame (corrupt stack?)
>
> I think the problem is the NULL pointer to vput. A maintainer needs
> to check how nvp can get a NULL pointer (judging by assuming my fresh
> codebase is not too different from yours).

The bug is reproducible with nfs clients using readdirplus. The
FreeBSD client doesn't use readdirplus by default, but you can enable
it with the -l mount option. Here are the steps to reproduce the panic
with the FreeBSD nfs client:

- nfs export a zfs file system
- on the client, mount the file system with the -l mount option and
  list the zfs control directory:

  # mount_nfs -l x.x.x.x:/tank /mnt
  # ls /mnt/.zfs

I see two bugs here:

1) nfsrv_readdirplus() doesn't check the VFS_VGET() error status
   properly. It only checks for EOPNOTSUPP; other errors are ignored.
   This is the final reason for the panic, and in theory it could
   happen for other file systems too. In this case VFS_VGET() returns
   EINVAL, which results in a NULL nvp.

2) zfs VFS_VGET() returns EINVAL for .zfs control directory entries.
   Looking at zfs_vget(), it tries to find a corresponding znode to
   fulfill the request. However, control directory entries don't have
   backing znodes.

Here is a patch which fixes 1). The patch prevents the system from
panicing, but a fix for 2) is needed to make readdirplus work with the
.zfs directory.

%%%
Index: sys/nfsserver/nfs_serv.c
===================================================================
--- sys/nfsserver/nfs_serv.c	(revision 183511)
+++ sys/nfsserver/nfs_serv.c	(working copy)
@@ -3597,9 +3597,12 @@ again:
 	 * Probe one of the directory entries to see if the filesystem
 	 * supports VGET.
 	 */
-	if (VFS_VGET(vp->v_mount, dp->d_fileno, LK_EXCLUSIVE, &nvp) ==
-	    EOPNOTSUPP) {
-		error = NFSERR_NOTSUPP;
+	error = VFS_VGET(vp->v_mount, dp->d_fileno, LK_EXCLUSIVE, &nvp);
+	if (error) {
+		if (error == EOPNOTSUPP)
+			error = NFSERR_NOTSUPP;
+		else
+			error = NFSERR_SERVERFAULT;
 		vrele(vp);
 		vp = NULL;
 		free((caddr_t)cookies, M_TEMP);
%%%

And here's an attempt to add support for .zfs control directory
entries (bug 2)) in zfs_vget(). The patch is very experimental and it
only works for snapshots which are already active (mounted).

%%%
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(revision 183587)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(working copy)
@@ -759,9 +759,10 @@ zfs_vget(vfs_t *vfsp, ino_t ino, int fla
 		VN_RELE(ZTOV(zp));
 		err = EINVAL;
 	}
-	if (err != 0)
-		*vpp = NULL;
-	else {
+	if (err != 0) {
+		/* try .zfs control directory */
+		err = zfsctl_vget(vfsp, ino, flags, vpp);
+	} else {
 		*vpp = ZTOV(zp);
 		vn_lock(*vpp, flags);
 	}
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(revision 183587)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(working copy)
@@ -1047,6 +1047,63 @@ zfsctl_lookup_objset(vfs_t *vfsp, uint64
 	return (error);
 }
 
+int
+zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp)
+{
+	zfsvfs_t *zfsvfs = vfsp->vfs_data;
+	vnode_t *dvp, *vp;
+	zfsctl_snapdir_t *sdp;
+	zfsctl_node_t *zcp;
+	zfs_snapentry_t *sep;
+	int error;
+
+	*vpp = NULL;
+
+	ASSERT(zfsvfs->z_ctldir != NULL);
+	error = zfsctl_root_lookup(zfsvfs->z_ctldir, "snapshot", &dvp,
+	    NULL, 0, NULL, kcred);
+	if (error != 0)
+		return (error);
+
+	if (nodeid == ZFSCTL_INO_ROOT || nodeid == ZFSCTL_INO_SNAPDIR) {
+		if (nodeid == ZFSCTL_INO_SNAPDIR)
+			*vpp = dvp;
+		else {
+			VN_RELE(dvp);
+			*vpp = zfsvfs->z_ctldir;
+			VN_HOLD(*vpp);
+		}
+		/* XXX: LK_RETRY? */
+		vn_lock(*vpp, flags | LK_RETRY);
+		return (0);
+	}
+
+	sdp = dvp->v_data;
+
+	mutex_enter(&sdp->sd_lock);
+	sep = avl_first(&sdp->sd_snaps);
+	while (sep != NULL) {
+		vp = sep->se_root;
+		zcp = vp->v_data;
+		if (zcp->zc_id == nodeid)
+			break;
+
+		sep = AVL_NEXT(&sdp->sd_snaps, sep);
+	}
+
+	if (sep != NULL) {
+		VN_HOLD(vp);
+		*vpp = vp;
+		vn_lock(*vpp, flags);
+	} else
+		error = EINVAL;
+
+	mutex_exit(&sdp->sd_lock);
+
+	VN_RELE(dvp);
+
+	return (error);
+}
 /*
  * Unmount any snapshots for the given filesystem. This is called from
  * zfs_umount() - if we have a ctldir, then go through and unmount all the
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(revision 183587)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(working copy)
@@ -60,6 +60,7 @@ int zfsctl_root_lookup(vnode_t *dvp, cha
     int flags, vnode_t *rdir, cred_t *cr);
 
 int zfsctl_lookup_objset(vfs_t *vfsp, uint64_t objsetid, zfsvfs_t **zfsvfsp);
+int zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp);
 
 #define ZFSCTL_INO_ROOT		0x1
 #define ZFSCTL_INO_SNAPDIR	0x2
%%%

-- 
Jaakko

From owner-freebsd-fs@FreeBSD.ORG Wed Oct 8 21:20:04 2008
From: "Weldon Godfrey"
To: "Jaakko Heinonen", "Volker Werth"
Subject: RE: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop
Date: Wed, 8 Oct 2008 16:06:50 -0500

The following reply was made to PR kern/125149; it has been noted by GNATS.

Thanks! I will apply these patches tomorrow.

Weldon

-----Original Message-----
From: Jaakko Heinonen [mailto:jh@saunalahti.fi]
Sent: Tuesday, October 07, 2008 10:37 AM
To: Volker Werth
Cc: Weldon Godfrey; bug-followup@freebsd.org
Subject: Re: kern/125149: [zfs][nfs] changing into .zfs dir from nfs
client causes endless panic loop

[Jaakko's full message and patches quoted verbatim; trimmed here, see
his reply above.]
From owner-freebsd-fs@FreeBSD.ORG Thu Oct 9 13:20:04 2008
From: "Weldon Godfrey"
To: "Jaakko Heinonen", "Volker Werth"
Subject: RE: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop
Date: Thu, 9 Oct 2008 08:19:38 -0500

The following reply was made to PR kern/125149; it has been noted by GNATS.

I am rebuilding right now.

FYI --- I modified the patch (corrected the number of lines in the
hunk header):

-@@ -1047,6 +1047,63 @@ zfsctl_lookup_objset(vfs_t *vfsp, uint64
+@@ -1047,6 +1047,62 @@ zfsctl_lookup_objset(vfs_t *vfsp, uint64

Weldon

From owner-freebsd-fs@FreeBSD.ORG Thu Oct 9 16:30:04 2008
From: "Weldon Godfrey"
To: "Jaakko Heinonen", "Volker Werth"
Subject: RE: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop
Date: Thu, 9 Oct 2008 11:23:12 -0500

The following reply was made to PR kern/125149; it has been noted by GNATS.

Is this patch based on 8-CURRENT or 7-RELEASE? If 8-CURRENT, I don't
know if I can test it, as I would like to stick with 7-RELEASE for
now. However, I would like to move to ZFS v11, so if there is a patch
for 7 for ZFS v11 (assuming your patch is based on the v11 code), I
would like to apply that.

/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c:1073:33: error: macro "vn_lock" requires 3 arguments, but only 2 given
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c: In function 'zfsctl_vget':
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c:1073: error: 'vn_lock' undeclared (first use in this function)
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c:1073: error: (Each undeclared identifier is reported only once
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c:1073: error: for each function it appears in.)
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c:1093:22: error: macro "vn_lock" requires 3 arguments, but only 2 given

Weldon

-----Original Message-----
From: Jaakko Heinonen [mailto:jh@saunalahti.fi]
Sent: Tuesday, October 07, 2008 10:37 AM
To: Volker Werth
Cc: Weldon Godfrey; bug-followup@freebsd.org
Subject: Re: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop

Hi,

On 2008-10-02, Volker Werth wrote:
> > #8  0xffffffff804f06fa in vput (vp=0x0) at atomic.h:142
> > #9  0xffffffff8060670d in nfsrv_readdirplus (nfsd=0xffffff000584f100,
> > slp=0xffffff0005725900,
> > td=0xffffff00059a0340, mrq=0xffffffffdf761af0) at
> > /usr/src/sys/nfsserver/nfs_serv.c:3613
> > #10 0xffffffff80615a5d in nfssvc (td=Variable "td" is not available.
> > ) at /usr/src/sys/nfsserver/nfs_syscalls.c:461
> > #11 0xffffffff8072f377 in syscall (frame=0xffffffffdf761c70) at
> > /usr/src/sys/amd64/amd64/trap.c:852
> > #12 0xffffffff807158bb in Xfast_syscall () at
> > /usr/src/sys/amd64/amd64/exception.S:290
> > #13 0x000000080068746c in ?? ()
> > Previous frame inner to this frame (corrupt stack?)
>
> I think the problem is the NULL pointer to vput. A maintainer needs to
> check how nvp can get a NULL pointer (judging by assuming my fresh
> codebase is not too different from yours).

The bug is reproducible with nfs clients that use readdirplus. The FreeBSD client doesn't use readdirplus by default, but you can enable it with the -l mount option. Here are the steps to reproduce the panic with a FreeBSD nfs client:

- nfs export a zfs file system
- on the client, mount the file system with the -l mount option and list the zfs control directory

# mount_nfs -l x.x.x.x:/tank /mnt
# ls /mnt/.zfs

I see two bugs here:

1) nfsrv_readdirplus() doesn't check the VFS_VGET() error status properly. It only checks for EOPNOTSUPP; all other errors are ignored. This is the immediate cause of the panic, and in theory it could happen for other file systems too. In this case VFS_VGET() returns EINVAL, which results in a NULL nvp.

2) zfs VFS_VGET() returns EINVAL for .zfs control directory entries. Looking at zfs_vget(), it tries to find the corresponding znode to fulfill the request. However, control directory entries don't have backing znodes.

Here is a patch which fixes 1). The patch prevents the system from panicking, but a fix for 2) is needed to make readdirplus work with the .zfs directory.

%%%
Index: sys/nfsserver/nfs_serv.c
===================================================================
--- sys/nfsserver/nfs_serv.c	(revision 183511)
+++ sys/nfsserver/nfs_serv.c	(working copy)
@@ -3597,9 +3597,12 @@ again:
 	 * Probe one of the directory entries to see if the filesystem
 	 * supports VGET.
 	 */
-	if (VFS_VGET(vp->v_mount, dp->d_fileno, LK_EXCLUSIVE, &nvp) ==
-	    EOPNOTSUPP) {
-		error = NFSERR_NOTSUPP;
+	error = VFS_VGET(vp->v_mount, dp->d_fileno, LK_EXCLUSIVE, &nvp);
+	if (error) {
+		if (error == EOPNOTSUPP)
+			error = NFSERR_NOTSUPP;
+		else
+			error = NFSERR_SERVERFAULT;
 		vrele(vp);
 		vp = NULL;
 		free((caddr_t)cookies, M_TEMP);
%%%

And here's an attempt to add support for .zfs control directory entries (bug 2) in zfs_vget(). The patch is very experimental and only works for snapshots which are already active (mounted).
%%%
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(revision 183587)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(working copy)
@@ -759,9 +759,10 @@ zfs_vget(vfs_t *vfsp, ino_t ino, int fla
 		VN_RELE(ZTOV(zp));
 		err = EINVAL;
 	}
-	if (err != 0)
-		*vpp = NULL;
-	else {
+	if (err != 0) {
+		/* try .zfs control directory */
+		err = zfsctl_vget(vfsp, ino, flags, vpp);
+	} else {
 		*vpp = ZTOV(zp);
 		vn_lock(*vpp, flags);
 	}
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(revision 183587)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(working copy)
@@ -1047,6 +1047,63 @@ zfsctl_lookup_objset(vfs_t *vfsp, uint64
 	return (error);
 }
 
+int
+zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp)
+{
+	zfsvfs_t *zfsvfs = vfsp->vfs_data;
+	vnode_t *dvp, *vp;
+	zfsctl_snapdir_t *sdp;
+	zfsctl_node_t *zcp;
+	zfs_snapentry_t *sep;
+	int error;
+
+	*vpp = NULL;
+
+	ASSERT(zfsvfs->z_ctldir != NULL);
+	error = zfsctl_root_lookup(zfsvfs->z_ctldir, "snapshot", &dvp,
+	    NULL, 0, NULL, kcred);
+	if (error != 0)
+		return (error);
+
+	if (nodeid == ZFSCTL_INO_ROOT || nodeid == ZFSCTL_INO_SNAPDIR) {
+		if (nodeid == ZFSCTL_INO_SNAPDIR)
+			*vpp = dvp;
+		else {
+			VN_RELE(dvp);
+			*vpp = zfsvfs->z_ctldir;
+			VN_HOLD(*vpp);
+		}
+		/* XXX: LK_RETRY? */
+		vn_lock(*vpp, flags | LK_RETRY);
+		return (0);
+	}
+
+	sdp = dvp->v_data;
+
+	mutex_enter(&sdp->sd_lock);
+	sep = avl_first(&sdp->sd_snaps);
+	while (sep != NULL) {
+		vp = sep->se_root;
+		zcp = vp->v_data;
+		if (zcp->zc_id == nodeid)
+			break;
+
+		sep = AVL_NEXT(&sdp->sd_snaps, sep);
+	}
+
+	if (sep != NULL) {
+		VN_HOLD(vp);
+		*vpp = vp;
+		vn_lock(*vpp, flags);
+	} else
+		error = EINVAL;
+
+	mutex_exit(&sdp->sd_lock);
+
+	VN_RELE(dvp);
+
+	return (error);
+}
 /*
  * Unmount any snapshots for the given filesystem.  This is called from
  * zfs_umount() - if we have a ctldir, then go through and unmount all the

Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(revision 183587)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(working copy)
@@ -60,6 +60,7 @@ int zfsctl_root_lookup(vnode_t *dvp, cha
     int flags, vnode_t *rdir, cred_t *cr);
 
 int zfsctl_lookup_objset(vfs_t *vfsp, uint64_t objsetid, zfsvfs_t **zfsvfsp);
+int zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp);
 
 #define ZFSCTL_INO_ROOT		0x1
 #define ZFSCTL_INO_SNAPDIR	0x2
%%%

-- 
Jaakko

From owner-freebsd-fs@FreeBSD.ORG Thu Oct 9 19:50:04 2008 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 830CD1065687 for ; Thu, 9 Oct 2008 19:50:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 675FD8FC1A for ; Thu, 9 Oct 2008 19:50:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m99Jo4YB041884 for ; Thu, 9 Oct 2008 19:50:04 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m99Jo48P041883; Thu, 9 Oct 2008 19:50:04 GMT (envelope-from gnats) Date: Thu, 9 Oct 2008 19:50:04 GMT Message-Id: <200810091950.m99Jo48P041883@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Jaakko Heinonen Cc: Subject: Re: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Jaakko Heinonen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Oct 2008 19:50:04 -0000

The following reply was made to PR kern/125149; it has been noted by GNATS.

From: Jaakko Heinonen To: Weldon Godfrey Cc: bug-followup@freebsd.org Subject: Re: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop Date: Thu, 9 Oct 2008 22:44:38 +0300

On 2008-10-09, Weldon Godfrey wrote:
> Is this patch based on 8-CURRENT or 7-RELEASE? If 8-CURRENT, I don't
> know if I can test as I would like to stick with 7-RELEASE for now.

Patches are against head. Sorry that I didn't mention that. The nfs patch applies against RELENG_7 with offset and here's the zfs patch against RELENG_7.
(Disclaimer: compile tested only)

%%%
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(revision 183727)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(working copy)
@@ -759,9 +759,10 @@ zfs_vget(vfs_t *vfsp, ino_t ino, int fla
 		VN_RELE(ZTOV(zp));
 		err = EINVAL;
 	}
-	if (err != 0)
-		*vpp = NULL;
-	else {
+	if (err != 0) {
+		/* try .zfs control directory */
+		err = zfsctl_vget(vfsp, ino, flags, vpp);
+	} else {
 		*vpp = ZTOV(zp);
 		vn_lock(*vpp, flags, curthread);
 	}
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(revision 183727)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(working copy)
@@ -1044,6 +1044,63 @@ zfsctl_lookup_objset(vfs_t *vfsp, uint64
 	return (error);
 }
 
+int
+zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp)
+{
+	zfsvfs_t *zfsvfs = vfsp->vfs_data;
+	vnode_t *dvp, *vp;
+	zfsctl_snapdir_t *sdp;
+	zfsctl_node_t *zcp;
+	zfs_snapentry_t *sep;
+	int error;
+
+	*vpp = NULL;
+
+	ASSERT(zfsvfs->z_ctldir != NULL);
+	error = zfsctl_root_lookup(zfsvfs->z_ctldir, "snapshot", &dvp,
+	    NULL, 0, NULL, kcred);
+	if (error != 0)
+		return (error);
+
+	if (nodeid == ZFSCTL_INO_ROOT || nodeid == ZFSCTL_INO_SNAPDIR) {
+		if (nodeid == ZFSCTL_INO_SNAPDIR)
+			*vpp = dvp;
+		else {
+			VN_RELE(dvp);
+			*vpp = zfsvfs->z_ctldir;
+			VN_HOLD(*vpp);
+		}
+		/* XXX: LK_RETRY? */
+		vn_lock(*vpp, flags | LK_RETRY, curthread);
+		return (0);
+	}
+
+	sdp = dvp->v_data;
+
+	mutex_enter(&sdp->sd_lock);
+	sep = avl_first(&sdp->sd_snaps);
+	while (sep != NULL) {
+		vp = sep->se_root;
+		zcp = vp->v_data;
+		if (zcp->zc_id == nodeid)
+			break;
+
+		sep = AVL_NEXT(&sdp->sd_snaps, sep);
+	}
+
+	if (sep != NULL) {
+		VN_HOLD(vp);
+		*vpp = vp;
+		vn_lock(*vpp, flags, curthread);
+	} else
+		error = EINVAL;
+
+	mutex_exit(&sdp->sd_lock);
+
+	VN_RELE(dvp);
+
+	return (error);
+}
 /*
  * Unmount any snapshots for the given filesystem.  This is called from
  * zfs_umount() - if we have a ctldir, then go through and unmount all the

Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(revision 183727)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(working copy)
@@ -60,6 +60,7 @@ int zfsctl_root_lookup(vnode_t *dvp, cha
     int flags, vnode_t *rdir, cred_t *cr);
 
 int zfsctl_lookup_objset(vfs_t *vfsp, uint64_t objsetid, zfsvfs_t **zfsvfsp);
+int zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp);
 
 #define ZFSCTL_INO_ROOT		0x1
 #define ZFSCTL_INO_SNAPDIR	0x2
%%%

-- 
Jaakko

From owner-freebsd-fs@FreeBSD.ORG Fri Oct 10 13:20:05 2008 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A477F1065688 for ; Fri, 10 Oct 2008 13:20:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 8EDAB8FC08 for ; Fri, 10 Oct 2008 13:20:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m9ADK5c3063011 for ; Fri, 10 Oct 2008 13:20:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m9ADK5g4063010; Fri, 10 Oct 2008 13:20:05 GMT (envelope-from gnats) Date: Fri, 10 Oct 2008 13:20:05 GMT Message-Id: <200810101320.m9ADK5g4063010@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: "Weldon Godfrey" Cc: Subject: RE: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Weldon Godfrey List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Oct 2008 13:20:05 -0000

The following reply was made to PR kern/125149; it has been noted by GNATS.

From: "Weldon Godfrey" To: "Jaakko Heinonen" Cc: Subject: RE: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop Date: Fri, 10 Oct 2008 08:11:17 -0500

That's okay, although I won't be able to help test since I am close to putting the system into production. We can live without being able to go into the .zfs directory from a client. Also, I have set the nordirplus option on the clients now.

Which, btw, could this also be the other issue I was seeing? When we tested rigorously from CentOS 3.x clients, after 2-3 hours of testing the system would panic. On the fbsd-fs list, it was noted from the backtrace that the vnode was becoming invalid. This happened far less often with CentOS 5.x clients (although I did get one panic recently). I am rerunning the tests over this weekend.

Thank you for helping!

Weldon

-----Original Message-----
From: Jaakko Heinonen [mailto:jh@saunalahti.fi]
Sent: Thursday, October 09, 2008 2:45 PM
To: Weldon Godfrey
Cc: bug-followup@freebsd.org
Subject: Re: kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop

On 2008-10-09, Weldon Godfrey wrote:
> Is this patch based on 8-CURRENT or 7-RELEASE? If 8-CURRENT, I don't
> know if I can test as I would like to stick with 7-RELEASE for now.

Patches are against head.
Sorry that I didn't mention that. The nfs patch applies against RELENG_7 with offset and here's the zfs patch against RELENG_7. (Disclaimer: compile tested only)

%%%
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(revision 183727)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c	(working copy)
@@ -759,9 +759,10 @@ zfs_vget(vfs_t *vfsp, ino_t ino, int fla
 		VN_RELE(ZTOV(zp));
 		err = EINVAL;
 	}
-	if (err != 0)
-		*vpp = NULL;
-	else {
+	if (err != 0) {
+		/* try .zfs control directory */
+		err = zfsctl_vget(vfsp, ino, flags, vpp);
+	} else {
 		*vpp = ZTOV(zp);
 		vn_lock(*vpp, flags, curthread);
 	}
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(revision 183727)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c	(working copy)
@@ -1044,6 +1044,63 @@ zfsctl_lookup_objset(vfs_t *vfsp, uint64
 	return (error);
 }
 
+int
+zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp)
+{
+	zfsvfs_t *zfsvfs = vfsp->vfs_data;
+	vnode_t *dvp, *vp;
+	zfsctl_snapdir_t *sdp;
+	zfsctl_node_t *zcp;
+	zfs_snapentry_t *sep;
+	int error;
+
+	*vpp = NULL;
+
+	ASSERT(zfsvfs->z_ctldir != NULL);
+	error = zfsctl_root_lookup(zfsvfs->z_ctldir, "snapshot", &dvp,
+	    NULL, 0, NULL, kcred);
+	if (error != 0)
+		return (error);
+
+	if (nodeid == ZFSCTL_INO_ROOT || nodeid == ZFSCTL_INO_SNAPDIR) {
+		if (nodeid == ZFSCTL_INO_SNAPDIR)
+			*vpp = dvp;
+		else {
+			VN_RELE(dvp);
+			*vpp = zfsvfs->z_ctldir;
+			VN_HOLD(*vpp);
+		}
+		/* XXX: LK_RETRY? */
+		vn_lock(*vpp, flags | LK_RETRY, curthread);
+		return (0);
+	}
+
+	sdp = dvp->v_data;
+
+	mutex_enter(&sdp->sd_lock);
+	sep = avl_first(&sdp->sd_snaps);
+	while (sep != NULL) {
+		vp = sep->se_root;
+		zcp = vp->v_data;
+		if (zcp->zc_id == nodeid)
+			break;
+
+		sep = AVL_NEXT(&sdp->sd_snaps, sep);
+	}
+
+	if (sep != NULL) {
+		VN_HOLD(vp);
+		*vpp = vp;
+		vn_lock(*vpp, flags, curthread);
+	} else
+		error = EINVAL;
+
+	mutex_exit(&sdp->sd_lock);
+
+	VN_RELE(dvp);
+
+	return (error);
+}
 /*
  * Unmount any snapshots for the given filesystem.  This is called from
 * zfs_umount() - if we have a ctldir, then go through and unmount all the

Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(revision 183727)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/zfs_ctldir.h	(working copy)
@@ -60,6 +60,7 @@ int zfsctl_root_lookup(vnode_t *dvp, cha
     int flags, vnode_t *rdir, cred_t *cr);
 
 int zfsctl_lookup_objset(vfs_t *vfsp, uint64_t objsetid, zfsvfs_t **zfsvfsp);
+int zfsctl_vget(vfs_t *vfsp, uint64_t nodeid, int flags, vnode_t **vpp);
 
 #define ZFSCTL_INO_ROOT		0x1
 #define ZFSCTL_INO_SNAPDIR	0x2
%%%

-- 
Jaakko
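
A note on the vn_lock() build errors quoted earlier in this thread: the head and RELENG_7 versions of the zfs patch above differ only in their vn_lock() calls, because the vn_lock() interface changed between the two branches. A minimal sketch of the two calling conventions follows; vp and flags here are placeholders, not identifiers taken from either patch:

%%%
/*
 * Sketch only -- not part of the patches above. It illustrates why
 * building the head version of the patch on 7.x fails with "macro
 * vn_lock requires 3 arguments, but only 2 given": vn_lock is a
 * macro on RELENG_7 and still takes the locking thread explicitly.
 */

/* head (8-CURRENT) at the time of these patches: two arguments. */
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);

/* RELENG_7: the locking thread is passed as a third argument. */
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, curthread);
%%%

So when building on 7-RELEASE, the RELENG_7 variant of the patch (the one passing curthread) is the one to apply.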