From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 00:01:41 2012
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Konstantin Belousov
Cc: freebsd-fs@freebsd.org
Date: Sat, 17 Nov 2012 19:01:33 -0500 (EST)
Subject: Re: RFC: moving NFSv4.1 client from projects to head

Konstantin Belousov wrote:
> On Fri, Nov 16, 2012 at 09:15:49PM -0500, Rick Macklem wrote:
> > Hi,
> >
> > I've been working on NFSv4.1 client support for FreeBSD for some
> > time now, and the known issues from testing at a Bakeathon last June
> > have been resolved. The patch is rather big, but I believe it should
> > not affect the client unless the new mount options
> >     minorversion=1,pnfs
> > are used for an NFSv4 mount.
> >
> > Since I don't believe that the new NFS client will be affected
> > unless these new mount options are used, I think it could go into
> > head now. On the other hand, there are few NFSv4.1 servers currently
> > available, so it might not yet be widely useful. (See below for
> > slides w.r.t. server availability.)
> >
> > How do folks feel about doing this in early December?
> >
> > Since it doesn't change any KBIs, it could also be MFC'd to
> > stable/9. Would MFC'ing it to stable/9 make sense?
> >
> > For those interested in testing and/or reviewing it, the code is
> > currently in:
> >     base/projects/nfsv4.1-client
> > (It is purely a kernel patch.)
> > Also, the current state of NFSv4.1 servers is roughly:
> > http://www.pnfs.com/docs/LISA-11-pNFS-BoF-final.pdf
> >
> > Thanks in advance for any comments, rick
>
> IMO, the earlier a change that you feel is mature enough hits HEAD in
> the x.0 cycle, the better. That said, would you mind putting a diff
> somewhere to ease review and testing?

Will do, as soon as I get home at the end of Nov. (I can't do svn or
uploads until then.)
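For readers who want to try the patch, a mount using the new options
described above might look like the following sketch; the server name
and export path are placeholders, not taken from this thread:

```shell
# Hypothetical NFSv4.1/pNFS mount using the new minorversion=1,pnfs
# options on an NFSv4 mount. Requires root and a reachable NFSv4.1
# server; "server.example.com:/export" is a placeholder.
mount -t nfs -o nfsv4,minorversion=1,pnfs server.example.com:/export /mnt
```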
Thanks for the comments, rick

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 00:26:55 2012
From: Bartosz Stec <bartosz.stec@it4pro.pl>
Organization: IT4Pro
To: Andriy Gapon
Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org
Date: Sun, 18 Nov 2012 01:26:34 +0100
Subject: Re: problem booting to multi-vdev root pool [Was: kern/150503:
 [zfs] ZFS disks are UNAVAIL and corrupted after reboot]
On 2012-11-16 17:17, Guido Falsi wrote:
> On 11/16/12 16:45, Andriy Gapon wrote:
>> on 13/11/2012 18:16 Guido Falsi said the following:
>>> My idea, though it is just speculation and I could be very wrong, is
>>> that the geom tasting code has some problem with multi-vdev root
>>> pools.
>>
>> Guido,
>>
>> you are absolutely correct. The code for reconstructing/tasting a
>> root pool configuration is modified upstream code, so it inherited a
>> limitation from it: support for only a single top-level vdev in a
>> root pool.
>> I have an idea how to add the missing support, but it turned out not
>> to be something I could hack together in a couple of hours.
>
> I can imagine; it does not look simple in any way!
>
>> So instead I wrote the following patch, which should fall back to
>> using a root pool configuration from zpool.cache (if it's present
>> there) for a multi-vdev root pool:
>> http://people.freebsd.org/~avg/zfs-spa-multi_vdev_root_fallback.diff
>>
>> The patch also fixes a minor (one-time) memory leak.
>>
>> Guido, Bartosz,
>> could you please test the patch?
>
> I have just compiled an r242910 kernel with this patch (and just this
> one) applied.
>
> System booted, so it seems to work fine! :)

I've just compiled and installed a fresh kernel with your patch; the
system booted without any problems, so apparently the patch works as
intended. Good job, Andriy!

>> Apologies for the breakage.
>
> No worries, and thanks for this fix.
>
> Also thanks for all the work on ZFS!
> Make it twice :)

Regards,
-- 
Bartosz Stec

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 00:51:34 2012
From: je <je@8192.net>
To: freebsd-fs@freebsd.org
Date: Sat, 17 Nov 2012 16:42:48 -0800
Subject: ZFS behavior with odd-number of non-parity drives?

Given an odd number of data (non-parity) drives, how does ZFS write
data to the individual disks? For example, a 4-drive RAIDZ or a 5-drive
RAIDZ2 would each use three drives for data (plus one or two drives,
respectively, for parity) when writing. Since the recordsize is a power
of two (and no power of two is evenly divisible by three), it is
impossible to write a full sector (be it a 512b or 4k sector) of data
to each data drive. In this case, what does ZFS do?
Will it write recordsize/3 bytes of data to each drive, leaving the
drive to do a read/modify/write operation? Or will ZFS round the write
up to the nearest drive sector size and "waste" the extra bytes?

I see the best practice for RAIDZ is to use an odd number of drives,
and for RAIDZ2 an even number, but I am curious about the behavior of
ZFS in these sub-optimal configurations.

John

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 02:07:43 2012
From: Adam Nowacki <nowakpl@platinum.linux.pl>
To: freebsd-fs@freebsd.org
Date: Sun, 18 Nov 2012 03:01:36 +0100
Subject: Re: Jumbo Packet fail.
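Returning to the RAIDZ question above: the allocation arithmetic can be
sketched numerically. This follows my reading of the open-source RAID-Z
code (variable-width stripes, one parity sector per stripe row for
RAIDZ1, and allocations rounded up to a multiple of nparity+1 sectors
so no unusable single-sector hole is left); treat it as an illustration
rather than an authoritative statement of the on-disk format:

```shell
# RAIDZ1 allocation sketch for a 4-disk vdev (3 data + 1 parity),
# a 128 KiB record, and 512-byte sectors.
record=131072
sector=512
ndata=3
nparity=1
data_sectors=$((record / sector))               # 256 sectors of data
rows=$(( (data_sectors + ndata - 1) / ndata ))  # ceil(256/3) = 86 rows
parity_sectors=$((rows * nparity))              # one parity sector/row
total=$((data_sectors + parity_sectors))        # 342
# Round the allocation up to a multiple of (nparity + 1) sectors:
mult=$((nparity + 1))
alloc=$(( (total + mult - 1) / mult * mult ))
echo "$alloc sectors allocated for a $((record / 1024))K record"
# prints: 342 sectors allocated for a 128K record
```

So in this sketch the write is rounded to whole sectors per drive and
padded, rather than triggering a read/modify/write inside a sector.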
Set up IP addresses after setting the MTU, or restart the interface
with:
    ifconfig em0 down
    ifconfig em0 up
The MTU gets cached in routes; see netstat -rnW.

On 2012-11-17 23:32, Zaphod Beeblebrox wrote:
> I recently started using an iSCSI disk on my ZFS array seriously from
> a Windows 7 host on the network. The performance is acceptable, but I
> was led to believe that using jumbo packets is a win here. My win7
> motherboard adapter did not support jumbo frames, so I got one that
> did... configured it, etc. Just in case anyone cares, the motherboard
> had an 82567V-2 (does not support jumbo frames) and I added an Intel
> 82574L-based card.
>
> Similarly, I configured em0 on my FreeBSD host to have an MTU of 9014
> bytes (I also tried 9000). The hardware on the FreeBSD 9.1-RC2 side
> is:
>
> em0: port 0xdc00-0xdc1f mem
> 0xfcfe0000-0xfcffffff,0xfcfc0000-0xfcfdffff irq 16 at device 0.0 on
> pci3
>
> pciconf -lv identifies the chipset as 82572EI
>
> Now... my problem is that the Windows machine correctly advertises an
> MSS of 8960 bytes in its SYN packet while FreeBSD advertises 1460 in
> the SYN-ACK.
>
> [1:42:342]root@vr:/usr/local/etc/istgt> ifconfig em0
> em0: flags=8843 metric 0 mtu 9014
>         options=4019b
>         ether 00:15:17:0d:04:a8
>         inet 66.96.20.52 netmask 0xffffffe0 broadcast 66.96.20.63
>         inet6 fe80::215:17ff:fe0d:4a8%em0 prefixlen 64 scopeid 0x5
>         inet6 2001:1928:1::52 prefixlen 64
>         inet 192.168.221.2 netmask 0xffffff00 broadcast 192.168.221.255
>         nd6 options=21
>         media: Ethernet autoselect (1000baseT)
>         status: active
>
> I have tested this with both IPv4 and IPv6 connections between the
> win7 host and the FreeBSD server.
> win7 always requests the larger MSS, and FreeBSD the smaller.

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 02:15:22 2012
Date: Sat, 17 Nov 2012 21:15:20 -0500
Subject: Re: SSD recommendations for ZFS cache/log
From: Zaphod Beeblebrox <zbeeble@gmail.com>
To: Adam McDougall
Cc: freebsd-fs@freebsd.org

On Sat, Nov 17, 2012 at 5:58 PM, Adam McDougall wrote:
> Some found a way to measure progress and kept letting it
> churn/deadlock/reboot until things came back to normal. I think in
> -current there is a new zfs feature allowing for background deletion
> that may ease this issue, and someone reported success.

I think the "feature" you're thinking of is the background deletion of
a filesystem object. As I recall the problem, the deletion of a
filesystem object is atomic... and can hang a whole array while it
happens (I suppose this is doubly true if the filesystem is deduped).
The patch makes this happen in the background.

However, it wouldn't have any effect on removing a non-filesystem
chunk of data, AFAIK. In my experience of copying 1T of data and then
recopying that same 1T of data, the first copy took some hours and the
2nd copy took some days. The first delete took many days and the 2nd
delete was fairly quick.
From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 11:49:06 2012
From: Andriy Gapon <avg@FreeBSD.org>
To: Bartosz Stec, Guido Falsi
Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org
Date: Sun, 18 Nov 2012 13:48:44 +0200
Subject: Re: problem booting to multi-vdev root pool

on 18/11/2012 02:26 Bartosz Stec said the following:
> On 2012-11-16 17:17, Guido Falsi wrote:
>> On 11/16/12 16:45, Andriy Gapon wrote:
>>> Guido, Bartosz,
>>> could you please test the patch?
>>
>> I have just compiled an r242910 kernel with this patch (and just this
>> one) applied.
>>
>> System booted, so it seems to work fine! :)
>
> I've just compiled and installed a fresh kernel with your patch; the
> system booted without any problems, so apparently the patch works as
> intended.

Thank you both very much for testing! Committed as r243213.

-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 14:04:36 2012
From: Aldis Berjoza <graudeejs@yandex.ru>
To: freebsd-fs
Date: Sun, 18 Nov 2012 16:03:54 +0200
Subject: Why do we need vfs.root.mountfrom for zfs
I was wondering why, oh why, we need to set vfs.root.mountfrom in
/boot/loader.conf in order to boot from ZFS. zpools have a bootfs
option, so this info is redundant. I think one of the two could be
avoided entirely, at least when we boot from gptzfsboot.

What am I missing?

-- 
Aldis Berjoza
FreeBSD addict

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 19:19:53 2012
From: Andriy Gapon <avg@FreeBSD.org>
To: Aldis Berjoza
Cc: freebsd-fs@FreeBSD.org
Date: Sun, 18 Nov 2012 21:19:46 +0200
Subject: Re: Why do we need vfs.root.mountfrom for zfs
on 18/11/2012 16:03 Aldis Berjoza said the following:
> I was wondering why, oh why, we need to set vfs.root.mountfrom in
> /boot/loader.conf in order to boot from ZFS.

Who is 'we'? And why do you think 'we' have to do that? :-)

> zpools have a bootfs option.
>
> This info is redundant.
> I think one of the two could be avoided entirely, at least when we
> boot from gptzfsboot.
>
> What am I missing?

You don't miss anything. The defaults work fine. You don't have to set
vfs.root.mountfrom unless you want it to point to some non-default fs.

More details:
http://ru.kyivbsd.org.ua/arhiv/2012/kyivbsd12-gapon-zfs.pdf?attredirects=0&d=1
Page 17 and one.

-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 19:38:34 2012
From: Aldis Berjoza <graudeejs@yandex.ru>
To: Andriy Gapon
Cc: "freebsd-fs@FreeBSD.org"
Date: Sun, 18 Nov 2012 21:38:31 +0200
Subject: Re: Why do we need vfs.root.mountfrom for zfs

18.11.2012, 21:19, "Andriy Gapon":
> on 18/11/2012 16:03 Aldis Berjoza said the following:
>
>> I was wondering why, oh why, we need to set vfs.root.mountfrom in
>> /boot/loader.conf in order to boot from ZFS.
>
> Who is 'we'? And why do you think 'we' have to do that? :-)
>
>> zpools have a bootfs option.
>>
>> This info is redundant.
>> I think one of the two could be avoided entirely, at least when we
>> boot from gptzfsboot.
>>
>> What am I missing?
>
> You don't miss anything. The defaults work fine. You don't have to set
> vfs.root.mountfrom unless you want it to point to some non-default fs.
>
> More details:
> http://ru.kyivbsd.org.ua/arhiv/2012/kyivbsd12-gapon-zfs.pdf?attredirects=0&d=1
> Page 17 and one.
>
> --
> Andriy Gapon

Thank you, thank you so very much.
I never knew vfs.root.mountfrom was optional. All the tutorials I've
been reading said to set it.
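A minimal sketch of the setup being discussed, with hypothetical pool
and dataset names (gptzfsboot reads the pool's bootfs property, so
pointing it at the root dataset is normally all that is needed):

```shell
# Hypothetical names: pool "rpool", root dataset "rpool/ROOT/default".
zpool set bootfs=rpool/ROOT/default rpool

# vfs.root.mountfrom in /boot/loader.conf is then only needed to
# override the default root, e.g.:
#   vfs.root.mountfrom="zfs:rpool/ROOT/alternate"
```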
-- 
Aldis Berjoza
FreeBSD addict

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 19:43:46 2012
From: Andriy Gapon <avg@FreeBSD.org>
To: Aldis Berjoza
Cc: freebsd-fs@FreeBSD.org
Date: Sun, 18 Nov 2012 21:43:42 +0200
Subject: Re: Why do we need vfs.root.mountfrom for zfs

on 18/11/2012 21:38 Aldis Berjoza said the following:
>
> 18.11.2012, 21:19, "Andriy Gapon":
>> on 18/11/2012 16:03 Aldis Berjoza said the following:
>>
>>> I was wondering why, oh why, we need to set vfs.root.mountfrom in
>>> /boot/loader.conf in order to boot from ZFS.
>>
>> Who is 'we'?
>> And why do you think 'we' have to do that? :-)
>>
>>> zpools have a bootfs option.
>>>
>>> This info is redundant.
>>> I think one of the two could be avoided entirely, at least when we
>>> boot from gptzfsboot.
>>>
>>> What am I missing?
>>
>> You don't miss anything. The defaults work fine. You don't have to
>> set vfs.root.mountfrom unless you want it to point to some
>> non-default fs.
>>
>> More details:
>> http://ru.kyivbsd.org.ua/arhiv/2012/kyivbsd12-gapon-zfs.pdf?attredirects=0&d=1
>> Page 17 and one.

The above should have been "and on".

>
> Thank you, thank you so very much.
> I never knew vfs.root.mountfrom was optional. All the tutorials I've
> been reading said to set it.

Well, it used to be true that either an fstab "/" entry or
vfs.root.mountfrom had to be configured. But now that's optional.

-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 22:20:01 2012
From: Peter Wullinger
To: freebsd-fs@FreeBSD.org
Date: Sun, 18 Nov 2012 22:20:01 GMT
Subject: Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable.
The following reply was made to PR kern/153520; it has been noted by GNATS.

From: Peter Wullinger
To: bug-followup@FreeBSD.org
Subject: Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable.
Date: Sun, 18 Nov 2012 23:10:42 +0100

I see this too on two identical HP ProLiant DL320 G6 with 9-STABLE.
The machine usually needs some nudging in the form of warm restarts to
boot the operating system.

== bootloader output ==
error 1 lba 32
error 1 lba 1
No ZFS pools located, can't boot
== bootloader output ==

== gpart show ==
=>        34  488326973  da0  GPT  (232G)
          34        128    1  freebsd-boot  (64k)
         162   67108864    2  freebsd-swap  (32G)
    67109026  421217981    3  freebsd-zfs  [bootme]  (200G)
== gpart show ==

Root is on a hardware mirror provided by the machine's HPQ Smart Array
P410i. The other 6 drives are configured as JBOD with a raidz2 data
pool (not shown here).

== zpool status rpool ==
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h12m with 0 errors on Wed Nov  7 04:47:21 2012
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          da0p3     ONLINE       0     0     0

errors: No known data errors
== zpool status rpool ==

== zdb ==
rpool:
    version: 28
    name: 'rpool'
    state: 0
    txg: 22695230816
    pool_guid: 16981309850887562666
    hostid: 1031366389
    hostname: '...'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 16981309850887562666
        children[0]:
            type: 'disk'
            id: 0
            guid: 6716485486554561877
            path: '/dev/da0p3'
            phys_path: '/dev/da0p3'
            whole_disk: 0
            metaslab_array: 23
            metaslab_shift: 31
            ashift: 9
            asize: 215658790912
            is_log: 0
            DTL: 65
== zdb ==

== dmesg ==
Copyright (c) 1992-2012 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation. FreeBSD 9.1-PRERELEASE #3 r242394: Wed Oct 31 13:15:48 CET 2012 src@...:/usr/obj/usr/src/sys/ML350 amd64 CPU: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz (2000.12-MHz K8-class = CPU) Origin =3D "GenuineIntel" Id =3D 0x106a5 Family =3D 0x6 Model =3D 0x1a= Stepping =3D 5 Features=3D0xbfebfbff Features2=3D0x9ce3bd AMD Features=3D0x28100800 AMD Features2=3D0x1 TSC: P-state invariant, performance statistics real memory =3D 6442450944 (6144 MB) avail memory =3D 6177263616 (5891 MB) Event timer "LAPIC" quality 400 ACPI APIC Table: FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs FreeBSD/SMP: 1 package(s) x 4 core(s) cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 2 cpu2 (AP): APIC ID: 4 cpu3 (AP): APIC ID: 6 ACPI Warning: Invalid length for Pm1aControlBlock: 32, using default 16 (20= 110527/tbfadt-638) ACPI Warning: Invalid length for Pm2ControlBlock: 32, using default 8 (2011= 0527/tbfadt-638) ioapic1 irqs 24-47 on motherboard ioapic0 irqs 0-23 on motherboard kbd1 at kbdmux0 cryptosoft0: on motherboard acpi0: on motherboard acpi0: Power Button (fixed) cpu0: on acpi0 cpu1: on acpi0 cpu2: on acpi0 cpu3: on acpi0 attimer0: port 0x40-0x43 irq 0 on acpi0 Timecounter "i8254" frequency 1193182 Hz quality 0 Event timer "i8254" frequency 1193182 Hz quality 100 hpet0: iomem 0xfed00000-0xfed003ff on acpi0 Timecounter "HPET" frequency 14318180 Hz quality 950 Event timer "HPET" frequency 14318180 Hz quality 450 Event timer "HPET1" frequency 14318180 Hz quality 440 Event timer "HPET2" frequency 14318180 Hz quality 440 Event timer "HPET3" frequency 14318180 Hz quality 440 atrtc0: port 0x70-0x71 on acpi0 Event timer "RTC" frequency 32768 Hz quality 0 Timecounter "ACPI-fast" frequency 3579545 Hz quality 900 acpi_timer0: <24-bit timer at 3.579545MHz> port 0x908-0x90b on acpi0 pcib0: on acpi0 pci0: on 
pcib0 pcib1: at device 1.0 on pci0 pci4: on pcib1 ciss0: port 0x4000-0x40ff mem 0xfb000000-0xfb3fffff,0xfaff0000-0xfaff0fff irq 28 at device 0.0 on pci4 ciss0: PERFORMANT Transport pcib2: at device 2.0 on pci0 pci23: on pcib2 pcib3: at device 3.0 on pci0 pci5: on pcib3 pcib4: at device 4.0 on pci0 pci8: on pcib4 pcib5: at device 5.0 on pci0 pci11: on pcib5 pcib6: at device 6.0 on pci0 pci24: on pcib6 pcib7: at device 7.0 on pci0 pci14: on pcib7 ciss1: port 0x5000-0x50ff mem 0xfb800000-0xfbbfffff,0xfb7f0000-0xfb7f0fff irq 30 at device 0.0 on pci14 ciss1: PERFORMANT Transport pcib8: at device 8.0 on pci0 pci25: on pcib8 pcib9: at device 9.0 on pci0 pci17: on pcib9 pcib10: at device 10.0 on pci0 pci20: on pcib10 mpt0: port 0x6000-0x60ff mem 0xfbff0000-0xfbff3fff,0xfbfe0000-0xfbfeffff irq 33 at device 0.0 on pci20 mpt0: MPI Version=1.5.16.0 mpt0: Capabilities: ( RAID-0 RAID-1E RAID-1 ) mpt0: 0 Active Volumes (2 Max) mpt0: 0 Hidden Drive Members (10 Max) pci0: at device 20.0 (no driver attached) pci0: at device 20.1 (no driver attached) pci0: at device 20.2 (no driver attached) pcib11: at device 28.0 on pci0 pci2: on pcib11 pcib12: at device 0.0 on pci2 pci3: on pcib12 bge0: mem 0xfaef0000-0xfaefffff,0xfaee0000-0xfaeeffff irq 16 at device 4.0 on pci3 bge0: CHIP ID 0x00009003; ASIC REV 0x09; CHIP REV 0x90; PCI-X 133 MHz miibus0: on bge0 brgphy0: PHY 1 on miibus0 brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, auto, auto-flow bge0: Ethernet address: 00:25:b3:ad:a5:8c bge1: mem 0xfaed0000-0xfaedffff,0xfaec0000-0xfaecffff irq 17 at device 4.1 on pci3 bge1: CHIP ID 0x00009003; ASIC REV 0x09; CHIP REV 0x90; PCI-X 133 MHz miibus1: on bge1 brgphy1: PHY 1 on miibus1 brgphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, auto, auto-flow bge1: Ethernet address: 00:25:b3:ad:a5:8d uhci0: port 0x1000-0x101f irq 20 at 
device 29.0 on pci0 usbus0 on uhci0 uhci1: port 0x1020-0x103f irq 23 at device 29.1 on pci0 usbus1 on uhci1 uhci2: port 0x1040-0x105f irq 22 at device 29.2 on pci0 usbus2 on uhci2 uhci3: port 0x1060-0x107f irq 23 at device 29.3 on pci0 usbus3 on uhci3 ehci0: mem 0xfabf0000-0xfabf03ff irq 20 at device 29.7 on pci0 usbus4: EHCI version 1.0 usbus4 on ehci0 pcib13: at device 30.0 on pci0 pci1: on pcib13 vgapci0: port 0x3000-0x30ff mem 0xf0000000-0xf7ffffff,0xfadf0000-0xfadfffff irq 23 at device 3.0 on pci1 pci1: at device 4.0 (no driver attached) pci1: at device 4.2 (no driver attached) uhci4: port 0x3800-0x381f irq 22 at device 4.4 on pci1 usbus5 on uhci4 ipmi0: mem 0xfacf0000-0xfacf00ff irq 21 at device 4.6 on pci1 ipmi0: using KSC interface isab0: at device 31.0 on pci0 isa0: on isab0 atapci0: port 0x10c0-0x10c7,0x10c8-0x10cb,0x10d0-0x10d7,0x10d8-0x10db,0x10e0-0x10ef,0x10f0-0x10ff irq 17 at device 31.5 on pci0 ata2: at channel 0 on atapci0 ata3: at channel 1 on atapci0 acpi_tz0: on acpi0 atkbdc0: port 0x60,0x64 irq 1 on acpi0 atkbd0: irq 1 on atkbdc0 kbd0 at atkbd0 atkbd0: [GIANT-LOCKED] qpi0: on motherboard ipmi1: on isa0 device_attach: ipmi1 attach returned 16 ipmi1: on isa0 device_attach: ipmi1 attach returned 16 orm0: at iomem 0xc0000-0xcafff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 coretemp0: on cpu0 est0: on cpu0 est: CPU supports Enhanced Speedstep, but is not recognized. est: cpu_vendor GenuineIntel, msr f device_attach: est0 attach returned 6 p4tcc0: on cpu0 coretemp1: on cpu1 est1: on cpu1 est: CPU supports Enhanced Speedstep, but is not recognized. est: cpu_vendor GenuineIntel, msr f device_attach: est1 attach returned 6 p4tcc1: on cpu1 coretemp2: on cpu2 est2: on cpu2 est: CPU supports Enhanced Speedstep, but is not recognized. 
est: cpu_vendor GenuineIntel, msr f device_attach: est2 attach returned 6 p4tcc2: on cpu2 coretemp3: on cpu3 est3: on cpu3 est: CPU supports Enhanced Speedstep, but is not recognized. est: cpu_vendor GenuineIntel, msr f device_attach: est3 attach returned 6 p4tcc3: on cpu3 ZFS filesystem version 5 ZFS storage pool version 28 Timecounters tick every 10.000 msec usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 12Mbps Full Speed USB v1.0 usbus3: 12Mbps Full Speed USB v1.0 usbus4: 480Mbps High Speed USB v2.0 usbus5: 12Mbps Full Speed USB v1.0 ugen0.1: at usbus0 uhub0: on usbus0 ugen1.1: at usbus1 uhub1: on usbus1 (probe0:ciss0:0:0:0): REPORT LUNS. CDB: a0 0 0 0 0 0 0 0 0 10 0 0 (probe0:ciss0:0:0:0): CAM status: SCSI Status Error (probe0:ciss0:0:0:0): SCSI status: Check Condition (probe0:ciss0:0:0:0): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code) (probe0:ciss0:0:0:0): Error 22, Unretryable error ugen2.1: at usbus2 uhub2: on usbus2 ugen3.1: at usbus3 uhub3: on usbus3 (probe1:ciss0:0:1:0): REPORT LUNS. CDB: a0 0 0 0 0 0 0 0 0 10 0 0 (probe1:ciss0:0:1:0): CAM status: SCSI Status Error (probe1:ciss0:0:1:0): SCSI status: Check Condition (probe1:ciss0:0:1:0): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code) (probe1:ciss0:0:1:0): Error 22, Unretryable error ugen4.1: at usbus4 uhub4: on usbus4 ugen5.1: <0x103c> at usbus5 uhub5: <0x103c UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus5 (probe2:ciss0:0:2:0): REPORT LUNS. CDB: a0 0 0 0 0 0 0 0 0 10 0 0 (probe2:ciss0:0:2:0): CAM status: SCSI Status Error (probe2:ciss0:0:2:0): SCSI status: Check Condition (probe2:ciss0:0:2:0): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code) (probe2:ciss0:0:2:0): Error 22, Unretryable error ipmi0: IPMI device rev. 1, firmware rev. 2.07, version 2.0 unknown: FAILURE - INQUIRY ILLEGAL REQUEST asc=0x24 ascq=0x00 ipmi0: Number of channels 0 (probe3:ciss0:0:3:0): REPORT LUNS. 
CDB: a0 0 0 0 0 0 0 0 0 10 0 0 (probe3:ciss0:0:3:0): CAM status: SCSI Status Error (probe3:ciss0:0:3:0): SCSI status: Check Condition (probe3:ciss0:0:3:0): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code) (probe3:ciss0:0:3:0): Error 22, Unretryable error ipmi0: Attached watchdog (probe4:ciss0:0:4:0): REPORT LUNS. CDB: a0 0 0 0 0 0 0 0 0 10 0 0 (probe4:ciss0:0:4:0): CAM status: SCSI Status Error (probe4:ciss0:0:4:0): SCSI status: Check Condition (probe4:ciss0:0:4:0): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code) (probe4:ciss0:0:4:0): Error 22, Unretryable error (probe5:ciss0:0:5:0): REPORT LUNS. CDB: a0 0 0 0 0 0 0 0 0 10 0 0 (probe5:ciss0:0:5:0): CAM status: SCSI Status Error (probe5:ciss0:0:5:0): SCSI status: Check Condition (probe5:ciss0:0:5:0): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code) (probe5:ciss0:0:5:0): Error 22, Unretryable error (probe6:ciss0:0:6:0): REPORT LUNS. CDB: a0 0 0 0 0 0 0 0 0 10 0 0 (probe6:ciss0:0:6:0): CAM status: SCSI Status Error (probe6:ciss0:0:6:0): SCSI status: Check Condition (probe6:ciss0:0:6:0): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code) (probe6:ciss0:0:6:0): Error 22, Unretryable error da0 at ciss0 bus 0 scbus0 target 0 lun 0 da0: Fixed Direct Access SCSI-5 device da0: 135.168MB/s transfers da0: Command Queueing enabled da0: 238440MB (488327040 512 byte sectors: 255H 32S/T 59844C) da1 at ciss0 bus 0 scbus0 target 1 lun 0 da1: Fixed Direct Access SCSI-5 device da1: 135.168MB/s transfers da1: Command Queueing enabled da1: 953837MB (1953459632 512 byte sectors: 255H 32S/T 65535C) da2 at ciss0 bus 0 scbus0 target 2 lun 0 da2: Fixed Direct Access SCSI-5 device da2: 135.168MB/s transfers da2: Command Queueing enabled da2: 953837MB (1953459632 512 byte sectors: 255H 32S/T 65535C) da3 at ciss0 bus 0 scbus0 target 3 lun 0 da3: Fixed Direct Access SCSI-5 device da3: 135.168MB/s transfers da3: Command Queueing enabled 
da3: 953837MB (1953459632 512 byte sectors: 255H 32S/T 65535C) da4 at ciss0 bus 0 scbus0 target 4 lun 0 da4: Fixed Direct Access SCSI-5 device da4: 135.168MB/s transfers da4: Command Queueing enabled da4: 953837MB (1953459632 512 byte sectors: 255H 32S/T 65535C) da5 at ciss0 bus 0 scbus0 target 5 lun 0 da5: Fixed Direct Access SCSI-5 device da5: 135.168MB/s transfers da5: Command Queueing enabled da5: 953837MB (1953459632 512 byte sectors: 255H 32S/T 65535C) da6 at ciss0 bus 0 scbus0 target 6 lun 0 da6: Fixed Direct Access SCSI-5 device da6: 135.168MB/s transfers da6: Command Queueing enabled da6: 953837MB (1953459632 512 byte sectors: 255H 32S/T 65535C) sa0 at ciss1 bus 32 scbus3 target 3 lun 0 sa0: Removable Sequential Access SCSI-5 device sa0: 135.168MB/s transfers sa0: Command Queueing enabled sa1 at mpt0 bus 0 scbus4 target 5 lun 0 sa1: Removable Sequential Access SCSI-5 device sa1: 300.000MB/s transfers sa1: Command Queueing enabled cd0 at ata3 bus 0 scbus6 target 0 lun 0 cd0: Removable CD-ROM SCSI-0 device cd0: 3.300MB/s transfers cd0: Attempt to query device size failed: NOT READY, Medium not present - tray closed ch0 at mpt0 bus 0 scbus4 target 5 lun 1 ch0: Removable Changer SCSI-5 device ch0: 300.000MB/s transfers ch0: Command Queueing enabled SMP: AP CPU #3 Launched! ch0: 8 slots, 1 drive, 1 picker, 0 portals SMP: AP CPU #2 Launched! SMP: AP CPU #1 Launched! Timecounter "TSC-low" frequency 15625916 Hz quality 1000 GEOM: da3: the primary GPT table is corrupt or invalid. GEOM: da3: using the secondary instead -- recovery strongly advised. GEOM: da4: the primary GPT table is corrupt or invalid. GEOM: da4: using the secondary instead -- recovery strongly advised. GEOM: da5: the primary GPT table is corrupt or invalid. GEOM: da5: using the secondary instead -- recovery strongly advised. GEOM: da6: the primary GPT table is corrupt or invalid. GEOM: da6: using the secondary instead -- recovery strongly advised. 
Root mount waiting for: usbus5 usbus4 usbus3 usbus2 usbus1 usbus0 uhub0: 2 ports with 2 removable, self powered uhub1: 2 ports with 2 removable, self powered uhub2: 2 ports with 2 removable, self powered uhub5: 2 ports with 2 removable, self powered uhub3: 2 ports with 2 removable, self powered Root mount waiting for: usbus5 usbus4 ugen5.2: at usbus5 ukbd0: on usbus5 kbd2 at ukbd0 ums0: on usbus5 ums0: 3 buttons and [XY] coordinates ID=0 Root mount waiting for: usbus4 Root mount waiting for: usbus4 uhub4: 8 ports with 8 removable, self powered Trying to mount root from zfs:rpool []... ugen3.2: at usbus3 == dmesg == -- Experience means nothing at all. You can also do your job badly for 35 years. -- Kurt Tucholsky From owner-freebsd-fs@FreeBSD.ORG Sun Nov 18 23:10:33 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 05321635 for ; Sun, 18 Nov 2012 23:10:33 +0000 (UTC) (envelope-from bryan@shatow.net) Received: from secure.xzibition.com (secure.xzibition.com [173.160.118.92]) by mx1.freebsd.org (Postfix) with ESMTP id 7F30E8FC15 for ; Sun, 18 Nov 2012 23:10:31 +0000 (UTC) DomainKey-Signature: a=rsa-sha1; c=nofws; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; q=dns; s=sweb; b=gaWZ97 M+R4M8TYdI31OhtrngZCO6an7glB29uk1iv8noXjHi9IE5+nvxLEjr4hM/cCSd1T bk3O9bvNVcjPVkLVpIzYdXCc3aJVw1zizweedigF47d1/4SzhzyzbSFz2hHsJ3nr aRfGdEedWy3tdmxnnXwePtIEyTEmZt9IaN1nE= DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=shatow.net; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; s=sweb; bh=WTNSfTn9kBBd oFrcXBrp122+/DxFOxf6LVnY3spDp4k=; b=guEYfTHJBDbxImpkA0NcFuKIc8ar 8+ICdZZykmrGzHVjBJkgP/jpnHAgrdI84GKV3trPU8VX3KcLOYlQES+Hyl89nljF qVVCwJ4LpmCvTLm+sgPUMiB83IEy94FRzTwSsajAjaMxZpeHeDkGyFJrhrmLqrCD EcEYozKxsRRlBuQ= 
Received: (qmail 98311 invoked from network); 18 Nov 2012 17:10:28 -0600 Received: from unknown (HELO ?10.10.0.115?) (bryan@shatow.net@10.10.0.115) by sweb.xzibition.com with ESMTPA; 18 Nov 2012 17:10:28 -0600 Message-ID: <50A96AE2.60803@shatow.net> Date: Sun, 18 Nov 2012 17:10:26 -0600 From: Bryan Drewery User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20121026 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andriy Gapon Subject: Re: Why do we need vfs.root.mountfrom for zfs References: <848051353247434@web29h.yandex.ru> <50A934D2.9010205@FreeBSD.org> <1026501353267511@web27h.yandex.ru> <50A93A6E.1090907@FreeBSD.org> In-Reply-To: <50A93A6E.1090907@FreeBSD.org> X-Enigmail-Version: 1.4.5 OpenPGP: id=3C9B0CF9; url=http://www.shatow.net/bryan/bryan.asc Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, vermaden X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 18 Nov 2012 23:10:33 -0000 On 11/18/2012 1:43 PM, Andriy Gapon wrote: > on 18/11/2012 21:38 Aldis Berjoza said the following: >> >> 18.11.2012, 21:19, "Andriy Gapon" : >>> on 18/11/2012 16:03 Aldis Berjoza said the following: >>> >>>> I was wondering why, oh why do we need to set >>>> vfs.root.mountfrom in /boot/loader.conf in order to boot from zfs. >>> >>> Who is 'we'? And why do you think 'we' have to do that? :-) >>> >>>> zpools have a bootfs option. >>>> >>>> This info is redundant. >>>> I think one of the two could be totally avoided, at least in the case when we boot from gptzfsboot. >>>> >>>> What am I missing? >>> >>> You don't miss anything. The defaults work fine. You don't have to set >>> vfs.root.mountfrom unless you want it to point to some non-default fs. >>> >>> More details: >>> http://ru.kyivbsd.org.ua/arhiv/2012/kyivbsd12-gapon-zfs.pdf?attredirects=0&d=1 >>> Page 17 and one. 
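[Illustration of the two settings being discussed -- a sketch only; the pool and dataset names below are hypothetical and not taken from this thread:]

```
# With gptzfsboot, the pool's bootfs property alone selects the root
# dataset, set once with:
#   zpool set bootfs=rpool/ROOT/default rpool
#
# /boot/loader.conf only needs vfs.root.mountfrom to override that
# default and mount some other dataset as root:
vfs.root.mountfrom="zfs:rpool/ROOT/other"
```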
> > the above should have been "and on". > >> >> Thank you, thank you so very much. >> I never knew vfs.root.mountfrom was optional. All the tutorials I've been reading said to set it. > > > Well, it used to be true that either an fstab "/" entry or vfs.root.mountfrom > had to be configured. But now that's optional. > Can you define "now"? 7.4-RELEASE, 8.3-RELEASE, 9.0-RELEASE, or just STABLE/CURRENT? Asking so this may possibly be removed from sysutils/beadm Bryan From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 07:21:35 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id CAD55F83 for ; Mon, 19 Nov 2012 07:21:35 +0000 (UTC) (envelope-from je@8192.net) Received: from smtp141.dfw.emailsrvr.com (smtp141.dfw.emailsrvr.com [67.192.241.141]) by mx1.freebsd.org (Postfix) with ESMTP id A29538FC0C for ; Mon, 19 Nov 2012 07:21:35 +0000 (UTC) Received: from localhost (localhost.localdomain [127.0.0.1]) by smtp14.relay.dfw1a.emailsrvr.com (SMTP Server) with ESMTP id E367D298131 for ; Mon, 19 Nov 2012 02:12:52 -0500 (EST) X-Virus-Scanned: OK Received: by smtp14.relay.dfw1a.emailsrvr.com (Authenticated sender: john-AT-8192.net) with ESMTPSA id AE941298242 for ; Mon, 19 Nov 2012 02:12:52 -0500 (EST) Message-ID: <50A9DBF4.4000303@8192.net> Date: Sun, 18 Nov 2012 23:12:52 -0800 From: je User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20121026 Thunderbird/16.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS behavior with odd-number of non-parity drives? 
References: <50A82F08.8010908@8192.net> In-Reply-To: <50A82F08.8010908@8192.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 07:21:35 -0000 On 11/17/2012 4:42 PM, je wrote: > Given an odd number of data (non-parity) drives, how does ZFS write data > to the individual disks? > > For example, a 4-drive RAIDZ or 5-drive RAIDZ2 would use three drives > for data and two for parity when writing. Since the recordsize is a > power of two (and no power of two is evenly divisible by three), it is > impossible to write a full sector (be it a 512b or 4k sector) of data to > each data drive. > > In this case, what does ZFS do? Will it write recordsize/3 bytes of data > to each drive, leaving the drive to do a read/modify/write operation? Or > will ZFS round the write up to the nearest drive sector size and "waste" > the extra bytes? > > I see the best practice for RAIDZ is to use an odd number of drives, and > for RAIDZ2 an even number, but I am curious as to the behavior of ZFS in > these sub-optimal conditions. > > John > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > If anyone else is interested, I created a FreeBSD 9.1-RC3 VM and ran some tests on a RAIDZ2 array of virtual disks. It appears that ZFS will always write a full sector (be it 512b or 4k) of data, rounding up to the nearest whole sector size. If the amount of data does not divide evenly among the number of drives, some drives will write more sectors than others. In my testing the division is not always as even as possible, though this may be due to how parity data is written. 
In summary, whole sectors are written and drives don't need to do a read/modify/write (as long as ZFS knows the real sector size of the drives). I did not test an array of drives with mixed sector sizes. John From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 08:31:32 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id C672D837 for ; Mon, 19 Nov 2012 08:31:32 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: from platinum.linux.pl (platinum.edu.pl [81.161.192.4]) by mx1.freebsd.org (Postfix) with ESMTP id 73D918FC08 for ; Mon, 19 Nov 2012 08:31:32 +0000 (UTC) Received: by platinum.linux.pl (Postfix, from userid 87) id B6CA247DDA; Mon, 19 Nov 2012 09:31:23 +0100 (CET) X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.4 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.3.2 Received: from [172.19.191.4] (unknown [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id E8ABB47DD8 for ; Mon, 19 Nov 2012 09:31:17 +0100 (CET) Message-ID: <50A9EE53.7020908@platinum.linux.pl> Date: Mon, 19 Nov 2012 09:31:15 +0100 From: Adam Nowacki User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20121026 Thunderbird/16.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS behavior with odd-number of non-parity drives? References: <50A82F08.8010908@8192.net> In-Reply-To: <50A82F08.8010908@8192.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 08:31:33 -0000 For small files or tails ZFS will reduce the number of drives used. 
For example, two 512 byte (or less) files on a 5 drive (512 byte sectors) raidz2 might end up as:

file 1: disk 1 sector 1 - data, disk 2 sector 1 - parity 1, disk 3 sector 1 - parity 2
file 2: disk 4 sector 1 - data, disk 5 sector 1 - parity 1, disk 1 sector 2 - parity 2

0.0kiB - 0.5kiB will use 3 sectors,
0.5kiB - 1.0kiB will use 4 sectors,
1.0kiB - 1.5kiB will use 5 sectors (all drives used),
1.5kiB - 2.0kiB will use 8 sectors (cycle repeats).

On 2012-11-18 01:42, je wrote: > Given an odd number of data (non-parity) drives, how does ZFS write data > to the individual disks? > > For example, a 4-drive RAIDZ or 5-drive RAIDZ2 would use three drives > for data and two for parity when writing. Since the recordsize is a > power of two (and no power of two is evenly divisible by three), it is > impossible to write a full sector (be it a 512b or 4k sector) of data to > each data drive. > > In this case, what does ZFS do? Will it write recordsize/3 bytes of data > to each drive, leaving the drive to do a read/modify/write operation? Or > will ZFS round the write up to the nearest drive sector size and "waste" > the extra bytes? > > I see the best practice for RAIDZ is to use an odd number of drives, and > for RAIDZ2 an even number, but I am curious as to the behavior of ZFS in > these sub-optimal conditions. 
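[The sector accounting above can be reproduced with a small model. This is a sketch only: it assumes 512-byte sectors and parity allocated per stripe row, inferred from the numbers quoted in this thread rather than taken from the ZFS source, and it does not model raidz allocation padding.]

```python
import math

def raidz_sectors(size_bytes, ndisks=5, nparity=2, sector=512):
    """Sectors consumed by one block on a raidz vdev, using the
    per-row-parity model described above (no allocation padding)."""
    data = max(1, math.ceil(size_bytes / sector))  # data rounded up to whole sectors
    per_row = ndisks - nparity                     # data sectors per stripe row
    rows = math.ceil(data / per_row)               # each row carries its own parity
    return data + rows * nparity

# 0.5k, 1.0k, 1.5k and 2.0k files on the 5-drive raidz2 above:
print([raidz_sectors(k * 512) for k in (1, 2, 3, 4)])  # -> [3, 4, 5, 8]
```

[This matches the 3/4/5/8 sector progression given above; treat it as an illustration of the accounting, not of the actual on-disk allocator.]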
> > John > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 11:06:44 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 4D1C01C2 for ; Mon, 19 Nov 2012 11:06:44 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 31E0B8FC20 for ; Mon, 19 Nov 2012 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qAJB6iEu013286 for ; Mon, 19 Nov 2012 11:06:44 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qAJB6hNO013284 for freebsd-fs@FreeBSD.org; Mon, 19 Nov 2012 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 19 Nov 2012 11:06:43 GMT Message-Id: <201211191106.qAJB6hNO013284@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 11:06:44 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173234 fs [zfs] [patch] Allow filtering of properties on zfs rec o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/172259 fs [zfs] [patch] ZFS fails to receive valid snapshots (pa o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o kern/170914 fs [zfs] [patch] Import patchs related with issues 3090 a o kern/170912 fs [zfs] [patch] unnecessarily setting DS_FLAG_INCONSISTE o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/170238 fs [zfs] [panic] Panic when deleting data o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal 
file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167066 fs [zfs] ZVOLs not appearing in /dev/zvol o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165923 fs [nfs] Writing to NFS-backed mmapped files fails if flu o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/162362 fs [snapshots] 
[panic] ufs with snapshot(s) panics when g o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo p kern/161897 fs [zfs] [patch] zfs partition probing causing long delay o kern/161864 fs [ufs] removing journaling from UFS partition fails on o bin/161807 fs [patch] add option for explicitly specifying metadata o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic o kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. 
with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153520 fs [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] 
creating ZVOLs requires `refreservation'
s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142  fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022 fs [nfs] nfs service hangs with linux client [regression]
o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory
o kern/151905 fs [zfs] page fault under load in /sbin/zfs
o bin/151713  fs [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648 fs [zfs] disk wait bug
o kern/151629 fs [fs] [patch] Skip empty directory entries during name
o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a
o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251 fs [ufs] Can not create files on filesystem with heavy us
o kern/151226 fs [zfs] can't delete zfs snapshot
o kern/151111 fs [zfs] vnodes leakage during zfs unmount
o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208 fs mksnap_ffs(8) hang/deadlock
o kern/149173 fs [patch] [zfs] make OpenSolaris installa
o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138 fs [zfs] zfs raidz pool commands freeze
o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different "
p kern/147560 fs [zfs] [boot] Booting 8.1-PRERELEASE raidz system take
o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786 fs [zfs] zpool import hangs with checksum errors
o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528 fs [zfs] Severe memory leak in ZFS on i386
o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server
s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309  fs bsdlabel: Editing disk label invalidates the whole dev
o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank
o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189 fs [nfs] nfsd performs abysmally under load
o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416 fs [panic] Kernel panic on online filesystem optimization
s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code
o conf/144213 fs [rc.d] [patch] Disappearing zvols on reboot
o kern/143825 fs [nfs] [panic] Kernel panic on NFS client
o bin/143572  fs [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212 fs [nfs] NFSv4 client strange work ...
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR
o kern/142878 fs [zfs] [vfs] lock order reversal
o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real
o kern/142489 fs [zfs] [lor] allproc/zfs LOR
o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068 fs [ufs] BSD labels are got deleted spontaneously
o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues (
o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri
o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640 fs [zfs] snapshot crash
o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs
p bin/139651  fs [nfs] mount(8): read-only remount of NFS volume does n
o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot
o kern/138662 fs [panic] ffs_blkfree: freeing free block
o kern/138421 fs [ufs] [patch] remove UFS label limitations
o kern/138202 fs mount_msdosfs(1) see only 2Gb
o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873 fs [ntfs] Missing directories/files on NTFS volume
o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic
p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS
o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot
o kern/134491 fs [zfs] Hot spares are rather cold...
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis
o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397 fs reboot causes filesystem corruption (failure to sync b
o kern/132331 fs [ufs] [lor] LOR ufs and syncer
o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145 fs [panic] File System Hard Crashes
o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo
o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341  fs makefs: error "Bad file descriptor" on the mount poin
o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210 fs [nullfs] Error by check nullfs
o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270  fs fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
o kern/123939 fs [msdosfs] corrupts new files
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172  fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898  fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072  fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o kern/118318 fs [nfs] NFS server hangs under special circumstances
o bin/118249  fs [ufs] mv(1): moving a directory changes its mtime
o kern/118126 fs [nfs] [patch] Poor NFS server write performance
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954 fs [ufs] dirhash on very large directories blocks the mac
o bin/117315  fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980  fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o bin/115361  fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468  fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838  fs [patch] [request] mount(8): add support for relative p
o bin/113049  fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146  fs [2tb] fsck(8) fails on 6T filesystem
o bin/107829  fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290  fs [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498   fs [request] newfs(8) has no option to clear the first 12
o kern/97377  fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222  fs [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849  fs [ufs] rename on UFS filesystem is not atomic
o bin/94810   fs fsck(8) incorrectly reports 'file system marked clean'
o kern/94769  fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733  fs [smbfs] smbfs may cause double unlock
o kern/93942  fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272  fs [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134  fs [smbfs] [patch] Preserve access and modification time
a kern/90815  fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657  fs [smbfs] windows client hang when browsing a samba shar
o kern/88555  fs [panic] ffs_blkfree: freeing free frag on AMD 64
o kern/88266  fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi
o bin/87966   fs [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859  fs [smbfs] System reboot while umount smbfs.
o kern/86587  fs [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494   fs fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088  fs [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779   fs Background-fsck checks one filesystem twice and omits
o kern/73484  fs [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019   fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774  fs [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600   fs fsck(8) throws files away when it can't grow lost+foun
o kern/68978  fs [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920  fs [nwfs] Mounted Netware filesystem behaves strange
o kern/65901  fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503  fs [smbfs] mount_smbfs does not work as non-root
o kern/55617  fs [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685  fs [hang] Unbounded inode allocation causes kernel to loc
o kern/36566  fs [smbfs] System reboot with dead smb mount and umount
o bin/27687   fs fsck(8) wrapper is not properly passing options to fsc
o kern/18874  fs [2TB] 32bit NFS servers export wrong negative values t

298 problems total.
From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 12:50:05 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EE9AEFA2 for ; Mon, 19 Nov 2012 12:50:05 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 3312D8FC12 for ; Mon, 19 Nov 2012 12:50:03 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA13503; Mon, 19 Nov 2012 14:49:42 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1TaQnG-000CXg-3N; Mon, 19 Nov 2012 14:49:42 +0200 Message-ID: <50AA2AE5.5080002@FreeBSD.org> Date: Mon, 19 Nov 2012 14:49:41 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121030 Thunderbird/16.0.2 MIME-Version: 1.0 To: Bryan Drewery Subject: Re: Why do we need vfs.root.mountfrom for zfs References: <848051353247434@web29h.yandex.ru> <50A934D2.9010205@FreeBSD.org> <1026501353267511@web27h.yandex.ru> <50A93A6E.1090907@FreeBSD.org> <50A96AE2.60803@shatow.net> In-Reply-To: <50A96AE2.60803@shatow.net> X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, vermaden X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 12:50:06 -0000 on 19/11/2012 01:10 Bryan Drewery said the following: > On 11/18/2012 1:43 PM, Andriy Gapon wrote: >> Well, it used to be true that either an fstab "/" entry or vfs.root.mountfrom >> had to be configured. But now that's optional. >> > > Can you define "now"? 
7.4-RELEASE, 8.3-RELEASE, 9.0-RELEASE, or just > STABLE/CURRENT? > > Asking so this may possibly come out of sysutils/beadm Since r235330 (May of this year) and its MFC-es. Not sure if there was any release since then, most likely not. Also, I don't recall if I MFC-ed this change to stable/7. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 13:00:38 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 82F6E322; Mon, 19 Nov 2012 13:00:38 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 813B58FC08; Mon, 19 Nov 2012 13:00:37 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id PAA13645; Mon, 19 Nov 2012 15:00:16 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1TaQxU-000CY9-3b; Mon, 19 Nov 2012 15:00:16 +0200 Message-ID: <50AA2D5D.7080105@FreeBSD.org> Date: Mon, 19 Nov 2012 15:00:13 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121030 Thunderbird/16.0.2 MIME-Version: 1.0 To: Bartosz Stec , Guido Falsi Subject: Re: problem booting to multi-vdev root pool References: <509D1DEC.6040505@FreeBSD.org> <50A27243.408@madpilot.net> <50A65F83.5000604@FreeBSD.org> <50A66701.701@madpilot.net> <50A82B3A.6020608@it4pro.pl> <50A8CB1C.9090907@FreeBSD.org> In-Reply-To: <50A8CB1C.9090907@FreeBSD.org> X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 13:00:38 -0000

on 18/11/2012 13:48 Andriy Gapon said the following:
> on 18/11/2012 02:26 Bartosz Stec said the following:
>> On 2012-11-16 17:17, Guido Falsi wrote:
>>> On 11/16/12 16:45, Andriy Gapon wrote:
>>>> Guido, Bartosz,
>>>> could you please test the patch?
>>>
>>> I have just compiled an r242910 kernel with this patch (and just this one)
>>> applied.
>>>
>>> System booted so it seems to work fine! :)
>> I've just compiled and installed a fresh kernel with your patch; the system
>> booted without any problems, so apparently the patch works as intended.
>
> Thank you both very much for testing!
> Committed as r243213.
>

BTW, if you have some spare time and a desire to do some more testing, you can
try the following patch:
http://people.freebsd.org/~avg/zfs-spa-multi_vdev_root_support.diff

It adds support for multi-vdev root pool probing in kernel.
The best way to test is to remove zpool.cache before rebooting (but make sure
to keep a copy somewhere and be able to recover). I'd use a boot environment
(a root filesystem clone) for this.

Thank you.
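The preparation steps described above (keep a copy of zpool.cache, and keep a recoverable root-filesystem clone before removing the cache) could be sketched roughly as the script below. The pool name `rpool` and the dataset `rpool/ROOT/default` are hypothetical placeholders, not names from the thread, and `run()` only echoes the commands, so nothing is modified until you replace the echo with real execution:

```shell
#!/bin/sh
# Dry-run sketch of the zpool.cache removal test described above.
# "rpool" and "rpool/ROOT/default" are hypothetical names; adjust them
# to your own pool/boot-environment layout before use.
CACHE=/boot/zfs/zpool.cache

# run() prints the command instead of executing it (dry-run safety).
run() {
    echo "+ $*"
}

run cp "$CACHE" "$CACHE.bak"                  # keep a copy somewhere safe
run zfs snapshot rpool/ROOT/default@pre-test  # recoverable clone of root fs
run zfs clone rpool/ROOT/default@pre-test rpool/ROOT/pre-test
run rm "$CACHE"                               # then reboot to test probing
```

Once the echoed commands look right for the local layout, `run()` can be changed to execute its arguments instead of printing them.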
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 13:13:32 2012 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 67FF1DD6; Mon, 19 Nov 2012 13:13:32 +0000 (UTC) (envelope-from ae@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 348858FC12; Mon, 19 Nov 2012 13:13:32 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qAJDDWhC022865; Mon, 19 Nov 2012 13:13:32 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qAJDDWM5022859; Mon, 19 Nov 2012 13:13:32 GMT (envelope-from ae) Date: Mon, 19 Nov 2012 13:13:32 GMT Message-Id: <201211191313.qAJDDWM5022859@freefall.freebsd.org> To: ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/147560: [zfs] [boot] Booting 8.1-PRERELEASE raidz system take ages X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 13:13:32 -0000 Synopsis: [zfs] [boot] Booting 8.1-PRERELEASE raidz system take ages Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Mon Nov 19 13:12:52 UTC 2012 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=147560 From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 13:14:20 2012 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 49EDAE5D; Mon, 19 Nov 2012 13:14:20 +0000 (UTC) (envelope-from ae@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 18B128FC18; Mon, 19 Nov 2012 13:14:20 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qAJDEJST023396; Mon, 19 Nov 2012 13:14:19 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qAJDEJee023392; Mon, 19 Nov 2012 13:14:19 GMT (envelope-from ae) Date: Mon, 19 Nov 2012 13:14:19 GMT Message-Id: <201211191314.qAJDEJee023392@freefall.freebsd.org> To: ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/161897: [zfs] [patch] zfs partition probing causing long delay at BTX loader X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 13:14:20 -0000 Synopsis: [zfs] [patch] zfs partition probing causing long delay at BTX loader Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Mon Nov 19 13:14:00 UTC 2012 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=161897 From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 14:37:52 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 66207148 for ; Mon, 19 Nov 2012 14:37:52 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 9D6D18FC08 for ; Mon, 19 Nov 2012 14:37:51 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id QAA14557; Mon, 19 Nov 2012 16:37:43 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AA4437.1070509@FreeBSD.org> Date: Mon, 19 Nov 2012 16:37:43 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Peter Wullinger Subject: Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable. References: <201211182220.qAIMK1oY061509@freefall.freebsd.org> In-Reply-To: <201211182220.qAIMK1oY061509@freefall.freebsd.org> X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 14:37:52 -0000 on 19/11/2012 00:20 Peter Wullinger said the following: > The following reply was made to PR kern/153520; it has been noted by GNATS. > > From: Peter Wullinger > To: bug-followup@FreeBSD.org > Cc: > Subject: Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1 > unstable. > Date: Sun, 18 Nov 2012 23:10:42 +0100 > > I see this too on two identical HP Proliant 320 DL G6 with 9-STABLE. 
>
> The machine usually needs some nudging in the form of warm restarts
> to boot the operating system.

Could you please check if upgrading to r243217 or later changes anything?
Please be sure that you update the on-disk boot blocks (gpart bootcode ...).

> == bootloader output ==
> error 1 lba 32
> error 1 lba 1
> No ZFS pools located, can't boot
> == bootloader output ==
>
> == gpart show ==
> =>        34  488326973  da0  GPT  (232G)
>           34        128    1  freebsd-boot  (64k)
>          162   67108864    2  freebsd-swap  (32G)
>     67109026  421217981    3  freebsd-zfs  [bootme]  (200G)
> == gpart show ==
>
> Root is on a hardware-mirror provided by the machine's HPQ Smart Array P410i
>
> The other 6 drives are configured as JBOD with a raidz2 data pool
> (not shown here)
>
> == zpool status rpool ==
> zpool status rpool
>   pool: rpool
>  state: ONLINE
>   scan: scrub repaired 0 in 0h12m with 0 errors on Wed Nov 7 04:47:21 2012
> config:
>
>         NAME      STATE    READ WRITE CKSUM
>         rpool     ONLINE      0     0     0
>           da0p3   ONLINE      0     0     0
>
> errors: No known data errors
> == zpool status rpool ==
>
> == zdb ==
> rpool:
>     version: 28
>     name: 'rpool'
>     state: 0
>     txg: 22695230816
>     pool_guid: 16981309850887562666
>     hostid: 1031366389
>     hostname: '...'
>     vdev_children: 1
>     vdev_tree:
>         type: 'root'
>         id: 0
>         guid: 16981309850887562666
>         children[0]:
>             type: 'disk'
>             id: 0
>             guid: 6716485486554561877
>             path: '/dev/da0p3'
>             phys_path: '/dev/da0p3'
>             whole_disk: 0
>             metaslab_array: 23
>             metaslab_shift: 31
>             ashift: 9
>             asize: 215658790912
>             is_log: 0
>             DTL: 65

-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 15:08:09 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 27EE781A; Mon, 19 Nov 2012 15:08:09 +0000 (UTC) (envelope-from mad@madpilot.net) Received: from winston.madpilot.net (winston.madpilot.net [78.47.75.155]) by mx1.freebsd.org (Postfix) with ESMTP id CDC698FC0C; Mon, 19 Nov 2012 15:08:08 +0000 (UTC) Received: from winston.madpilot.net (localhost [127.0.0.1]) by winston.madpilot.net (Postfix) with ESMTP id 3Y4thc6nTJzFTDk; Mon, 19 Nov 2012 16:08:00 +0100 (CET) X-Virus-Scanned: amavisd-new at madpilot.net Received: from winston.madpilot.net ([127.0.0.1]) by winston.madpilot.net (winston.madpilot.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id OfGhoGxL7M1i; Mon, 19 Nov 2012 16:07:55 +0100 (CET) Received: from vwg82.hq.ignesti.it (unknown [80.74.176.55]) by winston.madpilot.net (Postfix) with ESMTPSA; Mon, 19 Nov 2012 16:07:55 +0100 (CET) Message-ID: <50AA4B48.5090804@madpilot.net> Date: Mon, 19 Nov 2012 16:07:52 +0100 From: Guido Falsi User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121114 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andriy Gapon Subject: Re: problem booting to multi-vdev root pool References: <509D1DEC.6040505@FreeBSD.org> <50A27243.408@madpilot.net> <50A65F83.5000604@FreeBSD.org> <50A66701.701@madpilot.net> <50A82B3A.6020608@it4pro.pl> <50A8CB1C.9090907@FreeBSD.org> <50AA2D5D.7080105@FreeBSD.org> In-Reply-To: <50AA2D5D.7080105@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc:
freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 15:08:09 -0000

On 11/19/12 14:00, Andriy Gapon wrote:
> on 18/11/2012 13:48 Andriy Gapon said the following:
>> on 18/11/2012 02:26 Bartosz Stec said the following:
>>
>> Thank you both very much for testing!
>> Committed as r243213.
>>
>
> BTW, if you have some spare time and a desire to do some more testing, you can
> try the following patch:
> http://people.freebsd.org/~avg/zfs-spa-multi_vdev_root_support.diff
>
> It adds support for multi-vdev root pool probing in kernel.
> The best way to test is to remove zpool.cache before rebooting (but make sure to
> keep a copy somewhere and be able to recover). I'd use a boot environment (a
> root filesystem clone) for this.
>

Hi!

Thank you again for the fast work.

I tested this one on that machine and it was able to boot without zpool.cache.

No file zpool.cache was created after boot.

Are there any further tests I should perform?
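The check reported above (the system boots and no zpool.cache file reappears) can also be scripted after the test reboot. This is a small sketch; the cache path below is only the customary default location, and the message wording is invented here rather than taken from the thread:

```shell
#!/bin/sh
# Post-boot verification sketch: report whether a zpool.cache file is
# present at the given path.  With the multi-vdev probing patch, the
# kernel can locate the root pool without the cache, so "absent" is
# the expected (and harmless) result after the test reboot.
check_cache() {
    if [ -e "$1" ]; then
        echo "cache present: $1"
    else
        echo "cache absent: $1"
    fi
}

# Default to the usual cache location unless a path is given.
check_cache "${1:-/boot/zfs/zpool.cache}"
```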
-- Guido Falsi From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 15:23:44 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 40A9AFB1; Mon, 19 Nov 2012 15:23:44 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 3D7558FC15; Mon, 19 Nov 2012 15:23:43 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id RAA14947; Mon, 19 Nov 2012 17:23:27 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AA4EEF.6090306@FreeBSD.org> Date: Mon, 19 Nov 2012 17:23:27 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Guido Falsi Subject: Re: problem booting to multi-vdev root pool References: <509D1DEC.6040505@FreeBSD.org> <50A27243.408@madpilot.net> <50A65F83.5000604@FreeBSD.org> <50A66701.701@madpilot.net> <50A82B3A.6020608@it4pro.pl> <50A8CB1C.9090907@FreeBSD.org> <50AA2D5D.7080105@FreeBSD.org> <50AA4B48.5090804@madpilot.net> In-Reply-To: <50AA4B48.5090804@madpilot.net> X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 15:23:44 -0000 on 19/11/2012 17:07 Guido Falsi said the following: > On 11/19/12 14:00, Andriy Gapon wrote: >> on 18/11/2012 13:48 Andriy Gapon said the following: >>> on 18/11/2012 02:26 Bartosz Stec said the following: >>> >>> Thank you both very much for testing! >>> Committed as r243213. 
>>> >> >> BTW, if you have some spare time and a desire to do some more testing, you can >> try the following patch: >> http://people.freebsd.org/~avg/zfs-spa-multi_vdev_root_support.diff >> >> It adds support for multi-vdev root pool probing in kernel. >> The best way to test is to remove zpool.cache before rebooting (but make sure to >> keep a copy somewhere and be able to recover). I'd use a boot environment (a >> root filesystem clone) for this. > > Thank you again for the fast work. > > I tested this one on that machine and it was able to boot without zpool.cache. Great! Thank you for testing. > No file zpool.cache was created after boot. This is expected. > Are there any further test I should perform? This was sufficient. Thanks again. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 19:48:50 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 0D3412FC; Mon, 19 Nov 2012 19:48:50 +0000 (UTC) (envelope-from vermaden@interia.pl) Received: from smtpo.poczta.interia.pl (smtpo.poczta.interia.pl [217.74.65.205]) by mx1.freebsd.org (Postfix) with ESMTP id 7E5908FC0C; Mon, 19 Nov 2012 19:48:48 +0000 (UTC) Date: Mon, 19 Nov 2012 20:25:50 +0100 From: vermaden Subject: Re: Why do we need vfs.root.mountfrom for zfs To: Andriy Gapon X-Mailer: interia.pl/pf09 In-Reply-To: <50AA2AE5.5080002@FreeBSD.org> References: <848051353247434@web29h.yandex.ru> <50A934D2.9010205@FreeBSD.org> <1026501353267511@web27h.yandex.ru> <50A93A6E.1090907@FreeBSD.org> <50A96AE2.60803@shatow.net> <50AA2AE5.5080002@FreeBSD.org> X-Originating-IP: 46.76.244.23 Message-Id: MIME-Version: 1.0 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=interia.pl; s=biztos; t=1353353151; bh=bJQ2J5WtIpGJdaWwoq4OOO5qMvx9LCnkOP3yNTc+UbM=; h=Date:From:Subject:To:Cc:X-Mailer:In-Reply-To:References: 
X-Originating-IP:Message-Id:MIME-Version:Content-Type: Content-Transfer-Encoding; b=Vpb2aS4RzHYscKnR3ZDzqaWGw6GvnsMiZVttnZ2Rs19Q6riTAg+bBWtQiGB3vNuXH BqUaaHpZtWzW6+GP7PIZLgUwjq6aV+kb/SEeMILS3O9jeA10jm5tjqkpXmoUnsDEGt 3GaBf3Cs4keV8fZCQPXzsv9QYUZp8xBx6Ex+LR+o= Cc: freebsd-fs@FreeBSD.org, Bryan Drewery X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 19:48:50 -0000

"Andriy Gapon" wrote:
> on 19/11/2012 01:10 Bryan Drewery said the following:
> > On 11/18/2012 1:43 PM, Andriy Gapon wrote:
> >> Well, it used to be true that either an fstab "/" entry or vfs.root.mountfrom
> >> had to be configured. But now that's optional.
> >>
> >
> > Can you define "now"? 7.4-RELEASE, 8.3-RELEASE, 9.0-RELEASE, or just
> > STABLE/CURRENT?
> >
> > Asking so this may possibly come out of sysutils/beadm
>
> Since r235330 (May of this year) and its MFC-es. Not sure if there was any
> release since then, most likely not.
> Also, I don't recall if I MFC-ed this change to stable/7.
>
> --
> Andriy Gapon

Hi,

beadm already checks if it is being run on FreeBSD 8.0 or later, but that
can be inappropriate if the needed bits were MFC'd to stable/7 and then put
into a 7.5-RELEASE (is there any chance of that?). So please decide if you
want beadm to support the stable/7 branch.

Yesterday I learned that vfs.root.mountfrom is not needed now. Can I assume
that using vfs.root.mountfrom is harmless then? I ask because if we remove
that from beadm, then anything from 2012.06 and later will just work and
everything before won't. As beadm already checks if it's being used on
FreeBSD 8.0 or later, removing the setting of vfs.root.mountfrom from beadm
would probably make it 9.1+ exclusive only (along with 9.0-STABLE from
2012.06 or later, of course).
The needed bits will probably find their way into stable/8, but as
8.3-RELEASE was released in 2012.04, the 8.4-RELEASE with the needed bits
would probably be available in late 2013; it would not be nice to force 8.x
users to upgrade to the 9.x series when it is still possible to use beadm on
8.x (even in its limited form - without the boot menu BE selection).

Another solution may be checking the commit version (r243107) for the needed
features and then setting (or not) the vfs.root.mountfrom option.

I would like to hear your comments on these thoughts/solutions.

Regards,
vermaden

-- 
Religions, worst damnation of mankind.

From owner-freebsd-fs@FreeBSD.ORG Mon Nov 19 20:53:09 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id A6F9D4F9 for ; Mon, 19 Nov 2012 20:53:09 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id EB72D8FC12 for ; Mon, 19 Nov 2012 20:53:08 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id WAA17230; Mon, 19 Nov 2012 22:52:50 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1TaYKo-000Cwo-0s; Mon, 19 Nov 2012 22:52:50 +0200 Message-ID: <50AA9C20.1090004@FreeBSD.org> Date: Mon, 19 Nov 2012 22:52:48 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121030 Thunderbird/16.0.2 MIME-Version: 1.0 To: vermaden Subject: Re: Why do we need vfs.root.mountfrom for zfs References: <848051353247434@web29h.yandex.ru> <50A934D2.9010205@FreeBSD.org> <1026501353267511@web27h.yandex.ru> <50A93A6E.1090907@FreeBSD.org> <50A96AE2.60803@shatow.net> <50AA2AE5.5080002@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.4.5
Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, Bryan Drewery X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 19 Nov 2012 20:53:09 -0000 on 19/11/2012 21:25 vermaden said the following: > the beadm already checks if its being run on FreeBSD 8.0 or later, but > that can be inappropriate if the needed bits would be MFC to stable/7 > and then put into the 7.5-RELEASE (is there any chance? for that). I don't expect that there ever will be 7.5 and I am not MFC-ing anything to stable/7 now, because it is a "legacy" branch. > So please decide if You want beadm to support the stable/7 branch. Not up to me to decide... > Yesterday I got that piece of information ,that vfs.root.mountfrom is > not needed now. Can I assume that using vfs.root.mountfrom is > harmless then? For ZFS booting vfs.root.mountfrom now has a reasonable default value, that's all. You can still use vfs.root.mountfrom or fstab, but in most cases it's just potentially more confusing than the default (bootfs). > I ask because if we remove that from beadm, then anything from > 2012.06 and later will just work and everything before won't. As beadm > already checks if its being used on FreeBSD 8.0 or later, removing the > setting of vfs.root.mountfrom from beadm would probably make it > 9.1+ exclusive only (along with 9.0-STABLE from 2012.06 or later of > course). > > The needed bits will probably find its way into the stable/8, but as > 8.3-RELEASE was released 2012.04, then the 8.4-RELEASE with needed > bits would be probably available in late 2013, it would not be nice > for 8.x users to force them to upgrade to 9.x series when its still > possible to use beadm on 8.x (even in its limited form - without > the boot menu BE selection). 
I believe that the 8.4 release process will start as soon as 9.1 is out the door (that's a very uncertain date, I know) and hopefully it will take no more than a few weeks. So I'd hope for early 2013 or perhaps even late 2012. But definitely not late 2013. > Another solution may be checking the commit version (r243107) > for needed features and then set (or not) the vfs.root.mountfrom > option. > > I would like to hear your comments on these thoughts/solutions. Potentially you could just check for a branch+revision combination (or __FreeBSD_version). But you can just keep using vfs.root.mountfrom. BTW, please note that it is the version of the loader that is important here. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 00:22:14 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3AF7ACBE; Tue, 20 Nov 2012 00:22:14 +0000 (UTC) (envelope-from lstewart@freebsd.org) Received: from lauren.room52.net (lauren.room52.net [210.50.193.198]) by mx1.freebsd.org (Postfix) with ESMTP id EBC5E8FC08; Tue, 20 Nov 2012 00:22:13 +0000 (UTC) Received: from lstewart.caia.swin.edu.au (lstewart.caia.swin.edu.au [136.186.229.95]) by lauren.room52.net (Postfix) with ESMTPSA id ABCAD7E820; Tue, 20 Nov 2012 11:14:39 +1100 (EST) Message-ID: <50AACB63.6090608@freebsd.org> Date: Tue, 20 Nov 2012 11:14:27 +1100 From: Lawrence Stewart User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121031 Thunderbird/16.0.2 MIME-Version: 1.0 To: Alexander Motin Subject: Re: graid often resyncs raid1 array after clean reboot/shutdown References: <508E0C3F.8080602@freebsd.org> <508E3E81.9010209@FreeBSD.org> <508E49AD.4090501@FreeBSD.org> <508E91CF.5070003@FreeBSD.org> <508F1045.60002@freebsd.org> In-Reply-To: <508F1045.60002@freebsd.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Spam-Status: No, score=0.0 required=5.0 tests=UNPARSEABLE_RELAY
autolearn=unavailable version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on lauren.room52.net Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 00:22:14 -0000 On 10/30/12 10:24, Lawrence Stewart wrote: > Hi Alexander, > > On 10/30/12 01:25, Alexander Motin wrote: >> On 29.10.2012 11:17, Alexander Motin wrote: >>> On 29.10.2012 10:29, Alexander Motin wrote: >>>> Hi. >>>> >>>> On 29.10.2012 06:55, Lawrence Stewart wrote: >>>>> I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB >>>>> Seagate >>>>> ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID >>>>> controller. The system is configured to boot from ZFS off the raid1 >>>>> array, and I use it as a KDE GUI (with on-cpu GPU + KMS) desktop. >>>>> >>>>> Everything works great, except that after a "shutdown -r now" of the >>>>> system, graid almost always (I believe I've noted a few times where >>>>> everything comes up fine) detects one of the disks in the array as >>>>> stale >>>>> and does a full resync of the array over the course of a few hours. >>>>> Here's an example of what I see when starting up: >>>> >>>> From log messages it indeed looks like result of unclean shutdown. I've >>>> never seen such problem with UFS, but I never tested graid with ZFS. I >>>> guess there may be some difference in shutdown process that makes RAID >>>> metadata to have dirty flag on reboot. I'll try to reproduce it now. >>> >>> I confirm the problem. Seems it happens only when using ZFS as root file >>> system. Probably ZFS issues some last moment write that makes volume >>> dirty. I will trace it more. >> >> I've found problem in the fact that ZFS seems doesn't close devices on >> shutdown. That doesn't allow graid to shutdown gracefully. 
r242314 in >> HEAD fixes that by more aggressively marking volumes clean on shutdown. > > Thanks for the quick detective work and fix. I'll merge r242314 back to > my local stable/9 tree and test it. I've rebooted the machine a few times now and the array has been started in optimal state without requiring a rebuild each time. Thanks again for the fix. Cheers, Lawrence From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 04:05:58 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 9BA0579E for ; Tue, 20 Nov 2012 04:05:58 +0000 (UTC) (envelope-from lists@eitanadler.com) Received: from mail-lb0-f182.google.com (mail-lb0-f182.google.com [209.85.217.182]) by mx1.freebsd.org (Postfix) with ESMTP id 07D7D8FC12 for ; Tue, 20 Nov 2012 04:05:57 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id go10so2651598lbb.13 for ; Mon, 19 Nov 2012 20:05:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=eitanadler.com; s=0xdeadbeef; h=mime-version:sender:in-reply-to:references:from:date :x-google-sender-auth:message-id:subject:to:cc:content-type; bh=uI0hhJhBoqM3TCJL/xL4eVe61yxS00r0hU5vraeEdnY=; b=mYlvN+UIZRLy0ko73TFUaqMZavs6nIcJ3tW4duF5sKRu5hPaGjh+XHbCNnpmXzGobt 5DpPuUUuSaK5eYKYw1nNCdV0hFOjK+cgqCDh0kwzQ1eWz7wWSbUYTCAe2bdD0GvJFyYp 9FhjMPEOtXiizP1KklU0itemgguy0zyjEenLU= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:sender:in-reply-to:references:from:date :x-google-sender-auth:message-id:subject:to:cc:content-type :x-gm-message-state; bh=uI0hhJhBoqM3TCJL/xL4eVe61yxS00r0hU5vraeEdnY=; b=O8nNTBfwB+Q9RaHFi3WmTMFZPf3qq8SdlugeiA3fPKUyebw7gvYLbNHZVXfVV9cs6c VqV+6nnFOOmwrzwVBCCXfqTJYrIrH4YrQ2yntz9/0pyzeZgGqDpUmFBRPx/c5fUSIYxg z7n5ir/zFkM4POkTyxDZ4DP3zFHuztA8QX/NwojiJY28pvDMiCgq3CXbAcKFhc5bseGA eUMbcoZuKP6x7XVmPLEujmJVlluAxK/vZQCgiZA+6dZ+/Tp9whZaoR2xhYjgfa5HJk0n 
F3duSeBHzHOZsd9wQC9HSJVZR8U4OdMQuHSuSezARbU8E6MsS4Vx9A70EvBlGYodAkdl dRuA== Received: by 10.112.54.40 with SMTP id g8mr6009986lbp.49.1353384356703; Mon, 19 Nov 2012 20:05:56 -0800 (PST) MIME-Version: 1.0 Sender: lists@eitanadler.com Received: by 10.112.25.166 with HTTP; Mon, 19 Nov 2012 20:05:26 -0800 (PST) In-Reply-To: References: <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <57ac1f$gg70bn@ipmail05.adl6.internode.on.net> From: Eitan Adler Date: Mon, 19 Nov 2012 23:05:26 -0500 X-Google-Sender-Auth: A75_2zdlwiRGr2PCNldNemVPmO0 Message-ID: Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log) To: Stephen McKay Content-Type: text/plain; charset=UTF-8 X-Gm-Message-State: ALoCoQkaLGvC12Cg17SFT4mx6rlDUQ1n4u2FyzloTpe1xV5FScVf9Tqr8p3Dtj1WkMBANJE5IzSQ Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 04:05:58 -0000 On 16 November 2012 13:41, Eitan Adler wrote: > On 15 November 2012 23:58, Stephen McKay wrote: >> On Thursday, 15th November 2012, Eitan Adler wrote: >> >>>Can people here please tell me what is wrong in the following content? >> >> A few things. I'll intersperse them. >> >>>Is there additional data or questions to add? >> >> The whole ZFS world desperately needs good documentation. There >> are misconceptions everywhere. There are good tuning hints and >> bad (or out of date) ones. Further, it depends on your target >> application whether the defaults are fairly good or plain suck. > > New version of the patch taking into account the comments so far: > > http://people.freebsd.org/~eadler/files/add-zfs-faq-section.diff Thanks for all the comments, private and public. I've committed a modified version of the above. 
-- Eitan Adler Source, Ports, Doc committer Bugmeister, Ports Security teams From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 04:20:49 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EA48AA00 for ; Tue, 20 Nov 2012 04:20:49 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 656B98FC12 for ; Tue, 20 Nov 2012 04:20:49 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so5260881lah.13 for ; Mon, 19 Nov 2012 20:20:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=XRbWs1UJAv5OciNTZJyM7uJsC7Qu5gg/la2CfeBxPo0=; b=wpG7fIuDsCPoN4CXufTwsG0WNoIlKZSuaUsUCi1gWdaldZWupxQST2ZiNHxhowJ11v t/KgJsw3sRZmZULWCqoM5NSAtdQTYF60AgPt1kBlxCZdjZl87G8G0TBYT9LWfyGdm1+B Q+hghReTOl6uZyRmWcVXDSmaNZXS8k/jNM67+oZMotE36qJcLOiBbEAVNZneworu/xGU IQmnEqWyc60ICuu1xHoxHHreMnScFldJaEzx4/IRSJBPsLKbJMH1nkspTsteKHgVfGFC SldqKdEk1ezaIQKStItWUOSAn7Vspz4oIpajh6q9r8PQlZiJL+OZFrzgS3tZ3pwGddKU oUOQ== MIME-Version: 1.0 Received: by 10.152.144.69 with SMTP id sk5mr13564062lab.22.1353385226196; Mon, 19 Nov 2012 20:20:26 -0800 (PST) Sender: artemb@gmail.com Received: by 10.112.80.103 with HTTP; Mon, 19 Nov 2012 20:20:26 -0800 (PST) In-Reply-To: <20121120040258.GA27849@neutralgood.org> References: <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <20121116044055.GA47859@neutralgood.org> <50A64694.5030001@egr.msu.edu> <20121117181803.GA26421@neutralgood.org> <20121117225851.GJ1462@egr.msu.edu> <20121120040258.GA27849@neutralgood.org> Date: Mon, 19 Nov 2012 20:20:26 -0800 X-Google-Sender-Auth: 5Cn1BIt05z2oaolH-F4gfYwwFEE Message-ID: Subject: Re: SSD recommendations for ZFS cache/log From: Artem 
Belevich To: kpneal@pobox.com Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 04:20:50 -0000 On Mon, Nov 19, 2012 at 8:02 PM, wrote: > Advising people to use dedup when high dedup ratios are expected, and > advising people to otherwise not use dedup, is by itself incorrect advice. > Rather, dedup should only be enabled on a system with a large amount of > memory. The usual advice of 1G of ram per 1TB of disk is flat out wrong. > > Now, I do not know how much memory to give as a minimum. I suspect that > the minimum should be more like 16-32G, with more if large amounts of > deduped data are to be removed by destroying entire datasets. But that's > just a guess. For what it's worth, Oracle has published an article on memory sizing for dedupe. http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-113-size-zfs-dedup-1354231.html In a nutshell, it's 320 bytes per record. Number of records will depend on your data set and the way it's been written. 
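The 320-bytes-per-record figure above translates into a quick back-of-the-envelope DDT sizing calculation. A minimal sketch in shell arithmetic; the 2 TiB pool size and the default 128K recordsize are illustrative assumptions, not numbers from the thread, and real pools will have a different record count depending on how the data was written:

```sh
# Rough DDT core-memory estimate, per the ~320 bytes/record figure
# from the Oracle sizing article. Pool size and recordsize below are
# illustrative assumptions only.
data_bytes=$((2 * 1024 * 1024 * 1024 * 1024))   # 2 TiB of deduped data
recordsize=$((128 * 1024))                      # default 128K records
records=$((data_bytes / recordsize))            # unique blocks, best case
ddt_bytes=$((records * 320))                    # ~320 bytes of ARC each
echo "unique records: ${records}"
echo "estimated DDT: $((ddt_bytes / 1024 / 1024)) MiB"
```

With these assumptions the table alone wants roughly 5 GiB of ARC, which illustrates why the flat "1G per 1TB" rule discussed above can badly undershoot.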
--Artem From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 04:59:39 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 10F56EC0; Tue, 20 Nov 2012 04:59:39 +0000 (UTC) (envelope-from spork@bway.net) Received: from smtp3.bway.net (smtp3.bway.net [216.220.96.27]) by mx1.freebsd.org (Postfix) with ESMTP id 7B82E8FC08; Tue, 20 Nov 2012 04:59:38 +0000 (UTC) Received: from toasty.sporklab.com (foon.sporktines.com [96.57.144.66]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: spork@bway.net) by smtp3.bway.net (Postfix) with ESMTPSA id 6928E9586D; Mon, 19 Nov 2012 23:59:29 -0500 (EST) References: <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <57ac1f$gg70bn@ipmail05.adl6.internode.on.net> In-Reply-To: Mime-Version: 1.0 (Apple Message framework v1084) Content-Type: text/plain; charset=us-ascii Message-Id: <48C81451-B9E7-44B5-8B8A-ED4B1D464EC6@bway.net> Content-Transfer-Encoding: 7bit From: Charles Sprickman Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log) Date: Mon, 19 Nov 2012 23:59:28 -0500 To: Eitan Adler X-Mailer: Apple Mail (2.1084) Cc: FreeBSD FS , Stephen McKay X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 04:59:39 -0000 On Nov 19, 2012, at 11:05 PM, Eitan Adler wrote: > On 16 November 2012 13:41, Eitan Adler wrote: >> On 15 November 2012 23:58, Stephen McKay wrote: >>> On Thursday, 15th November 2012, Eitan Adler wrote: >>> >>>> Can people here please tell me what is wrong in the following content? >>> >>> A few things. I'll intersperse them. >>> >>>> Is there additional data or questions to add? >>> >>> The whole ZFS world desperately needs good documentation. 
There >>> are misconceptions everywhere. There are good tuning hints and >>> bad (or out of date) ones. Further, it depends on your target >>> application whether the defaults are fairly good or plain suck. >> >> New version of the patch taking into account the comments so far: >> >> http://people.freebsd.org/~eadler/files/add-zfs-faq-section.diff > > Thanks for all the comments, private and public. I've committed a > modified version of the above. Wonderful to see some work on this. One of the great remaining zfs mysteries is the set of tunables under "vfs.zfs.*". Obviously there are plenty of read-only items there, but conflicting information, gathered from random forum posts and commit messages, exists about what exactly one can do regarding tuning beyond arc sizing. If you have any opportunity to work with the people who have ported and are now maintaining zfs, it would be really wonderful to get some feedback from them on what knobs are safe to twiddle and why. I suspect many of the tunable items don't really have meaningful equivalents in Sun's implementation since the way zfs falls under the vfs layer in FreeBSD is so different.
Thanks, Charles > > -- > Eitan Adler > Source, Ports, Doc committer > Bugmeister, Ports Security teams > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 07:56:03 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id CF6F9B08 for ; Tue, 20 Nov 2012 07:56:03 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: from platinum.linux.pl (platinum.edu.pl [81.161.192.4]) by mx1.freebsd.org (Postfix) with ESMTP id 66C708FC16 for ; Tue, 20 Nov 2012 07:56:03 +0000 (UTC) Received: by platinum.linux.pl (Postfix, from userid 87) id AD69647E21; Tue, 20 Nov 2012 08:56:01 +0100 (CET) X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.4 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.3.2 Received: from [10.255.1.2] (unknown [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id B3A2B47DCD for ; Tue, 20 Nov 2012 08:55:56 +0100 (CET) Message-ID: <50AB3789.1000508@platinum.linux.pl> Date: Tue, 20 Nov 2012 08:55:53 +0100 From: Adam Nowacki User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20121026 Thunderbird/16.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log) References: <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <57ac1f$gg70bn@ipmail05.adl6.internode.on.net> <48C81451-B9E7-44B5-8B8A-ED4B1D464EC6@bway.net> In-Reply-To: <48C81451-B9E7-44B5-8B8A-ED4B1D464EC6@bway.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 07:56:03 -0000 On 2012-11-20 05:59, Charles Sprickman wrote: > Wonderful to see some work on this. > > One of the great remaining zfs mysteries remains all the tunables > that are under "vfs.zfs.*". Obviously there are plenty of read-only > items there, but conflicting information gathered from random forum > posts and commit messages exist about what exactly one can do > regarding tuning beyond arc sizing. > > If you have any opportunity to work with the people who have ported > and are now maintaining zfs, it would be really wonderful to get > some feedback from them on what knobs are safe to twiddle and why. > I suspect many of the tunable items don't really have meaningful > equivalents in Sun's implementation since the way zfs falls under > the vfs layer in FreeBSD is so different. > > Thanks, > > Charles I'll share my experiences from tuning a home NAS: vfs.zfs.write_limit_* is a mess. Six sysctls work together to produce a single value, the maximum size of a txg commit. If the amount of data not yet stored on disk grows to this size, a txg commit is forced. But there is a catch: this size is only an estimate, and an absolute worst-case one at that (multiply by 24; there is a reason for this madness below). This means that writing a 1MB file results in a 24MB estimated txg commit size (+ metadata).
Back to the sysctls: # vfs.zfs.write_limit_override - if not 0 absolutely override write limit (ignore other sysctls), if 0 then an internal dynamically computed value is used based on: # vfs.zfs.txg.synctime_ms - adjust write limit based on previous txg commits so the time to write is equal to this value in milliseconds (basically estimates disks write bandwidth), # vfs.zfs.write_limit_shift - sets vfs.zfs.write_limit_max to ram size / 2^write_limit_shift, # vfs.zfs.write_limit_max - used to derive vfs.zfs.write_limit_inflated (multiply by 24), but only if vfs.zfs.write_limit_shift is not 0, # vfs.zfs.write_limit_inflated - maximum size of the dynamic write limit, # vfs.zfs.write_limit_min - minimum size of the dynamic write limit, and to have the whole picture: # vfs.zfs.txg.timeout - force txg commit every this many seconds if it didn't happen by write limit. For my home NAS (10x 2TB disks encrypted with geli in raidz2, cpu with hw aes, 16GB ram, 2x 1GE for samba and iSCSI with MCS) I ended up with: /boot/loader.conf: vfs.zfs.write_limit_shift="4" # 16GB ram / 2^4 = 1GB limit vfs.zfs.write_limit_min="2400M" # 100MB minimum multiplied by the 24 factor, during heavy read-write operations dynamic write limit would enter positive feedback loop and reduce write limit too much vfs.zfs.txg.synctime_ms="2000" # try to maintain 2 seconds commit time during large writes vfs.zfs.txg.timeout="120" # 2 minutes to reduce fragmentation and wear from small writes, worst case scenario 2 minutes of asynchronous writes is lost, synchronous end in ZIL anyway and for completeness: vfs.zfs.arc_min="10000M" vfs.zfs.arc_max="10000M" vfs.zfs.vdev.cache.size="16M" # vdev cache helps a lot during scrubs vfs.zfs.vdev.cache.bshift="14" # grow all i/o requests to 16kiB, smaller ones have been shown to have the same latency, so we might as well get more "for free" vfs.zfs.vdev.cache.max="16384" vfs.zfs.vdev.write_gap_limit="0" vfs.zfs.vdev.read_gap_limit="131072" vfs.zfs.vdev.aggregation_limit="131072" #
group smaller reads into one larger, benchmarking shown no appreciable latency increase while again getting more bytes vfs.zfs.vdev.min_pending="1" vfs.zfs.vdev.max_pending="1" # seems to help txg commit bandwidth by reducing seeking with parallel reads (not fully tested) and a reason for 24 factor (4 * 3 * 2, from the code): /* * The worst case is single-sector max-parity RAID-Z blocks, in which * case the space requirement is exactly (VDEV_RAIDZ_MAXPARITY + 1) * times the size; so just assume that. Add to this the fact that * we can have up to 3 DVAs per bp, and one more factor of 2 because * the block may be dittoed with up to 3 DVAs by ddt_sync(). */ From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 08:19:36 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EB8F4F61 for ; Tue, 20 Nov 2012 08:19:36 +0000 (UTC) (envelope-from lserinol@gmail.com) Received: from mail-bk0-f54.google.com (mail-bk0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 796E18FC15 for ; Tue, 20 Nov 2012 08:19:35 +0000 (UTC) Received: by mail-bk0-f54.google.com with SMTP id je9so1595863bkc.13 for ; Tue, 20 Nov 2012 00:19:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=IJ9QxAWMOAQcl9dL2UnifIjQiHlMm23jEVc2EtFakPA=; b=b9Hd76IvVXrSfl61Utrxxw1CNUAb1e6juEUm5HGtRkgV+EOyKyh6ugjeUUEhSjZe07 4Zvt0lpFPOHah/QvNnA7BMhh3L3kAVLlX07FB1l6+xyizAtj4roaPW6YZBM5UKQaP18l sYhsAu94GETy6vbkWWwOtYl/3Vwm2WW0zaiqcSO0UcO7SEXYamU0/ScnMGbYuTqVSv/X A10BXoQ153QiumbgKe78pvWBj032zH4wSYFvdgv9yfoyfCk1vy7FE/Faym8vrbrSejGv wCLxPYwt/kns475pqpSt+YJZ18MFVXHQzj1kQIhVyJBlPNZwg/c0B8YczISVDbo3G9UQ pS9g== MIME-Version: 1.0 Received: by 10.204.3.214 with SMTP id 22mr5651167bko.108.1353399574504; Tue, 20 Nov 2012 00:19:34 -0800 (PST) Received: by 10.205.113.203 with HTTP; Tue, 20 Nov 2012 00:19:34 -0800 
(PST) Date: Tue, 20 Nov 2012 10:19:34 +0200 Message-ID: Subject: ZFS spa_sync() spends 10-20% of its time in spa_free_sync_cb() From: Levent Serinol To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 08:19:37 -0000 Hi, I had been following the illumos repository for a while and yesterday noticed bug #3329, which was fixed there. It looks like the FreeBSD cvs/web is down, so I couldn't check whether it's also fixed in FreeBSD head. Does anyone know if this one is fixed in the FreeBSD ZFS code, or scheduled to be? https://illumos.org/issues/3329 http://cr.illumos.org/~webrev/csiden/illumos-3329/ Thanks, Levent -- http://lserinol.blogspot.com/ From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 08:57:04 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 7B23C642; Tue, 20 Nov 2012 08:57:04 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-lb0-f182.google.com (mail-lb0-f182.google.com [209.85.217.182]) by mx1.freebsd.org (Postfix) with ESMTP id 901FB8FC0C; Tue, 20 Nov 2012 08:57:03 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id go10so2814989lbb.13 for ; Tue, 20 Nov 2012 00:57:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=IXt3W4ri+azUVKsrkgJW9Yk05tK+3cdxHj1GaOmLx1M=; b=qlUmacWzHZ5hd3zClSB/3Q6HJiC15/t/fh4URTdwa+7dn0FU7A65VEfyS/M5PJO7ew jlEVeRx1f4mNxXwQSTT5QK+amP6Gx8HVWwnKOESqeVTrbMIa5konTOocT64sp0bKdQTM llriA5Lsf0NGezlsKBwliN39APyXE1V95aQqI9Zvt+5VWbHamuG9Srqz5Cgy1IZYxPAY B348mAOgvlxBWAd4rHQhgH4EluXrIY9PK7W1U6g4+NLpL6mRBOd307xD4ucYkvJZmVtp
x6c+XwS8ZKaQdex7L7X4EJB9E/D9PnvySlHZE7qUvxv+pWDoCBHXQ/ySP0OgqoZHjZjj CWJQ== MIME-Version: 1.0 Received: by 10.152.111.166 with SMTP id ij6mr14076334lab.38.1353401822162; Tue, 20 Nov 2012 00:57:02 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 00:57:02 -0800 (PST) Date: Tue, 20 Nov 2012 10:57:02 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: freebsd-current@freebsd.org, avg@FreeBSD.org, bartosz.stec@it4pro.pl, freebsd-fs@FreeBSD.org Content-Type: text/plain; charset=ISO-8859-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 08:57:04 -0000 Hi! My system: # uname -a FreeBSD open.technica-03.local 10.0-CURRENT FreeBSD 10.0-CURRENT #0: Tue Oct 30 14:13:01 EET 2012 root@open.technica-03.local:/usr/obj/usr/src/sys/SMP64R amd64 # zpool status -v pool: zsolar state: ONLINE scan: resilvered 2,56M in 0h0m with 0 errors on Tue Nov 20 10:26:35 2012 config: NAME STATE READ WRITE CKSUM zsolar ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 gpt/disk0 ONLINE 0 0 0 gpt/disk2 ONLINE 0 0 0 gpt/disk3 ONLINE 0 0 0 errors: No known data errors Update source: # svn info Path: . Working Copy Root Path: /usr/src URL: svn://svn.freebsd.org/base/head Repository Root: svn://svn.freebsd.org/base Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f Revision: 243278 Node Kind: directory Schedule: normal Last Changed Author: avg Last Changed Rev: 243272 Last Changed Date: 2012-11-19 13:35:56 +0200 I applied http://people.freebsd.org/~avg/zfs-spa-multi_vdev_root_support.diff, ran buildworld + kernel, did rm /boot/zfs/zpool.cache, and rebooted.... Mounting from zfs:zsolar failed with error 45 --- Best regards, Andrei Lavreniyuk.
From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 09:01:28 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 772628F2 for ; Tue, 20 Nov 2012 09:01:28 +0000 (UTC) (envelope-from prvs=1671869427=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id EDB5F8FC16 for ; Tue, 20 Nov 2012 09:01:27 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50001116876.msg for ; Tue, 20 Nov 2012 09:01:19 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Tue, 20 Nov 2012 09:01:19 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=1671869427=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <230DE7DAE83749DCBD180D5EF85D4CB1@multiplay.co.uk> From: "Steven Hartland" To: "Adam Nowacki" , References: <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <57ac1f$gg70bn@ipmail05.adl6.internode.on.net> <48C81451-B9E7-44B5-8B8A-ED4B1D464EC6@bway.net> <50AB3789.1000508@platinum.linux.pl> Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log) Date: Tue, 20 Nov 2012 08:57:01 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 09:01:28 -0000 ----- Original Message ----- From: "Adam Nowacki" To: Sent: Tuesday, 
November 20, 2012 7:55 AM Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log) > On 2012-11-20 05:59, Charles Sprickman wrote: >> Wonderful to see some work on this. >> >> One of the great remaining zfs mysteries remains all the tunables >> that are under "vfs.zfs.*". Obviously there are plenty of read-only >> items there, but conflicting information gathered from random forum >> posts and commit messages exist about what exactly one can do >> regarding tuning beyond arc sizing. >> >> If you have any opportunity to work with the people who have ported >> and are now maintaining zfs, it would be really wonderful to get >> some feedback from them on what knobs are safe to twiddle and why. >> I suspect many of the tunable items don't really have meaningful >> equivalents in Sun's implementation since the way zfs falls under >> the vfs layer in FreeBSD is so different. >> >> Thanks, >> >> Charles > > I'll share my experiences while tuning for home NAS: > vfs.zfs.write_limit_* is a mess. > 6 sysctls work together to produce a single value - maximum size of txg > commit. If size of data yet to be stored on disk grows to this size a > txg commit will be forced, but there is a catch, this size is only an > estimate and absolutely worst case one at that - multiply by 24 (there > is a reason for this madness below). This means that writing a 1MB file > will result in 24MB estimated txg commit size (+ metadata). 
Back to the > sysctls: > > # vfs.zfs.write_limit_override - if not 0 absolutely override write > limit (ignore other sysctls), if 0 then an internal dynamically computed > value is used based on: > # vfs.zfs.txg.synctime_ms - adjust write limit based on previous txg > commits so the time to write is equal to this value in milliseconds > (basically estimates disks write bandwidth), > # vfs.zfs.write_limit_shift - sets vfs.zfs.write_limit_max to ram size / > 2^write_limit_shift, > # vfs.zfs.write_limit_max - used to derive vfs.zfs.write_limit_inflated > (multiply by 24), but only if vfs.zfs.write_limit_shift is not 0, > # vfs.zfs.write_limit_inflated - maximum size of the dynamic write limit, > # vfs.zfs.write_limit_min - minimum size of the dynamic write limit, > and to have the whole picture: > # vfs.zfs.txg.timeout - force txg commit every this many seconds if it > didn't happen by write limit. > > For my home NAS (10x 2TB disks encrypted with geli in raidz2, cpu with > hw aes, 16GB ram, 2x 1GE for samba and iSCSI with MCS) I have ended with: > > /boot/loader.conf: > vfs.zfs.write_limit_shift="4" # 16GB ram / 2^4 = 1GB limit > vfs.zfs.write_limit_min="2400M" # 100MB minimum multiplied by the 24 > factor, during heavy read-write operations dynamic write limit would > enter positive feedback loop and reduce write limit too much > vfs.zfs.txg.synctime_ms="2000" # try to maintain 2 seconds commit time > during large writes > vfs.zfs.txg.timeout="120" # 2 minutes to reduce fragmentation and wear > from small writes, worst case scenario 2 minutes of asynchronous writes > is lost, synchronous end in ZIL anyway > > and for completness: > > vfs.zfs.arc_min="10000M" > vfs.zfs.arc_max="10000M" > vfs.zfs.vdev.cache.size="16M" # vdev cache helps a lot during scrubs > vfs.zfs.vdev.cache.bshift="14" # grow all i/o requests to 16kiB, smaller > have shown to have same latency so might as well get more "for free" > vfs.zfs.vdev.cache.max="16384" This has been disabled by default 
for a while; are you sure of the benefits? "Disable vdev cache (readahead) by default. The vdev cache is very underutilized (hit ratio 30%-70%) and may consume excessive memory on systems with many vdevs. Illumos-gate revision: 13346" > vfs.zfs.vdev.write_gap_limit="0" > vfs.zfs.vdev.read_gap_limit="131072" > vfs.zfs.vdev.aggregation_limit="131072" # group smaller reads into one > larger; benchmarking showed no appreciable latency increase while again > getting more bytes > vfs.zfs.vdev.min_pending="1" > vfs.zfs.vdev.max_pending="1" # seems to help txg commit bandwidth by > reducing seeking with parallel reads (not fully tested) > > and the reason for the 24 factor (4 * 3 * 2, from the code): > /* > * The worst case is single-sector max-parity RAID-Z blocks, in which > * case the space requirement is exactly (VDEV_RAIDZ_MAXPARITY + 1) > * times the size; so just assume that. Add to this the fact that > * we can have up to 3 DVAs per bp, and one more factor of 2 because > * the block may be dittoed with up to 3 DVAs by ddt_sync(). > */ > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >
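Under stated assumptions (16 GB of RAM and the loader.conf values quoted above), the interplay of the write-limit sysctls can be modelled roughly as follows. The function name and structure are illustrative, not taken from the ZFS source:

```python
# Rough, illustrative model of how the vfs.zfs.write_limit_* sysctls
# interact; the function and variable names are hypothetical.
def write_limits(ram_bytes, shift, limit_min, override=0):
    """Return (min, max) bounds of the dynamic txg write limit."""
    if override:
        # vfs.zfs.write_limit_override pins the limit, ignoring the rest.
        return (override, override)
    if shift:
        limit_max = ram_bytes >> shift      # write_limit_max = ram / 2^shift
        limit_inflated = 24 * limit_max     # worst-case 24x inflation
    else:
        limit_inflated = float("inf")       # shift == 0: no inflated cap
    # Between these bounds, vfs.zfs.txg.synctime_ms steers the limit
    # toward the measured disk write bandwidth.
    return (limit_min, limit_inflated)

GiB = 1 << 30
MiB = 1 << 20
lo, hi = write_limits(16 * GiB, shift=4, limit_min=2400 * MiB)
print(hi // GiB)  # 24 GiB inflated cap derived from a 1 GiB write_limit_max
```

With write_limit_shift=4 on a 16 GB machine, write_limit_max comes out to 1 GiB and write_limit_inflated to 24 GiB, which is why the quoted write_limit_min of 2400M is expressed pre-multiplied by the same 24 factor.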
From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 09:05:52 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id A4E50A0A for ; Tue, 20 Nov 2012 09:05:52 +0000 (UTC) (envelope-from andrnils@gmail.com) Received: from mail-qc0-f182.google.com (mail-qc0-f182.google.com [209.85.216.182]) by mx1.freebsd.org (Postfix) with ESMTP id 5B9A68FC0C for ; Tue, 20 Nov 2012 09:05:52 +0000 (UTC) Received: by mail-qc0-f182.google.com with SMTP id k19so4840556qcs.13 for ; Tue, 20 Nov 2012 01:05:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=fXu6YZlRkQZeXdNkoNgB0H2QkbosLzpGk2ijcJDddhE=; b=UNGu5E33lL5Qr9BlC/GJEul+YDPx8Tcgkl1dRQjWWIqAwtMIYaD5+/SGUZD0qk1tWt BVlKTatS1LDpKRGv/a8/MUWkiKZH8bVWd7vWHLn9+270lugq5bF5xGwzHpmvRPZq1rmW CVOJLKDW6chBGxeUARYaudHsEjiTUoPXfpj7JAQ5iDGPryjd0Z3eP+JBAKkzV7hLMK21 7BqlG1s+90HnNL+epJPE8oQw3qL4huZuvNa5EaOWU2V0+zBuOsPMq4Kj4nJrT3sdMx6l SvT+wEEWOZhabbNjmmqjZrBdRHhbV7HJeA3YlZjrwXWwCOieDxKSrmBS3/iSmgC33CHb n1pQ== MIME-Version: 1.0 Received: by 10.49.2.74 with SMTP id 10mr16476825qes.10.1353402351444; Tue, 20 Nov 2012 01:05:51 -0800 (PST) Received: by 10.229.113.102 with HTTP; Tue, 20 Nov 2012 01:05:51 -0800 (PST) Date: Tue, 20 Nov 2012 10:05:51 +0100 Message-ID: Subject: kern/167066 progress From: Andreas Nilsson To: "freebsd-fs@freebsd.org" Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 09:05:52 -0000 Is there any progress on kern/167066 ? Not having access to zvols without reboot is rather annoying. 
Best regards Andreas From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 09:16:10 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 509D7D7E for ; Tue, 20 Nov 2012 09:16:10 +0000 (UTC) (envelope-from peter@rulingia.com) Received: from vps.rulingia.com (host-122-100-2-194.octopus.com.au [122.100.2.194]) by mx1.freebsd.org (Postfix) with ESMTP id D215B8FC16 for ; Tue, 20 Nov 2012 09:16:08 +0000 (UTC) Received: from server.rulingia.com (c220-239-241-202.belrs5.nsw.optusnet.com.au [220.239.241.202]) by vps.rulingia.com (8.14.5/8.14.5) with ESMTP id qAK9G6B2011516 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Tue, 20 Nov 2012 20:16:06 +1100 (EST) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.5/8.14.5) with ESMTP id qAK9G0h5007939 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 20 Nov 2012 20:16:00 +1100 (EST) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.5/8.14.5/Submit) id qAK9G0gg007938; Tue, 20 Nov 2012 20:16:00 +1100 (EST) (envelope-from peter) Date: Tue, 20 Nov 2012 20:16:00 +1100 From: Peter Jeremy To: Levent Serinol Subject: Re: ZFS spa_sync() spends 10-20% of its time in spa_free_sync_cb() Message-ID: <20121120091600.GA4535@server.rulingia.com> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="8t9RHnE3ZwKMSgU+" Content-Disposition: inline In-Reply-To: X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: 
Tue, 20 Nov 2012 09:16:10 -0000 --8t9RHnE3ZwKMSgU+ Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2012-Nov-20 10:19:34 +0200, Levent Serinol wrote: > I was following illumos cvs for a while and noticed a bug #3329 yesterday >that fixed in cvs. It looks like Freebsd cvs/web is down so I couldn't be >sure if it's also fixed in freebsd head cvs. Can anyone know if this one >fixed in freebsd zfs code or scheduled to do ? See http://svnweb.freebsd.org/changeset/base/242735 I'm not sure what the plans are for merging the fixes back to -stable. --=20 Peter Jeremy --8t9RHnE3ZwKMSgU+ Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlCrSlAACgkQ/opHv/APuIfDuACgqkbjR4dqruK80785JroZi+hj i8sAn0653VZ8f1ZgQOAW7WSLy9Vz8573 =OOij -----END PGP SIGNATURE----- --8t9RHnE3ZwKMSgU+-- From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 09:23:04 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 58D2CFEB for ; Tue, 20 Nov 2012 09:23:04 +0000 (UTC) (envelope-from lserinol@gmail.com) Received: from mail-bk0-f54.google.com (mail-bk0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id D1F4E8FC16 for ; Tue, 20 Nov 2012 09:23:03 +0000 (UTC) Received: by mail-bk0-f54.google.com with SMTP id je9so1632161bkc.13 for ; Tue, 20 Nov 2012 01:23:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=fhzkigQaGxFeM/kPRoW+XuwoEv0+zMQg5ehSf/Og0Ts=; b=DLDGjjP+SSwzis80U7+PbT2hc6kk8h2GcDGz+FJ+CXr543IBhDRDuuGMPZ1A6BLsPO R7reLUNxlUxxLvtfJ1vTwn2C+aRYJPT/Lov/V4/25Pz86/QRSKTuClT7xAn6naIgTehK iAsGzHk9iEN+zrzev2xDR3YxKf4igbueCGY+dCDBpe7KUhLIBKPLyFdRJVG6u76Pla4B 9ppR8PCnMpYTfljwnukY/mFq8IchqJpVEHCF+/m0+MTvxrukHFwiDCyyZXDRbvFKl5kh 
84u6RrU57BHyfrXJFx+gzDqUKQKZknk21XA8g5I1ytJ/6BzGrAyH6BFfKRyRdsIuMax8 SfzA== MIME-Version: 1.0 Received: by 10.204.148.195 with SMTP id q3mr1716599bkv.122.1353403382458; Tue, 20 Nov 2012 01:23:02 -0800 (PST) Received: by 10.205.113.203 with HTTP; Tue, 20 Nov 2012 01:23:02 -0800 (PST) In-Reply-To: <20121120091600.GA4535@server.rulingia.com> References: <20121120091600.GA4535@server.rulingia.com> Date: Tue, 20 Nov 2012 11:23:02 +0200 Message-ID: Subject: Re: ZFS spa_sync() spends 10-20% of its time in spa_free_sync_cb() From: Levent Serinol To: Peter Jeremy Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 09:23:04 -0000 Ok that good news for me. I can test it with cvs head to see results on heavily loaded server. Looks like I was hitting this bug on production servers. Thanks, On Tue, Nov 20, 2012 at 11:16 AM, Peter Jeremy wrote: > On 2012-Nov-20 10:19:34 +0200, Levent Serinol wrote: > > I was following illumos cvs for a while and noticed a bug #3329 yesterday > >that fixed in cvs. It looks like Freebsd cvs/web is down so I couldn't be > >sure if it's also fixed in freebsd head cvs. Can anyone know if this one > >fixed in freebsd zfs code or scheduled to do ? > > See http://svnweb.freebsd.org/changeset/base/242735 > > I'm not sure what the plans are for merging the fixes back to -stable. 
> > -- > Peter Jeremy > -- http://lserinol.blogspot.com/ From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 09:48:54 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id CE5306A2; Tue, 20 Nov 2012 09:48:54 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id CBCC78FC0C; Tue, 20 Nov 2012 09:48:53 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA25073; Tue, 20 Nov 2012 11:48:51 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1TakRm-000DT9-SR; Tue, 20 Nov 2012 11:48:50 +0200 Message-ID: <50AB5202.4070906@FreeBSD.org> Date: Tue, 20 Nov 2012 11:48:50 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121030 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: In-Reply-To: X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 09:48:54 -0000 on 20/11/2012 10:57 Andrei Lavreniyuk said the following: > Hi! 
> > > My system: > > # uname -a > FreeBSD open.technica-03.local 10.0-CURRENT FreeBSD 10.0-CURRENT #0: > Tue Oct 30 14:13:01 EET 2012 > root@open.technica-03.local:/usr/obj/usr/src/sys/SMP64R amd64 > > > # zpool status -v > pool: zsolar > state: ONLINE > scan: resilvered 2,56M in 0h0m with 0 errors on Tue Nov 20 10:26:35 2012 > config: > > NAME STATE READ WRITE CKSUM > zsolar ONLINE 0 0 0 > raidz2-0 ONLINE 0 0 0 > gpt/disk0 ONLINE 0 0 0 > gpt/disk2 ONLINE 0 0 0 > gpt/disk3 ONLINE 0 0 0 > > errors: No known data errors > > > Update source: > > # svn info > Path: . > Working Copy Root Path: /usr/src > URL: svn://svn.freebsd.org/base/head > Repository Root: svn://svn.freebsd.org/base > Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f > Revision: 243278 > Node Kind: directory > Schedule: normal > Last Changed Author: avg > Last Changed Rev: 243272 > Last Changed Date: 2012-11-19 13:35:56 +0200 > > > I used http://people.freebsd.org/~avg/zfs-spa-multi_vdev_root_support.diff > > > buildworld + kernel > > rm /boot/zfs/zpool.cache > > > Reboot.... > > > Mounting from zfs:zsolar failed with error 45 Are there any other unusual messages before this line? Could you please try adding vfs.zfs.debug=1 to loader.conf and check again? Could you also provide 'zdb -CC zsolar' output and 'zdb -l /dev/gpt/diskX' for each of the disks. These could be uploaded somewhere as they can be quite lengthy. 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 11:59:45 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 07D97F16; Tue, 20 Nov 2012 11:59:45 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 207868FC13; Tue, 20 Nov 2012 11:59:43 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so5572625lah.13 for ; Tue, 20 Nov 2012 03:59:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=C/yhW0n2CeDzNVy/nbdoNtfsmSapNu2zcST18JywTeU=; b=KQt6/Oung8DmtMB3JPGyDsItzcJH4F11nYw+3l0Xto22iFklOqvCjz5NJFr9KTz8z8 qWXoMB640WDl7W/bPEj9bPn9ozGd991eg0TdDrFhaI5IsUN/KquDv9yT3BW/HRwtkHTV 1LmyYZQaUX3nDARcYOVpA98YKpFeAuyuTepWrsYcrSeSycX98/4lTu5fSKP7izSLDDfk 9WZz45/1U1strLTA+byyfD/PXVjhOqdwDlMiYBgUCjTQfV4mWPaJ0vlKrJkFXPGSCb5F fm/+DaqtB+7qSTXdCu/akFUcp4LprFmuihOBoovo+R8LreuMzagchKxQft3oiw98Ttpt V3MQ== MIME-Version: 1.0 Received: by 10.152.104.148 with SMTP id ge20mr14202676lab.51.1353412782284; Tue, 20 Nov 2012 03:59:42 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 03:59:42 -0800 (PST) In-Reply-To: <50AB5202.4070906@FreeBSD.org> References: <50AB5202.4070906@FreeBSD.org> Date: Tue, 20 Nov 2012 13:59:42 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Andriy Gapon Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 11:59:45 -0000 Hi! > Are there any other unusual messages before this line? 
> Could you please try adding vfs.zfs.debug=1 to loader.conf and check again? > Could you also provide 'zdb -CC zsolar' output and 'zdb -l /dev/gpt/diskX' for > each of the disks. These could be uploaded somewhere as they can be quite lengthy. Please download and view files: http://tor.reactor-xg.kiev.ua/files/zfs/disk0.txt http://tor.reactor-xg.kiev.ua/files/zfs/disk2.txt http://tor.reactor-xg.kiev.ua/files/zfs/disk3.txt http://tor.reactor-xg.kiev.ua/files/zfs/zdb_CC_zsolar.txt http://tor.reactor-xg.kiev.ua/files/zfs/gpart_show.txt http://tor.reactor-xg.kiev.ua/files/zfs/gpart_show_label.txt http://tor.reactor-xg.kiev.ua/files/zfs/20121120_004.jpg http://tor.reactor-xg.kiev.ua/files/zfs/20121120_005.jpg http://tor.reactor-xg.kiev.ua/files/zfs/20121120_006.jpg --- Best regards, Andrei Lavreniyuk. From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 12:03:57 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id E65B650B; Tue, 20 Nov 2012 12:03:57 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id ECCD28FC12; Tue, 20 Nov 2012 12:03:56 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA26823; Tue, 20 Nov 2012 14:03:54 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1TamYU-000DZM-5j; Tue, 20 Nov 2012 14:03:54 +0200 Message-ID: <50AB71A7.7050101@FreeBSD.org> Date: Tue, 20 Nov 2012 14:03:51 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121030 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: <50AB5202.4070906@FreeBSD.org> In-Reply-To: 
X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 12:03:58 -0000 on 20/11/2012 12:45 Andrei Lavreniyuk said the following: > Hi! > > >> Are there any other unusual messages before this line? >> Could you please try adding vfs.zfs.debug=1 to loader.conf and check again? > >> Could you also provide 'zdb -CC zsolar' output and 'zdb -l /dev/gpt/diskX' for >> each of the disks. These could be uploaded somewhere as they can be quite lengthy. > > > Please view attached files. Thank you. "Can not parse the config for pool" message explains what happens but not why... Could you please apply the following patch, "un-ifdef" the DEBUG sections of it and try again? http://people.freebsd.org/~avg/spa_generate_rootconf.debug.diff -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 12:35:02 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id A9F1FAE1; Tue, 20 Nov 2012 12:35:02 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id E9C358FC08; Tue, 20 Nov 2012 12:35:01 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so5604778lah.13 for ; Tue, 20 Nov 2012 04:35:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=vX9OJW9LVS/oprEriBSPFbzie3dZUb4xr26OFVZGDGQ=; b=QxligrTLfiGNwkZ25NrlUVnv0XludUDUq36oOcJic8yNf7LDFVq+88fUbaFagL4Cb1 
7V0gaYZWoAQzg9Hl34VA3dTmlDLlb3Ynj7p7xiVBo7EedPDkk6TjMyVh+tr1bxGNLgXu 8ui1qpqZA4cjW7e81SH032GioXPG/hN8dQpaBXLmE1GXAdGVu8gyQjcD/3LMDMGjAGrP J5b3bFkj028weKRJCt1ShlwCXwFkSjMAKaicMErjn+v/8Es9J1Mel8qHeEq+Dyt72IVY TzUS4EFIEYhjxbDg5YGKHg/zouf1aVLP4/fXRO2mSl9wPbFSBVum12Cz5uUmD/OEM2PH Gi8Q== MIME-Version: 1.0 Received: by 10.112.46.37 with SMTP id s5mr6342652lbm.67.1353414900814; Tue, 20 Nov 2012 04:35:00 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 04:35:00 -0800 (PST) In-Reply-To: References: <50AB5202.4070906@FreeBSD.org> Date: Tue, 20 Nov 2012 14:35:00 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Tom Evans , freebsd-current@freebsd.org, freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 12:35:02 -0000 2012/11/20 Tom Evans : > All those files just redirect to the homepage for me... Fixed. 
Please download and view files: http://tor.reactor-xg.kiev.ua/files/zfs/disk0.txt http://tor.reactor-xg.kiev.ua/files/zfs/disk2.txt http://tor.reactor-xg.kiev.ua/files/zfs/disk3.txt http://tor.reactor-xg.kiev.ua/files/zfs/zdb_CC_zsolar.txt http://tor.reactor-xg.kiev.ua/files/zfs/gpart_show.txt http://tor.reactor-xg.kiev.ua/files/zfs/gpart_show_label.txt http://tor.reactor-xg.kiev.ua/files/zfs/20121120_004.jpeg http://tor.reactor-xg.kiev.ua/files/zfs/20121120_005.jpeg http://tor.reactor-xg.kiev.ua/files/zfs/20121120_006.jpeg From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 12:41:44 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 7881ECFF; Tue, 20 Nov 2012 12:41:44 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 783E48FC17; Tue, 20 Nov 2012 12:41:42 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so5611025lah.13 for ; Tue, 20 Nov 2012 04:41:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=O0EqJBwAOE31skW/2JwKWn5yiVv5y1dFR6xwprwr+9I=; b=jNvR+tqVyxrbpgCRW30suQT0FxMO4rG78Q5U2y9RqUALfTAfpCfWL3/PpwQRaluIp9 KGmGwUIoqdxHc+qOYsJvikn8CFxV2Mo7qviREa1hGkSqxFOOPgsjwH+t6javsY3Q0c4s GJ2gAcd1WJF/Y2gg3lAtchOK8JU1nkBmE/sb0GEOhSkc/vWB7xsXyebjE06eipVww9Um /8s6Zplj58eF3N4wXVv34UYXIYNTs0OVdTQU+uybXy7rTiHvrpNPW8I8iR+B0/r3jwYz 0ZNKmK3RPPnkF2kjReIErCCJ6btpMSUEswClWznKzsLm75FgdlOadGL/BFutdbc3F7iS Gokg== MIME-Version: 1.0 Received: by 10.112.39.225 with SMTP id s1mr6358133lbk.117.1353415301877; Tue, 20 Nov 2012 04:41:41 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 04:41:41 -0800 (PST) In-Reply-To: <50AB71A7.7050101@FreeBSD.org> References: <50AB5202.4070906@FreeBSD.org> 
<50AB71A7.7050101@FreeBSD.org> Date: Tue, 20 Nov 2012 14:41:41 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Andriy Gapon Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 12:41:44 -0000 Hi! > "Can not parse the config for pool" message explains what happens but not why... > > Could you please apply the following patch, "un-ifdef" the DEBUG sections of it > and try again? > http://people.freebsd.org/~avg/spa_generate_rootconf.debug.diff I use spa_generate_rootconf.debug.diff. make kernel && reboot No new debug messages. Pool cannot mount. --- Best regards, Andrei Lavreniyuk. From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 13:03:16 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id CB181416; Tue, 20 Nov 2012 13:03:16 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id C8D488FC13; Tue, 20 Nov 2012 13:03:15 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so5631134lah.13 for ; Tue, 20 Nov 2012 05:03:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=2/e4JBZsFV5xNCwOLsMlMD576nmGdTGAbDJkhl+Bi5k=; b=qH1Q/aO5uIUjtYx0jQp/lV+761Tjp22uu6SpCdNwcatGAzV1yatLCuaLmRWxjJpP2p pV0htflVChV6ILvJ+V6sUyAhIb0ZAoM4lPUYOInfJMud7Qs5DWeT2vxCtmNnHYRvKz+G mOIBxMZB7xQa5cCohj6I4hGhsBSx6p3C6UQZEGuboe+sBOu+BwpmWw+P6WIZe4elybCi Stmno9FWP4i1WH+/g/k9K6JTp5b4tSYkKW/K1uI67AcFc4Ci5lDykZ2NkJNgQvQkxbWh 
YWFFSDdNjq17SJJb9+TFmGtu3VPX1FERvNCe4tyf81HTWHg69h9j7n2K6uu1p1EAUr6h WRbQ== MIME-Version: 1.0 Received: by 10.112.39.225 with SMTP id s1mr6379595lbk.117.1353416594374; Tue, 20 Nov 2012 05:03:14 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 05:03:14 -0800 (PST) In-Reply-To: References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> Date: Tue, 20 Nov 2012 15:03:14 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Andriy Gapon Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 13:03:17 -0000 2012/11/20 Andrei Lavreniyuk : > Hi! > > >> "Can not parse the config for pool" message explains what happens but not why... >> >> Could you please apply the following patch, "un-ifdef" the DEBUG sections of it >> and try again? >> http://people.freebsd.org/~avg/spa_generate_rootconf.debug.diff > > > I use spa_generate_rootconf.debug.diff. > > make kernel && reboot > > No new debug messages. Pool cannot mount. This problem is only on systems with raidz. System with ZFS Mirror work normally. --- Best regards, Andrei Lavreniyuk. 
From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 13:08:00 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 62CD77E7 for ; Tue, 20 Nov 2012 13:08:00 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id ABCDA8FC0C for ; Tue, 20 Nov 2012 13:07:59 +0000 (UTC) Received: from dcave.digsys.bg (dcave.digsys.bg [192.92.129.5]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id qAKCgjnt028008 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Tue, 20 Nov 2012 14:42:46 +0200 (EET) (envelope-from daniel@digsys.bg) Message-ID: <50AB7AC5.3020700@digsys.bg> Date: Tue, 20 Nov 2012 14:42:45 +0200 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:10.0.10) Gecko/20121029 Thunderbird/10.0.10 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: SSD recommendations for ZFS cache/log References: <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <20121116044055.GA47859@neutralgood.org> <50A64694.5030001@egr.msu.edu> <20121117181803.GA26421@neutralgood.org> <20121117225851.GJ1462@egr.msu.edu> <20121120040258.GA27849@neutralgood.org> In-Reply-To: <20121120040258.GA27849@neutralgood.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 13:08:00 -0000 On 20.11.12 06:02, kpneal@pobox.com wrote: > Advising people to use dedup when high dedup ratios are expected, and > advising people to otherwise not use dedup, is by itself incorrect > advice. Rather, dedup should only be enabled on a system with a large > amount of memory. 
The usual advice of 1G of ram per 1TB of disk is > flat out wrong. Perhaps increasing vfs.zfs.arc_meta_limit is appropriate. It's the starvation for metadata that thrashes both memory and disks. By default, that tunable is set to 1/4 of the ARC size. Perhaps 2/3 or even up to the ARC size is better. Changed in /boot/loader.conf. Just an idea - I haven't yet tested this, but from time to time I do observe memory starvation from dedupe. (test pending the resilver of a 30TB array, which is abysmally slow) Daniel From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 13:08:33 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id B27768BE; Tue, 20 Nov 2012 13:08:33 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id BA5D88FC1B; Tue, 20 Nov 2012 13:08:31 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id PAA27485; Tue, 20 Nov 2012 15:08:23 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AB80C6.1090507@FreeBSD.org> Date: Tue, 20 Nov 2012 15:08:22 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 13:08:33 -0000 on 20/11/2012 14:41 Andrei Lavreniyuk said the
following: > Hi! > > >> "Can not parse the config for pool" message explains what happens but not why... >> >> Could you please apply the following patch, "un-ifdef" the DEBUG sections of it >> and try again? >> http://people.freebsd.org/~avg/spa_generate_rootconf.debug.diff > > > I use spa_generate_rootconf.debug.diff. What about the " "un-ifdef" the DEBUG sections of it" part? > make kernel && reboot > > No new debug messages. Pool cannot mount. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 13:12:51 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id D6E25AC1; Tue, 20 Nov 2012 13:12:51 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id D97728FC16; Tue, 20 Nov 2012 13:12:50 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so5641217lah.13 for ; Tue, 20 Nov 2012 05:12:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=BIJFv0RJNB1Zj1WAf4+Z3NSottdAPrmDKLZuDb4osO8=; b=01yl2RYhwAFQiZoSPdcl+WnIT890L+n2FTEFPYG6DYrbie0zUFDv8Z6nue0jWj6ZzV gLdMRL2KSRSElnt9+FExihVAqBcxwVxPdjJnt6FiSXjZfg5MlZg7DRrv5c4vp5ZzynPo CYmo278ttM4kjdMxtjq4DaaDSGOa4Kt8Vu4SFLGzjgPsGz/8FsAGH9iANcsDqy9YipQ8 behvu7XAOB7d5cXkIx9LxoKoM/g/ogZQWmJUbD5uYVpg+OLLbLI9nYSOwxU+7e+7C/15 ZY6WiyB3DGxTuFHuoOQ5Zuaxy0RlDL6R30HkRpsS6q+mnHnOdloKwNwS7lHvu+UQK2Eb hCKQ== MIME-Version: 1.0 Received: by 10.112.39.225 with SMTP id s1mr6390018lbk.117.1353417169455; Tue, 20 Nov 2012 05:12:49 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 05:12:49 -0800 (PST) In-Reply-To: <50AB80C6.1090507@FreeBSD.org> References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> Date: Tue, 20 Nov 2012 
15:12:49 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Andriy Gapon Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 13:12:52 -0000 2012/11/20 Andriy Gapon : > on 20/11/2012 14:41 Andrei Lavreniyuk said the following: >> Hi! >> >> >>> "Can not parse the config for pool" message explains what happens but not why... >>> >>> Could you please apply the following patch, "un-ifdef" the DEBUG sections of it >>> and try again? >>> http://people.freebsd.org/~avg/spa_generate_rootconf.debug.diff >> >> >> I use spa_generate_rootconf.debug.diff. > > What about the " "un-ifdef" the DEBUG sections of it" part? Sorry. A few minutes ... > >> make kernel && reboot >> >> No new debug messages. Pool cannot mount. 
> > -- > Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 13:34:21 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BCF657B; Tue, 20 Nov 2012 13:34:21 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-lb0-f182.google.com (mail-lb0-f182.google.com [209.85.217.182]) by mx1.freebsd.org (Postfix) with ESMTP id CBF498FC13; Tue, 20 Nov 2012 13:34:20 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id go10so3057912lbb.13 for ; Tue, 20 Nov 2012 05:34:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=P3+eFOD8Cwtu/7eDudUY16h6T2/5JozG3Gjb4j/Fez4=; b=OEaZo2aOF6YjvKekHxE5C9X5Vw/Pc8m8w6EhIlXHSp3h1dCM9wT9RW5sHJNDv7mKWS d+AVywqoO2p2wae79ycm34CbqlI0JNXLW1A5uettNUgTDGjt1CTrrMYZ+oVLVc+eHlLL hJjK++B3jYrfJcSI4L+SfHLcCaMJBJ7HGMSpAxLapYb2nyX4e0cIvXlbqmn7r1+INiB3 59x12yZlHLQAScqlqJ44ARI7fRbm/XLRsstB7wMQENqxZJ6lzxNyBGNkgTF3CHxJGDVQ SemVMpj+MXqC29e6GRi60jP+1UhtG8/EyvPDVhOGzH0bv5sxzn4DF5bA3eqcifF/XdvG rI8A== MIME-Version: 1.0 Received: by 10.112.54.40 with SMTP id g8mr6534959lbp.49.1353418459609; Tue, 20 Nov 2012 05:34:19 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 05:34:19 -0800 (PST) In-Reply-To: <50AB80C6.1090507@FreeBSD.org> References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> Date: Tue, 20 Nov 2012 15:34:19 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Andriy Gapon Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 13:34:21 
-0000 > What about the " "un-ifdef" the DEBUG sections of it" part? http://tor.reactor-xg.kiev.ua/files/zfs/20121120_000.jpeg http://tor.reactor-xg.kiev.ua/files/zfs/20121120_001.jpeg http://tor.reactor-xg.kiev.ua/files/zfs/20121120_002.jpeg http://tor.reactor-xg.kiev.ua/files/zfs/20121120_003.jpeg From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 13:56:50 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id DABB1D08; Tue, 20 Nov 2012 13:56:50 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id E6FCD8FC0C; Tue, 20 Nov 2012 13:56:49 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id PAA27988; Tue, 20 Nov 2012 15:56:47 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AB8C1F.2040108@FreeBSD.org> Date: Tue, 20 Nov 2012 15:56:47 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 13:56:51 -0000 on 20/11/2012 15:34 Andrei Lavreniyuk said the following: >> What about the " "un-ifdef" the DEBUG sections of it" part? 
> > > http://tor.reactor-xg.kiev.ua/files/zfs/20121120_000.jpeg > http://tor.reactor-xg.kiev.ua/files/zfs/20121120_001.jpeg > http://tor.reactor-xg.kiev.ua/files/zfs/20121120_002.jpeg > http://tor.reactor-xg.kiev.ua/files/zfs/20121120_003.jpeg > Sorry to make you jump through so many hoops. Now that I see that the probed config is entirely correct, the problem appears to be quite obvious: vdev_alloc is not able to properly use spa_version in this context because spa_ubsync is not initialized yet. Let me think about how to fix this. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 14:17:16 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EF03ECB5; Tue, 20 Nov 2012 14:17:16 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 027638FC0C; Tue, 20 Nov 2012 14:17:15 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id QAA28242; Tue, 20 Nov 2012 16:17:13 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AB90E9.5070102@FreeBSD.org> Date: Tue, 20 Nov 2012 16:17:13 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> <50AB8C1F.2040108@FreeBSD.org> In-Reply-To: <50AB8C1F.2040108@FreeBSD.org> X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: 
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 14:17:17 -0000 on 20/11/2012 15:56 Andriy Gapon said the following: > on 20/11/2012 15:34 Andrei Lavreniyuk said the following: >>> What about the " "un-ifdef" the DEBUG sections of it" part? >> >> >> http://tor.reactor-xg.kiev.ua/files/zfs/20121120_000.jpeg >> http://tor.reactor-xg.kiev.ua/files/zfs/20121120_001.jpeg >> http://tor.reactor-xg.kiev.ua/files/zfs/20121120_002.jpeg >> http://tor.reactor-xg.kiev.ua/files/zfs/20121120_003.jpeg >> > > Sorry to make you jump through so many hoops. > Now that I see that the probed config is entirely correct, the problem appears to > be quite obvious: vdev_alloc is not able to properly use spa_version in this > context because spa_ubsync is not initialized yet. > > Let me think about how to fix this. I hope that the following simple patch should fix the problem: http://people.freebsd.org/~avg/spa_import_rootpool.version.diff -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 14:43:17 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 9A6AB85B for ; Tue, 20 Nov 2012 14:43:17 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: from platinum.linux.pl (platinum.edu.pl [81.161.192.4]) by mx1.freebsd.org (Postfix) with ESMTP id 42D638FC12 for ; Tue, 20 Nov 2012 14:43:16 +0000 (UTC) Received: by platinum.linux.pl (Postfix, from userid 87) id 6A24F47E21; Tue, 20 Nov 2012 15:43:08 +0100 (CET) X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.5 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.3.2 Received: from [10.255.1.2] (c38-073.client.duna.pl [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id 031AD47DCD for ; Tue, 20 Nov 2012 15:43:03 +0100 (CET) Message-ID: <50AB96F5.3060402@platinum.linux.pl> Date: Tue, 20 Nov 2012 15:43:01 
+0100 From: Adam Nowacki User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20121026 Thunderbird/16.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log) References: <57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <57ac1f$gg70bn@ipmail05.adl6.internode.on.net> <48C81451-B9E7-44B5-8B8A-ED4B1D464EC6@bway.net> <50AB3789.1000508@platinum.linux.pl> <230DE7DAE83749DCBD180D5EF85D4CB1@multiplay.co.uk> In-Reply-To: <230DE7DAE83749DCBD180D5EF85D4CB1@multiplay.co.uk> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 14:43:17 -0000 On 2012-11-20 09:57, Steven Hartland wrote: >> vfs.zfs.arc_min="10000M" >> vfs.zfs.arc_max="10000M" >> vfs.zfs.vdev.cache.size="16M" # vdev cache helps a lot during scrubs >> vfs.zfs.vdev.cache.bshift="14" # grow all i/o requests to 16kiB, >> smaller have shown to have same latency so might as well get more "for >> free" >> vfs.zfs.vdev.cache.max="16384" > > This has been disabled by default for a while are you sure of the benefits? > > "Disable vdev cache (readahead) by default. > > The vdev cache is very underutilized (hit ratio 30%-70%) and may consume > excessive memory on systems with many vdevs. > > Illumos-gate revision: 13346" I'm not sure anymore - getting very weird results, with both vdev cache enabled and disabled. What I'm sure of is that 160MB used in my case for vdev cache is 1.5% of 10GB arc so can be ignored as insignificant. Weird results (just after reboot, fs not mounted so completely idle, begin scrub, wait 30 seconds, stop scrub, see how much got scrubbed): run 1) 9990 MB, run 2) 1530 MB, run 3) 10400 MB, run 4) 1490 MB, run 5) 1540 MB, run 6) 1430 MB, run 7) 10600 MB. 
Is ZFS tossing a coin to decide if scrub should be slow or fast? heads - 333MB/s, tails - 50MB/s ... From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 14:59:08 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 53966F65; Tue, 20 Nov 2012 14:59:08 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 6C8E08FC15; Tue, 20 Nov 2012 14:59:06 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so5754354lah.13 for ; Tue, 20 Nov 2012 06:59:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=WJrmU+WwboRIoNICAisgqtgXckuovl+rtt9WarvthC0=; b=bgAOa7c75o9+OfJcwggOBVJXL+dPU01q6ZgPXPrJfts44Zjc7CCOmHq9Px8EXv2X0/ ECJy/0JcyjIL1JADvvAT4stF8dJopOIUKRKkTdroRZ4InknF7KGhedEm3N3S16sIb8U7 WOvjiuIwaH9bKKr8A3X9G38ZO7Qbh+Jx5QKsaIuI6fKRB1aFjPICYT039C8FhGvo0b/R J0li12qFkjIh45/X6euoAdJg59L916i8Gq3MICETW/LCtE694TBwPGofGhoLbktTiGAe 8a8ciugLCwjlXFJPYYKrxltko/SC9qM9PkCblfa1EZdxNOreUkAub9WoEC1hEEOJ5W4Q JA7w== MIME-Version: 1.0 Received: by 10.112.39.225 with SMTP id s1mr6502787lbk.117.1353423546145; Tue, 20 Nov 2012 06:59:06 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 06:59:06 -0800 (PST) In-Reply-To: <50AB90E9.5070102@FreeBSD.org> References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> <50AB8C1F.2040108@FreeBSD.org> <50AB90E9.5070102@FreeBSD.org> Date: Tue, 20 Nov 2012 16:59:06 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Andriy Gapon Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list 
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 14:59:08 -0000 >> Sorry to make you jump through so many hoops. >> Now that I see that the probed config is entirely correct, the problem appears to >> be quite obvious: vdev_alloc is not able to properly use spa_version in this >> context because spa_ubsync is not initialized yet. >> >> Let me think about how to fix this. > > I hope that the following simple patch should fix the problem: > http://people.freebsd.org/~avg/spa_import_rootpool.version.diff At mount system trap and reboot. --- Best regards, Andrei Lavreniyuk. From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 15:06:07 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3406F400; Tue, 20 Nov 2012 15:06:07 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 37A428FC1B; Tue, 20 Nov 2012 15:06:05 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id RAA28762; Tue, 20 Nov 2012 17:06:04 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AB9C5B.6030006@FreeBSD.org> Date: Tue, 20 Nov 2012 17:06:03 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> <50AB8C1F.2040108@FreeBSD.org> <50AB90E9.5070102@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: 
freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 15:06:07 -0000 on 20/11/2012 16:59 Andrei Lavreniyuk said the following: >>> Sorry to make you jump through so many hoops. >>> Now that I see that the probed config is entirely correct, the problem appears to >>> be quite obvious: vdev_alloc is not able to properly use spa_version in this >>> context because spa_ubsync is not initialized yet. >>> >>> Let me think about how to fix this. >> >> I hope that the following simple patch should fix the problem: >> http://people.freebsd.org/~avg/spa_import_rootpool.version.diff > > > At mount system trap and reboot. > Unexpected. Can you catch the backtrace of the panic? If you have it on the screen. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 16:00:33 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 0E33DA5F; Tue, 20 Nov 2012 16:00:33 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 1166A8FC15; Tue, 20 Nov 2012 16:00:31 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA29224; Tue, 20 Nov 2012 18:00:29 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50ABA91D.9090905@FreeBSD.org> Date: Tue, 20 Nov 2012 18:00:29 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> <50AB8C1F.2040108@FreeBSD.org> <50AB90E9.5070102@FreeBSD.org> 
<50AB9C5B.6030006@FreeBSD.org> In-Reply-To: <50AB9C5B.6030006@FreeBSD.org> X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 16:00:33 -0000 on 20/11/2012 17:06 Andriy Gapon said the following:
> on 20/11/2012 16:59 Andrei Lavreniyuk said the following:
>>>> Sorry to make you jump through so many hoops.
>>>> Now that I see that the probed config is entirely correct, the problem appears to
>>>> be quite obvious: vdev_alloc is not able to properly use spa_version in this
>>>> context because spa_ubsync is not initialized yet.
>>>>
>>>> Let me think about how to fix this.
>>>
>>> I hope that the following simple patch should fix the problem:
>>> http://people.freebsd.org/~avg/spa_import_rootpool.version.diff
>>
>>
>> At mount system trap and reboot.
>>
>
> Unexpected. Can you catch the backtrace of the panic?
> If you have it on the screen.
>

Ah, found another bogosity in the code:

--- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c
+++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c
@@ -3925,8 +4117,6 @@ spa_import_rootpool(const char *name)
 		return (error);
 	}
 
-	spa_history_log_version(spa, LOG_POOL_IMPORT);
-
 	spa_config_enter(spa, SCL_ALL, FTAG, RW_WRITER);
 	vdev_free(rvd);
 	spa_config_exit(spa, SCL_ALL, FTAG);

This previously "worked" only because the pool version was zero and thus the action was a NOP anyway.
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Nov 20 16:18:23 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 81363F82 for ; Tue, 20 Nov 2012 16:18:23 +0000 (UTC) (envelope-from lists@eitanadler.com) Received: from mail-lb0-f182.google.com (mail-lb0-f182.google.com [209.85.217.182]) by mx1.freebsd.org (Postfix) with ESMTP id DEBF98FC08 for ; Tue, 20 Nov 2012 16:18:22 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id go10so3242114lbb.13 for ; Tue, 20 Nov 2012 08:18:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=eitanadler.com; s=0xdeadbeef; h=mime-version:sender:in-reply-to:references:from:date :x-google-sender-auth:message-id:subject:to:cc:content-type; bh=FAhuX9T6y5+ul/1CSmGx6JGvy4FcpZRn6kTvffpiRTE=; b=Wi4SJKMst6I5/7IHVwV1RYfUU6c62bvCeZAjui0f0twaFi8gGSMheF8KeYK+ATyr3J fFxD4YljsT0ly9v6mpF9z+U2LkWj1jkSAaR7WhPC28A/VON5BwSvKa+qkc1mxyfqYmdN NUIHrGNtyAOQUb+J9bmHwkItpNSOpCwT/xbdE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:sender:in-reply-to:references:from:date :x-google-sender-auth:message-id:subject:to:cc:content-type :x-gm-message-state; bh=FAhuX9T6y5+ul/1CSmGx6JGvy4FcpZRn6kTvffpiRTE=; b=gS2UBcU86JjPagOwvRv0mxuqOHNbELD377StSunX+Q7KaPIx5VuVifpQrHXlAW/r5x CcG9nfQ+LfLVQdkVgjg//QyaLiwtfmIorSkVZzyXfkAaaU2Vlg8Fv+e6BOGmrJ+EuPEv Koa6eZpi2kA/wZ3iyjlY/U1kyj2HB53bwb6Cs2AyncQMMe9+x3uVHRdG/vcXIYquHsDt i1dkLmss2V2Gtli+hGmSU7ByuWkyiZj/O2vgv2bTaqJYsitx2vzC6Dv5YQXt4Dy/tOZb uTn2W1FiBpr8z4FjjN1vWjlgCmhIIg+KYyddvQh0gSkM10uK8OADGOP6FuE4X0t/E7X/ EqaQ== Received: by 10.152.102.234 with SMTP id fr10mr15254296lab.28.1353428301420; Tue, 20 Nov 2012 08:18:21 -0800 (PST) MIME-Version: 1.0 Sender: lists@eitanadler.com Received: by 10.112.25.166 with HTTP; Tue, 20 Nov 2012 08:17:50 -0800 (PST) In-Reply-To: <48C81451-B9E7-44B5-8B8A-ED4B1D464EC6@bway.net> References: 
<57ac1f$gf3rkl@ipmail05.adl6.internode.on.net> <50A31D48.3000700@shatow.net> <57ac1f$gg70bn@ipmail05.adl6.internode.on.net> <48C81451-B9E7-44B5-8B8A-ED4B1D464EC6@bway.net> From: Eitan Adler Date: Tue, 20 Nov 2012 11:17:50 -0500 X-Google-Sender-Auth: U42zigreYx0-q52hH4CjQ_inG0A Message-ID: Subject: Re: ZFS FAQ (Was: SSD recommendations for ZFS cache/log) To: Charles Sprickman Content-Type: text/plain; charset=UTF-8 X-Gm-Message-State: ALoCoQnEdp/4jWB2kro6oXW1zKcFQe/0Chz8k0OdfeTqQSvoHZ5Am0McJpNFqIxAs5f6BkeJr/cf Cc: FreeBSD FS , Stephen McKay X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 Nov 2012 16:18:23 -0000 On 19 November 2012 23:59, Charles Sprickman wrote:
> Wonderful to see some work on this.
>
> One of the great remaining zfs mysteries is all the tunables
> that are under "vfs.zfs.*". Obviously there are plenty of read-only
> items there, but conflicting information gathered from random forum
> posts and commit messages exists about what exactly one can do
> regarding tuning beyond arc sizing.
>
> If you have any opportunity to work with the people who have ported
> and are now maintaining zfs, it would be really wonderful to get
> some feedback from them on what knobs are safe to twiddle and why.
> I suspect many of the tunable items don't really have meaningful
> equivalents in Sun's implementation since the way zfs falls under
> the vfs layer in FreeBSD is so different.

I'm working in general on the FAQ, not just on the ZFS section. I don't have the time to go in depth on this. :(

I'll make you a deal though: if you could work with the ZFS folk and provide the content in Q&A form, I'll review it, turn it into DocBook, and commit it. The entire point of my FAQ commits has been to get other people involved. I can't do it all.
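If someone does take this up, part of the raw material is already self-documenting. A sketch, not authoritative: `-d` prints each OID's one-line kernel description, and `-T` (present on newer sysctl(8); availability varies by release) limits output to loader tunables:

```shell
# List every vfs.zfs OID together with its one-line kernel description.
sysctl -d vfs.zfs

# Show only the OIDs that are loader tunables, i.e. settable from
# /boot/loader.conf rather than at runtime (requires a sysctl(8) with -T).
sysctl -dT vfs.zfs
```

That at least separates "safe to set at boot" from "read-only statistic" mechanically; the "why" still needs input from the porters.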
-- Eitan Adler Source, Ports, Doc committer Bugmeister, Ports Security teams From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 07:25:51 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 67080E3C; Wed, 21 Nov 2012 07:25:51 +0000 (UTC) (envelope-from SRS0=TOXp=JR=googlemail.com=peter.wullinger+freebsd@srs.kundenserver.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.17.8]) by mx1.freebsd.org (Postfix) with ESMTP id CBB158FC08; Wed, 21 Nov 2012 07:25:50 +0000 (UTC) Received: from kaliope.home (nrbg-4d07590e.pool.mediaWays.net [77.7.89.14]) by mrelayeu.kundenserver.de (node=mreu1) with ESMTP (Nemesis) id 0LiZ0S-1SyS5B2H1Q-00cmbu; Wed, 21 Nov 2012 08:25:49 +0100 Received: by kaliope.home (Postfix, from userid 10001) id 74F63CE; Wed, 21 Nov 2012 08:25:46 +0100 (CET) Date: Wed, 21 Nov 2012 08:25:46 +0100 From: Peter Wullinger To: Andriy Gapon Subject: Re: ???BOGOSPAM??? Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable. 
Message-ID: <20121121072546.GA2992@kaliope.home> References: <201211182220.qAIMK1oY061509@freefall.freebsd.org> <50AA4437.1070509@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-15 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable In-Reply-To: <50AA4437.1070509@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) X-Provags-ID: V02:K0:MJf4HLGTjdws15FXDUzH9b8uDmaD+DUgNLqFCq3nKj5 sW5/1zzEgHb6WIsyTHq5DUWfI2/3xHkcl1pbQq080qTKaGZljP ACddg5egk5SEv7eW7KjREGdUc5fMe4FdrhQ0qQ3l7diH+JPBdH gBHx7a+Bg+SCztuRat7KwXZu7c17hbOhk/2btEOpMEEQZLI/w9 gYbDtfcvkxNaF6KwdTS/1OX3vWbVc80Wk6MExvt2/TmwEL/8mt Ab1xnzZeuiYhg9V4W8lNl8oTp+tyowCdh09ZZtuS00CuQVFnpd 7Wlp235D44ob0kQmjSGs4sZv6big4qYJKgpOFg6n4RQMbYZi8Q ywrIyRkaOpRxujn+9wMw= Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Nov 2012 07:25:51 -0000 In a letter from Andriy Gapon, dated Mon Nov 19 15:37:43 2012:
> on 19/11/2012 00:20 Peter Wullinger said the following:
> > The following reply was made to PR kern/153520; it has been noted by GNATS.
> >
> > From: Peter Wullinger
> > To: bug-followup@FreeBSD.org
> > Cc:
> > Subject: Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1
> > unstable.
> > Date: Sun, 18 Nov 2012 23:10:42 +0100
> >
> > I see this too on two identical HP Proliant 320 DL G6 with 9-STABLE.
> >
> > The machine usually needs some nudging in the form of warm restarts
> > to boot the operating system.
>
> Could you please check if upgrading to r243217 or later changes anything?
> Please be sure that you update the on-disk boot blocks (gpart bootcode ...).
>

% uname -v
FreeBSD 9.1-PRERELEASE #5 r243290: Mon Nov 19 22:33:42 CET 2012 src@...:/usr/obj/usr/src/sys/ML350

# gpart bootcode -b /boot/pmbr -i 1 -p /boot/gptzfsboot da0
da0 has bootcode
# reboot

Problem seems to be fixed by r243217. System has now come up without hassle across three reboots, which is more stable than usual. The second machine has just completed its reboot without any problems, too.

I'll recheck if the issue persists, probably from a cold boot, once I get the opportunity (probably the next update to -STABLE).

--
One can be a brother only in something. Where there is no tie that binds men, men are not united but merely lined up.
  -- Antoine Jean Baptiste Marie Roger de Saint-Exupéry

From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 07:51:25 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BCDC45C5; Wed, 21 Nov 2012 07:51:25 +0000 (UTC) (envelope-from andy.lavr@gmail.com) Received: from mail-la0-f54.google.com (mail-la0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id C5FE68FC13; Wed, 21 Nov 2012 07:51:24 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id j13so6475567lah.13 for ; Tue, 20 Nov 2012 23:51:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=3vYxNY8Pb5yRjKh6kMV/AJKlm9f9orkguveXpTuKEr4=; b=YycLE6H53Xr/n/Fr8fbXWd9okBjmVW2WStfzPrEHQBpOJAHzhDQzfTEF50JSsgs4zX oTcqUxImimQ5Q1lBsCXvRw/6ldP23Pm/yTd9kUL9ezxhRzt6i8SIfLjRSrF9X2fxzkZC ygzN6vTdyEpNnWsEh43Ag3w0M97fSmeqPF5aOS0aj2wC4m0R7mQ1icBsGCFxt7TG7n2K yf83rFH9gFszdKsDGc1eI9ZEjOA4HkYXAdeokWAeiUZ3PHcBpDrxflq2bMVkGBIbDNSS a5P3OnPFXxKIWmenPJgKgN5W4GZ4yuatP2yYaGwkcYa/2CKMapeA1/2TpTafHaGK/XFM kDBw== MIME-Version: 1.0 Received: by 10.112.88.100 with SMTP id bf4mr621937lbb.49.1353484283468; Tue, 20 Nov 2012
23:51:23 -0800 (PST) Received: by 10.114.5.5 with HTTP; Tue, 20 Nov 2012 23:51:23 -0800 (PST) In-Reply-To: <50ABA91D.9090905@FreeBSD.org> References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> <50AB8C1F.2040108@FreeBSD.org> <50AB90E9.5070102@FreeBSD.org> <50AB9C5B.6030006@FreeBSD.org> <50ABA91D.9090905@FreeBSD.org> Date: Wed, 21 Nov 2012 09:51:23 +0200 Message-ID: Subject: Re: problem booting to multi-vdev root pool From: Andrei Lavreniyuk To: Andriy Gapon Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Nov 2012 07:51:25 -0000 2012/11/20 Andriy Gapon : > on 20/11/2012 17:06 Andriy Gapon said the following: >> on 20/11/2012 16:59 Andrei Lavreniyuk said the following: >>>>> Sorry to make you jump through so many hoops. >>>>> Now that I see that the probed config is entirely correct, the problem appears to >>>>> be quite obvious: vdev_alloc is not able to properly use spa_version in this >>>>> context because spa_ubsync is not initialized yet. >>>>> >>>>> Let me think about how to fix this. >>>> >>>> I hope that the following simple patch should fix the problem: >>>> http://people.freebsd.org/~avg/spa_import_rootpool.version.diff >>> >>> >>> At mount system trap and reboot. >>> >> >> Unexpected. Can you catch the backtrace of the panic? >> If you have it on the screen. 
>> >> > Ah, found another bogosity in the code:
> --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c
> +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c
> @@ -3925,8 +4117,6 @@ spa_import_rootpool(const char *name)
>  		return (error);
>  	}
> 
> -	spa_history_log_version(spa, LOG_POOL_IMPORT);
> -
>  	spa_config_enter(spa, SCL_ALL, FTAG, RW_WRITER);
>  	vdev_free(rvd);
>  	spa_config_exit(spa, SCL_ALL, FTAG);
> 
> This previously "worked" only because the pool version was zero and thus the
> action was a NOP anyway.

Problem solved. The raidz pool now mounts without zpool.cache.

# zpool status -v
  pool: zsolar
 state: ONLINE
  scan: resilvered 2,56M in 0h0m with 0 errors on Tue Nov 20 10:26:35 2012
config:

        NAME           STATE     READ WRITE CKSUM
        zsolar         ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0
            gpt/disk3  ONLINE       0     0     0

errors: No known data errors

# uname -a
FreeBSD opensolaris.technica-03.local 10.0-CURRENT FreeBSD 10.0-CURRENT #6 r243278M: Wed Nov 21 09:28:51 EET 2012 root@opensolaris.technica-03.local:/usr/obj/usr/src/sys/SMP64R amd64

Thanks!
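Not from the thread, but this class of fix is easy to re-verify on a test box. A sketch, with assumptions: the pool name and cache-file path are the ones shown above, and the machine is one you can afford to leave unbootable while testing:

```shell
# Move the cache file aside so the next boot has to rediscover the root
# pool purely by probing the vdev labels (the code path fixed above),
# instead of reading the cached config.
mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak
shutdown -r now

# After the reboot, the pool should come up ONLINE with every raidz2
# member present even though no cache file existed at boot:
zpool status -v zsolar
```

If the boot fails, booting from rescue media and restoring zpool.cache.bak undoes the experiment.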
From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 09:02:12 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id CC995C6D for ; Wed, 21 Nov 2012 09:02:12 +0000 (UTC) (envelope-from ndenev@gmail.com) Received: from mail-ee0-f54.google.com (mail-ee0-f54.google.com [74.125.83.54]) by mx1.freebsd.org (Postfix) with ESMTP id 1C10C8FC0C for ; Wed, 21 Nov 2012 09:02:11 +0000 (UTC) Received: by mail-ee0-f54.google.com with SMTP id c13so4858370eek.13 for ; Wed, 21 Nov 2012 01:02:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:content-type:content-transfer-encoding:date:subject:to :message-id:mime-version:x-mailer; bh=YDILIxHQqKN1pro1BH4brkMSznOVnLZnc5tDtmNKRbg=; b=l8VD9B0qYp4gDtS7VZEMLBp1k9R1IrzuYxVVwtes5APXJ7HFTsoQV8DJcpoDTrxp1x SavQo1TlpcPbTRGi09DELDC/frH2+VNIZFNDfvgxwxVvdOwMQgiwBhJ0trgvcvCijjAD PCyLdjFjkB9eKlBWRfZDAOteCMzv1D2JLwNconR4S20xJazYv59B2k2NPfxszgx0KJtR r96NpyRftfL+xFvnUD8PLJOPbBAx4/Jc/3PrTxIzWlXJp3Zi+DN/hDMI/OvadyyJpUC7 TaFiKa02HOOK/FP/5ba/QTaFJP+XZRVgDZgLZvFRk3RCvXyjWzr7Z7E6Rz6EQD3T3R4M xfVg== Received: by 10.14.203.132 with SMTP id f4mr43303107eeo.11.1353488531000; Wed, 21 Nov 2012 01:02:11 -0800 (PST) Received: from ndenevsa.sf.moneybookers.net (g1.moneybookers.com. 
[217.18.249.148]) by mx.google.com with ESMTPS id v47sm10190021eeo.9.2012.11.21.01.02.09 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 21 Nov 2012 01:02:10 -0800 (PST) From: Nikolay Denev Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Date: Wed, 21 Nov 2012 11:02:09 +0200 Subject: nfsd hang in sosend_generic To: "freebsd-fs@freebsd.org" Message-Id: Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) X-Mailer: Apple Mail (2.1499) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Nov 2012 09:02:12 -0000 Hello,

First of all, I'm not sure whether this is actually an nfsd issue rather than a network stack issue.

I've just had nfsd hang in an unkillable state while doing some I/O from a Linux host running an Oracle DB using Oracle's Direct NFS.

I had been watching for some time how the Direct NFS client loads the NFS server differently: with the Linux kernel NFS client I see a single TCP session to port 2049 and all traffic goes there, while the Direct NFS client is much more aggressive and creates multiple TCP sessions, and was often able to generate pretty big Send/Recv-Q's on FreeBSD's side. I'm mentioning this as it is probably related.
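The per-connection queue buildup is easy to spot-check. A sketch of summarizing netstat-style output for the NFS port; the column layout assumed below (Proto Recv-Q Send-Q Local-Address Foreign-Address State) and the sample addresses are illustrative, and on a live server you would feed the awk program `netstat -an` instead of the inline sample:

```shell
# Count NFS (port 2049) TCP sessions and total their Recv-Q/Send-Q bytes.
# Field 4 is the local address; FreeBSD netstat separates the port with a dot.
summary=$(awk '$4 ~ /\.2049$/ { n++; rq += $2; sq += $3 }
               END { printf "sessions=%d recvq=%d sendq=%d", n, rq, sq }' <<'EOF'
tcp4       0  65536  10.0.0.1.2049     10.0.0.2.45210    ESTABLISHED
tcp4   32768      0  10.0.0.1.2049     10.0.0.2.45211    ESTABLISHED
tcp4       0      0  10.0.0.1.22       10.0.0.3.51000    ESTABLISHED
EOF
)
echo "$summary"   # sessions=2 recvq=32768 sendq=65536
```

Watching that summary over time shows whether the Direct NFS sessions keep draining or keep growing.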
Here's the procstat -kk of the hung nfsd process:

  PID    TID COMM  TDNAME         KSTACK
 1221 100550 nfsd  nfsd: master   mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_run+0x8f nfsrvd_nfsd+0x193 nfssvc_nfsd+0x9b sys_nfssvc+0x90 amd64_syscall+0x5ea Xfast_syscall+0xf7
 1221 101286 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sleep+0x2ad sosend_generic+0x25f svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101287 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101288 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101317 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101318 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101319 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101320 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101321 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101322 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101323 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101324 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101325 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101326 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101327 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101328 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101329 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101330 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101331 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101332 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101333 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101334 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101335 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101336 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101337 nfsd  nfsd: service  mi_switch+0x194 sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa svc_sendreply_mbuf+0x59 nfssvc_program+0x219
svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101338 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101339 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101340 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101341 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101342 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101343 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101344 
nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101345 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101346 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101347 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101348 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101349 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101350 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc 
_sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101351 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101352 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101353 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101354 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101355 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101356 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = 
svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101357 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101358 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101359 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101360 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101361 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101362 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb 
fork_exit+0x11f fork_trampoline+0xe=20 1221 101363 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101364 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101365 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101366 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101367 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101368 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101369 nfsd nfsd: service mi_switch+0x194 = 
sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101370 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101371 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101372 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101373 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101374 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101375 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 
svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101376 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101377 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101378 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101379 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101380 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101381 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 
svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101382 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101383 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101384 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101385 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101386 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101387 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101388 
nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101389 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101390 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101391 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101392 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101393 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101394 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc 
_sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101395 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101396 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101397 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101398 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101399 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101400 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = 
svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101401 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101402 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101403 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101404 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101405 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101406 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb 
fork_exit+0x11f fork_trampoline+0xe=20 1221 101407 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101408 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101409 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101410 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101411 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101412 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101413 nfsd nfsd: service mi_switch+0x194 = 
sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101414 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101415 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101416 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101417 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101418 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101419 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 
svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101420 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101421 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101422 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101423 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101424 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 = svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe=20 1221 101425 nfsd nfsd: service mi_switch+0x194 = sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 = sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa = svc_sendreply_mbuf+0x59 nfssvc_program+0x219 
svc_run_internal+0x684 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
 1221 101426 nfsd             nfsd: service    mi_switch+0x194
sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299
sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa
svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684
svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
[threads 101427-101439 show the identical stack]

Here is the netstat output for the NFS sessions from the FreeBSD server side:

Proto Recv-Q   Send-Q Local Address      Foreign Address    (state)
tcp4       0 37215456 10.101.0.1.2049    10.101.0.2.42856   ESTABLISHED
tcp4       0 14561020 10.101.0.1.2049    10.101.0.2.62854   FIN_WAIT_1
tcp4       0  3068132 10.100.0.1.2049    10.100.0.2.9712    FIN_WAIT_1

The Linux host sees this:

tcp        1      0 10.101.0.2:9270    10.101.0.1:2049    CLOSE_WAIT
tcp   477940      0 10.100.0.2:9712    10.100.0.1:2049    ESTABLISHED
tcp        1      0 10.101.0.2:10588   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:12254   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:12438   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:17583   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:20285   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:20678   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:22892   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:28850   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:33851   10.100.0.1:2049    CLOSE_WAIT
tcp      165      0 10.100.0.2:34190   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:35643   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:39498   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:39724   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:40742   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:41674   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:42942   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:42956   10.100.0.1:2049    CLOSE_WAIT
tcp   477976      0 10.101.0.2:42856   10.101.0.1:2049    ESTABLISHED
tcp        1      0 10.100.0.2:42045   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:42048   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:43063   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:44771   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:49568   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:50813   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:51418   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:54507   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:57201   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:58553   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:59638   10.101.0.1:2049    CLOSE_WAIT
tcp        1      0 10.100.0.2:62289   10.100.0.1:2049    CLOSE_WAIT
tcp        1      0 10.101.0.2:61848   10.101.0.1:2049    CLOSE_WAIT
tcp   476952      0 10.101.0.2:62854   10.101.0.1:2049    ESTABLISHED

Then I used "tcpdrop" on the FreeBSD side to drop the sessions, after which nfsd was able to die and be restarted.

During the hung period, all NFS mounts from the Linux host were inaccessible and I/O hung.

The nfsd is running with the drc2/drc3 and lkshared patches from Rick Macklem.
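The recovery step above (netstat to spot the stuck sessions, then tcpdrop(8) to cut them) can be sketched as a dry-run filter. This is only an illustration: the embedded netstat sample, the 1 MB Send-Q threshold, and the helper names are assumptions, and the printed tcpdrop commands should be reviewed before being run as root on a real server.

```shell
# Dry-run sketch: print tcpdrop(8) commands for port-2049 TCP sessions
# whose Send-Q exceeds a threshold. Names and threshold are illustrative.
netstat_sample() {   # stand-in for real `netstat -an -p tcp` output
cat <<'EOF'
tcp4       0 37215456 10.101.0.1.2049    10.101.0.2.42856   ESTABLISHED
tcp4       0 14561020 10.101.0.1.2049    10.101.0.2.62854   FIN_WAIT_1
tcp4       0      128 10.101.0.1.22      10.101.0.9.55000   ESTABLISHED
EOF
}

drop_cmds() {        # read netstat lines, emit one tcpdrop command each
  awk '$3 > 1000000 && $4 ~ /\.2049$/ {
      split($4, l, "."); split($5, f, ".")
      print "tcpdrop", l[1]"."l[2]"."l[3]"."l[4], l[5], f[1]"."f[2]"."f[3]"."f[4], f[5]
  }'
}

netstat_sample | drop_cmds   # pipe into `sh` only after reviewing
# -> tcpdrop 10.101.0.1 2049 10.101.0.2 42856
```

The SSH session (port 22, small Send-Q) is deliberately left untouched by the filter.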
From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 14:01:42 2012
Date: Wed, 21 Nov 2012 09:01:33 -0500 (EST)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Nikolay Denev
Cc: "freebsd-fs@freebsd.org"
Subject: Re: nfsd hang in sosend_generic
Message-ID: <1183657468.630412.1353506493075.JavaMail.root@erie.cs.uoguelph.ca>

Nikolay Denev wrote:
> Hello,
>
> First of all, I'm not sure if this is actually an nfsd issue and not a
> network stack issue.
>
> I've just had nfsd hang in an unkillable state while doing some IO from
> a Linux host running an Oracle DB using Oracle's Direct NFS.
>
> I had been watching for some time how the Direct NFS client loads the
> NFS server differently: with the Linux kernel NFS client I see a single
> TCP session to port 2049 and all traffic goes there, while the Direct
> NFS client is much more aggressive and creates multiple TCP sessions,
> and was often able to build up pretty big Send/Recv-Qs on FreeBSD's
> side. I'm mentioning this as it is probably related.
>
I don't know anything about the Oracle client, but it might be creating
new TCP connections to try to recover from a "hung" state. Your netstat
for the client below shows several ESTABLISHED TCP connections with
large receive queues. I wouldn't expect to see this, and it suggests
that the Oracle client isn't receiving/reading data off the TCP socket
for some reason. Once it stops reading an RPC reply off the TCP socket,
it might create a new connection to attempt a retry of the RPC. (NFSv4
requires that any retry of an RPC be done on a new TCP connection.
Although that requirement doesn't exist for NFSv3, it would probably be
considered "good practice" and will happen if NFSv3 and NFSv4 share the
same RPC socket handling code.)
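The blocked nfsd threads in the traces below are all sleeping in sosend_generic, which is what happens when the peer stops reading and the socket send buffer fills up. A minimal Python sketch of that mechanism (not FreeBSD-specific; a local socketpair stands in for the NFS TCP connection, and the non-blocking mode just makes the "would block" point observable instead of hanging):

```python
import socket

# A connected stream pair: "server" writes replies, "client" never reads,
# mimicking a stalled client that stops draining its receive queue.
server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16384)
server.setblocking(False)  # observe "would block" instead of sleeping

sent = 0
try:
    while True:
        sent += server.send(b"x" * 4096)
except BlockingIOError:
    # With a blocking socket (as in the kernel's sosend path), this is
    # where the sender sleeps until the peer reads, or until the
    # connection is torn down -- e.g. by tcpdrop(8) on the server.
    pass

print(f"send() would block after {sent} bytes of unread data")
server.close()
client.close()
```

This is why every nfsd service thread can end up stuck behind one dead connection: each is trying to finish a reply send that can never complete until the socket is drained or dropped.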
> Here's the procstat -kk of the hung nfsd process:
>
>   PID    TID COMM             TDNAME           KSTACK
>  1221 100550 nfsd             nfsd: master     mi_switch+0x194
> sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299
> sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa
> svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684
> svc_run+0x8f nfsrvd_nfsd+0x193 nfssvc_nfsd+0x9b sys_nfssvc+0x90
> amd64_syscall+0x5ea Xfast_syscall+0xf7
>  1221 101286 nfsd             nfsd: service    mi_switch+0x194
> sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sleep+0x2ad
> sosend_generic+0x25f svc_vc_reply+0x16f svc_sendreply_common+0xaa
> svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684
> svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
>  1221 101287 nfsd             nfsd: service    mi_switch+0x194
> sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299
> sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa
> svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684
> svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
> [threads 101288 and 101317-101399 show the identical stack]
>  1221 101400 nfsd             nfsd: service    mi_switch+0x194
> sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299
> sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa
> svc_sendreply_mbuf+0x59 nfssvc_program+0x219
svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101401 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101402 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101403 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101404 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101405 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101406 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101407 nfsd 
nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101408 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101409 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101410 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101411 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101412 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101413 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > 
sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101414 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101415 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101416 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101417 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101418 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101419 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 
nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101420 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101421 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101422 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101423 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101424 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101425 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 
> 1221 101426 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101427 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101428 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101429 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101430 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101431 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101432 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc 
_sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101433 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101434 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101435 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101436 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101437 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101438 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > 
svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > 1221 101439 nfsd nfsd: service mi_switch+0x194 > sleepq_catch_signals+0x343 sleepq_wait_sig+0xc _sx_xlock_hard+0x299 > sosend_generic+0x107 svc_vc_reply+0x16f svc_sendreply_common+0xaa > svc_sendreply_mbuf+0x59 nfssvc_program+0x219 svc_run_internal+0x684 > svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe > It appears that all the nfsd threads are trying to send RPC replies back to the client and are stuck there. As you can see below, the send queues for the TCP sockets are big, so the data isn't getting through to the client. The large receive queue in the ESTABLISHED connections on the Linux client suggests that Oracle isn't taking data off the TCP socket for some reason, which would result in this, once the send window is filled. At least that's my rusty old understanding of TCP. (That would hint at an Oracle client bug, but I don't know anything about the Oracle client.) Why? Well, I can't even guess, but a few things you might try are: - disabling TSO and rx/tx checksum offload on the FreeBSD server's network interface(s). - try a different type of network card, if you have one handy. I doubt these will make a difference, since the large receive queues for the ESTABLISHED TCP connections in the Linux client suggests that the data is getting through. Still might be worth a try, since there might be one packet that isn't getting through and that is causing issues for the Oracle client. - if you can do it, try switching the Oracle client mounts to UDP. (For UDP, you want to start with a rsize, wsize no bigger than 16384 and then be prepared to make it smaller if the "fragments dropped due to timeout" becomes non-zero for UDP when you do a "netstat -s".) - There might be a NFS over TCP bug in the Oracle client. - when it is stuck again, do a "vmstat -z" and "vmstat -m" to see if there is a large "InUse" for anything. 
- in particular, check mbuf clusters

Also, you could try capturing packets when it happens and look at them
in Wireshark to see if/what related traffic is going on the wire. Focus
on the TCP layer as well as NFS.

> Here is a netstat output for the nfs sessions from the FreeBSD server
> side:
>
> Proto Recv-Q Send-Q Local Address Foreign Address (state)
> tcp4 0 37215456 10.101.0.1.2049 10.101.0.2.42856 ESTABLISHED
> tcp4 0 14561020 10.101.0.1.2049 10.101.0.2.62854 FIN_WAIT_1
> tcp4 0 3068132 10.100.0.1.2049 10.100.0.2.9712 FIN_WAIT_1
>
> Linux host sees this:
>
> tcp 1 0 10.101.0.2:9270 10.101.0.1:2049 CLOSE_WAIT
> tcp 477940 0 10.100.0.2:9712 10.100.0.1:2049 ESTABLISHED

** These hint that the Oracle client isn't reading the socket for some
reason. I'd guess that the send window is now full, so the data is
backing up in the send queue in the server.

> tcp 1 0 10.101.0.2:10588 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:12254 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:12438 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:17583 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:20285 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:20678 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:22892 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:28850 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:33851 10.100.0.1:2049 CLOSE_WAIT
> tcp 165 0 10.100.0.2:34190 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:35643 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:39498 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:39724 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:40742 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:41674 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:42942 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:42956 10.100.0.1:2049 CLOSE_WAIT
> tcp 477976 0 10.101.0.2:42856 10.101.0.1:2049 ESTABLISHED
> tcp 1 0 10.100.0.2:42045 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:42048 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:43063 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:44771 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:49568 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:50813 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:51418 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:54507 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:57201 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:58553 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:59638 10.101.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.100.0.2:62289 10.100.0.1:2049 CLOSE_WAIT
> tcp 1 0 10.101.0.2:61848 10.101.0.1:2049 CLOSE_WAIT
> tcp 476952 0 10.101.0.2:62854 10.101.0.1:2049 ESTABLISHED
>
> Then I used "tcpdrop" on FreeBSD's side to drop the sessions, and nfsd
> was able to die and be restarted.
> During the "hung" period, all NFS mounts from the Linux host were
> inaccessible, and IO hung.
>
> The nfsd is running with the drc2/drc3 and lkshared patches from Rick
> Macklem.

These shouldn't have any effect on the above, unless you've exhausted
your mbuf clusters. Once you are out of mbuf clusters, I'm not sure
what might happen within the lower layers (TCP -> network interface).
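The send-queue buildup described above can be reproduced in miniature: once a receiver stops reading an established TCP connection, data piles up first in its receive buffer and then, when the advertised window closes, in the sender's send buffer. The following sketch is not from the original exchange; it is a minimal loopback illustration in Python, with plain sockets standing in for the nfsd/Oracle-client pair:

```python
# Minimal loopback sketch: a receiver that never reads, like the stuck
# Oracle client; the sender keeps writing until the peer's receive window
# and its own send buffer are full, analogous to the nfsd threads
# blocking in sosend_generic().
import socket

lst = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lst.bind(("127.0.0.1", 0))
lst.listen(1)

sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(lst.getsockname())
receiver, _ = lst.accept()      # accepted but never read: its Recv-Q grows

sender.setblocking(False)       # a full send buffer raises instead of hanging
queued = 0
try:
    while True:
        queued += sender.send(b"x" * 65536)
except BlockingIOError:
    pass                        # receive window + send buffer exhausted

print("bytes queued before the sender would block:", queued)
for s in (sender, receiver, lst):
    s.close()
```

A blocking sender (as the kernel RPC code effectively is here) would simply sleep at this point, which matches the `sosend_generic` frames in the procstat output; dropping the connection with tcpdrop(8) is what finally wakes such a sender up.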
Good luck with it, rick > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 15:27:37 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 2139750C for ; Wed, 21 Nov 2012 15:27:37 +0000 (UTC) (envelope-from ndenev@gmail.com) Received: from mail-ee0-f54.google.com (mail-ee0-f54.google.com [74.125.83.54]) by mx1.freebsd.org (Postfix) with ESMTP id 96BB88FC08 for ; Wed, 21 Nov 2012 15:27:36 +0000 (UTC) Received: by mail-ee0-f54.google.com with SMTP id c13so5128967eek.13 for ; Wed, 21 Nov 2012 07:27:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; bh=pkryNHqlwB7YBlYEN1bV0Vw2d2JUoki+AGJFhp9HiLo=; b=lc7a5PLKJm56AQuopboE4eYOe9XVWQ8L70LQCgka6dDII6VMIVi0iEPoQPz2JcuwfS QfPa7lBo62Pc2/ak1D0ciyRn1nzgtYLx+yiizzuib/aomTnQn0PwlKMAlqu7wmQSqSCD jp7SQPkaQXMniHXeNcbyoX3Nmd7FxceqniYfXXRUj50ogPwk6CFsmkUgyDfQL8nxGvW7 plHozuZXvzf/Lmm4kuEIb0uoemu0BIH6mNXzLbm+L5jGAyAtn8k5mv0FpnIKyyKkXbAR 4Jc3v0W3wdiPh1EjigABi/Iml5TAgy9dnnxOrPyKLzmxUte5H0TMWboqNXvz7oawhhC/ 9nLA== Received: by 10.14.209.201 with SMTP id s49mr46858309eeo.7.1353511655114; Wed, 21 Nov 2012 07:27:35 -0800 (PST) Received: from ndenevsa.sf.moneybookers.net (g1.moneybookers.com. 
[217.18.249.148]) by mx.google.com with ESMTPS id b44sm781529eep.12.2012.11.21.07.27.33 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 21 Nov 2012 07:27:34 -0800 (PST) Subject: Re: nfsd hang in sosend_generic Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) Content-Type: text/plain; charset=windows-1252 From: Nikolay Denev In-Reply-To: <1183657468.630412.1353506493075.JavaMail.root@erie.cs.uoguelph.ca> Date: Wed, 21 Nov 2012 17:27:32 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <8C72CE97-6D19-4847-9A89-DF8A05B984DD@gmail.com> References: <1183657468.630412.1353506493075.JavaMail.root@erie.cs.uoguelph.ca> To: Rick Macklem X-Mailer: Apple Mail (2.1499) Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Nov 2012 15:27:37 -0000

On Nov 21, 2012, at 4:01 PM, Rick Macklem wrote:

> Nikolay Denev wrote:
>> Hello,
>>
>> First of all, I'm not sure if this is actually an nfsd issue and not a
>> network stack issue.
>>
>> I've just had nfsd hang in an unkillable state while doing some IO from
>> a Linux host running an Oracle DB using Oracle's Direct NFS.
>>
>> I had been watching for some time how the Direct NFS client loads the
>> NFS server differently, i.e.: with the Linux kernel NFS client I see a
>> single TCP session to port 2049 and all traffic goes there, while the
>> Direct NFS client is much more aggressive and creates multiple TCP
>> sessions, and often was able to generate pretty big Send/Recv-Qs on
>> FreeBSD's side. I'm mentioning this as it is probably related.
>>
> I don't know anything about the Oracle client, but it might be creating
> new TCP connections to try to recover from a "hung" state. Your netstat
> for the client below shows that there are several ESTABLISHED TCP
> connections with large receive queues.
> I wouldn't expect to see this, and it suggests
> that the Oracle client isn't receiving/reading data off the TCP socket
> for some reason. Once it isn't receiving/reading an RPC reply off the
> TCP socket, it might create a new one to attempt a retry of the RPC.
> (NFSv4 requires that any retry of an RPC be done on a new TCP
> connection. Although that requirement doesn't exist for NFSv3, it would
> probably be considered "good practice" and will happen if NFSv3 and
> NFSv4 share the same RPC socket handling code.)
>
>> Here's the procstat -kk of the hanged nfsd process:
>>
>> [... snipped huge procstat output ...]
>>
> It appears that all the nfsd threads are trying to send RPC replies
> back to the client and are stuck there. As you can see below, the
> send queues for the TCP sockets are big, so the data isn't getting
> through to the client. The large receive queue in the ESTABLISHED
> connections on the Linux client suggests that Oracle isn't taking
> data off the TCP socket for some reason, which would result in this
> once the send window is filled. At least that's my rusty old
> understanding of TCP. (That would hint at an Oracle client bug,
> but I don't know anything about the Oracle client.)
>
> Why? Well, I can't even guess, but a few things you might try are:
> - disabling TSO and rx/tx checksum offload on the FreeBSD server's
>   network interface(s).
> - try a different type of network card, if you have one handy.
>   I doubt these will make a difference, since the large receive queues
>   for the ESTABLISHED TCP connections on the Linux client suggest that
>   the data is getting through. Still, it might be worth a try, since
>   there might be one packet that isn't getting through and that is
>   causing issues for the Oracle client.
> - if you can do it, try switching the Oracle client mounts to UDP.
>   (For UDP, you want to start with rsize and wsize no bigger than
>   16384 and then be prepared to make them smaller if the
>   "fragments dropped due to timeout" counter becomes non-zero for UDP
>   when you do a "netstat -s".)
> - There might be an NFS over TCP bug in the Oracle client.
> - when it is stuck again, do a "vmstat -z" and "vmstat -m" to
>   see if there is a large "InUse" for anything.
> - in particular, check mbuf clusters
>
> Also, you could try capturing packets when it happens and look at them
> in Wireshark to see if/what related traffic is going on the wire.
> Focus on the TCP layer as well as NFS.
>

Looking at it again, it really looks like a bug in the Oracle client,
so for now we've decided to disable the Direct NFS client and switch
back to the standard Linux kernel NFS client.

Unfortunately, testing with UDP won't be possible, as I think Oracle's
NFS client only supports TCP.

What is curious is why the kernel NFS mount from the Linux host was also
stuck because of the misbehaving userspace client. I should have tested
mounting from another host to see if the NFS server would respond, as
this seems like a DoS attack against the NFS server :)

Anyway, I've started collecting and graphing the output of netstat -m
and vmstat -z in case something like this happens again.

>> Here is a netstat output for the nfs sessions from the FreeBSD server
>> side:
>>
>> Proto Recv-Q Send-Q Local Address Foreign Address (state)
>> tcp4 0 37215456 10.101.0.1.2049 10.101.0.2.42856 ESTABLISHED
>> tcp4 0 14561020 10.101.0.1.2049 10.101.0.2.62854 FIN_WAIT_1
>> tcp4 0 3068132 10.100.0.1.2049 10.100.0.2.9712 FIN_WAIT_1
>>
>> Linux host sees this:
>>
>> tcp 1 0 10.101.0.2:9270 10.101.0.1:2049 CLOSE_WAIT
>> tcp 477940 0 10.100.0.2:9712 10.100.0.1:2049 ESTABLISHED
> ** These hint that the Oracle client isn't reading the socket
> for some reason. I'd guess that the send window is now full,
> so the data is backing up in the send queue in the server.
>> tcp 1 0 10.101.0.2:10588 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:12254 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:12438 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:17583 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:20285 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:20678 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:22892 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:28850 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:33851 10.100.0.1:2049 CLOSE_WAIT
>> tcp 165 0 10.100.0.2:34190 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:35643 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:39498 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:39724 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:40742 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:41674 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:42942 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:42956 10.100.0.1:2049 CLOSE_WAIT
>> tcp 477976 0 10.101.0.2:42856 10.101.0.1:2049 ESTABLISHED
>> tcp 1 0 10.100.0.2:42045 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:42048 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:43063 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:44771 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:49568 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:50813 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:51418 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:54507 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:57201 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:58553 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:59638 10.101.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.100.0.2:62289 10.100.0.1:2049 CLOSE_WAIT
>> tcp 1 0 10.101.0.2:61848 10.101.0.1:2049 CLOSE_WAIT
>> tcp 476952 0 10.101.0.2:62854 10.101.0.1:2049 ESTABLISHED
>>
>> Then I used "tcpdrop" on FreeBSD's side to drop the sessions, the nfsd
>> was able to die and be restarted.
>> During the "hanged" period, all NFS mounts from the Linux host were
>> inaccessible, and IO hanged.
>>
>> The nfsd is running with drc2/drc3 and lkshared patches from Rick
>> Macklem.
>>
> These shouldn't have any effect on the above, unless you've exhausted
> your mbuf clusters. Once you are out of mbuf clusters, I'm not sure
> what might happen within the lower layers TCP->network interface.
>
> Good luck with it, rick
>

Thank you for the response!

Cheers,
Nikolay

>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 16:49:24 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 28348CD1; Wed, 21 Nov 2012 16:49:24 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 3EA7F8FC0C; Wed, 21 Nov 2012 16:49:22 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA13299; Wed, 21 Nov 2012 18:49:20 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AD0610.1060101@FreeBSD.org> Date: Wed, 21 Nov 2012 18:49:20 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Andrei Lavreniyuk Subject: Re: problem booting to multi-vdev root pool References: <50AB5202.4070906@FreeBSD.org> <50AB71A7.7050101@FreeBSD.org> <50AB80C6.1090507@FreeBSD.org> <50AB8C1F.2040108@FreeBSD.org> <50AB90E9.5070102@FreeBSD.org> <50AB9C5B.6030006@FreeBSD.org> <50ABA91D.9090905@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org, freebsd-current@FreeBSD.org X-BeenThere:
freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Nov 2012 16:49:24 -0000

on 21/11/2012 09:51 Andrei Lavreniyuk said the following:
> Problem solved. The raidz pool mounts without zpool.cache.
>
> # zpool status -v
>   pool: zsolar
>  state: ONLINE
>   scan: resilvered 2,56M in 0h0m with 0 errors on Tue Nov 20 10:26:35 2012
> config:
>
>         NAME           STATE     READ WRITE CKSUM
>         zsolar         ONLINE       0     0     0
>           raidz2-0     ONLINE       0     0     0
>             gpt/disk0  ONLINE       0     0     0
>             gpt/disk2  ONLINE       0     0     0
>             gpt/disk3  ONLINE       0     0     0
>
> errors: No known data errors
>
> # uname -a
> FreeBSD opensolaris.technica-03.local 10.0-CURRENT FreeBSD
> 10.0-CURRENT #6 r243278M: Wed Nov 21 09:28:51 EET 2012
> root@opensolaris.technica-03.local:/usr/obj/usr/src/sys/SMP64R amd64
>
> Thanks!

Thank you for testing!

--
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 16:49:49 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 1452EE5A for ; Wed, 21 Nov 2012 16:49:49 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 5B8E88FC16 for ; Wed, 21 Nov 2012 16:49:48 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA13304; Wed, 21 Nov 2012 18:49:46 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <50AD0629.1080708@FreeBSD.org> Date: Wed, 21 Nov 2012 18:49:45 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:16.0) Gecko/20121029 Thunderbird/16.0.2 MIME-Version: 1.0 To: Peter Wullinger Subject: Re: ???BOGOSPAM??? Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable.
References: <201211182220.qAIMK1oY061509@freefall.freebsd.org> <50AA4437.1070509@FreeBSD.org> <20121121072546.GA2992@kaliope.home> In-Reply-To: <20121121072546.GA2992@kaliope.home> X-Enigmail-Version: 1.4.5 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Nov 2012 16:49:49 -0000

on 21/11/2012 09:25 Peter Wullinger said the following:
> % uname -v
> FreeBSD 9.1-PRERELEASE #5 r243290: Mon Nov 19 22:33:42 CET 2012 src@...:/usr/obj/usr/src/sys/ML350
> # gpart bootcode -b /boot/pmbr -i 1 -p /boot/gptzfsboot da0
> da0 has bootcode
> # reboot
>
> Problem seems to be fixed by r243217.
>
> System has now come up without hassle across three reboots, which is
> more stable than usual. The second machine has just completed its
> reboot without any problems, too.
>
> I'll recheck whether the issue persists, probably from a cold boot,
> once I get the opportunity (probably at the next update to -STABLE).

Thank you for testing!
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Wed Nov 21 16:51:55 2012 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 43317EFA; Wed, 21 Nov 2012 16:51:55 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 0FB088FC13; Wed, 21 Nov 2012 16:51:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qALGpsUi023617; Wed, 21 Nov 2012 16:51:54 GMT (envelope-from avg@freefall.freebsd.org) Received: (from avg@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qALGpsxq023613; Wed, 21 Nov 2012 16:51:54 GMT (envelope-from avg) Date: Wed, 21 Nov 2012 16:51:54 GMT Message-Id: <201211211651.qALGpsxq023613@freefall.freebsd.org> To: avg@FreeBSD.org, freebsd-fs@FreeBSD.org, avg@FreeBSD.org From: avg@FreeBSD.org Subject: Re: kern/153520: [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Nov 2012 16:51:55 -0000 Synopsis: [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable Responsible-Changed-From-To: freebsd-fs->avg Responsible-Changed-By: avg Responsible-Changed-When: Wed Nov 21 16:51:30 UTC 2012 Responsible-Changed-Why: Watch. This problem is potentially fixed. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=153520 From owner-freebsd-fs@FreeBSD.ORG Fri Nov 23 11:30:01 2012 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 5595EFC2 for ; Fri, 23 Nov 2012 11:30:01 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 3BDA28FC0C for ; Fri, 23 Nov 2012 11:30:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qANBU1Qw060776 for ; Fri, 23 Nov 2012 11:30:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qANBU1e5060775; Fri, 23 Nov 2012 11:30:01 GMT (envelope-from gnats) Date: Fri, 23 Nov 2012 11:30:01 GMT Message-Id: <201211231130.qANBU1e5060775@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Andriy Gapon Subject: Re: kern/167066: [zfs] ZVOLs not appearing in /dev/zvol X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Andriy Gapon List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 Nov 2012 11:30:01 -0000 The following reply was made to PR kern/167066; it has been noted by GNATS. 
From: Andriy Gapon To: bug-followup@FreeBSD.org, rimbalza@gmail.com, Andreas Nilsson Cc: Subject: Re: kern/167066: [zfs] ZVOLs not appearing in /dev/zvol Date: Fri, 23 Nov 2012 13:22:26 +0200

Please try the following patch:

--- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c
+++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c
@@ -3824,6 +3824,11 @@ zfs_ioc_recv(zfs_cmd_t *zc)
                 error = 1;
         }
 #endif
+
+#ifdef __FreeBSD__
+        if (error == 0)
+                zvol_create_minors(tofs);
+#endif
         /*
          * On error, restore the original props.
          */

--
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Fri Nov 23 13:10:01 2012 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 521521D5 for ; Fri, 23 Nov 2012 13:10:01 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 361418FC0C for ; Fri, 23 Nov 2012 13:10:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qANDA1DH064201 for ; Fri, 23 Nov 2012 13:10:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qANDA1EE064197; Fri, 23 Nov 2012 13:10:01 GMT (envelope-from gnats) Date: Fri, 23 Nov 2012 13:10:01 GMT Message-Id: <201211231310.qANDA1EE064197@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Andreas Nilsson Subject: Re: kern/167066: [zfs] ZVOLs not appearing in /dev/zvol X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Andreas Nilsson List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 Nov 2012 13:10:01 -0000 The following reply was made to PR kern/167066; it has been noted by GNATS.
From: Andreas Nilsson To: Andriy Gapon Cc: bug-followup@freebsd.org, rimbalza@gmail.com Subject: Re: kern/167066: [zfs] ZVOLs not appearing in /dev/zvol Date: Fri, 23 Nov 2012 14:04:16 +0100

On Fri, Nov 23, 2012 at 12:22 PM, Andriy Gapon wrote:
>
> Please try the following patch:
>
> --- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c
> +++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c
> @@ -3824,6 +3824,11 @@ zfs_ioc_recv(zfs_cmd_t *zc)
>                  error = 1;
>          }
>  #endif
> +
> +#ifdef __FreeBSD__
> +        if (error == 0)
> +                zvol_create_minors(tofs);
> +#endif
>          /*
>           * On error, restore the original props.
>           */
>
> --
> Andriy Gapon

Thanks :)

The patch did not apply to a fresh checkout of 9-stable, so I added the
code by hand. Svn thinks this of my edit:

Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c  (revision 243443)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c  (working copy)
@@ -3803,6 +3803,11 @@
                 error = 1;
         }
 #endif
+
+#ifdef __FreeBSD__
+        if (error == 0)
+                zvol_create_minors(tofs);
+#endif
         /*
          * On error, restore the original props.
          */

Before receive:
$ ls /dev/zvol/data/
usb      usb@2    usb@2s1  usbs1

After receive:
$ ls /dev/zvol/data/
master     master@1p1  master@1p3  masterp2  usb      usb@2s1
master@1   master@1p2  masterp1    masterp3  usb@2    usbs1

which is what I expected. Great work :)

Best regards
Andreas
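Andreas's hunk failing to apply (Andriy's header says -3824 while the
9-stable tree has the code at 3803) does not usually require hand-editing:
patch(1) matches hunks on their context lines and applies them at an offset
when the stated line numbers have drifted. A self-contained sketch with
throwaway files (hypothetical names and content, not the FreeBSD sources):

```shell
# Demonstrate that patch(1) applies a hunk at an offset when the diff's
# line numbers no longer match the target file, as long as the context
# lines themselves still match.
tmp=$(mktemp -d) && cd "$tmp"
printf 'alpha\nbeta\ngamma\n' > file.c
cat > fix.diff <<'EOF'
--- a/file.c
+++ b/file.c
@@ -100,3 +100,4 @@
 alpha
 beta
+inserted_line
 gamma
EOF
# The hunk claims line 100, but the context only matches at line 1;
# patch typically reports "Hunk #1 succeeded at 1 (offset -99 lines)."
patch -p1 < fix.diff
grep -n inserted_line file.c   # the hunk landed despite the drift
```

A hunk is rejected (written to `file.c.rej`) only when the context lines
themselves have changed beyond patch's fuzz factor, which is when manual
editing, as Andreas did, becomes necessary.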
From owner-freebsd-fs@FreeBSD.ORG Sat Nov 24 13:13:58 2012 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id CE270743; Sat, 24 Nov 2012 13:13:58 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 9B7B88FC08; Sat, 24 Nov 2012 13:13:58 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id qAODDw3T048470; Sat, 24 Nov 2012 13:13:58 GMT (envelope-from avg@freefall.freebsd.org) Received: (from avg@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id qAODDwld048466; Sat, 24 Nov 2012 13:13:58 GMT (envelope-from avg) Date: Sat, 24 Nov 2012 13:13:58 GMT Message-Id: <201211241313.qAODDwld048466@freefall.freebsd.org> To: avg@FreeBSD.org, freebsd-fs@FreeBSD.org, avg@FreeBSD.org From: avg@FreeBSD.org Subject: Re: kern/167066: [zfs] ZVOLs not appearing in /dev/zvol X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 24 Nov 2012 13:13:58 -0000 Synopsis: [zfs] ZVOLs not appearing in /dev/zvol Responsible-Changed-From-To: freebsd-fs->avg Responsible-Changed-By: avg Responsible-Changed-When: Sat Nov 24 13:13:49 UTC 2012 Responsible-Changed-Why: I am handling this. http://www.freebsd.org/cgi/query-pr.cgi?pr=167066