From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 01:02:04 2013
From: J David <jdavidlists@gmail.com>
Date: Sun, 8 Sep 2013 21:02:04 -0400
To: "freebsd-fs@freebsd.org" , freebsd-stable
Subject: zfs_enable vs zfs_load in loader.conf (but neither works)

After setting up a new machine to boot from a ZFS root using the 9.1 install, it worked fine, but when the kernel & world were updated to releng/9.2, it stopped booting. The pool is called "data" and the root partition is "data/root."

Under 9.1 it had in loader.conf:

zfs_load="YES"
vfs.root.mountfrom="zfs:data/root"

Under 9.2-RC3, the same config results in a panic:

Trying to mount root from zfs:data/root []…
init: not found in path /sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init:/stand/sysinstall
panic: no init

If this is changed (as many Google hits recommend) to:

zfs_enable="YES"
vfs.root.mountfrom="zfs:data/root"

It seems like ZFS doesn't get loaded, so it fails instead with:

Trying to mount root from zfs:data/root []…
Mounting from zfs:data/root failed with error 2: unknown file system.

If the "?" mountroot> option is used, 50 devices are listed, none of which are ZFS. And the "unknown file system" response comes from vfs_byname returning NULL for zfs.

(If both zfs_enable and zfs_load are set to "YES" then it fails as in the zfs_load case.)

The system is using an up-to-date zpool (v5000 / feature flags), and all the updated bootblocks from the releng/9.2 build. zpool.cache is correct, and the zpool imports fine from the 9.2-RC3 live CD. The zpool's bootfs is set correctly, and the zfs mountpoint of data/root is / . And, of course, init is present and healthy in data/root. The system booted fine until updating to 9.2.

Which loader.conf entry is actually correct for ZFS roots on 9.2, and what (else) needs to happen to make this system bootable again?

Thanks for any advice!

From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 02:22:13 2013
From: Darren Pilgrim <list_freebsd@bluerosetech.com>
Date: Sun, 08 Sep 2013 19:22:01 -0700
Message-ID: <522D30C9.8000203@bluerosetech.com>
To: J David
Cc: "freebsd-fs@freebsd.org" , freebsd-stable
Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works)

On 9/8/2013 6:02 PM, J David wrote:
> Trying to mount root from zfs:data/root []…
> Mounting from zfs:data/root failed with error 2: unknown file system.

Did you build and install new boot blocks?
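(For context: on a GPT-partitioned ZFS-root system, refreshing the boot blocks after installworld is typically done with gpart. The disk name ada0 and partition index 1 below are placeholders for the actual boot disk and its freebsd-boot partition; adjust them to the layout in question:

  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

Repeat for every disk the machine can boot from.)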
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 02:52:18 2013
From: J David <jdavidlists@gmail.com>
Date: Sun, 8 Sep 2013 22:52:17 -0400
To: Darren Pilgrim
Cc: "freebsd-fs@freebsd.org" , freebsd-stable
Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works)

On Sun, Sep 8, 2013 at 10:22 PM, Darren Pilgrim wrote:
> Did you build and install new boot blocks?

Yes.

Oddly, setting:

zfs set mountpoint=legacy data/root (plus the appropriate fstab entry)

instead of

zfs set mountpoint=/ data/root

seems to produce a bootable system, although it absolutely should not be necessary to do things that way anymore.

Weird.
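(As an illustration of the legacy-mountpoint arrangement described above, using the dataset names from this thread; the exact fstab fields are a sketch, not a copy of the poster's file:

  zfs set mountpoint=legacy data/root

and in /etc/fstab on data/root:

  data/root   /   zfs   rw   0   0

With mountpoint=legacy, ZFS no longer mounts the dataset itself; the kernel mounts it as the root filesystem and the fstab entry covers later remounts.)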
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 03:11:57 2013
From: Darren Pilgrim <list_freebsd@bluerosetech.com>
Date: Sun, 08 Sep 2013 20:11:50 -0700
Message-ID: <522D3C76.1030705@bluerosetech.com>
To: J David
Cc: "freebsd-fs@freebsd.org" , freebsd-stable
Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works)

On 9/8/2013 7:52 PM, J David wrote:
> On Sun, Sep 8, 2013 at 10:22 PM, Darren Pilgrim wrote:
>> Did you build and install new boot blocks?
>
> Yes.
>
> Oddly, setting:
>
> zfs set mountpoint=legacy data/root (plus the appropriate fstab entry)

You can use zfs.root.mountfrom="zfs:data/root" in /boot/loader.conf instead of an fstab entry. Mountpoint=legacy is required either way.

> instead of
>
> zfs set mountpoint=/ data/root

This only applies to Solaris, IIRC.

> seems to produce a bootable system, although it absolutely should not
> be necessary to do things that way anymore.

I ran into that problem as well. The instructions for root-on-zfs for 9.x (at least as of 9.1) are wrong--you need to use the 8.x-style instructions with mountpoint=legacy for / and, for fresh installs, leaving the pool imported and copying over /boot/zfs/zpool.cache.
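(A rough sketch of the 8.x-style procedure being referred to, as it might be run from a live CD or fixit shell; the pool name data and the temporary cache path are illustrative and the details vary with the install method:

  zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache data
  zfs set mountpoint=legacy data/root
  zpool set bootfs=data/root data
  mount -t zfs data/root /mnt
  cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache

The point of the last step is that zpool.cache is copied while the pool is still imported, so the booted system finds a cache file describing its own pool.)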
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 06:17:06 2013
From: Matthew Seaman <m.seaman@infracaninophile.co.uk>
Date: Mon, 09 Sep 2013 07:16:59 +0100
Message-ID: <522D67DB.7060404@infracaninophile.co.uk>
To: freebsd-fs@freebsd.org
Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works)

On 09/09/2013 02:02, J David wrote:
> Under 9.1 it had in loader.conf:
>
> zfs_load="YES"
> vfs.root.mountfrom="zfs:data/root"
>
> Under 9.2-RC3, the same config results in a panic:
>
> Trying to mount root from zfs:data/root []…
> init: not found in path
> /sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init:/stand/sysinstall
> panic: no init
>
> If this is changed (as many Google hits recommend) to:
>
> zfs_enable="YES"
> vfs.root.mountfrom="zfs:data/root"
>
> It seems like ZFS doesn't get loaded, so it fails instead with:

zfs_load="YES" is correct in /boot/loader.conf -- it causes the zfs modules to be loaded into the kernel early in the boot process.

zfs_enable="YES" is correct in /etc/rc.conf -- it enables various ZFS related stuff run by the rc scripts.

You want both.

Cheers,

Matthew

--
Dr Matthew J Seaman MA, D.Phil.
PGP: http://www.infracaninophile.co.uk/pgpkey
JID: matthew@infracaninophile.co.uk

From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 09:19:49 2013
From: krad <kraduk@gmail.com>
Date: Mon, 9 Sep 2013 10:19:47 +0100
To: J David
Cc: "freebsd-fs@freebsd.org" , freebsd-stable
Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works)

Once you have it all working and understood, have a look at the sysutils/beadm port (/usr/ports/sysutils/beadm). It may make things a little easier to manage in the future. In my experience, boot environments (BEs) on ZFS rock.

On 9 September 2013 02:02, J David wrote:
> After setting up a new machine to boot from a ZFS root using the 9.1
> install, it worked fine, but when the kernel & world were updated to
> releng/9.2, it stopped booting. The pool is called "data" and the
> root partition is "data/root."
>
> Under 9.1 it had in loader.conf:
>
> zfs_load="YES"
> vfs.root.mountfrom="zfs:data/root"
>
> Under 9.2-RC3, the same config results in a panic:
>
> Trying to mount root from zfs:data/root []…
> init: not found in path
> /sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init:/stand/sysinstall
> panic: no init
>
> If this is changed (as many Google hits recommend) to:
>
> zfs_enable="YES"
> vfs.root.mountfrom="zfs:data/root"
>
> It seems like ZFS doesn't get loaded, so it fails instead with:
>
> Trying to mount root from zfs:data/root []…
> Mounting from zfs:data/root failed with error 2: unknown file system.
>
> If the "?" mountroot> option is used, 50 devices are listed, none of
> which are ZFS. And the "unknown file system" response comes from
> vfs_byname returning NULL for zfs.
>
> (If both zfs_enable and zfs_load are set to "YES" then it fails as in
> the zfs_load case.)
>
> The system is using an up-to-date zpool (v5000 / feature flags), and
> all the updated bootblocks from the releng/9.2 build. zpool.cache is
> correct, and the zpool imports fine from the 9.2-RC3 live CD. The zpool's
> bootfs is set correctly, and the zfs mountpoint of data/root is / . And,
> of course, init is present and healthy in data/root. The system booted
> fine until updating to 9.2.
>
> Which loader.conf entry is actually correct for ZFS roots on 9.2, and
> what (else) needs to happen to make this system bootable again?
>
> Thanks for any advice!
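(The beadm port krad mentions is not part of the base system; a minimal sketch of the boot-environment workflow it provides, assuming a root-on-ZFS layout that beadm recognises, with the BE name chosen purely as an example:

  beadm create pre-9.2-upgrade    # clone the current root as a new boot environment
  beadm activate pre-9.2-upgrade  # select it for the next boot
  beadm list                      # show all BEs and which is active/on-next-boot

If an upgrade misbehaves, activating the previous BE and rebooting rolls the root filesystem back to its earlier state.)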
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 11:06:46 2013
From: FreeBSD bugmaster <owner-bugmaster@FreeBSD.org>
Date: Mon, 9 Sep 2013 11:06:46 GMT
Message-Id: <201309091106.r89B6k4M004654@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org
X-List-Received-Date: Mon, 09 Sep 2013 11:06:46 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS o kern/178713 fs [nfs] [patch] Correct WebNFS support in NFS server and s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/178103 fs [kernel] [nfs] [patch] Correct support of index files o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs 
[unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. 
o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. 
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o 
kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o 
bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o kern/121385 fs [unionfs] unionfs cross mount -> kernel panic o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System 
reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 336 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 11:50:02 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 8193FC63; Mon, 9 Sep 2013 11:50:02 +0000 (UTC) (envelope-from jdavidlists@gmail.com) Received: from mail-ie0-x230.google.com (mail-ie0-x230.google.com [IPv6:2607:f8b0:4001:c03::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 49E632DB2; Mon, 9 Sep 2013 11:50:02 +0000 (UTC) Received: by mail-ie0-f176.google.com with SMTP id s9so11843956iec.7 for ; Mon, 09 Sep 2013 04:50:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=4ei2cNMOxRpLdAcFNGEVqeNpjL+VUMI/AdKpgHr9o8s=; b=fDMe2Xh5v7qUbYsuAEMVqzY/G3JgSKYAzEUmLUFkOZTbLKvJUDrvwzY6ilgxb97QjO kWvMf+69QTIdO9A5UZsvpOh3EORKfXGALTb0jMHXLfshtC9d+W1KlVjyWQ0iURsxsyoH y/n9cvRs7Iie3m+oSERG0LOE7Rnyw9JlFryiGTiXQu30EDsKAnrJv3ZnZ1hS6po/5qKi i3Ttj7mIEVSDNIkpuZaCefP0MnEP/6SgEv84Y7MX1fT6iOXlxOZdiYdWQ3JFY8wKUREZ eScTfjz7RGcxtssbRZsdtfPZX2xYMt7zPofsbwmzrtQCu04N4pcR2LwxztWLmy7EbZoR AtfQ== MIME-Version: 1.0 X-Received: by 10.50.178.234 with SMTP id db10mr7647925igc.35.1378727401711; Mon, 09 Sep 2013 04:50:01 -0700 (PDT) Sender: jdavidlists@gmail.com Received: by 10.43.157.8 with HTTP; Mon, 9 Sep 2013 04:50:01 -0700 (PDT) In-Reply-To: <522D3C76.1030705@bluerosetech.com> References: <522D30C9.8000203@bluerosetech.com> <522D3C76.1030705@bluerosetech.com> Date: Mon, 9 Sep 2013 07:50:01 -0400 X-Google-Sender-Auth: uoYP0pNMY6VEwxOEbmxBYcvd5pY Message-ID: Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) From: J David To: Darren Pilgrim Content-Type: text/plain; charset=ISO-8859-1 Cc: "freebsd-fs@freebsd.org" , freebsd-stable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
X-List-Received-Date: Mon, 09 Sep 2013 11:50:02 -0000

On Sun, Sep 8, 2013 at 11:11 PM, Darren Pilgrim wrote:
> You can use zfs.root.mountfrom="zfs:data/root" in /boot/loader.conf instead
> of an fstab entry.

That has been in loader.conf the whole time.

> Mountpoint=legacy is required either way.

It isn't. There is another machine right next to it running 9.2-RC1 and it works fine with the mountpoint=/ setting and an empty fstab. And it worked fine on *this* machine on 9.1. It's not clear what changed to stop it from working after upgrading to 9.2-RC3.

If time permits, we will update our PXE install environment to 9.2 and rebuild the whole thing. That would eliminate the upgrade step and hopefully give a result others could reproduce.

> This only applies to Solaris, IIRC.

[citation needed]

Setting mountpoint=/ is necessary in order for "zpool import -o altroot=/mnt data" to do the right thing at maintenance time.

Thanks!

From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 12:55:24 2013
From: krad <kraduk@gmail.com>
Date: Mon, 9 Sep 2013 13:55:22 +0100
To: Matthew Seaman
Cc: FreeBSD FS
Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works)

You will find that without zfs_enable="YES" set, a lot of the zfs datasets might not get mounted.

On 9 September 2013 07:16, Matthew Seaman wrote:
> On 09/09/2013 02:02, J David wrote:
> > Under 9.1 it had in loader.conf:
> >
> > zfs_load="YES"
> > vfs.root.mountfrom="zfs:data/root"
> >
> > Under 9.2-RC3, the same config results in a panic:
> >
> > Trying to mount root from zfs:data/root []…
> > init: not found in path
> > /sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init:/stand/sysinstall
> > panic: no init
> >
> > If this is changed (as many Google hits recommend) to:
> >
> > zfs_enable="YES"
> > vfs.root.mountfrom="zfs:data/root"
> >
> > It seems like ZFS doesn't get loaded, so it fails instead with:
>
> zfs_load="YES" is correct in /boot/loader.conf -- it causes the zfs
> modules to be loaded into the kernel early in the boot process.
>
> zfs_enable="YES" is correct in /etc/rc.conf -- it enables various ZFS
> related stuff run by the rc scripts.
>
> You want both.
>
> Cheers,
>
> Matthew
>
> --
> Dr Matthew J Seaman MA, D.Phil.
>
> PGP: http://www.infracaninophile.co.uk/pgpkey
> JID: matthew@infracaninophile.co.uk

From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 16:30:03 2013
From: Andreas Longwitz <longwitz@incore.de>
Date: Mon, 09 Sep 2013 18:22:01 +0200
Message-ID: <522DF5A9.4070103@incore.de>
To: freebsd-fs@freebsd.org
Subject: zfs panic during find(1) on zfs snapshot directory

Hello,

I run FreeBSD 8.4-STABLE r253040, supplemented with r244795, r244925 and r245286 adapted from head. During an amanda backup, which created the zfs snapshot

/backup/jail/db1/.zfs/snapshot/amanda-mpool_jail_db1_backup-current

the command

find /backup -name 'sysout.*' -type f -mtime +100 -exec rm {}

caused a panic on the lstat system call for the name of the snapshot directory.
Console output: panic: __lockmgr_args: recursing on non recursive lockmgr zfs @ /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/gfs.c:451 cpuid = 2 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a kdb_backtrace() at kdb_backtrace+0x37 panic() at panic+0x1ce __lockmgr_args() at __lockmgr_args+0xb68 vop_stdlock() at vop_stdlock+0x39 VOP_LOCK1_APV() at VOP_LOCK1_APV+0x70 _vn_lock() at _vn_lock+0x47 gfs_lookup_dot() at gfs_lookup_dot+0xa9 gfs_dir_lookup() at gfs_dir_lookup+0x49 zfsctl_snapshot_inactive() at zfsctl_snapshot_inactive+0x81 VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x68 vinactive() at vinactive+0x71 vputx() at vputx+0x2d8 traverse() at traverse+0xa3 zfsctl_snapdir_lookup() at zfsctl_snapdir_lookup+0x1bb VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0x62 lookup() at lookup+0x44c namei() at namei+0x53d kern_statat_vnhook() at kern_statat_vnhook+0x8f kern_statat() at kern_statat+0x15 lstat() at lstat+0x2a amd64_syscall() at amd64_syscall+0x1f4 Xfast_syscall() at Xfast_syscall+0xfc --- syscall (190, FreeBSD ELF64, lstat), rip = 0x18073597c, rsp = 0x7fffffffea68, rbp = 0x180a2d350 --- KDB: enter: panic 0xffffff0091bf6588: tag zfs, type VDIR usecount 5, writecount 0, refcount 5 mountedhere 0 flags () lock type zfs: EXCL by thread 0xffffff01aabf6470 (pid 29384) 0xffffff01940fc1d8: tag zfs, type VDIR usecount 0, writecount 0, refcount 1 mountedhere 0 flags (VI_DOINGINACT) lock type zfs: EXCL by thread 0xffffff01aabf6470 (pid 29384) >From kerneldump: (kgdb) bt #0 doadump () at /usr/src/sys/kern/kern_shutdown.c:266 #1 0xffffffff801f87fc in db_fncall (dummy1=, dummy2=, dummy3=, dummy4=) at /usr/src/sys/ddb/db_command.c:548 #2 0xffffffff801f8aad in db_command (last_cmdp=0xffffffff8086bdc0, cmd_table=, dopager=0) at /usr/src/sys/ddb/db_command.c:445 #3 0xffffffff801fd163 in db_script_exec (scriptname=0xffffff8245eeec00 "kdb.enter.panic", warnifnotfound=0) at /usr/src/sys/ddb/db_script.c:302 #4 0xffffffff801fd232 in db_script_kdbenter (eventname=) at /usr/src/sys/ddb/db_script.c:324 #5 0xffffffff801fae44 in db_trap (type=, code=) at /usr/src/sys/ddb/db_main.c:230 #6 0xffffffff80432e61 in kdb_trap (type=3, code=0, tf=0xffffff8245eeee30) at /usr/src/sys/kern/subr_kdb.c:654 #7 0xffffffff805dc82f in trap (frame=0xffffff8245eeee30) at /usr/src/sys/amd64/amd64/trap.c:574 #8 0xffffffff805c2ba4 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:228 #9 0xffffffff804328fb in kdb_enter (why=0xffffffff8069543a "panic", msg=0xa
) at cpufunc.h:63 #10 0xffffffff803ff367 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:616 #11 0xffffffff803e7df8 in __lockmgr_args (lk=0xffffff0091bf6620, flags=0, ilk=0xffffff0091bf6648, wmesg=, pri=982528, timo=1173286416, file=0xffffffff80b68978 "/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/gfs.c", line=451) at /usr/src/sys/kern/kern_lock.c:704 #12 0xffffffff804833e9 in vop_stdlock (ap=) at lockmgr.h:94 #13 0xffffffff8063a870 in VOP_LOCK1_APV (vop=0xffffffff8082af60, a=0xffffff8245eef150) at vnode_if.c:2052 #14 0xffffffff804a3137 in _vn_lock (vp=0xffffff0091bf6588, flags=525312, file=0xffffffff80b68978 "/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/gfs.c", line=451) at vnode_if.h:859 #15 0xffffffff80a66489 in gfs_lookup_dot (vpp=0xffffff8245eef2c8, dvp=0xffffff0091bf6588, pvp=0xffffff0091bf6588, nm=0xffffffff80b7ab31 "..") at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/gfs.c:451 #16 0xffffffff80a664e9 in gfs_dir_lookup (dvp=0xffffff01940fc1d8, nm=0xffffffff80b7ab31 "..", vpp=0xffffff8245eef2c8, cr=0xffffff01e58b6300, flags=0, direntflags=0x0, realpnp=0x0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/gfs.c:920 #17 0xffffffff80af4ad1 in zfsctl_snapshot_inactive (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c:1470 #18 0xffffffff80639638 in VOP_INACTIVE_APV (vop=0xffffffff80b83f60, a=0xffffff8245eef340) at vnode_if.c:1923 #19 0xffffffff80491671 in vinactive (vp=0xffffff01940fc1d8, td=0xffffff01aabf6470) at vnode_if.h:807 #20 0xffffffff80496038 in vputx (vp=0xffffff01940fc1d8, func=1) at /usr/src/sys/kern/vfs_subr.c:2347 #21 0xffffffff80a63d73 in traverse (cvpp=0xffffff8245eef930, lktype=525312) at /usr/src/sys/modules/zfs/../../cddl/compat/opensolaris/kern/opensolaris_lookup.c:98 #22 0xffffffff80af4dbb in zfsctl_snapdir_lookup (ap=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c:1007 #23 0xffffffff8063a372 in VOP_LOOKUP_APV (vop=0xffffffff80b845a0, a=0xffffff8245eef7c0) at vnode_if.c:126 #24 0xffffffff8048898c in lookup (ndp=0xffffff8245eef900) at vnode_if.h:54 #25 0xffffffff80489b0d in namei (ndp=0xffffff8245eef900) at /usr/src/sys/kern/vfs_lookup.c:269 #26 0xffffffff8049a47f in kern_statat_vnhook (td=0xffffff01aabf6470, flag=, fd=, path=, pathseg=, sbp=0xffffff8245eefa80, hook=0) at /usr/src/sys/kern/vfs_syscalls.c:2339 #27 0xffffffff8049a645 in kern_statat (td=, flag=, fd=, path=, pathseg=, sbp=) at /usr/src/sys/kern/vfs_syscalls.c:2320 #28 0xffffffff8049a70a in lstat (td=, uap=0xffffff8245eefbb0) at /usr/src/sys/kern/vfs_syscalls.c:2383 #29 0xffffffff805db824 in amd64_syscall (td=0xffffff01aabf6470, traced=0) at subr_syscall.c:114 #30 0xffffffff805c2e9c in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:387 #31 0x000000018073597c in ?? () (kgdb) f 25 #25 0xffffffff80489b0d in namei (ndp=0xffffff8245eef900) at /usr/src/sys/kern/vfs_lookup.c:269 269 error = lookup(ndp); (kgdb) l 264 VREF(dp); 265 } 266 if (vfslocked) 267 ndp->ni_cnd.cn_flags |= GIANTHELD; 268 ndp->ni_startdir = dp; 269 error = lookup(ndp); 270 if (error) { 271 uma_zfree(namei_zone, cnp->cn_pnbuf); 272 #ifdef DIAGNOSTIC 273 cnp->cn_pnbuf = NULL; (kgdb) p *ndp $23 = {ni_dirp = 0x180a2d3c8
, ni_segflg = UIO_USERSPACE, ni_startdir = 0x0, ni_rootdir = 0xffffff0002c493b0, ni_topdir = 0x0, ni_dirfd = -100, ni_vp = 0xffffff01940fc1d8, ni_dvp = 0xffffff0091bf6588, ni_pathlen = 1, ni_next = 0xffffff0026476424 "", ni_loopcnt = 0, ni_cnd = {cn_nameiop = 0, cn_flags = 83935492, cn_thread = 0xffffff01aabf6470, cn_cred = 0xffffff01e58b6300, cn_lkflags = 2097152, cn_pnbuf = 0xffffff0026476400 "amanda-mpool_jail_db1_backup-current", cn_nameptr = 0xffffff0026476400 "amanda-mpool_jail_db1_backup-current", cn_namelen = 36, cn_consume = 0}} I would like to know if this panic is a known issue and can give more information from the kerneldump. -- Andreas Longwitz From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 17:11:34 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id BEBFC91F for ; Mon, 9 Sep 2013 17:11:34 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 163512609 for ; Mon, 9 Sep 2013 17:11:33 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id UAA09541; Mon, 09 Sep 2013 20:11:24 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VJ4zk-0005xM-I1; Mon, 09 Sep 2013 20:11:24 +0300 Message-ID: <522E0118.5020106@FreeBSD.org> Date: Mon, 09 Sep 2013 20:10:48 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130810 Thunderbird/17.0.8 MIME-Version: 1.0 To: Andreas Longwitz Subject: Re: zfs panic during find(1) on zfs snapshot directory References: <522DF5A9.4070103@incore.de> In-Reply-To: <522DF5A9.4070103@incore.de> X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 17:11:34 -0000 on 09/09/2013 19:22 Andreas Longwitz said the following: > I would like to know if this panic is a known issue and can give more > information from the kerneldump. My personal recommendation is to keep .zfs directory hidden and/or perform only basic operations on entries under it while ensuring that there is only one process at a time that peeks there. The gfs stuff that handles .zfs operations is really very broken on FreeBSD[*]. If you are interested, I have a patch that should some of the mess, but not all. 
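For reference, a minimal sketch of the setting that keeps that directory hidden; "pool/fs" is only a placeholder dataset name:

    # hidden is the default; this undoes an earlier "snapdir=visible"
    zfs set snapdir=hidden pool/fs
    zfs get snapdir pool/fs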
[*] To see what I mean run several of the following shell loops in parallel: while true; do ls -l /pool/fs/.zfs/ >/dev/null; done -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 17:15:19 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 2860EADE for ; Mon, 9 Sep 2013 17:15:19 +0000 (UTC) (envelope-from jdavidlists@gmail.com) Received: from mail-ie0-x233.google.com (mail-ie0-x233.google.com [IPv6:2607:f8b0:4001:c03::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id EE10C2648 for ; Mon, 9 Sep 2013 17:15:18 +0000 (UTC) Received: by mail-ie0-f179.google.com with SMTP id m16so10614054ieq.24 for ; Mon, 09 Sep 2013 10:15:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=T9qhDGRsSUzQNlICMJsZdtSswV3AteZ8NgA6myIaEyc=; b=sJOjugUk0U7B5jDJpXDQwvqQCw6SmJVFDMueCU+OieA40QeQFhGkaWk1vGHBzuQcKA CAyEFL1XtesnbK9DvTNtErQNRdQt5akN5mXPLbFZeO7Xn2vREwPKSaIRmrJzNwTNsTmf Uqxtj/vonLoEYQ0O6dLDRsrGzGUBMd9g4wjmZCWUz3RSQzxa/S1VDZyGAMnCczwm14UB N5z/+QhO30nuzTokU8aNPXnkcAcFpgWrNBoEttORiAhvQtNuVGGWzlrkPAtJ7dt/H8JI X27wSHEhtBuNMPZScY0ZFu2VvuqkmdDFEv1rHJEQ66Aubkic+BwjQEU9Ij2RPip/MvC2 uqEQ== MIME-Version: 1.0 X-Received: by 10.42.53.18 with SMTP id l18mr33279icg.78.1378746917632; Mon, 09 Sep 2013 10:15:17 -0700 (PDT) Sender: jdavidlists@gmail.com Received: by 10.43.157.8 with HTTP; Mon, 9 Sep 2013 10:15:17 -0700 (PDT) In-Reply-To: References: <522D67DB.7060404@infracaninophile.co.uk> Date: Mon, 9 Sep 2013 13:15:17 -0400 X-Google-Sender-Auth: X3ONUog59x48KAhpOLeCmDbJjng Message-ID: Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) From: J David To: krad Content-Type: text/plain; charset=ISO-8859-1 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 17:15:19 -0000 On Mon, Sep 9, 2013 at 8:55 AM, krad wrote: > you will find without 'zfs_enable="YES" ' set a lot of the zfs datasets > might not get mounted Matthew has the same understanding of this that I do: zfs_load goes in loader.conf and zfs_enable goes in rc.conf. zfs_load causes the loader to load zfs.ko and opensolaris.ko so that the kernel can access the zpool (e.g. to mount the root filesystem) after /boot/zfsloader finishes. zfs_enable in rc.conf activates the /etc/rc.d/zfs and /etc/rc.d/zvol scripts. (And tweaks mountd on nfs servers.) There are several online ZFS-root recipies that say differently (mainly that using zfs_load has been replaced by zfs_enable in loader.conf), but I haven't found any authoritative references that support that. Have you? (Also, it doesn't work in testing; the two .ko's aren't loaded if zfs_load is not present.) In the absence of new info, that seems like the right way to do it. Things get sticky when it comes to establishing the ZFS root filesystem. There are at least four ways to go about it: 1) Set vfs.root.mountfrom="zfs:data/root" in loader.conf. 2) Run "zpool set bootfs=data/root data" on the pool. 3) Run "zfs set mountpoint=/ data/root" on the root filesystem. 
4) Run "zfs set mountpoint=legacy data/root" on the root filesystem and an /etc/fstab entry. Unfortunately, some of these are not sufficient by themselves, or they don't work at all. So the question is, for 9.2, which (combination of?) these is the authoritatively correct way to identify the ZFS root filesystem? And, for the sake of release engineering, how does 9.2 differ from 9.1 in this regard? Thanks! From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 17:23:26 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 5FB0AC00 for ; Mon, 9 Sep 2013 17:23:26 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-qc0-x22b.google.com (mail-qc0-x22b.google.com [IPv6:2607:f8b0:400d:c01::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 1F83E26BE for ; Mon, 9 Sep 2013 17:23:26 +0000 (UTC) Received: by mail-qc0-f171.google.com with SMTP id x19so2954423qcw.16 for ; Mon, 09 Sep 2013 10:23:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=f/FGVYnJ3VoEigs3A615MmhQN15BSvx+wr9vt3qbl28=; b=HDahdIWr85mgQL0x+93V8BX8kWjfcOAcbrrltEruEGRWGJZ8u6++A4AgaJkzWTLrQV c/2caq7oQrux/j3I7r0jP6mkrmDI8PSI5eT3WPUDmupmvrp3is6W5clb6zzQiyv2O6Eb j2sBJxX1INkkvD4zPorVsaQBollJagRh0X5qpoaReSVJhrUdIIBneZU5B2VLexFDAMID 5YAnhqMVo0+s90+6BcfAjFzPHOAuBLXD6qn00TD0JGzF9SHGrzLzUmAuvQEvjMIkfR4Q iH/aRp351ajKNHPDrX7OqdfR6bCFHq/iJZ87zoWfZnkaDjEnGVEw6GGngqPPF5dGo/HI EHPQ== MIME-Version: 1.0 X-Received: by 10.49.109.170 with SMTP id ht10mr25394014qeb.27.1378747405021; Mon, 09 Sep 2013 10:23:25 -0700 (PDT) Received: by 10.49.39.33 with HTTP; Mon, 9 Sep 2013 10:23:24 -0700 (PDT) In-Reply-To: References: <522D67DB.7060404@infracaninophile.co.uk> Date: Mon, 9 Sep 2013 10:23:24 -0700 Message-ID: Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) From: Freddie Cash To: J David Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 17:23:26 -0000 On Mon, Sep 9, 2013 at 10:15 AM, J David wrote: > On Mon, Sep 9, 2013 at 8:55 AM, krad wrote: > > you will find without 'zfs_enable="YES" ' set a lot of the zfs datasets > > might not get mounted > > Matthew has the same understanding of this that I do: zfs_load goes in > loader.conf and zfs_enable goes in rc.conf. > > zfs_load causes the loader to load zfs.ko and opensolaris.ko so that > the kernel can access the zpool (e.g. to mount the root filesystem) > after /boot/zfsloader finishes. > > zfs_enable in rc.conf activates the /etc/rc.d/zfs and /etc/rc.d/zvol > scripts. (And tweaks mountd on nfs servers.) > > There are several online ZFS-root recipies that say differently > (mainly that using zfs_load has been replaced by zfs_enable in > loader.conf), but I haven't found any authoritative references that > support that. Have you? (Also, it doesn't work in testing; the two > .ko's aren't loaded if zfs_load is not present.) > > In the absence of new info, that seems like the right way to do it. 
> > Things get sticky when it comes to establishing the ZFS root > filesystem. There are at least four ways to go about it: > > 1) Set vfs.root.mountfrom="zfs:data/root" in loader.conf. > 2) Run "zpool set bootfs=data/root data" on the pool. > 3) Run "zfs set mountpoint=/ data/root" on the root filesystem. > 4) Run "zfs set mountpoint=legacy data/root" on the root filesystem > and an /etc/fstab entry. > > Unfortunately, some of these are not sufficient by themselves, or they > don't work at all. So the question is, for 9.2, which (combination > of?) these is the authoritatively correct way to identify the ZFS root > filesystem? > > And, for the sake of release engineering, how does 9.2 differ from 9.1 > in this regard? > The following works on my 9.2-STABLE system (upgraded from 9.1 without any changes): /boot/loader.conf: zfs_load="YES" vfs.root.mountfrom="zfs:pool/ROOT/default" /etc/rc.conf: zfs_enable="YES" zpool get bootfs pool: NAME PROPERTY VALUE SOURCE pool bootfs pool/ROOT/default local zfs get mountpoint pool/ROOT/default: NAME PROPERTY VALUE SOURCE pool/ROOT/default mountpoint legacy local /etc/fstab is completely empty. The above works with or without beadm installed from ports, but is fully supported by beadm making upgrades much simpler. With gptzfsloader installed as the boot loader, due to using GPT to partition the disks in the pool. There are 4x harddrives in the pool, configured as two mirror vdevs (no log or cache devices). -- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 17:44:58 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 13DBA32D for ; Mon, 9 Sep 2013 17:44:58 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: from platinum.linux.pl (platinum.edu.pl [81.161.192.4]) by mx1.freebsd.org (Postfix) with ESMTP id C6B762820 for ; Mon, 9 Sep 2013 17:44:56 +0000 (UTC) Received: by platinum.linux.pl (Postfix, from userid 87) id D35282BC178; Mon, 9 Sep 2013 19:39:00 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.3 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.3.2 Received: from [10.255.0.2] (unknown [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id 9342A2BC0C4 for ; Mon, 9 Sep 2013 19:39:00 +0200 (CEST) Message-ID: <522E07B1.5030205@platinum.linux.pl> Date: Mon, 09 Sep 2013 19:38:57 +0200 From: Adam Nowacki User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130801 Thunderbird/17.0.8 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) References: <522D67DB.7060404@infracaninophile.co.uk> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 17:44:58 -0000 On 2013-09-09 19:15, J David wrote: > On Mon, Sep 9, 2013 at 8:55 AM, krad wrote: >> you will find without 'zfs_enable="YES" ' set a lot of the zfs datasets >> might not get mounted > > Matthew has the same understanding of this that I do: zfs_load goes in > loader.conf and zfs_enable goes in rc.conf. 
> > zfs_load causes the loader to load zfs.ko and opensolaris.ko so that > the kernel can access the zpool (e.g. to mount the root filesystem) > after /boot/zfsloader finishes. > > zfs_enable in rc.conf activates the /etc/rc.d/zfs and /etc/rc.d/zvol > scripts. (And tweaks mountd on nfs servers.) > > There are several online ZFS-root recipies that say differently > (mainly that using zfs_load has been replaced by zfs_enable in > loader.conf), but I haven't found any authoritative references that > support that. Have you? (Also, it doesn't work in testing; the two > .ko's aren't loaded if zfs_load is not present.) > > In the absence of new info, that seems like the right way to do it. > > Things get sticky when it comes to establishing the ZFS root > filesystem. There are at least four ways to go about it: > > 1) Set vfs.root.mountfrom="zfs:data/root" in loader.conf. > 2) Run "zpool set bootfs=data/root data" on the pool. > 3) Run "zfs set mountpoint=/ data/root" on the root filesystem. > 4) Run "zfs set mountpoint=legacy data/root" on the root filesystem > and an /etc/fstab entry. > > Unfortunately, some of these are not sufficient by themselves, or they > don't work at all. So the question is, for 9.2, which (combination > of?) these is the authoritatively correct way to identify the ZFS root > filesystem? zfs set mountpoint=legacy data/root together with zpool set bootfs=data/root data setting vfs.root.mountfrom is not required - this is handled by the bootfs property, as is listing / in fstab From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 17:56:59 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 04A66EB1 for ; Mon, 9 Sep 2013 17:56:59 +0000 (UTC) (envelope-from Mark.Martinec+freebsd@ijs.si) Received: from mail.ijs.si (mail.ijs.si [IPv6:2001:1470:ff80::25]) by mx1.freebsd.org (Postfix) with ESMTP id AC7F928EF for ; Mon, 9 Sep 2013 17:56:58 +0000 (UTC) Received: from amavis-proxy-ori.ijs.si (localhost [IPv6:::1]) by mail.ijs.si (Postfix) with ESMTP id 3cYcWr4T3lzGN6j for ; Mon, 9 Sep 2013 19:56:56 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ijs.si; h= message-id:content-transfer-encoding:content-type:content-type :mime-version:in-reply-to:references:user-agent:date:date :subject:subject:organization:from:from:received:received :received:vbr-info; s=jakla2; t=1378749414; x=1381341415; bh=oIw F1vfd7rps25zeeOROBECLn3dluI6puAQcRgyvihM=; b=GmqvreFEVcfhLfNXl1R 26L0+BaN0yo4ycPPuIzNXIZahiIsF61YQRummLHPHzV5yQ52UcrpjuhhGdmLZds5 FX9AaByFt2sRF4/ygPyxPuIaekP9jmfxvHNq452ARY/bjD9HMT9aT9G981wLvMtT fFCqKdSrZi8MkeVhFU8yIKdM= VBR-Info: md=ijs.si; mc=all; mv=dwl.spamhaus.org; X-Virus-Scanned: amavisd-new at ijs.si Received: from mail.ijs.si ([IPv6:::1]) by amavis-proxy-ori.ijs.si (mail.ijs.si [IPv6:::1]) (amavisd-new, port 10012) with ESMTP id A0Chhbp8ODBk for ; Mon, 9 Sep 2013 19:56:54 +0200 (CEST) Received: from mildred.ijs.si (mailbox.ijs.si [IPv6:2001:1470:ff80::143:1]) by mail.ijs.si (Postfix) with ESMTP for ; Mon, 9 Sep 2013 19:56:54 +0200 (CEST) Received: from neli.ijs.si (neli.ijs.si [IPv6:2001:1470:ff80:88:21c:c0ff:feb1:8c91]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mildred.ijs.si (Postfix) with ESMTPSA id DD24EEF2 for ; Mon, 9 Sep 2013 19:56:54 +0200 (CEST) From: Mark Martinec Organization: J. 
Stefan Institute To: freebsd-fs@freebsd.org Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) Date: Mon, 9 Sep 2013 19:56:53 +0200 User-Agent: KMail/1.13.7 (FreeBSD/9.2-PRERELEASE; KDE/4.10.5; amd64; ; ) References: <522E07B1.5030205@platinum.linux.pl> In-Reply-To: <522E07B1.5030205@platinum.linux.pl> MIME-Version: 1.0 Content-Type: Text/Plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Message-Id: <201309091956.53759.Mark.Martinec+freebsd@ijs.si> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 17:56:59 -0000 Adam Nowacki writes: > zfs set mountpoint=legacy data/root > together with > zpool set bootfs=data/root data > > setting vfs.root.mountfrom is not required - this is handled by the > bootfs property, as is listing / in fstab So what happens if multiple pools each have their bootfs set? Mark From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 18:18:47 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id DB373759 for ; Mon, 9 Sep 2013 18:18:47 +0000 (UTC) (envelope-from erif-freebsd-fs@z42.net) Received: from s.lundagatan.com (s.lundagatan.com [91.95.26.27]) by mx1.freebsd.org (Postfix) with SMTP id 3D9BB2A4F for ; Mon, 9 Sep 2013 18:18:46 +0000 (UTC) Received: (qmail 24048 invoked by uid 1013); 9 Sep 2013 17:52:04 -0000 Date: Mon, 9 Sep 2013 19:52:04 +0200 From: erif To: freebsd-fs@freebsd.org Subject: ZFS recv user unable to mount filesystems Message-ID: <20130909175204.GA5617@s.lundagatan.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-Operating-System: NetBSD 3.1 X-Eric-Conspiracy: There is no conspiracy User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 18:18:47 -0000 Hi, We have set up two systems, at remote locations, with FreeBSD 9.1-RELEASE-p4 and ZFS. They have their own zpool and two main filesystems, one to keep local filesystems and the other (read-only, which is inherited to underlying filesystems) to keep replicas of the other nodes locally used filesystems. To keep the filesystems in sync between the two hosts we intend to have two users in each end, running cron jobs and scripts, one for taking snapshots and sending them (over ssh) and one to receive snapshots and mount them. It looks like this, zhost0 has main filesystems zpool0/zfs0/a and zpool0/zfs1/b, and zhost1 has main filesystems zpool1/zfs1/b and zpool1/zfs0/a, where zpool0/zfs1 and zpool1/zfs0 have the property readonly which is inherited by a and b, the filesystems and descendants we intend to sync snapshots of (zfs0 and zfs1 have no mountpoints, a and b do). We have the two users zsend and zrecv with these allow permissions (zhost0) ---- Permissions on zpool0/zfs0 ---------------------------------------- Local+Descendent permissions: user zsend hold,mount,send,snapshot ---- Permissions on zpool0/zfs1 ---------------------------------------- Local+Descendent permissions: user zrecv create,mount,receive and vfs.usermount is set to 1. 
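For reference, the delegation described above would be created with something like the following sketch, run as root on zhost0 (dataset and user names are the ones from the description):

    # sending side: snapshot/send rights on the local tree
    zfs allow zsend hold,mount,send,snapshot zpool0/zfs0
    # receiving side: receive/mount rights on the replica tree
    zfs allow zrecv create,mount,receive zpool0/zfs1
    # allow non-root mounts (add vfs.usermount=1 to /etc/sysctl.conf to persist)
    sysctl vfs.usermount=1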
All is well until the receiving user has gotten the data and tries to mount a newly received, and previously non-existent, filesystem cannot mount 'zpool0/zfs1/b': Insufficient privileges However, zrecv can unmount a previously (by superuser) mounted filesystem, for which it has allow permission mount (it cannot unmount it if vfs.usermount=0). Also, the zrecv user can mount and unmount zpool0/zfs1/b just fine (and likewise, that user on zhost1, zpool1/zfs0/a) if it is the owner of the mountpoint directory, but for us this is not a solution. As a temporary workaround, we will probably let the zrecv user run 'sudo zfs mount -a' in the script run by the cron job. -- Fredrik From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 18:46:46 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id B24369E7 for ; Mon, 9 Sep 2013 18:46:46 +0000 (UTC) (envelope-from jdavidlists@gmail.com) Received: from mail-ie0-x235.google.com (mail-ie0-x235.google.com [IPv6:2607:f8b0:4001:c03::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 8346E2C30 for ; Mon, 9 Sep 2013 18:46:46 +0000 (UTC) Received: by mail-ie0-f181.google.com with SMTP id y16so8144518ieg.26 for ; Mon, 09 Sep 2013 11:46:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=TT1Wra7/iFwerecfwiMgMBEAdArTzcWmyd1fmCbmVFA=; b=TvIb2tahxfCeMr6RaMYcaRbu49JuoGCa0WDu/sxCjho+2d5vGrYlM+KGg5P7kq4TqU pmDpX03a9o+H6t2b8qEGCA77TLf1xG2CQyP089/IKD9ZdwTLnBIM8KD8ny7w7EMsoIii Rua0AdNy2LZLOJj8Tq4weeNzTlpPAlf+egm+/BWtBs1QyKYqP2DR6loj2TaAn/6N44te cdxrDcTlHuM+5+N7NNky9xR9nw7wl8uk4Anj+ZaEoTD30AQ/ooGS7g68He662/Lfg1qa Mg8yOPZNiXZmRholVhTniJ8JvfDMNjj740BPLxq48PKwWlhS2g3an71bUVRAIrIfVo48 y1Og== MIME-Version: 1.0 X-Received: by 10.50.20.195 with SMTP id p3mr9205909ige.26.1378752406064; Mon, 09 Sep 2013 11:46:46 -0700 (PDT) Sender: jdavidlists@gmail.com Received: by 10.43.157.8 with HTTP; Mon, 9 Sep 2013 11:46:45 -0700 (PDT) In-Reply-To: <522E07B1.5030205@platinum.linux.pl> References: <522D67DB.7060404@infracaninophile.co.uk> <522E07B1.5030205@platinum.linux.pl> Date: Mon, 9 Sep 2013 14:46:45 -0400 X-Google-Sender-Auth: mg9GoBzDNMAv4PrwYOIovvCeoPY Message-ID: Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) From: J David To: Adam Nowacki Content-Type: text/plain; charset=ISO-8859-1 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 18:46:46 -0000 On Mon, Sep 9, 2013 at 1:38 PM, Adam Nowacki wrote: > zfs set mountpoint=legacy data/root > together with > zpool set bootfs=data/root data This does appear to work, thanks. So the key steps seem to be: 1) zfs_load="YES" in loader.conf 2) zfs_enable="YES" in rc.conf 3) Set bootfs=data/root in the zpool. 
4) Set mountpoint=legacy on the root fs Using mountpoint=legacy seems a little conceptually challenged, especially given that the description of a legacy mount is: "If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system." Is this bending things to claim setting bootfs is adequate example of the administrator's responsibility to mount the file system? (Even though that is clearly also part of ZFS.) How does mountpoint=legacy interact with importing the pool on another system, or from a LiveCD, with " -o altroot=/mnt " ? (A case where mountpoint=/ works perfectly.) And, finally, what would have to change to support a ZFS root filesystem set as mountpoint=/ instead of mountpoint=legacy ? Thanks! From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 22:34:10 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 484DE366 for ; Mon, 9 Sep 2013 22:34:10 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A76A128BF for ; Mon, 9 Sep 2013 22:34:09 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id 63C5811AB66C; Tue, 10 Sep 2013 00:25:13 +0200 (CEST) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.3 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MF-ACE0E1EA [pR: 13.2789] X-CRM114-CacheID: sfid-20130910_00251_F20658CF X-CRM114-Status: Good ( pR: 13.2789 ) X-DSPAM-Result: Whitelisted X-DSPAM-Processed: Tue Sep 10 00:25:13 2013 X-DSPAM-Confidence: 0.8524 X-DSPAM-Probability: 0.0000 X-DSPAM-Signature: 522e4ac9845121995516448 X-DSPAM-Factors: 27, From*Attila Nagy , 0.00010, cache, 0.00177, 215, 0.00442, 209, 0.00442, the+machine, 0.00482, 208, 0.00482, Doing, 0.00482, From*Attila, 0.00530, threads, 0.00589, CPUs, 0.00662, I+won't, 0.00662, machines, 0.00706, machines, 0.00706, doesn't+really, 0.00756, 2563, 0.00756, (it, 0.00756, 870, 0.00756, 233, 0.00881, 271, 0.00881, Received*online.co.hu+[195.228.243.99]), 0.01000, Subject*be+a, 0.99000, 2516, 0.01000, Any+ideas, 0.01000, 2865, 0.01000, Received*[195.228.243.99]), 0.01000, 32), 0.01000, X-Spambayes-Classification: ham; 0.00 Received: from [192.168.3.2] (japan.t-online.co.hu [195.228.243.99]) by people.fsn.hu (Postfix) with ESMTPSA id 5F19F11AB661 for ; Tue, 10 Sep 2013 00:25:12 +0200 (CEST) Message-ID: <522E4AC5.4040606@fsn.hu> Date: Tue, 10 Sep 2013 00:25:09 +0200 From: Attila Nagy User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.23) Gecko/20090817 Thunderbird/2.0.0.23 Mnenhy/0.7.6.0 MIME-Version: 1.0 To: freebsd-fs Subject: High CPU usage with newnfs(d) - seems to be a cache issue Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 22:34:10 -0000 Hi, I've observed some insane CPU usage on stable/9@r255367. 
About the machine: CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (2400.14-MHz K8-class CPU) real memory = 34359738368 (32768 MB) FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 SMT threads It does some NFS serving like this (now running oldnfs) -not quite peak times actually: # nfsstat -w 1 -os GtAttr Lookup Rdlink Read Write Rename Access Rddir 763 7206 1 175 92 0 915 3589 748 7665 10 131 60 0 905 2923 787 9657 23 204 50 0 974 2387 517 9881 9 150 41 0 572 2321 709 8708 71 235 70 0 1220 3271 621 9157 9 254 208 0 928 2563 699 5336 29 271 103 0 1242 3448 656 4291 11 201 209 0 1119 3908 506 3722 0 215 183 0 970 2516 698 1476 1 151 66 0 903 2094 501 2865 11 268 117 0 995 1392 638 6284 46 233 47 0 1096 4847 893 7909 47 175 73 0 870 4070 651 3936 48 255 51 0 955 2514 424 4211 17 223 29 0 745 1458 589 8197 26 199 39 0 918 2983 It's being hammered by about 40 machines on multiple connections (it has 35 UFS file systems exported). When running newnfs (admittedly in some stupid way, with -n 32, the profiling was made with this, maybe this causes some lock contention), it occasionally eats 1600% CPU (means: 0 idle). Lowering the thread number doesn't really solves the problem, I've seen -n X*100 CPU usage peaks lately on machines with lower (4-8) -n counts... Doing a profiling with pmc shows that most of the time is spent in nfsrvd_updatecache and nfsrvd_getcache: http://pastebin.com/knyppv4d Switching back to oldnfsd (even with -n 32) gives a stable 50-60% CPU usage (out of the "possible" 1600%) when loaded. I know that there are some changes regarding this cache in the CURRENT code (along with the possibility to set some values with sysctls), but I can't run CURRENT. Any ideas on how to improve newnfsd, so we can continue serving NFS in the future days, where I won't be able to switch back to the old one? 
:) Thanks, From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 22:58:06 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 58DDECE3 for ; Mon, 9 Sep 2013 22:58:06 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 2125629D8 for ; Mon, 9 Sep 2013 22:58:05 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqUEAJJRLlKDaFve/2dsb2JhbABYAxaDKUsGgyq+dIE7dIIlAQEBAwEBAQEgKyALBRYOCgICDRkCKQEJJgYIBwQBHASHWwYHBbEpkgmBKY0JgQUkEAcRgliBNAOVLIN4iwuFLIM8IDKBAzk X-IronPort-AV: E=Sophos;i="4.90,874,1371096000"; d="scan'208";a="50744805" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 09 Sep 2013 18:57:59 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 06311B3F2E; Mon, 9 Sep 2013 18:57:59 -0400 (EDT) Date: Mon, 9 Sep 2013 18:57:59 -0400 (EDT) From: Rick Macklem To: Attila Nagy Message-ID: <1721695444.20803113.1378767479011.JavaMail.root@uoguelph.ca> In-Reply-To: <522E4AC5.4040606@fsn.hu> Subject: Re: High CPU usage with newnfs(d) - seems to be a cache issue MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 22:58:06 -0000 Attila Nagy wrote: > Hi, > > I've observed some insane CPU usage on stable/9@r255367. > About the machine: > CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (2400.14-MHz > K8-class CPU) > real memory = 34359738368 (32768 MB) > FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs > FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 SMT threads > > It does some NFS serving like this (now running oldnfs) -not quite > peak > times actually: > # nfsstat -w 1 -os > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 763 7206 1 175 92 0 915 3589 > 748 7665 10 131 60 0 905 2923 > 787 9657 23 204 50 0 974 2387 > 517 9881 9 150 41 0 572 2321 > 709 8708 71 235 70 0 1220 3271 > 621 9157 9 254 208 0 928 2563 > 699 5336 29 271 103 0 1242 3448 > 656 4291 11 201 209 0 1119 3908 > 506 3722 0 215 183 0 970 2516 > 698 1476 1 151 66 0 903 2094 > 501 2865 11 268 117 0 995 1392 > 638 6284 46 233 47 0 1096 4847 > 893 7909 47 175 73 0 870 4070 > 651 3936 48 255 51 0 955 2514 > 424 4211 17 223 29 0 745 1458 > 589 8197 26 199 39 0 918 2983 > > It's being hammered by about 40 machines on multiple connections (it > has > 35 UFS file systems exported). > > When running newnfs (admittedly in some stupid way, with -n 32, the > profiling was made with this, maybe this causes some lock > contention), > it occasionally eats 1600% CPU (means: 0 idle). > Lowering the thread number doesn't really solves the problem, I've > seen > -n X*100 CPU usage peaks lately on machines with lower (4-8) -n > counts... 
> > Doing a profiling with pmc shows that most of the time is spent in > nfsrvd_updatecache and nfsrvd_getcache: > http://pastebin.com/knyppv4d > > Switching back to oldnfsd (even with -n 32) gives a stable 50-60% CPU > usage (out of the "possible" 1600%) when loaded. > > I know that there are some changes regarding this cache in the > CURRENT > code (along with the possibility to set some values with sysctls), > but I > can't run CURRENT. > > Any ideas on how to improve newnfsd, so we can continue serving NFS > in > the future days, where I won't be able to switch back to the old one? > :) > Well, I put a 1 month MFC on r254337 (which I believe fixes this), so it should be in stable/9 in about a week. Alternately, an uglier (but semantically equivalent) patch can be found at: http://people.freebsd.org/~rmacklem/drc4-stable9.patch rick > Thanks, > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Mon Sep 9 23:11:11 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 0297811F; Mon, 9 Sep 2013 23:11:11 +0000 (UTC) (envelope-from FreeBSD@shaneware.biz) Received: from ipmail06.adl2.internode.on.net (ipmail06.adl2.internode.on.net [IPv6:2001:44b8:8060:ff02:300:1:2:6]) by mx1.freebsd.org (Postfix) with ESMTP id 5EE862A9C; Mon, 9 Sep 2013 23:11:10 +0000 (UTC) Received: from ppp118-210-73-223.lns20.adl2.internode.on.net (HELO leader.local) ([118.210.73.223]) by ipmail06.adl2.internode.on.net with ESMTP; 10 Sep 2013 08:40:57 +0930 Message-ID: <522E557E.5050202@ShaneWare.Biz> Date: Tue, 10 Sep 2013 08:40:54 +0930 From: Shane Ambler User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130516 Thunderbird/17.0.6 MIME-Version: 1.0 To: J David Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) References: <522D30C9.8000203@bluerosetech.com> <522D3C76.1030705@bluerosetech.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" , freebsd-stable , Darren Pilgrim X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Sep 2013 23:11:11 -0000 On 09/09/2013 21:20, J David wrote: > On Sun, Sep 8, 2013 at 11:11 PM, Darren Pilgrim > wrote: >> You can use zfs.root.mountfrom="zfs:data/root" in /boot/loader.conf >> instead of an fstab entry. > > That has been in loader.conf the whole time. > >> Mountpoint=legacy is required either way. > > It isn't. There is another machine right next to it running 9.2-RC1 > and it works fine with the mountpoint=/ setting and an empty fstab. > I installed 9.0 onto my machine booting from zfs about a year and a half ago and remember having issues getting it bootable. As I recall mounpoint=legacy and mountpoint=/ effectively point to two different filesystems. Changing the mounpoint after installing hides the / filesystem. So it isn't so much which mountpoint to use but which mountpoint *was* used when you installed the system. 
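A sketch of the live-CD check this thread keeps coming back to, using the pool and dataset names from the earlier "data/root" example:

    # import the pool somewhere harmless and inspect what was actually set
    zpool import -f -o altroot=/mnt data
    zpool get bootfs data
    zfs get mountpoint data/root
    # the combination reported to work on 9.2
    zpool set bootfs=data/root data
    zfs set mountpoint=legacy data/root
    zpool export data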
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 10 09:44:16 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 9979C41E for ; Tue, 10 Sep 2013 09:44:16 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 1B0D428AC for ; Tue, 10 Sep 2013 09:44:15 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id 9DADF115FE6C; Tue, 10 Sep 2013 11:44:13 +0200 (CEST) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.3 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MF-ACE0E1EA [pR: 17.0430] X-CRM114-CacheID: sfid-20130910_11441_DE3D786C X-CRM114-Status: Good ( pR: 17.0430 ) X-DSPAM-Result: Whitelisted X-DSPAM-Processed: Tue Sep 10 11:44:13 2013 X-DSPAM-Confidence: 0.9957 X-DSPAM-Probability: 0.0000 X-DSPAM-Signature: 522ee9ed6101076816758 X-DSPAM-Factors: 27, From*Attila Nagy , 0.00010, MFC, 0.00082, wrote+>>, 0.00133, cache, 0.00177, wrote+>, 0.00205, >>+>, 0.00304, >+>, 0.00353, >>+I, 0.00354, >>+>>, 0.00393, >>+>>, 0.00393, fixes, 0.00442, 215, 0.00442, 209, 0.00442, >>+Hi, 0.00482, Url*//people, 0.00482, the+machine, 0.00482, 208, 0.00482, Doing, 0.00482, >+it, 0.00530, From*Attila, 0.00530, wrote, 0.00532, wrote, 0.00532, threads, 0.00589, CPUs, 0.00662, )+>>, 0.00662, I+won't, 0.00662, X-Spambayes-Classification: ham; 0.00 Received: from japan.t-online.private (japan.t-online.co.hu [195.228.243.99]) by people.fsn.hu (Postfix) with ESMTPSA id A28D9115FE5D; Tue, 10 Sep 2013 11:44:11 +0200 (CEST) Message-ID: <522EE9EB.4010706@fsn.hu> Date: Tue, 10 Sep 2013 11:44:11 +0200 From: Attila Nagy MIME-Version: 1.0 To: Rick Macklem Subject: Re: High CPU usage with newnfs(d) - seems to be a cache issue References: <1721695444.20803113.1378767479011.JavaMail.root@uoguelph.ca> In-Reply-To: <1721695444.20803113.1378767479011.JavaMail.root@uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Sep 2013 09:44:16 -0000 Hi, On 09/10/13 00:57, Rick Macklem wrote: > Attila Nagy wrote: >> Hi, >> >> I've observed some insane CPU usage on stable/9@r255367. 
>> About the machine: >> CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (2400.14-MHz >> K8-class CPU) >> real memory = 34359738368 (32768 MB) >> FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs >> FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 SMT threads >> >> It does some NFS serving like this (now running oldnfs) -not quite >> peak >> times actually: >> # nfsstat -w 1 -os >> GtAttr Lookup Rdlink Read Write Rename Access Rddir >> 763 7206 1 175 92 0 915 3589 >> 748 7665 10 131 60 0 905 2923 >> 787 9657 23 204 50 0 974 2387 >> 517 9881 9 150 41 0 572 2321 >> 709 8708 71 235 70 0 1220 3271 >> 621 9157 9 254 208 0 928 2563 >> 699 5336 29 271 103 0 1242 3448 >> 656 4291 11 201 209 0 1119 3908 >> 506 3722 0 215 183 0 970 2516 >> 698 1476 1 151 66 0 903 2094 >> 501 2865 11 268 117 0 995 1392 >> 638 6284 46 233 47 0 1096 4847 >> 893 7909 47 175 73 0 870 4070 >> 651 3936 48 255 51 0 955 2514 >> 424 4211 17 223 29 0 745 1458 >> 589 8197 26 199 39 0 918 2983 >> >> It's being hammered by about 40 machines on multiple connections (it >> has >> 35 UFS file systems exported). >> >> When running newnfs (admittedly in some stupid way, with -n 32, the >> profiling was made with this, maybe this causes some lock >> contention), >> it occasionally eats 1600% CPU (means: 0 idle). >> Lowering the thread number doesn't really solves the problem, I've >> seen >> -n X*100 CPU usage peaks lately on machines with lower (4-8) -n >> counts... >> >> Doing a profiling with pmc shows that most of the time is spent in >> nfsrvd_updatecache and nfsrvd_getcache: >> http://pastebin.com/knyppv4d >> >> Switching back to oldnfsd (even with -n 32) gives a stable 50-60% CPU >> usage (out of the "possible" 1600%) when loaded. >> >> I know that there are some changes regarding this cache in the >> CURRENT >> code (along with the possibility to set some values with sysctls), >> but I >> can't run CURRENT. >> >> Any ideas on how to improve newnfsd, so we can continue serving NFS >> in >> the future days, where I won't be able to switch back to the old one? >> :) >> > Well, I put a 1 month MFC on r254337 (which I believe fixes this), so > it should be in stable/9 in about a week. Alternately, an uglier (but > semantically equivalent) patch can be found at: > http://people.freebsd.org/~rmacklem/drc4-stable9.patch > Great, I'm eagerly waiting for this to happen then. 
:) From owner-freebsd-fs@FreeBSD.ORG Tue Sep 10 09:54:23 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 524E49EF for ; Tue, 10 Sep 2013 09:54:23 +0000 (UTC) (envelope-from erif-freebsd-fs@z42.net) Received: from s.lundagatan.com (s.lundagatan.com [91.95.26.27]) by mx1.freebsd.org (Postfix) with SMTP id 9B9CB2972 for ; Tue, 10 Sep 2013 09:54:22 +0000 (UTC) Received: (qmail 4000 invoked by uid 1013); 10 Sep 2013 09:54:21 -0000 Date: Tue, 10 Sep 2013 11:54:21 +0200 From: erif To: freebsd-fs@freebsd.org Subject: ZFS recv user unable to mount filesystems Message-ID: <20130910095420.GD5617@s.lundagatan.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-Operating-System: NetBSD 3.1 X-Eric-Conspiracy: There is no conspiracy User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Sep 2013 09:54:23 -0000 Hi, We have set up two systems, at remote locations, with FreeBSD 9.1-RELEASE-p4 and ZFS. They have their own zpool and two main filesystems, one to keep local filesystems and the other (read-only, which is inherited to underlying filesystems) to keep replicas of the other nodes locally used filesystems. To keep the filesystems in sync between the two hosts we intend to have two users in each end, running cron jobs and scripts, one for taking snapshots and sending them (over ssh) and one to receive snapshots and mount them. It looks like this, zhost0 has main filesystems zpool0/zfs0/a and zpool0/zfs1/b, and zhost1 has main filesystems zpool1/zfs1/b and zpool1/zfs0/a, where zpool0/zfs1 and zpool1/zfs0 have the property readonly which is inherited by a and b, the filesystems and descendants we intend to sync snapshots of (zfs0 and zfs1 have no mountpoints, a and b do). We have the two users zsend and zrecv with these allow permissions (zhost0) ---- Permissions on zpool0/zfs0 ---------------------------------------- Local+Descendent permissions: user zsend hold,mount,send,snapshot ---- Permissions on zpool0/zfs1 ---------------------------------------- Local+Descendent permissions: user zrecv create,mount,receive and vfs.usermount is set to 1. All is well until the receiving user has gotten the data and tries to mount a newly received, and previously non-existent, filesystem cannot mount 'zpool0/zfs1/b': Insufficient privileges However, zrecv can unmount a previously (by superuser) mounted filesystem, for which it has allow permission mount (it cannot unmount it if vfs.usermount=0). Also, the zrecv user can mount and unmount zpool0/zfs1/b just fine (and likewise, that user on zhost1, zpool1/zfs0/a) if it is the owner of the mountpoint directory, but for us this is not a solution. As a temporary workaround, we will probably let the zrecv user run 'sudo zfs mount -a' in the script run by the cron job. 
-- Fredrik From owner-freebsd-fs@FreeBSD.ORG Tue Sep 10 11:22:45 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 785C18B5 for ; Tue, 10 Sep 2013 11:22:45 +0000 (UTC) (envelope-from kraduk@gmail.com) Received: from mail-ve0-x22b.google.com (mail-ve0-x22b.google.com [IPv6:2607:f8b0:400c:c01::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 356C42F6D for ; Tue, 10 Sep 2013 11:22:45 +0000 (UTC) Received: by mail-ve0-f171.google.com with SMTP id pa12so5100398veb.16 for ; Tue, 10 Sep 2013 04:22:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=p2O8A5vfobea1mm5QQcmWm9vP58/7AK/BCeS91F7BHI=; b=0QmXmZHKVVJVo9ayUuXX3PgJVKTPunGam2eT4u5VzpVdypNYPJE1Cg+QPSremL0hBR Vx0STQoNq+2TTxAl6ugXqsTua05t7/C8InGl4FmJ7BKbz8xzFKytiNUvXzjn+KIZgIga pVePsGo4aFtBrAqOtypu9MGUd9QIkXmaYy6yeAudqR2ZsHrt83+K3T9eYvb5bhuhB2C5 MDmjTyNG9V56aJiXJpr85bIZ/s9jlhGnPtFFJm9X98jsL7qJiCGJx1UjtiFVyIG+33WK 492JpMFbxWENPObkpcqXg/g4renJ3qCBVfouh3LRBAtQdVUyLX5ZbUMO2Mhd6u4M3AV8 anJg== MIME-Version: 1.0 X-Received: by 10.220.181.136 with SMTP id by8mr22932543vcb.11.1378812164412; Tue, 10 Sep 2013 04:22:44 -0700 (PDT) Received: by 10.221.21.70 with HTTP; Tue, 10 Sep 2013 04:22:44 -0700 (PDT) In-Reply-To: References: <522D67DB.7060404@infracaninophile.co.uk> <522E07B1.5030205@platinum.linux.pl> Date: Tue, 10 Sep 2013 12:22:44 +0100 Message-ID: Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) From: krad To: J David Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Sep 2013 11:22:45 -0000 "Using mountpoint=legacy seems a little conceptually challenged, especially given that the description of a legacy mount is:" not really as what is one of the 1st things the kernel does after it is loaded with all its modules? It looks to mount the root filesystem. This is something that happens outside zfs's control from what I understand so is a legacy mount rather than a zfs controlled one. The bootfs property of a pool is actually used by the zfsloader to locate the file system the loader.conf etc is on. The loader may or may not choose to pass this parameter through to the kernels environment, and its totally possible to have / set to a different dataset than the bootfs option. "How does mountpoint=legacy interact with importing the pool on another system, or from a LiveCD, with " -o altroot=/mnt " ? (A case where mountpoint=/ works perfectly.)" Its doesn't get mounted as its legacy, ie you have to mount the dataset manually where ever you see fit "And, finally, what would have to change to support a ZFS root filesystem set as mountpoint=/ instead of mountpoint=legacy ?" Why would you want to? On 9 September 2013 19:46, J David wrote: > On Mon, Sep 9, 2013 at 1:38 PM, Adam Nowacki > wrote: > > zfs set mountpoint=legacy data/root > > together with > > zpool set bootfs=data/root data > > This does appear to work, thanks. 
So the key steps seem to be: > > 1) zfs_load="YES" in loader.conf > 2) zfs_enable="YES" in rc.conf > 3) Set bootfs=data/root in the zpool. > 4) Set mountpoint=legacy on the root fs > > Using mountpoint=legacy seems a little conceptually challenged, > especially given that the description of a legacy mount is: > > "If a file system's mount point is set to legacy, ZFS makes no attempt > to manage the file system, and the administrator is responsible for > mounting and unmounting the file system." > > Is this bending things to claim setting bootfs is adequate example of > the administrator's responsibility to mount the file system? (Even > though that is clearly also part of ZFS.) > > How does mountpoint=legacy interact with importing the pool on another > system, or from a LiveCD, with " -o altroot=/mnt " ? (A case where > mountpoint=/ works perfectly.) > > And, finally, what would have to change to support a ZFS root > filesystem set as mountpoint=/ instead of mountpoint=legacy ? > > Thanks! > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Sep 10 11:23:32 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id E7F77D94 for ; Tue, 10 Sep 2013 11:23:32 +0000 (UTC) (envelope-from kraduk@gmail.com) Received: from mail-vc0-x233.google.com (mail-vc0-x233.google.com [IPv6:2607:f8b0:400c:c03::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A494D2055 for ; Tue, 10 Sep 2013 11:23:32 +0000 (UTC) Received: by mail-vc0-f179.google.com with SMTP id ht10so4793643vcb.24 for ; Tue, 10 Sep 2013 04:23:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=TlvxzjXpf0WYnuVM8X2gi0ha5fx/RjWoce5lboJr5g4=; b=NHFaUXcYg5Z/67NMBRegmjh+cr5goQSjWqPJVG+d+kDFn8QESw6h9pF6eI0tHgbsf9 vwvLsh30NoBJBAmrLZ+o8Gxxn1MedRqUJxMu34VGS42Fz0dWvOQPVYWD6ymcVEw3fAcE K8saAM1+iTdsIikKNtPUyVM2uSB+SLJuWdWTBPWmwPcy5YmYg7yFOLzhbinkVrt/LjPh tjudj6+qgXkG8RD8GNyawVaCceN4NBJMG4YGny5adzQUdtFlhhuR13pX3UyVyxYLKoxd uNFhLkyi3yNZJmADeLKlTClNFR+i/N1OuKeTUqiFFNL6S55T0vSXFqWtms+JxKz0uCsF OidQ== MIME-Version: 1.0 X-Received: by 10.58.118.130 with SMTP id km2mr22203191veb.0.1378812211877; Tue, 10 Sep 2013 04:23:31 -0700 (PDT) Received: by 10.221.21.70 with HTTP; Tue, 10 Sep 2013 04:23:31 -0700 (PDT) In-Reply-To: References: <522D67DB.7060404@infracaninophile.co.uk> <522E07B1.5030205@platinum.linux.pl> Date: Tue, 10 Sep 2013 12:23:31 +0100 Message-ID: Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) From: krad To: J David Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Sep 2013 11:23:33 -0000 you could probably play with the canmount property though On 10 September 2013 12:22, krad wrote: > "Using mountpoint=legacy seems a little conceptually challenged, 
> especially given that the description of a legacy mount is:" > > not really as what is one of the 1st things the kernel does after it is > loaded with all its modules? It looks to mount the root filesystem. This is > something that happens outside zfs's control from what I understand so is a > legacy mount rather than a zfs controlled one. The bootfs property of a > pool is actually used by the zfsloader to locate the file system the > loader.conf etc is on. The loader may or may not choose to pass this > parameter through to the kernels environment, and its totally possible to > have / set to a different dataset than the bootfs option. > > "How does mountpoint=legacy interact with importing the pool on another system, > or from a LiveCD, with " -o altroot=/mnt " ? (A case where > mountpoint=/ works perfectly.)" > > Its doesn't get mounted as its legacy, ie you have to mount the dataset > manually where ever you see fit > > "And, finally, what would have to change to support a ZFS root filesystem > set as mountpoint=/ instead of mountpoint=legacy ?" > > Why would you want to? > > > > > On 9 September 2013 19:46, J David wrote: > >> On Mon, Sep 9, 2013 at 1:38 PM, Adam Nowacki >> wrote: >> > zfs set mountpoint=legacy data/root >> > together with >> > zpool set bootfs=data/root data >> >> This does appear to work, thanks. So the key steps seem to be: >> >> 1) zfs_load="YES" in loader.conf >> 2) zfs_enable="YES" in rc.conf >> 3) Set bootfs=data/root in the zpool. >> 4) Set mountpoint=legacy on the root fs >> >> Using mountpoint=legacy seems a little conceptually challenged, >> especially given that the description of a legacy mount is: >> >> "If a file system's mount point is set to legacy, ZFS makes no attempt >> to manage the file system, and the administrator is responsible for >> mounting and unmounting the file system." >> >> Is this bending things to claim setting bootfs is adequate example of >> the administrator's responsibility to mount the file system? (Even >> though that is clearly also part of ZFS.) >> >> How does mountpoint=legacy interact with importing the pool on another >> system, or from a LiveCD, with " -o altroot=/mnt " ? (A case where >> mountpoint=/ works perfectly.) >> >> And, finally, what would have to change to support a ZFS root >> filesystem set as mountpoint=/ instead of mountpoint=legacy ? >> >> Thanks! 
>> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > > From owner-freebsd-fs@FreeBSD.ORG Tue Sep 10 13:59:15 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 34E413EF; Tue, 10 Sep 2013 13:59:15 +0000 (UTC) (envelope-from longwitz@incore.de) Received: from dss.incore.de (dss.incore.de [195.145.1.138]) by mx1.freebsd.org (Postfix) with ESMTP id DE66B2C6E; Tue, 10 Sep 2013 13:59:14 +0000 (UTC) Received: from inetmail.dmz (inetmail.dmz [10.3.0.3]) by dss.incore.de (Postfix) with ESMTP id 673BE5DD94; Tue, 10 Sep 2013 15:59:13 +0200 (CEST) X-Virus-Scanned: amavisd-new at incore.de Received: from dss.incore.de ([10.3.0.3]) by inetmail.dmz (inetmail.dmz [10.3.0.3]) (amavisd-new, port 10024) with LMTP id 08pB5SAMlQD2; Tue, 10 Sep 2013 15:59:12 +0200 (CEST) Received: from mail.incore (fwintern.dmz [10.0.0.253]) by dss.incore.de (Postfix) with ESMTP id B2F6A5DD8D; Tue, 10 Sep 2013 15:59:10 +0200 (CEST) Received: from bsdlo.incore (bsdlo.incore [192.168.0.84]) by mail.incore (Postfix) with ESMTP id A7EBE50BB0; Tue, 10 Sep 2013 15:59:10 +0200 (CEST) Message-ID: <522F25AE.1080309@incore.de> Date: Tue, 10 Sep 2013 15:59:10 +0200 From: Andreas Longwitz User-Agent: Thunderbird 2.0.0.19 (X11/20090113) MIME-Version: 1.0 To: Andriy Gapon Subject: Re: zfs panic during find(1) on zfs snapshot directory References: <522DF5A9.4070103@incore.de> <522E0118.5020106@FreeBSD.org> In-Reply-To: <522E0118.5020106@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Sep 2013 13:59:15 -0000 Thanks for quick answer ! > My personal recommendation is to keep .zfs directory hidden and/or perform only > basic operations on entries under it while ensuring that there is only one > process at a time that peeks there. > > The gfs stuff that handles .zfs operations is really very broken on FreeBSD[*]. > If you are interested, I have a patch that should some of the mess, but not all. > > [*] To see what I mean run several of the following shell loops in parallel: > while true; do ls -l /pool/fs/.zfs/ >/dev/null; done Ok, I was not aware of the problematic caused by visible snapdir property. I think your recommendation to use the default snapdir property hidden is fine for me and the panic I have described will not happen again. On the other side a panic should not happen when a user configures something else than the default. Therefore I am interested in helping to test the broken gfs stuff on some of my test servers, so your offered patch is welcome. I run zfs on production for a half year now, and I like to note that this panic was the first problem on all of my (eight) production servers running zfs. The only open zfs problem I have is described in kern/180060. 
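A sketch of running several of those loops in parallel for such a test; /pool/fs is a placeholder for a filesystem with snapdir=visible:

    # hammer the .zfs control directory from four shells at once
    for i in 1 2 3 4; do
        ( while true; do ls -l /pool/fs/.zfs/ >/dev/null; done ) &
    done
    wait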
-- Andreas Longwitz From owner-freebsd-fs@FreeBSD.ORG Thu Sep 12 19:24:22 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 37AA7F36 for ; Thu, 12 Sep 2013 19:24:22 +0000 (UTC) (envelope-from lkchen@k-state.edu) Received: from ksu-out.merit.edu (ksu-out.merit.edu [207.75.117.133]) by mx1.freebsd.org (Postfix) with ESMTP id 00A492F52 for ; Thu, 12 Sep 2013 19:24:21 +0000 (UTC) X-Merit-ExtLoop1: 1 X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AgEFACgUMlLPS3TT/2dsb2JhbABbgweBCoMqvlYWdIIlAQEFI1YMDxoCDRkCWQaIFah7iS2ITYEpkTKBNAOiLoc+gz6CDg X-IronPort-AV: E=Sophos;i="4.90,892,1371096000"; d="scan'208";a="78051035" X-MERIT-SOURCE: KSU Received: from ksu-sfpop-mailstore02.merit.edu ([207.75.116.211]) by sfpop-ironport03.merit.edu with ESMTP; 12 Sep 2013 15:23:01 -0400 Date: Thu, 12 Sep 2013 15:23:00 -0400 (EDT) From: "Lawrence K. Chen, P.Eng." To: Mark Martinec Message-ID: <99471506.80369991.1379013780610.JavaMail.root@k-state.edu> In-Reply-To: <201309091956.53759.Mark.Martinec+freebsd@ijs.si> Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [129.130.0.181] X-Mailer: Zimbra 7.2.2_GA_2852 (ZimbraWebClient - GC29 ([unknown])/7.2.2_GA_2852) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 12 Sep 2013 19:24:22 -0000 ----- Original Message ----- > Adam Nowacki writes: > > zfs set mountpoint=legacy data/root > > together with > > zpool set bootfs=data/root data > > > > setting vfs.root.mountfrom is not required - this is handled by the > > bootfs property, as is listing / in fstab > > So what happens if multiple pools each have their bootfs set? > > Mark > _______________________________________________ Unless things have changed...the boot loader only looks at the first freebsd-zfs partition on the disk its booting. Tripped me up once because originally I had a swap partition before my root zpool, but switched to in pool swap so I thought I would just turn the swap partition into an extra zpool. Had hoped using zfs would fix the random disk errors in swap bringing me down, ended up keeping it swap but gmirror'd. Later found I had a bad DIMM and a flaky controller.... 
Lawrence From owner-freebsd-fs@FreeBSD.ORG Fri Sep 13 09:29:17 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 55A1C9C5 for ; Fri, 13 Sep 2013 09:29:17 +0000 (UTC) (envelope-from kasahara@nc.kyushu-u.ac.jp) Received: from elvenbow.cc.kyushu-u.ac.jp (unknown [IPv6:2001:200:905:1407:21b:21ff:fe52:5260]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A634D246E for ; Fri, 13 Sep 2013 09:29:16 +0000 (UTC) Received: from elvenbow.nc.kyushu-u.ac.jp (kasahara@localhost [IPv6:::1]) by elvenbow.cc.kyushu-u.ac.jp (8.14.7/8.14.7) with ESMTP id r8D9T8Mq023720 for ; Fri, 13 Sep 2013 18:29:10 +0900 (JST) (envelope-from kasahara@nc.kyushu-u.ac.jp) Date: Fri, 13 Sep 2013 18:29:08 +0900 (JST) Message-Id: <20130913.182908.1011077043171329890.kasahara@nc.kyushu-u.ac.jp> To: freebsd-fs@freebsd.org Subject: [ZFS] continuous write to disk by zfskern From: Yoshiaki Kasahara X-Mailer: Mew version 6.5 on Emacs 24.3.50 / Mule 6.0 (HANACHIRUSATO) Mime-Version: 1.0 Content-Type: Text/Plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Sep 2013 09:29:17 -0000 Hello, Recently I noticed that my (zfs only) FreeBSD 9-STABLE system (for my main desktop) was very sluggish, and realized that zfskern was continuously writing something to my main raidz1 pool. By checking my munin record, it started just after I updated my world on Aug 27th. The temperature of HDD's are kept over 60C and I'm afraid the system is grinding the lifetime of them rapidly to death. Only the raidz pool "zroot" shows the symptom. It happens even when the system is in single user mode. 
----- % zpool iostat 1 capacity operations bandwidth pool alloc free read write read write ---------- ----- ----- ----- ----- ----- ----- backup2 1.36T 1.36T 0 0 4.60K 2.52K zroot 1.26T 1.44T 16 1.35K 238K 5.17M ---------- ----- ----- ----- ----- ----- ----- backup2 1.36T 1.36T 0 0 0 0 zroot 1.26T 1.44T 0 1.56K 0 7.96M ---------- ----- ----- ----- ----- ----- ----- backup2 1.36T 1.36T 0 0 0 0 zroot 1.26T 1.44T 0 1.58K 0 4.47M ---------- ----- ----- ----- ----- ----- ----- backup2 1.36T 1.36T 0 0 0 0 zroot 1.26T 1.44T 0 1.59K 0 4.54M ---------- ----- ----- ----- ----- ----- ----- backup2 1.36T 1.36T 0 0 0 0 zroot 1.26T 1.44T 0 1.73K 0 6.13M ---------- ----- ----- ----- ----- ----- ----- ^C % top -mio -SH -owrite 10 last pid: 13351; load averages: 1.04, 0.52, 0.40 up 0+00:30:32 17:55:32 671 processes: 9 running, 635 sleeping, 2 zombie, 25 waiting Mem: 747M Active, 235M Inact, 1383M Wired, 2100K Cache, 13G Free ARC: 956M Total, 258M MFU, 661M MRU, 5566K Anon, 8027K Header, 23M Other Swap: 24G Total, 24G Free PID USERNAME VCSW IVCSW READ WRITE FAULT TOTAL PERCENT COMMAND 4 root 129929 114 229 166157 0 166386 88.64% zfskern{txg_ 1198 root 751 15 15 1621 0 1636 0.87% syslogd 1025 _pflogd 5642 2 1 1390 0 1391 0.74% pflogd 2512 kasahara 120297 1606 116 574 182 872 0.46% Xorg 2569 kasahara 1131 21 712 196 8 916 0.49% gconfd-2 2598 www 136 0 4 146 0 150 0.08% httpd 2570 www 192 4 12 143 0 155 0.08% httpd 2248 www 147 4 19 141 0 160 0.09% httpd 2626 kasahara 4192 91 381 131 389 901 0.48% nautilus{nau 4536 kasahara 32937 762 1866 128 725 2719 1.45% emacs-24.3.5 ----- Are there any way to see what is happening to my system? I had I believe the behavior started to happen when I installed the following kernel, but I'm not sure if it is a culprit. FreeBSD 9.2-PRERELEASE #0 r254947: Tue Aug 27 13:36:54 JST 2013 I updated my world again after that (to fix vulnerabilities), so I don't have the previous kernel anymore. I can only remember the previous update was during July. My system has two pools, one is raidz consist of 3 disks, another one is single disk pool. Please ignore backup2's data errors (it is another story and I'm going to replace the disk soon). ----- % zpool status pool: backup2 state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. 
see: http://illumos.org/msg/ZFS-8000-8A scan: scrub repaired 0 in 3h37m with 9 errors on Sat Aug 31 08:34:28 2013 config: NAME STATE READ WRITE CKSUM backup2 ONLINE 0 0 0 gpt/backup2 ONLINE 0 0 0 errors: 9 data errors, use '-v' for a list pool: zroot state: ONLINE scan: scrub canceled on Fri Sep 13 17:22:45 2013 config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 gpt/disk0 ONLINE 0 0 0 gpt/disk1 ONLINE 0 0 0 gpt/disk2 ONLINE 0 0 0 errors: No known data errors % grep ada /var/run/dmesg.boot ada0 at ahcich0 bus 0 scbus0 target 0 lun 0 ada0: ATA-7 SATA 2.x device ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada0: Command Queueing enabled ada0: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada0: Previously was known as ad4 ada1 at ahcich1 bus 0 scbus1 target 0 lun 0 ada1: ATA-7 SATA 2.x device ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada1: Command Queueing enabled ada1: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada1: Previously was known as ad6 ada2 at ahcich2 bus 0 scbus2 target 0 lun 0 ada2: ATA-7 SATA 2.x device ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada2: Command Queueing enabled ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada2: Previously was known as ad8 ada3 at ahcich4 bus 0 scbus4 target 0 lun 0 ada3: ATA-8 SATA 3.x device ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada3: Command Queueing enabled ada3: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C) ada3: quirks=0x1<4K> ada3: Previously was known as ad12 % gpart list Geom name: ada0 modified: false state: OK fwheads: 16 fwsectors: 63 last: 1953525134 first: 34 entries: 128 scheme: GPT Providers: 1. Name: ada0p1 Mediasize: 65536 (64k) Sectorsize: 512 Stripesize: 0 Stripeoffset: 17408 Mode: r0w0e0 rawuuid: 5c629775-10e5-11df-9abf-001b21525260 rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: (null) length: 65536 offset: 17408 type: freebsd-boot index: 1 end: 161 start: 34 2. Name: ada0p2 Mediasize: 8589934592 (8.0G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e1 rawuuid: 9f7ec4c5-10e5-11df-9abf-001b21525260 rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b label: swap0 length: 8589934592 offset: 82944 type: freebsd-swap index: 2 end: 16777377 start: 162 3. Name: ada0p3 Mediasize: 991614851584 (923G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e2 rawuuid: d275a596-10e5-11df-9abf-001b21525260 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: disk0 length: 991614851584 offset: 8590017536 type: freebsd-zfs index: 3 end: 1953525134 start: 16777378 Consumers: 1. Name: ada0 Mediasize: 1000204886016 (931G) Sectorsize: 512 Mode: r2w2e5 Geom name: ada1 modified: false state: OK fwheads: 16 fwsectors: 63 last: 1953525134 first: 34 entries: 128 scheme: GPT Providers: 1. Name: ada1p1 Mediasize: 65536 (64k) Sectorsize: 512 Stripesize: 0 Stripeoffset: 17408 Mode: r0w0e0 rawuuid: 5f353ff1-10e5-11df-9abf-001b21525260 rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: (null) length: 65536 offset: 17408 type: freebsd-boot index: 1 end: 161 start: 34 2. Name: ada1p2 Mediasize: 8589934592 (8.0G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e1 rawuuid: a28e75bb-10e5-11df-9abf-001b21525260 rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b label: swap1 length: 8589934592 offset: 82944 type: freebsd-swap index: 2 end: 16777377 start: 162 3. 
Name: ada1p3 Mediasize: 991614851584 (923G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e2 rawuuid: d60cd5c7-10e5-11df-9abf-001b21525260 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: disk1 length: 991614851584 offset: 8590017536 type: freebsd-zfs index: 3 end: 1953525134 start: 16777378 Consumers: 1. Name: ada1 Mediasize: 1000204886016 (931G) Sectorsize: 512 Mode: r2w2e5 Geom name: ada2 modified: false state: OK fwheads: 16 fwsectors: 63 last: 1953525134 first: 34 entries: 128 scheme: GPT Providers: 1. Name: ada2p1 Mediasize: 65536 (64k) Sectorsize: 512 Stripesize: 0 Stripeoffset: 17408 Mode: r0w0e0 rawuuid: 60568cdf-10e5-11df-9abf-001b21525260 rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: (null) length: 65536 offset: 17408 type: freebsd-boot index: 1 end: 161 start: 34 2. Name: ada2p2 Mediasize: 8589934592 (8.0G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e1 rawuuid: a4cfd93f-10e5-11df-9abf-001b21525260 rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b label: swap2 length: 8589934592 offset: 82944 type: freebsd-swap index: 2 end: 16777377 start: 162 3. Name: ada2p3 Mediasize: 991614851584 (923G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e2 rawuuid: d88a0a46-10e5-11df-9abf-001b21525260 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: disk2 length: 991614851584 offset: 8590017536 type: freebsd-zfs index: 3 end: 1953525134 start: 16777378 Consumers: 1. Name: ada2 Mediasize: 1000204886016 (931G) Sectorsize: 512 Mode: r2w2e5 Geom name: ada3 modified: false state: OK fwheads: 16 fwsectors: 63 last: 5860533134 first: 34 entries: 128 scheme: GPT Providers: 1. Name: ada3p1 Mediasize: 65536 (64k) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r0w0e0 rawuuid: b995fc9e-abc3-11e2-b146-001cc0fac46a rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: (null) length: 65536 offset: 20480 type: freebsd-boot index: 1 end: 167 start: 40 2. Name: ada3p2 Mediasize: 3000591450112 (2.7T) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e2 rawuuid: fcaac211-abc3-11e2-b146-001cc0fac46a rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: backup2 length: 3000591450112 offset: 1048576 type: freebsd-zfs index: 2 end: 5860532223 start: 2048 Consumers: 1. 
Name: ada3 Mediasize: 3000592982016 (2.7T) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e3 % zfs-stats -a ------------------------------------------------------------------------ ZFS Subsystem Report Fri Sep 13 18:03:32 2013 ------------------------------------------------------------------------ System Information: Kernel Version: 902503 (osreldate) Hardware Platform: amd64 Processor Architecture: amd64 ZFS Storage pool Version: 5000 ZFS Filesystem Version: 5 FreeBSD 9.2-PRERELEASE #0 r255506: Fri Sep 13 16:09:51 JST 2013 root 6:03PM up 39 mins, 5 users, load averages: 0.21, 0.32, 0.33 ------------------------------------------------------------------------ System Memory: 4.74% 751.62 MiB Active, 1.49% 236.12 MiB Inact 8.89% 1.38 GiB Wired, 0.01% 2.05 MiB Cache 84.86% 13.14 GiB Free, 0.01% 1.84 MiB Gap Real Installed: 16.00 GiB Real Available: 99.79% 15.97 GiB Real Managed: 96.96% 15.48 GiB Logical Total: 16.00 GiB Logical Used: 16.44% 2.63 GiB Logical Free: 83.56% 13.37 GiB Kernel Memory: 1.10 GiB Data: 97.02% 1.07 GiB Text: 2.98% 33.62 MiB Kernel Memory Map: 15.47 GiB Size: 6.44% 1019.41 MiB Free: 93.56% 14.47 GiB ------------------------------------------------------------------------ ARC Summary: (HEALTHY) Memory Throttle Count: 0 ARC Misc: Deleted: 25 Recycle Misses: 0 Mutex Misses: 0 Evict Skips: 744 ARC Size: 15.90% 977.06 MiB Target Size: (Adaptive) 100.00% 6.00 GiB Min Size (Hard Limit): 16.67% 1.00 GiB Max Size (High Water): 6:1 6.00 GiB ARC Size Breakdown: Recently Used Cache Size: 50.00% 3.00 GiB Frequently Used Cache Size: 50.00% 3.00 GiB ARC Hash Breakdown: Elements Max: 31.89k Elements Current: 100.00% 31.89k Collisions: 107.00k Chain Max: 4 Chains: 1.83k ------------------------------------------------------------------------ ARC Efficiency: 1.19m Cache Hit Ratio: 97.73% 1.16m Cache Miss Ratio: 2.27% 26.98k Actual Hit Ratio: 92.39% 1.10m Data Demand Efficiency: 97.97% 531.13k Data Prefetch Efficiency: 17.48% 1.84k CACHE HITS BY CACHE LIST: Anonymously Used: 5.46% 63.35k Most Recently Used: 26.05% 302.29k Most Frequently Used: 68.50% 795.00k Most Recently Used Ghost: 0.00% 0 Most Frequently Used Ghost: 0.00% 0 CACHE HITS BY DATA TYPE: Demand Data: 44.83% 520.37k Prefetch Data: 0.03% 322 Demand Metadata: 49.71% 576.91k Prefetch Metadata: 5.43% 63.04k CACHE MISSES BY DATA TYPE: Demand Data: 39.89% 10.76k Prefetch Data: 5.63% 1.52k Demand Metadata: 27.09% 7.31k Prefetch Metadata: 27.39% 7.39k ------------------------------------------------------------------------ L2ARC is disabled ------------------------------------------------------------------------ File-Level Prefetch: (HEALTHY) DMU Efficiency: 2.98m Hit Ratio: 74.29% 2.22m Miss Ratio: 25.71% 766.97k Colinear: 766.97k Hit Ratio: 0.01% 98 Miss Ratio: 99.99% 766.87k Stride: 2.18m Hit Ratio: 100.00% 2.18m Miss Ratio: 0.00% 6 DMU Misc: Reclaim: 766.87k Successes: 0.52% 4.01k Failures: 99.48% 762.86k Streams: 32.42k +Resets: 0.10% 34 -Resets: 99.90% 32.39k Bogus: 0 ------------------------------------------------------------------------ VDEV cache is disabled ------------------------------------------------------------------------ ZFS Tunables (sysctl): kern.maxusers 1357 vm.kmem_size 16622653440 vm.kmem_size_scale 1 vm.kmem_size_min 0 vm.kmem_size_max 329853485875 vfs.zfs.arc_max 6442450944 vfs.zfs.arc_min 1073741824 vfs.zfs.arc_meta_used 246149616 vfs.zfs.arc_meta_limit 4294967296 vfs.zfs.l2arc_write_max 8388608 vfs.zfs.l2arc_write_boost 8388608 vfs.zfs.l2arc_headroom 2 vfs.zfs.l2arc_feed_secs 1 
vfs.zfs.l2arc_feed_min_ms 200 vfs.zfs.l2arc_noprefetch 1 vfs.zfs.l2arc_feed_again 1 vfs.zfs.l2arc_norw 1 vfs.zfs.anon_size 3833856 vfs.zfs.anon_metadata_lsize 0 vfs.zfs.anon_data_lsize 0 vfs.zfs.mru_size 694739968 vfs.zfs.mru_metadata_lsize 115806208 vfs.zfs.mru_data_lsize 529095680 vfs.zfs.mru_ghost_size 28662784 vfs.zfs.mru_ghost_metadata_lsize 13669888 vfs.zfs.mru_ghost_data_lsize 14992896 vfs.zfs.mfu_size 289340928 vfs.zfs.mfu_metadata_lsize 26335232 vfs.zfs.mfu_data_lsize 249390592 vfs.zfs.mfu_ghost_size 72080896 vfs.zfs.mfu_ghost_metadata_lsize 41472 vfs.zfs.mfu_ghost_data_lsize 72039424 vfs.zfs.l2c_only_size 0 vfs.zfs.dedup.prefetch 1 vfs.zfs.nopwrite_enabled 1 vfs.zfs.mdcomp_disable 0 vfs.zfs.no_write_throttle 0 vfs.zfs.write_limit_shift 3 vfs.zfs.write_limit_min 33554432 vfs.zfs.write_limit_max 2142888448 vfs.zfs.write_limit_inflated 51429322752 vfs.zfs.write_limit_override 0 vfs.zfs.prefetch_disable 0 vfs.zfs.zfetch.max_streams 8 vfs.zfs.zfetch.min_sec_reap 2 vfs.zfs.zfetch.block_cap 256 vfs.zfs.zfetch.array_rd_sz 1048576 vfs.zfs.top_maxinflight 32 vfs.zfs.resilver_delay 2 vfs.zfs.scrub_delay 4 vfs.zfs.scan_idle 50 vfs.zfs.scan_min_time_ms 1000 vfs.zfs.free_min_time_ms 1000 vfs.zfs.resilver_min_time_ms 3000 vfs.zfs.no_scrub_io 0 vfs.zfs.no_scrub_prefetch 0 vfs.zfs.mg_alloc_failures 12 vfs.zfs.write_to_degraded 0 vfs.zfs.check_hostid 1 vfs.zfs.recover 0 vfs.zfs.deadman_synctime 1000 vfs.zfs.deadman_enabled 1 vfs.zfs.txg.synctime_ms 1000 vfs.zfs.txg.timeout 5 vfs.zfs.vdev.cache.max 16384 vfs.zfs.vdev.cache.size 0 vfs.zfs.vdev.cache.bshift 16 vfs.zfs.vdev.trim_on_init 1 vfs.zfs.vdev.max_pending 10 vfs.zfs.vdev.min_pending 4 vfs.zfs.vdev.time_shift 29 vfs.zfs.vdev.ramp_rate 2 vfs.zfs.vdev.aggregation_limit 131072 vfs.zfs.vdev.read_gap_limit 32768 vfs.zfs.vdev.write_gap_limit 4096 vfs.zfs.vdev.bio_flush_disable 0 vfs.zfs.vdev.bio_delete_disable 0 vfs.zfs.vdev.trim_max_bytes 2147483648 vfs.zfs.vdev.trim_max_pending 64 vfs.zfs.zil_replay_disable 0 vfs.zfs.cache_flush_disable 0 vfs.zfs.zio.use_uma 0 vfs.zfs.sync_pass_deferred_free 2 vfs.zfs.sync_pass_dont_compress 5 vfs.zfs.sync_pass_rewrite 2 vfs.zfs.snapshot_list_prefetch 0 vfs.zfs.super_owner 0 vfs.zfs.debug 0 vfs.zfs.version.ioctl 3 vfs.zfs.version.acl 1 vfs.zfs.version.spa 5000 vfs.zfs.version.zpl 5 vfs.zfs.trim.enabled 0 vfs.zfs.trim.txg_delay 32 vfs.zfs.trim.timeout 30 vfs.zfs.trim.max_interval 1 ------------------------------------------------------------------------ ----- Please tell me if you need anything else to diagnose the problem. 
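A few generic ways to narrow down what the txg sync thread is writing (a sketch only; the pool name and the zfskern PID are taken from the output above):

  # per-vdev breakdown of the write load on the affected pool
  zpool iostat -v zroot 1
  # kernel stacks of the zfskern threads (PID 4 in the top output)
  procstat -kk 4
  # check whether any dataset in the pool is actually growing
  zfs list -r -o name,used,referenced zroot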
Regards, -- Yoshiaki Kasahara Research Institute for Information Technology, Kyushu University kasahara@nc.kyushu-u.ac.jp From owner-freebsd-fs@FreeBSD.ORG Fri Sep 13 10:01:29 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 93403392 for ; Fri, 13 Sep 2013 10:01:29 +0000 (UTC) (envelope-from kasahara@nc.kyushu-u.ac.jp) Received: from elvenbow.cc.kyushu-u.ac.jp (unknown [IPv6:2001:200:905:1407:21b:21ff:fe52:5260]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id EDEF42644 for ; Fri, 13 Sep 2013 10:01:28 +0000 (UTC) Received: from elvenbow.nc.kyushu-u.ac.jp (kasahara@localhost [IPv6:::1]) by elvenbow.cc.kyushu-u.ac.jp (8.14.7/8.14.7) with ESMTP id r8DA1K6T008116 for ; Fri, 13 Sep 2013 19:01:23 +0900 (JST) (envelope-from kasahara@nc.kyushu-u.ac.jp) Date: Fri, 13 Sep 2013 19:01:20 +0900 (JST) Message-Id: <20130913.190120.1468536214959099699.kasahara@nc.kyushu-u.ac.jp> To: freebsd-fs@freebsd.org Subject: Re: [ZFS] continuous write to disk by zfskern From: Yoshiaki Kasahara In-Reply-To: <20130913.182908.1011077043171329890.kasahara@nc.kyushu-u.ac.jp> References: <20130913.182908.1011077043171329890.kasahara@nc.kyushu-u.ac.jp> X-Mailer: Mew version 6.5 on Emacs 24.3.50 / Mule 6.0 (HANACHIRUSATO) Mime-Version: 1.0 Content-Type: Text/Plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Sep 2013 10:01:29 -0000 On Fri, 13 Sep 2013 18:29:08 +0900 (JST), Yoshiaki Kasahara said: > Hello, > > Recently I noticed that my (zfs only) FreeBSD 9-STABLE system (for my > main desktop) was very sluggish, and realized that zfskern was > continuously writing something to my main raidz1 pool. By checking my > munin record, it started just after I updated my world on Aug > 27th. The temperature of HDD's are kept over 60C and I'm afraid the > system is grinding the lifetime of them rapidly to death. > > Only the raidz pool "zroot" shows the symptom. It happens even when > the system is in single user mode. 
> [...]
I noticed that one of my snapshots still holds an older kernel (Aug 8th), and even using that kernel didn't solve the problem. So maybe the timing of the symptom is just a coincidence. Sorry for the confusion. Now I'm feeling that the dataset (ZIL maybe?) is somehow corrupted.
After I receive the new HDD for backup2 pool, I'm considering to backup the entire contents of zroot and destroy/create zroot again (when I have time). Regards, -- Yoshiaki Kasahara Research Institute for Information Technology, Kyushu University kasahara@nc.kyushu-u.ac.jp From owner-freebsd-fs@FreeBSD.ORG Fri Sep 13 13:20:37 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id D4DCFEE5; Fri, 13 Sep 2013 13:20:37 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id D660A21F5; Fri, 13 Sep 2013 13:20:36 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id QAA16552; Fri, 13 Sep 2013 16:20:27 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VKTIR-000OyW-Ga; Fri, 13 Sep 2013 16:20:27 +0300 Message-ID: <523310E2.4050702@FreeBSD.org> Date: Fri, 13 Sep 2013 16:19:30 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130810 Thunderbird/17.0.8 MIME-Version: 1.0 To: J David Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) References: In-Reply-To: X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 8bit Cc: "freebsd-fs@freebsd.org" , freebsd-stable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Sep 2013 13:20:38 -0000 First, a note that below I try to reply not only to this specific message but to the whole thread. on 09/09/2013 04:02 J David said the following: > After setting up a new machine to boot from a ZFS root using the 9.1 > install, it worked fine, but when the kernel & world was updated to > releng/9.2, it stopped booting. The pool is called "data" and the > root partition is "data/root." > > Under 9.1 it had in loader.conf: > > zfs_load="YES" > vfs.root.mountfrom="zfs:data/root" > > Under 9.2-RC3, the same config results in a panic: > > Trying to mount root from zfs:data/root []… > init: not found in path > /sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init:/stand/sysinstall > panic: no init This is a very weird error. It means that kernel was able to mount data/root as a root filesystem, but couldn't find /sbin/init in it. Which can mean at least two different things: (1) some other filesystem was mounted instead of data/root because of some bug; (2) your data/root didn't actually contain valid FreeBSD installation. I set up a test system exactly the way you described above and I can not reproduce this behavior. Just in case, I used mfsbsd zfsinstall and that's how it creates and configures a pool by default. > If this is changed (as many Google hits recommend) to: > > zfs_enable="YES" I think that this was discussed enough in the thread and the right conclusions have been already reached. I just have two general comments: - you don't have to trust everything that is written "on the internet". 
Prefer to use more or less authoritative sources: FreeBSD documentation, FreeBSD wiki, posts by FreeBSD developers and alike - it surprises me how many people who don't understand how the code works feel that they can give advices to other people - when *I* used the following query https://www.google.com/search?q=%22zfs_enable%22+%22loader.conf%22 I could not find a single suggestion to put zfs_enable into loader.conf in the first dozen of results (references to this thread excluded) > vfs.root.mountfrom="zfs:data/root" > > It seems like ZFS doesn't get loaded, so it fails instead with: > > Trying to mount root from zfs:data/root []… > Mounting from zfs:data/root failed with error 2: unknown file system. > > If the "?" mountroot> option is used, 50 devices are listed, none of > which are ZFS. And the "unknown file system" response comes from > vfs_byname returning NULL for zfs. Obvious (as already established). > (If both zfs_enable and zfs_load are set to "YES" then it fails as the > zfs_load case.) Obvious (as already established). > The system is using update-to-date zpool (v5000 / feature flags), and > all the updated bootblocks from the releng/9.2 build. zpool.cache is > correct, the zpool imports fine from the 9.2-RC3 live cd. The zpool's > bootfs is set correctly, the zfs mountpoint of data/root is / . And, > of course, init is present and health in data/root. The system booted > fine until updating to 9.2. I just wish that I could reproduce this problem using exactly the same setup... But I can't. Perhaps there are any other special things about your configuration - like having other pools or other disks/partitions that are not used by 'data' pool. Any other non-standard things... -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Fri Sep 13 13:22:59 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id D53801B5; Fri, 13 Sep 2013 13:22:59 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id DCB41224D; Fri, 13 Sep 2013 13:22:58 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id QAA16588; Fri, 13 Sep 2013 16:22:57 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1VKTKr-000Oyv-1U; Fri, 13 Sep 2013 16:22:57 +0300 Message-ID: <52331179.4030201@FreeBSD.org> Date: Fri, 13 Sep 2013 16:22:01 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130810 Thunderbird/17.0.8 MIME-Version: 1.0 To: "freebsd-fs@freebsd.org" , freebsd-stable Subject: Re: zfs_enable vs zfs_load in loader.conf (but neither works) References: <523310E2.4050702@FreeBSD.org> In-Reply-To: <523310E2.4050702@FreeBSD.org> X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Sep 2013 13:22:59 -0000 Now some high level information on how ZFS boot works and a little bit more detailed information on how a root filesystem is chosen 
in the ZFS case. The information is applicable to recent versions of FreeBSD in head, stable/9 (including upcoming 9.2) and stable/8 (including 8.4).

- boot0-like stage always takes boot2-like stage from the same disk using simple rules
- boot2-like stage probes all disks and partitions it can understand for ZFS pools
- default pool is the first pool detected by probing, which starts at the boot disk
- default filesystem is determined by the bootfs property
- boot2-like stage allows selecting a different pool, a specific filesystem in the pool and a specific loader

boot0-like stage is pmbr in the case of GPT partitioning. boot0-like stage is the first block of zfsboot in the case of whole-disk ZFS. boot2-like stage is either gptzfsboot or zfsboot respectively.

- loader uses boot pool and filesystem information passed by boot2-like stage
- loader exposes loaddev and currdev variables, initially they point to the pool and filesystem obtained from boot2-like stage
- currdev can be changed (e.g. at the prompt) while loaddev is read only
- kernel and modules are loaded from currdev by default
- kernel mounts root from a filesystem specified by the vfs.root.mountfrom variable that is passed by loader to kenv
- value of the variable is determined as follows:
  - loader tries to set this variable based on the "/" entry, if any, in /etc/fstab, if any, in the filesystem specified by currdev
  - the variable can be explicitly set in loader.conf or at the prompt; the explicit assignment overrides the fstab-based auto-detected value
  - for ZFS, if the above methods do not produce any value, vfs.root.mountfrom is set based on currdev

So, you can see that all three methods mentioned in this thread can work equally well. You can either specify a root entry in fstab, or set vfs.root.mountfrom in loader.conf, or simply set the bootfs property. The above information also describes the precedence rules if more than one knob is used: vfs.root.mountfrom is the most significant, fstab is after it, and bootfs plays a role in root filesystem selection only if neither of the previous is set. Thus, it's completely up to you which method to use. Whichever is more convenient. I prefer to just set bootfs. Another piece of information is that neither the mountpoint nor the canmount property affects ZFS root mounting. They of course have their usual effect in other contexts like importing a pool on a different system or when a different filesystem is selected to be a root filesystem. So, again, you can set these properties to whatever is most convenient for you.
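To make the precedence described above concrete, the three knobs look roughly like this (a sketch reusing the data/root example from this thread; any one of the three is sufficient on its own):

  # 1) highest precedence: explicit setting in /boot/loader.conf
  vfs.root.mountfrom="zfs:data/root"

  # 2) next: a "/" entry in /etc/fstab of the dataset that currdev points to
  data/root   /   zfs   rw   0   0

  # 3) lowest precedence: only the pool's bootfs property
  zpool set bootfs=data/root data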
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Fri Sep 13 22:37:08 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id CC31BAC8; Fri, 13 Sep 2013 22:37:08 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 9BAF12BAF; Fri, 13 Sep 2013 22:37:08 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id r8DMb8wV094995; Fri, 13 Sep 2013 22:37:08 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.7/8.14.7/Submit) id r8DMb8aq094994; Fri, 13 Sep 2013 22:37:08 GMT (envelope-from linimon) Date: Fri, 13 Sep 2013 22:37:08 GMT Message-Id: <201309132237.r8DMb8aq094994@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/181966: [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig); zio.c line 2955 [9.2/amd64] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Sep 2013 22:37:08 -0000 Old Synopsis: Kernel panic in ZFS I/O: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig); zio.c line 2955 [9.2/amd64] New Synopsis: [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig); zio.c line 2955 [9.2/amd64] Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Sep 13 22:36:50 UTC 2013 Responsible-Changed-Why: overs http://www.freebsd.org/cgi/query-pr.cgi?pr=181966 From owner-freebsd-fs@FreeBSD.ORG Sat Sep 14 20:02:46 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 9EEF38C2; Sat, 14 Sep 2013 20:02:46 +0000 (UTC) (envelope-from danger@FreeBSD.org) Received: from services.syscare.sk (services.syscare.sk [188.40.39.36]) by mx1.freebsd.org (Postfix) with ESMTP id D9D3A28CD; Sat, 14 Sep 2013 20:02:45 +0000 (UTC) Received: from services.syscare.sk (services [188.40.39.36]) by services.syscare.sk (Postfix) with ESMTP id 2370C5CDB; Sat, 14 Sep 2013 21:53:06 +0200 (CEST) X-Virus-Scanned: amavisd-new at rulez.sk Received: from services.syscare.sk ([188.40.39.36]) by services.syscare.sk (services.rulez.sk [188.40.39.36]) (amavisd-new, port 10024) with ESMTP id FlTngpxMXRrR; Sat, 14 Sep 2013 21:53:03 +0200 (CEST) Received: from mbp.local (adsl-dyn14.91-127-75.t-com.sk [91.127.75.14]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: danger@rulez.sk) by services.syscare.sk (Postfix) with ESMTPSA id 9D56F5CC5; Sat, 14 Sep 2013 21:53:03 +0200 (CEST) Message-ID: <5234BE9E.1030308@FreeBSD.org> Date: Sat, 14 Sep 2013 21:53:02 +0200 From: Daniel Gerzo Organization: The FreeBSD Project User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; 
rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: avg@freebsd.org, fs@freebsd.org Subject: Mounting from zfs failed with error 22 with gmirror Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 14 Sep 2013 20:02:46 -0000 Hello list, I have come across this thing and I don't have an idea what to do next. I have this partition setup: [root@rescue ~]# gpart show => 34 3907029101 ada0 GPT (1.8T) 34 6 - free - (3.0k) 40 1024 1 freebsd-boot (512k) 1064 83886080 2 freebsd-swap (40G) 83887144 3823141984 3 freebsd-zfs (1.8T) 3907029128 7 - free - (3.5k) => 34 3907029101 ada1 GPT (1.8T) 34 6 - free - (3.0k) 40 1024 1 freebsd-boot (512k) 1064 83886080 2 freebsd-swap (40G) 83887144 3823141984 3 freebsd-zfs (1.8T) 3907029128 7 - free - (3.5k) [root@rescue ~]# gpart show -l => 34 3907029101 ada0 GPT (1.8T) 34 6 - free - (3.0k) 40 1024 1 boot0 (512k) 1064 83886080 2 swap0 (40G) 83887144 3823141984 3 sys0 (1.8T) 3907029128 7 - free - (3.5k) => 34 3907029101 ada1 GPT (1.8T) 34 6 - free - (3.0k) 40 1024 1 boot1 (512k) 1064 83886080 2 swap1 (40G) 83887144 3823141984 3 sys1 (1.8T) 3907029128 7 - free - (3.5k) [root@rescue ~]# zpool import -f -o altroot=/mnt -o cachefile=/boot/zfs/zpool.cache sys [root@rescue ~]# zpool status pool: sys state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM sys ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 gpt/sys0 ONLINE 0 0 0 gpt/sys1 ONLINE 0 0 0 errors: No known data errors [root@rescue ~]# zdb sys: version: 28 name: 'sys' state: 0 txg: 13622 pool_guid: 13749191682008517984 hostid: 966392425 hostname: 'rescue' vdev_children: 1 vdev_tree: type: 'root' id: 0 guid: 13749191682008517984 children[0]: type: 'mirror' id: 0 guid: 10821644781744913225 metaslab_array: 30 metaslab_shift: 34 ashift: 12 asize: 1957443928064 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 12516881521540558071 path: '/dev/gpt/sys0' phys_path: '/dev/gpt/sys0' whole_disk: 1 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 187152467666907385 path: '/dev/gpt/sys1' phys_path: '/dev/gpt/sys1' whole_disk: 1 create_txg: 4 [root@rescue ~]# zpool get bootfs sys NAME PROPERTY VALUE SOURCE sys bootfs sys/default/root local [root@rescue ~]# gmirror status Name Status Components mirror/swap COMPLETE ada1p2 (ACTIVE) ada0p2 (ACTIVE) The problem is that while I do not load geom_mirror from loader.conf, the machine boots fine, however as soon as I enable gmirror in loader.conf the machine doesn't boot and errors with /Trying to mount root from zfs:sys/default/root [].../ /Mounting from zfs:sys/default/root failed with error 22. / and it hangs in the prompt asking me to enter device to mount root from. I found only this http://lists.freebsd.org/pipermail/freebsd-current/2012-November/037910.html email where avg@ mentions that it might be a bug in his code, but no further followups. However that is almost a year ago and I got trapped by this on 9.2-RC4. Could anyone possibly give me some hints? (Please keep in in cc: as I am not subscribed to fs@) Thank you in advance! Kind regards, Daniel
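Not a diagnosis, but a few commands that may help rule out stray GEOM metadata when gmirror and ZFS share the same disks (provider and mirror names are taken from the output above):

  # show the gmirror metadata stored in the last sector of each swap partition
  gmirror dump ada0p2
  gmirror dump ada1p2
  # full view of the mirror, its consumers and flags
  gmirror list swap
  # verify that the whole disks themselves never received a gmirror label
  gmirror dump ada0
  gmirror dump ada1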