From owner-freebsd-fs@FreeBSD.ORG Sun Jun 14 05:19:30 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2EDBD1065675 for ; Sun, 14 Jun 2009 05:19:30 +0000 (UTC) (envelope-from noackjr@alumni.rice.edu) Received: from smtp109.biz.mail.mud.yahoo.com (smtp109.biz.mail.mud.yahoo.com [68.142.201.178]) by mx1.freebsd.org (Postfix) with SMTP id E3EA48FC1A for ; Sun, 14 Jun 2009 05:19:29 +0000 (UTC) (envelope-from noackjr@alumni.rice.edu) Received: (qmail 34584 invoked from network); 14 Jun 2009 05:19:29 -0000 Received: from unknown (HELO optimator.noacks.org) (noackjr@96.35.144.62 with login) by smtp109.biz.mail.mud.yahoo.com with SMTP; 14 Jun 2009 05:19:29 -0000 X-Yahoo-SMTP: lf_ydH2swBBBfU4zSj6s29Gn1AqWpQIrFClaJdTnJv1EdZ8- X-YMail-OSG: EIyfk.YVM1lnmiRFYJ9SMI7zDsgTx24gR_FIAz9TZg4kpv9sOOoNqOFL1U463wC3rxm1w8RqfdbusBZCd2L.DH_U.cljBlSo2ZD2nO3ut0gGROeax86xlb9ZiGtJGDKMcFGRT2F0rAO6AAuA9w9mIou.zk35Qep5l8tWQxduv5fGe3XNqmFv.wYxuiFYnZuLZ5L9cix9PRvhAf_BU5JfoBMCEKdfUTfDGMmhlvw5yCjIARj3KiD4Gf29qEzpIeQJNaowgqNalbXfowm6XIPns21oTTy3hvvNk5Tt X-Yahoo-Newman-Property: ymail-3 Received: from localhost (localhost [127.0.0.1]) by optimator.noacks.org (Postfix) with ESMTP id 99F48649D; Sun, 14 Jun 2009 00:09:30 -0500 (CDT) X-Virus-Scanned: amavisd-new at noacks.org Received: from optimator.noacks.org ([127.0.0.1]) by localhost (optimator.noacks.org [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 9a4nZ+lOLBsD; Sun, 14 Jun 2009 00:09:28 -0500 (CDT) Received: from www.noacks.org (localhost [127.0.0.1]) by optimator.noacks.org (Postfix) with ESMTP id 9452F62F8; Sun, 14 Jun 2009 00:09:28 -0500 (CDT) Received: from 192.168.1.148 (SquirrelMail authenticated user noackjr) by www.noacks.org with HTTP; Sun, 14 Jun 2009 00:09:28 -0500 Message-ID: In-Reply-To: <9cc826f0720e1624489dd6e6d384babc.squirrel@www.noacks.org> References: <9461581F-F354-486D-961D-3FD5B1EF007C@rabson.org> 
<20090201072432.GA25276@server.vk2pj.dyndns.org> <246ecf0c87f944d70c5562eeed4165c9@mail.rabson.org> <9cc826f0720e1624489dd6e6d384babc.squirrel@www.noacks.org> Date: Sun, 14 Jun 2009 00:09:28 -0500 From: "Jonathan Noack" To: noackjr@alumni.rice.edu User-Agent: SquirrelMail/1.4.19 MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 (Normal) Importance: Normal Cc: freebsd-fs@freebsd.org Subject: Re: Booting from ZFS raidz X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: noackjr@alumni.rice.edu List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Jun 2009 05:19:30 -0000 On Fri, May 15, 2009 19:07, Jonathan Noack wrote: > On Thu, May 14, 2009 10:25, Doug Rabson wrote: >> I fixed a bug in the patch. Try this version: >> http://people.freebsd.org/~dfr/raidzboot-14052009.diff > > I know the bug fix was for booting from degraded pools, but I can at least > give you a "no regression" report. I just set up a new amd64 box and was > able to boot from a raidz1 pool using your latest patch. > > Getting this working from scratch was tedious but not too complicated. I > followed lulf's instructions > (http://blogs.freebsdish.org/lulf/2008/12/16/setting-up-a-zfs-only-system/) > using the May snapshot fixit CD. Only differences were that I set up all > 4 disks with gpart (identically), created a raidz1 pool, and used a > patched gptzfsboot that I cross-compiled on my 7.2 i386 box for the > bootcode (applied to all 4 disks). > > If only I had remembered to patch my /usr/src tree before rebuilding world > and rebooting... *sigh* Once more unto the fixit breach... :) This (and the committed version) had been working fine for me on my stock amd64 CURRENT system until I rebuilt world/kernel on 5/30 and rebooted. 
I get the following error on boot (hand transcribed so hopefully I didn't screw it up): ************************************************************ ZFS: i/o error - all block copies unavailable ZFS: can't read object set for dataset lld Can't find root filesystem - giving up ZFS: unexpected object set type lld ZFS: unexpected object set type lld FreeBSD/i386 boot Default: tank:/boot/kernel/kernerl boot: ZFS: unexpected object set type lld FreeBSD/i386 boot Default: tank:/boot/kernel/kernel boot: ************************************************************ The previously working world/kernel was from 5/26. I haven't had much time to troubleshoot until today. I can use the fixit CD to access the ZFS pool with no issues; the problem appears to just be the boot code. I cross-built a fresh world on my i386 system today, reinstalled everything in /boot, reinstalled gptzfsboot, and still got the same results. What steps should I take to troubleshoot and resolve this? Thanks, -Jon From owner-freebsd-fs@FreeBSD.ORG Sun Jun 14 07:50:03 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3FBF310656A7 for ; Sun, 14 Jun 2009 07:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 138408FC1A for ; Sun, 14 Jun 2009 07:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n5E7o2LQ069090 for ; Sun, 14 Jun 2009 07:50:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n5E7o2bN069089; Sun, 14 Jun 2009 07:50:02 GMT (envelope-from gnats) Date: Sun, 14 Jun 2009 07:50:02 GMT Message-Id: <200906140750.n5E7o2bN069089@freefall.freebsd.org> To: 
freebsd-fs@FreeBSD.org From: Thomas Backman Cc: Subject: Re: kern/135050: [zfs] ZFS clears/hides disk errors on reboot X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Thomas Backman List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Jun 2009 07:50:04 -0000 The following reply was made to PR kern/135050; it has been noted by GNATS. From: Thomas Backman To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/135050: [zfs] ZFS clears/hides disk errors on reboot Date: Sun, 14 Jun 2009 09:25:01 +0200 Apparently, errors like these are actually logged to syslog, and thus not completely hidden at all. By adding a line to your /etc/devd.conf you can even get an email notification automatically the instant an error is logged. Very nice. See this post: http://lists.freebsd.org/pipermail/freebsd-current/2009-June/008149.html Regards, Thomas From owner-freebsd-fs@FreeBSD.ORG Sun Jun 14 09:50:03 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7AF101065672 for ; Sun, 14 Jun 2009 09:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 670E68FC1A for ; Sun, 14 Jun 2009 09:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n5E9o3pP066825 for ; Sun, 14 Jun 2009 09:50:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n5E9o3no066824; Sun, 14 Jun 2009 09:50:03 GMT (envelope-from gnats) Date: Sun, 14 Jun 2009 09:50:03 GMT Message-Id: <200906140950.n5E9o3no066824@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Dan Naumov Cc: 
Subject: Re: misc/118855: [zfs] ZFS-related commands are nonfunctional in fixit shell. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Dan Naumov List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Jun 2009 09:50:03 -0000 The following reply was made to PR misc/118855; it has been noted by GNATS. From: Dan Naumov To: bug-followup@FreeBSD.org, erik.swanson@gmail.com Cc: Subject: Re: misc/118855: [zfs] ZFS-related commands are nonfunctional in fixit shell. Date: Sun, 14 Jun 2009 12:43:20 +0300 This should be moved to -docs; here is why: I managed to figure it out after having some of my hair go gray: when you are in FIXIT, you have to do "kldload /dist/boot/kernel/opensolaris.ko; kldload /dist/boot/kernel/zfs.ko" in that particular order (because automatic loading of kernel module dependencies does not work in FIXIT). After this, "zpool" and "zfs" will start working. The ZFS part of the Handbook ( http://www.freebsd.org/doc/en/books/handbook/filesystems-zfs.html ) makes no mention of this; I think a small note in there is in order. 
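[The workaround above boils down to the following command sequence in the Fixit shell; the module paths are as given in the message, while the pool name "tank" is only a hypothetical illustration:]

```sh
# Load opensolaris.ko before zfs.ko: automatic loading of kernel
# module dependencies does not work in the Fixit environment.
kldload /dist/boot/kernel/opensolaris.ko
kldload /dist/boot/kernel/zfs.ko
# zpool/zfs commands now work, e.g. to import an existing pool:
zpool import -f tank
```
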
Sincerely, Dan Naumov From owner-freebsd-fs@FreeBSD.ORG Sun Jun 14 12:21:58 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 38E01106564A; Sun, 14 Jun 2009 12:21:58 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id C8E5C8FC15; Sun, 14 Jun 2009 12:21:57 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from volatile.chemikals.org (adsl-67-127-7.shv.bellsouth.net [98.67.127.7]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id 67ABC8AE434D; Sun, 14 Jun 2009 07:21:56 -0500 (CDT) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.3/8.14.3) with ESMTP id n5ECLphC082419; Sun, 14 Jun 2009 07:21:52 -0500 (CDT) (envelope-from morganw@chemikals.org) Date: Sun, 14 Jun 2009 07:21:51 -0500 (CDT) From: Wes Morgan To: Stanislav Sedov In-Reply-To: <20090613205648.9840e240.stas@FreeBSD.org> Message-ID: References: <86ljnxyy01.fsf@pmade.com> <20090613205648.9840e240.stas@FreeBSD.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org, Peter Jones Subject: Re: Logical Disk to Physical Drive Mapping X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Jun 2009 12:21:58 -0000 On Sat, 13 Jun 2009, Stanislav Sedov wrote: > On Fri, 12 Jun 2009 14:53:50 -0600 > Peter Jones mentioned: > >> Given the situation where you have several identical physical drives, >> what is the best way to turn logical labels such as da5 into a physical >> identifier like "the drive in 
slot 4"? >> >> It looks like I could use dmesg, some assumptions, and glabel to label >> the logical disks. However, I plan to use ZFS and as far as I can tell >> glabel doesn't support ZFS. >> > > If you're using ZFS you probably don't need labels at all. AFAIK, ZFS > stores all of its information in the on-disk metadata, and you always > access data via ZFS volume labels. It does, but even in -current I have to export/import a pool if the device numbering shifts, and "zpool status" output could make your heart skip a beat if you didn't know how to fix it :) It might be kludgy (a chicken/egg type problem), but couldn't glabel be extended to read ZFS labels and create something like /dev/zpools/, and then zfs look there first for devices to import? From owner-freebsd-fs@FreeBSD.ORG Sun Jun 14 15:49:43 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 757271065670; Sun, 14 Jun 2009 15:49:43 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello087206192061.chello.pl [87.206.192.61]) by mx1.freebsd.org (Postfix) with ESMTP id ADD7D8FC20; Sun, 14 Jun 2009 15:49:42 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id B3FEC45E9C; Sun, 14 Jun 2009 17:49:39 +0200 (CEST) Received: from localhost (chello087206192061.chello.pl [87.206.192.61]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 5D16F4569A; Sun, 14 Jun 2009 17:49:34 +0200 (CEST) Date: Sun, 14 Jun 2009 17:49:38 +0200 From: Pawel Jakub Dawidek To: "James R. 
Van Artsdalen" Message-ID: <20090614154938.GC1848@garage.freebsd.pl> References: <920A69B1-4F06-477E-A13B-63CC22A13120@exscape.org> <3c1674c90906121401s19105167vf4535566321b45de@mail.gmail.com> <20090613150627.GB1848@garage.freebsd.pl> <4A3411EF.5000307@jrv.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="mvpLiMfbWzRoNl4x" Content-Disposition: inline In-Reply-To: <4A3411EF.5000307@jrv.org> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 8.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@FreeBSD.org, Kip Macy , FreeBSD Current , Thomas Backman Subject: Re: ZFS: Silent/hidden errors, nothing logged anywhere X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Jun 2009 15:49:44 -0000 --mvpLiMfbWzRoNl4x Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sat, Jun 13, 2009 at 03:54:07PM -0500, James R. Van Artsdalen wrote: > Pawel Jakub Dawidek wrote: > > > > We do log such errors. Solaris uses FMA and for FreeBSD I use devd. You > > can find the following entry in /etc/devd.conf: > > > > notify 10 { > > match "system" "ZFS"; > > match "type" "checksum"; > > action "logger -p kern.warn 'ZFS: checksum mismatch, zpool=$pool path=$vdev_path offset=$zio_offset size=$zio_size'"; > > }; > > > > If you see nothing in your logs, there must be a bug with reporting the > > problem somewhere or devd is not running (it should be enabled by > > default). > > > > Looking at vsyslog(3), I don't think logger(1) can ever log with > facility KERN. 
LOG_KERN is 0, so this in vsyslog > > /* Set default facility if none specified. */ > if ((pri & LOG_FACMASK) == 0) > pri |= LogFacility; > > will always change the KERN facility to LogFacility, which defaults > to LOG_USER. > > So the devd output is really going to user.warn, and a syslog.conf line like > > kern.* /var/log/kernel.log > > will capture kernel messages, but not the devd logger output, and if you > look in kernel.log you won't find the checksum errors. Could be; most of the time I just use *.* /var/log/all.log. We could easily log directly from inside the kernel, but this is just an example devd entry, so one can replace it with, e.g., mailing the problem to the system administrator or whatever. -- Pawel Jakub Dawidek http://www.wheel.pl pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --mvpLiMfbWzRoNl4x Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.4 (FreeBSD) iD8DBQFKNRwSForvXbEpPzQRAlJwAKC/wgqzMzX+/LO2A4YqY8Rm6DAk6ACgrgQh ONb32i62rHj7RVg8J8PwZrE= =OCAq -----END PGP SIGNATURE----- --mvpLiMfbWzRoNl4x-- From owner-freebsd-fs@FreeBSD.ORG Sun Jun 14 16:16:23 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D795C106564A; Sun, 14 Jun 2009 16:16:23 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: from yw-out-2324.google.com (yw-out-2324.google.com [74.125.46.28]) by mx1.freebsd.org (Postfix) with ESMTP id 83F408FC14; Sun, 14 Jun 2009 16:16:23 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: by yw-out-2324.google.com with SMTP id 9so1750620ywe.13 for ; Sun, 14 Jun 2009 09:16:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:date:message-id:subject :from:to:content-type:content-transfer-encoding; 
bh=md+Ux4Rtfm6x+IPK0kfzCj3+C9WvSch+vi8//QeRdNs=; b=N+9ELMjLCPfaWCpqdEmOHlrbuYW8x82ipaB5084v4jDqO385aS/+NRO/kseSmx/IeR g/n1lgww1Rz9PWZIuZshVYDaIoEX4GkAoiLxei6QDwbqAGlTChI31xNeZk5MpQXy0BfS bIFV6q7xr5kzYYKcFWiv07K9KEUnQY4VSWZhg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type :content-transfer-encoding; b=rr0ly5V6c3WsPKn8bLtLbc+9fDANBwsL6vlA5kz/NGsNK1QWP43kQLoouEWVeph6F2 1YYApL86A404y+4lsdSI/2xwLJ6IRQcpsM+3x5M0I0p4twZHcmI2U3dCCktLO3tin1zp Iq3pQf27//eICTIogX5+nancBem0ahCSveCGA= MIME-Version: 1.0 Received: by 10.100.249.14 with SMTP id w14mr7563403anh.162.1244996182882; Sun, 14 Jun 2009 09:16:22 -0700 (PDT) Date: Sun, 14 Jun 2009 19:16:22 +0300 Message-ID: From: Dan Naumov To: freebsd-geom@freebsd.org, freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: Subject: Does this disk/filesystem layout look sane to you? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Jun 2009 16:16:24 -0000 Hello list. I just wanted to have an extra pair (or a dozen) of eyes look this configuration over before I commit to it (tested it in VMWare just in case, it works, so I am considering doing this on real hardware soon). I drew a nice diagram: http://www.pastebin.ca/1460089 Since it doesn't show on the diagram, let me clarify that the geom mirror consumers as well as the vdevs for ZFS RAIDZ are going to be partitions (raw disk => full disk slice => swap partition | mirror provider partition | zfs vdev partition | unused). Is there any actual downside to having a 5-way mirror vs a 2-way or a 3-way one? 
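[As a sketch only, the per-disk layout described above could be created along these lines; the disk names, sizes, label, and pool name here are all hypothetical, and a 2009-era install may need MBR slices plus bsdlabel rather than GPT:]

```sh
# Repeat the partitioning for each of the five disks (ad4 used as a placeholder).
gpart create -s gpt ad4
gpart add -t freebsd-swap -s 2G ad4   # p1: swap partition
gpart add -t freebsd-ufs  -s 8G ad4   # p2: gmirror consumer
gpart add -t freebsd-zfs        ad4   # p3: ZFS vdev (rest of the disk)

# 5-way gmirror across the second partition of every disk:
gmirror label -v gm0 ad4p2 ad6p2 ad8p2 ad10p2 ad12p2
# RAIDZ across the third partitions:
zpool create tank raidz ad4p3 ad6p3 ad8p3 ad10p3 ad12p3
```

[On the 5-way vs 2/3-way question: beyond the capacity given up to the extra copies, the main cost is that every write must go to all five consumers, so write throughput is bounded by the slowest member.]
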
- Sincerely, Dan Naumov From owner-freebsd-fs@FreeBSD.ORG Mon Jun 15 11:06:53 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B0BCE106566C for ; Mon, 15 Jun 2009 11:06:53 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 9D6B58FC21 for ; Mon, 15 Jun 2009 11:06:53 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n5FB6rrT076893 for ; Mon, 15 Jun 2009 11:06:53 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n5FB6rWr076889 for freebsd-fs@FreeBSD.org; Mon, 15 Jun 2009 11:06:53 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 15 Jun 2009 11:06:53 GMT Message-Id: <200906151106.n5FB6rWr076889@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Jun 2009 11:06:53 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135480 fs [zfs] panic: lock &arg.lock already initialized o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135412 fs [zfs] [nfs] zfs(v13)+nfs and open(..., O_WRONLY|O_CREA o bin/135314 fs [zfs] assertion failed for zdb(8) usage o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/135039 fs [zfs] mkstemp() fails over NFS when server uses ZFS (7 f kern/134496 fs [zfs] [panic] ZFS pool export occasionally causes a ke o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133980 fs [panic] [ffs] panic: ffs_valloc: dup alloc o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133614 fs [smbfs] [panic] panic: ffs_truncate: read-only filesys o kern/133373 fs [zfs] umass attachment causes ZFS checksum errors, dat o kern/133174 fs [msdosfs] [patch] msdosfs must support utf-encoded int f kern/133150 fs [zfs] Page fault with ZFS on 7.1-RELEASE/amd64 while w o kern/133134 fs [zfs] Missing ZFS zpool labels f kern/133020 fs [zfs] [panic] inappropriate panic caused by zfs. 
Pani o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132597 fs [tmpfs] [panic] tmpfs-related panic while interrupting o kern/132551 fs [zfs] ZFS locks up on extattr_list_link syscall o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132337 fs [zfs] [panic] kernel panic in zfs_fuid_create_cred o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes f kern/132068 fs [zfs] page fault when using ZFS over NFS on 7.1-RELEAS o kern/131995 fs [nfs] Failure to mount NFSv4 server o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/131086 fs [ext2fs] [patch] mkfs.ext2 creates rotten partition o kern/130979 fs [smbfs] [panic] boot/kernel/smbfs.ko o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130229 fs [iconv] usermount fails on fs that need iconv o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/129148 fs [zfs] [panic] panic on concurrent writing & rollback o kern/129059 fs [zfs] [patch] ZFS bootloader whitelistable via WITHOUT f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE o kern/128633 fs [zfs] [lor] lock order reversal in zfs o kern/128514 fs [zfs] [mpt] problems with ZFS and LSILogic SAS/SATA Ad f kern/128173 fs [ext2fs] ls gives "Input/output error" on mounted ext3 o kern/127659 fs [tmpfs] tmpfs memory leak o kern/127492 fs [zfs] System hang on ZFS input-output o kern/127420 fs [gjournal] [panic] Journal 
overflow on gmirrored gjour o kern/127213 fs [tmpfs] sendfile on tmpfs data corruption o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/125644 fs [zfs] [panic] zfs unfixable fs errors caused panic whe f kern/125536 fs [ext2fs] ext 2 mounts cleanly but fails on commands li o kern/125149 fs [nfs] [panic] changing into .zfs dir from nfs client c f kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition f bin/124424 fs [zfs] zfs(8): zfs list -r shows strange snapshots' siz o kern/123939 fs [msdosfs] corrupts new files o kern/122888 fs [zfs] zfs hang w/ prefetch on, zil off while running t o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o kern/122173 fs [zfs] [panic] Kernel Panic if attempting to replace a o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o kern/122047 fs [ext2fs] [patch] incorrect handling of UF_IMMUTABLE / o kern/122038 fs [tmpfs] [panic] tmpfs: panic: tmpfs_alloc_vp: type 0xc o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121779 fs [ufs] snapinfo(8) (and related tools?) 
only work for t o kern/121770 fs [zfs] ZFS on i386, large file or heavy I/O leads to ke o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha f kern/120991 fs [panic] [fs] [snapshot] System crashes when manipulati o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o bin/120288 fs zfs(8): "zfs share -a" does not send SIGHUP to mountd f kern/119735 fs [zfs] geli + ZFS + samba starting on boot panics 7.0-B o kern/118912 fs [2tb] disk sizing/geometry problem with large array o misc/118855 fs [zfs] ZFS-related commands are nonfunctional in fixit o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118320 fs [zfs] [patch] NFS SETATTR sometimes fails to set file o bin/118249 fs mv(1): moving a directory changes its mtime o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o kern/116913 fs [ffs] [panic] ffs_blkfree: freeing free block p kern/116608 fs [msdosfs] [patch] msdosfs fails to check mount options o kern/116583 fs [ffs] [hang] System freezes for short time when using o kern/116170 fs [panic] Kernel panic when mounting /tmp o kern/115645 fs [snapshots] [panic] lockmgr: thread 0xc4c00d80, not ex o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] 
smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o kern/113180 fs [zfs] Setting ZFS nfsshare property does not cause inh o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] mount_msdosfs: msdosfs_iconv: Operation not o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106030 fs [ufs] [panic] panic in ufs from geom when a dead disk o kern/105093 fs [ext2fs] [patch] ext2fs on read-only media cannot be m o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist f kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [iso9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna f kern/91568 fs [ufs] [panic] writing to UFS/softupdates DVD media in o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/89991 fs [ufs] softupdates with mount -ur causes fs UNREFS o kern/88657 fs [smbfs] windows 
client hang when browsing a samba shar o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o kern/85326 fs [smbfs] [panic] saving a file via samba to an overquot o kern/84589 fs [2TB] 5.4-STABLE unresponsive during background fsck 2 o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o kern/77826 fs [ext2fs] ext2fs usb filesystem will not mount RW o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 143 problems total. 
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 15 17:24:30 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A0CD2106564A; Mon, 15 Jun 2009 17:24:30 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 767D88FC18; Mon, 15 Jun 2009 17:24:30 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (linimon@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n5FHOUb7077308; Mon, 15 Jun 2009 17:24:30 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n5FHOUIW077304; Mon, 15 Jun 2009 17:24:30 GMT (envelope-from linimon) Date: Mon, 15 Jun 2009 17:24:30 GMT Message-Id: <200906151724.n5FHOUIW077304@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/135594: [zfs] Single dataset unresponsive with Samba X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Jun 2009 17:24:31 -0000 Synopsis: [zfs] Single dataset unresponsive with Samba Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Jun 15 17:24:18 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=135594 From owner-freebsd-fs@FreeBSD.ORG Tue Jun 16 15:00:40 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F0D2C106566C; Tue, 16 Jun 2009 15:00:40 +0000 (UTC) (envelope-from avg@icyb.net.ua) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id CCFAC8FC18; Tue, 16 Jun 2009 15:00:39 +0000 (UTC) (envelope-from avg@icyb.net.ua) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA09265; Tue, 16 Jun 2009 18:00:37 +0300 (EEST) (envelope-from avg@icyb.net.ua) Message-ID: <4A37B395.20506@icyb.net.ua> Date: Tue, 16 Jun 2009 18:00:37 +0300 From: Andriy Gapon User-Agent: Thunderbird 2.0.0.21 (X11/20090406) MIME-Version: 1.0 To: Kip Macy References: <4A325E9F.2080802@icyb.net.ua> <3c1674c90906121354s6d6ae7ben5082708b1586e94f@mail.gmail.com> In-Reply-To: <3c1674c90906121354s6d6ae7ben5082708b1586e94f@mail.gmail.com> X-Enigmail-Version: 0.95.7 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: zfs related panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Jun 2009 15:00:41 -0000 on 12/06/2009 23:54 Kip Macy said the following: > show sleepchain I can only do post-mortem using jhb's scripts for kgdb: (kgdb) sleepchain 2432 thread 100263 (pid 2432, tcsh) non-lock sleep lockchain 2432 thread 100263 (pid 2432, tcsh) inhibited Not sure if this correct though and what this means. 
> show thread 100263 (kgdb) thr 250 [Switching to thread 250 (Thread 100263)]#0 sched_switch (td=0xffffff000cfad720, newtd=Variable "newtd" is not available. ) at /usr/src/sys/kern/sched_ule.c:1944 1944 cpuid = PCPU_GET(cpuid); (kgdb) backtrace #0 sched_switch (td=0xffffff000cfad720, newtd=Variable "newtd" is not available. ) at /usr/src/sys/kern/sched_ule.c:1944 #1 0xffffffff80302a59 in mi_switch (flags=1, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:444 #2 0xffffffff8032f645 in sleepq_switch (wchan=Variable "wchan" is not available. ) at /usr/src/sys/kern/subr_sleepqueue.c:497 #3 0xffffffff8032f925 in sleepq_catch_signals (wchan=0xffffff011440e548) at /usr/src/sys/kern/subr_sleepqueue.c:417 #4 0xffffffff80330219 in sleepq_wait_sig (wchan=Variable "wchan" is not available. ) at /usr/src/sys/kern/subr_sleepqueue.c:594 #5 0xffffffff80302eba in _sleep (ident=0xffffff011440e548, lock=0xffffff011440e5a0, priority=360, wmesg=0xffffffff80508788 "pause", timo=0) at /usr/src/sys/kern/kern_synch.c:228 #6 0xffffffff802fc567 in kern_sigsuspend (td=Variable "td" is not available. ) at /usr/src/sys/kern/kern_sig.c:1474 #7 0xffffffff802fc5e9 in sigsuspend (td=0xffffff000cfad720, uap=Variable "uap" is not available. ) at /usr/src/sys/kern/kern_sig.c:1453 #8 0xffffffff80491d2d in syscall (frame=0xffffff8076db8c80) at /usr/src/sys/amd64/amd64/trap.c:899 #9 0xffffffff8047d00b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:339 #10 0x000000080092ce3c in ?? () Previous frame inner to this frame (corrupt stack?) 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Jun 16 15:20:44 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 962A410656AD for ; Tue, 16 Jun 2009 15:20:44 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 7A3EB8FC13 for ; Tue, 16 Jun 2009 15:20:43 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA09400; Tue, 16 Jun 2009 18:09:10 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4A37B596.4090607@freebsd.org> Date: Tue, 16 Jun 2009 18:09:10 +0300 From: Andriy Gapon User-Agent: Thunderbird 2.0.0.21 (X11/20090406) MIME-Version: 1.0 To: Kip Macy References: <4A325E9F.2080802@icyb.net.ua> <3c1674c90906121354s6d6ae7ben5082708b1586e94f@mail.gmail.com> <4A37B395.20506@icyb.net.ua> In-Reply-To: <4A37B395.20506@icyb.net.ua> X-Enigmail-Version: 0.95.7 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: zfs related panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Jun 2009 15:20:45 -0000 on 16/06/2009 18:00 Andriy Gapon said the following: > on 12/06/2009 23:54 Kip Macy said the following: >> show sleepchain > > I can only do post-mortem using jhb's scripts for kgdb: > (kgdb) sleepchain 2432 > thread 100263 (pid 2432, tcsh) non-lock sleep I think that this was reported because td_wchan is not 'struct lock' in this case. (kgdb) fr 6 #6 0xffffffff802fc567 in kern_sigsuspend (td=Variable "td" is not available. 
) at /usr/src/sys/kern/kern_sig.c:1474 (kgdb) list 1469 td->td_oldsigmask = td->td_sigmask; 1470 td->td_pflags |= TDP_OLDMASK; 1471 SIG_CANTMASK(mask); 1472 td->td_sigmask = mask; 1473 signotify(td); 1474 while (msleep(&p->p_sigacts, &p->p_mtx, PPAUSE|PCATCH, "pause", 0) == 0) 1475 /* void */; 1476 PROC_UNLOCK(p); 1477 /* always return EINTR rather than ERESTART... */ 1478 return (EINTR); (kgdb) p &p->p_sigacts $10 = (struct sigacts **) 0xffffff011440e548 (kgdb) fr 0 #0 sched_switch (td=0xffffff000cfad720, newtd=Variable "newtd" is not available. ) at /usr/src/sys/kern/sched_ule.c:1944 (kgdb) p td->td_wchan $11 = (void *) 0xffffff011440e548 (kgdb) p td->td_wmesg $12 = 0xffffffff80508788 "pause" (kgdb) backtrace #0 sched_switch (td=0xffffff000cfad720, newtd=Variable "newtd" is not available. ) at /usr/src/sys/kern/sched_ule.c:1944 #1 0xffffffff80302a59 in mi_switch (flags=1, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:444 #2 0xffffffff8032f645 in sleepq_switch (wchan=Variable "wchan" is not available. ) at /usr/src/sys/kern/subr_sleepqueue.c:497 #3 0xffffffff8032f925 in sleepq_catch_signals (wchan=0xffffff011440e548) at /usr/src/sys/kern/subr_sleepqueue.c:417 #4 0xffffffff80330219 in sleepq_wait_sig (wchan=Variable "wchan" is not available. ) at /usr/src/sys/kern/subr_sleepqueue.c:594 #5 0xffffffff80302eba in _sleep (ident=0xffffff011440e548, lock=0xffffff011440e5a0, priority=360, wmesg=0xffffffff80508788 "pause", timo=0) at /usr/src/sys/kern/kern_synch.c:228 #6 0xffffffff802fc567 in kern_sigsuspend (td=Variable "td" is not available. ) at /usr/src/sys/kern/kern_sig.c:1474 #7 0xffffffff802fc5e9 in sigsuspend (td=0xffffff000cfad720, uap=Variable "uap" is not available. ) at /usr/src/sys/kern/kern_sig.c:1453 #8 0xffffffff80491d2d in syscall (frame=0xffffff8076db8c80) at /usr/src/sys/amd64/amd64/trap.c:899 #9 0xffffffff8047d00b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:339 #10 0x000000080092ce3c in ?? 
() -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Jun 16 18:53:12 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 318BC1065672 for ; Tue, 16 Jun 2009 18:53:12 +0000 (UTC) (envelope-from peterjeremy@optushome.com.au) Received: from mail12.syd.optusnet.com.au (mail12.syd.optusnet.com.au [211.29.132.193]) by mx1.freebsd.org (Postfix) with ESMTP id 1560E8FC29 for ; Tue, 16 Jun 2009 18:53:10 +0000 (UTC) (envelope-from peterjeremy@optushome.com.au) Received: from server.vk2pj.dyndns.org (c122-106-216-167.belrs3.nsw.optusnet.com.au [122.106.216.167]) by mail12.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id n5GIqNnA011108 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 17 Jun 2009 04:53:08 +1000 X-Bogosity: Ham, spamicity=0.000000 Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1]) by server.vk2pj.dyndns.org (8.14.3/8.14.3) with ESMTP id n5GIqMeK067320; Wed, 17 Jun 2009 04:52:22 +1000 (EST) (envelope-from peter@server.vk2pj.dyndns.org) Received: (from peter@localhost) by server.vk2pj.dyndns.org (8.14.3/8.14.3/Submit) id n5GIqM3e067317; Wed, 17 Jun 2009 04:52:22 +1000 (EST) (envelope-from peter) Date: Wed, 17 Jun 2009 04:52:21 +1000 From: Peter Jeremy To: Dan Naumov Message-ID: <20090616185221.GI9529@server.vk2pj.dyndns.org> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="+Z7/5fzWRHDJ0o7Q" Content-Disposition: inline In-Reply-To: X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc User-Agent: Mutt/1.5.19 (2009-01-05) Cc: freebsd-fs@freebsd.org, freebsd-geom@freebsd.org Subject: Re: Does this disk/filesystem layout look sane to you? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Jun 2009 18:53:12 -0000 --+Z7/5fzWRHDJ0o7Q Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2009-Jun-14 19:16:22 +0300, Dan Naumov wrote: >Is there any actual downside to having a 5-way mirror vs a 2-way or a 3-way one? Only write performance to the UFS root filesystem. I run a system using a similar approach (though across 3 disks). My only suggestion would be that instead of a single 5-way mirrored root, you have a 2- or 3-way mirrored root and an off-line root backup using the remaining disks - if you accidentally trash your active root, you can just boot off one of the other disks to recover. -- Peter Jeremy --+Z7/5fzWRHDJ0o7Q Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.11 (FreeBSD) iEYEARECAAYFAko36eUACgkQ/opHv/APuIeeJQCfUt1mb4iCQonTgVOBWQGcVJ8d JW4AnR1DKOrDCf8O5/+B6uGAvDVeFRJ4 =SloR -----END PGP SIGNATURE----- --+Z7/5fzWRHDJ0o7Q-- From owner-freebsd-fs@FreeBSD.ORG Wed Jun 17 07:34:03 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E58E0106566B; Wed, 17 Jun 2009 07:34:03 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: from mail-yx0-f200.google.com (mail-yx0-f200.google.com [209.85.210.200]) by mx1.freebsd.org (Postfix) with ESMTP id 8FE288FC15; Wed, 17 Jun 2009 07:34:03 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: by yxe38 with SMTP id 38so122361yxe.3 for ; Wed, 17 Jun 2009 00:34:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:date:message-id:subject
:from:to:content-type:content-transfer-encoding; bh=YTEyU0acg4mHW+UBhPvAPnugFM9Tsp6+rXVGYv8lGlQ=; b=PNX5dMlAQ5/7/2tZ+ZCCB/iAKgEvu4PoYiSzfkCWG9esUGCU7UbK82pK0x9S7useF3 InDhr0p2d1Hr8KW3weQwo/kN2i7zhF2cPD5OJFu/DvQXOdPoBhoAR5pct8HgzJwowj5x T5YLhaKy0G/+ZG68/JBR8Xbow+lKgrkU827W4= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type :content-transfer-encoding; b=P9X4OVJ6bsM/VTvfQZbCqeTMlEVdSwwhuXsRguj0kOi9bejq02Z0n7+DcsPKFKxSTO D9W/FJDEPFkRibJ9bhM+X3F57JgXBCStHrq6MlrYtPEgjNkfF2DyuqywYJ6jamuvvX+Z 3HoI5jLLpERt11GI02yY5Nr//sqKrYWSDDEOM= MIME-Version: 1.0 Received: by 10.100.127.4 with SMTP id z4mr11768967anc.129.1245224042984; Wed, 17 Jun 2009 00:34:02 -0700 (PDT) Date: Wed, 17 Jun 2009 10:34:02 +0300 Message-ID: From: Dan Naumov To: freebsd-fs@freebsd.org, FreeBSD-STABLE Mailing List Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: Subject: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Jun 2009 07:34:04 -0000 I am wondering if the numbers I am seeing is something expected or is something broken somewhere. 
Output of bonnie -s 1024:

on UFS2 + SoftUpdates:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec  %CPU    /sec %CPU
         1024 56431 94.5 88407 38.9 77357 53.3 64042 98.6 644511 98.6 23603.8 243.3

on ZFS:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec  %CPU    /sec %CPU
         1024 22591 53.7 45602 35.1 14770 13.2 45007 83.8 94595  28.0   102.2  1.2

atom# cat /boot/loader.conf
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="96M"

The test isn't completely fair: the UFS2 test is done on a partition that resides on the first 16gb of a 2tb disk, while the ZFS test is done on the enormous 1.9tb ZFS pool that comes after that partition (same disk). Can this difference in layout account for the huge difference in performance, or is there something else in play? The system is an Intel Atom 330 dualcore, 2gb ram, Western Digital Green 2tb disk. Also, what would be a good way to get comparable numbers for UFS2 vs ZFS on the same system?
Sincerely, - Dan Naumov From owner-freebsd-fs@FreeBSD.ORG Wed Jun 17 10:53:52 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 619DB106566B for ; Wed, 17 Jun 2009 10:53:52 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from ciao.gmane.org (main.gmane.org [80.91.229.2]) by mx1.freebsd.org (Postfix) with ESMTP id E38268FC27 for ; Wed, 17 Jun 2009 10:53:51 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from list by ciao.gmane.org with local (Exim 4.43) id 1MGsm3-0004Zf-HF for freebsd-fs@freebsd.org; Wed, 17 Jun 2009 10:53:47 +0000 Received: from lara.cc.fer.hr ([161.53.72.113]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 17 Jun 2009 10:53:47 +0000 Received: from ivoras by lara.cc.fer.hr with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 17 Jun 2009 10:53:47 +0000 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Ivan Voras Date: Wed, 17 Jun 2009 12:53:37 +0200 Lines: 40 Message-ID: References: Mime-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@ger.gmane.org X-Gmane-NNTP-Posting-Host: lara.cc.fer.hr User-Agent: Thunderbird 2.0.0.21 (X11/20090615) In-Reply-To: Sender: news Cc: freebsd-stable@freebsd.org Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Jun 2009 10:53:52 -0000 Dan Naumov wrote: > I am wondering if the numbers I am seeing is something expected or is > something broken somewhere. Output of bonnie -s 1024: Unless you have 512 MB of memory in the machine or you're trying to test caching, the benchmark you did is useless. 
In your environment, you need at least "-s 4096". Even with those issues solved, it's semi-useless since you did both tests on the same drive, on different parts of it (see "diskinfo -vt ad0" or whatever your drive is to see how different parts of the drive have different performance). To make an objective comparison you need two identical drives and a new, empty, small-ish partition (e.g. 15 GB) created at the same position on both (e.g. at the start); use this partition only for benchmarking (not for the OS, etc).

> on UFS2 + SoftUpdates:
>
> -------Sequential Output-------- ---Sequential Input-- --Random--
> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
> 1024 56431 94.5 88407 38.9 77357 53.3 64042 98.6 644511 98.6
> 23603.8 243.3
>
> on ZFS:
>
> -------Sequential Output-------- ---Sequential Input-- --Random--
> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
> 1024 22591 53.7 45602 35.1 14770 13.2 45007 83.8 94595 28.0 102.2 1.2

I did my own testing on the early import of ZFS; the results in bonnie++ were that read and rewrite speeds are significantly better on ZFS than on UFS+SU (50%+), while write speed is a bit slower (~10%). There are of course other workloads than the sequential ones that need to be reviewed. For example, blogbench places ZFS again at about 50% better than UFS+SU, while randomio makes it 50% slower. Untarring the ports tree on ZFS is about 3x faster than on UFS+SU.
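The two-drive comparison described above can be sketched as the following session. Everything in it is illustrative: the device names (ad0, ad1), sizes, and mount point are assumptions, the gpart size syntax may need raw sector counts on older releases, and the commands are destructive to the target drives.

```shell
# Check how throughput varies across each platter (outer tracks are faster):
diskinfo -vt ad0
diskinfo -vt ad1

# Identically-sized test partitions at the same position (the start)
# of two identical drives:
gpart create -s gpt ad0
gpart add -t freebsd-ufs -s 15G ad0
gpart create -s gpt ad1
gpart add -t freebsd-zfs -s 15G ad1

# UFS2 + SoftUpdates on one drive, a single-disk ZFS pool on the other:
newfs -U /dev/ad0p1
mount /dev/ad0p1 /mnt
zpool create testpool ad1p1

# Benchmark with a working set well beyond the 2 GB of RAM in this box:
bonnie -d /mnt -s 4096
bonnie -d /testpool -s 4096
```

This keeps the OS and its caches off the partitions under test, so the two filesystems see the same region of equally fast media.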
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 17 11:26:11 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0A5F71065672 for ; Wed, 17 Jun 2009 11:26:11 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id C00188FC08 for ; Wed, 17 Jun 2009 11:26:10 +0000 (UTC) (envelope-from andrew@modulus.org) Received: by email.octopus.com.au (Postfix, from userid 1002) id B756317D17; Wed, 17 Jun 2009 21:26:32 +1000 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [10.20.30.102] (60.218.233.220.static.exetel.com.au [220.233.218.60]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id 7C8EE173D3; Wed, 17 Jun 2009 21:26:28 +1000 (EST) Message-ID: <4A38D1F9.6020105@modulus.org> Date: Wed, 17 Jun 2009 21:22:33 +1000 From: Andrew Snow User-Agent: Thunderbird 2.0.0.6 (X11/20070926) MIME-Version: 1.0 To: Ivan Voras , freebsd-fs@freebsd.org References: In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Jun 2009 11:26:11 -0000 Further to this, the gap between ZFS and UFS grows even larger when you compare ZFS software RAID with UFS on hardware RAID. 
(with ZFS beating UFS rather soundly) From owner-freebsd-fs@FreeBSD.ORG Wed Jun 17 16:07:58 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2C90B1065673; Wed, 17 Jun 2009 16:07:58 +0000 (UTC) (envelope-from joe@osoft.us) Received: from mail.osoft.us (osoft.us [67.14.192.59]) by mx1.freebsd.org (Postfix) with ESMTP id 055BE8FC29; Wed, 17 Jun 2009 16:07:57 +0000 (UTC) (envelope-from joe@osoft.us) Received: from [10.0.1.100] (99-25-241-54.lightspeed.ltrkar.sbcglobal.net [99.25.241.54]) by mail.osoft.us (Postfix) with ESMTP id A3FE633C54; Wed, 17 Jun 2009 10:40:07 -0500 (CDT) Message-ID: <4A390E57.9010701@osoft.us> Date: Wed, 17 Jun 2009 10:40:07 -0500 From: Joe Koberg User-Agent: Thunderbird 2.0.0.21 (Windows/20090302) MIME-Version: 1.0 To: Dan Naumov References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, FreeBSD-STABLE Mailing List Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Jun 2009 16:07:58 -0000 The difference in layout can easily explain a 2x difference in sequential transfer performance. I seriously doubt your disk is really getting 23K seeks/s done in the UFS case - 100/s sounds much more reasonable for real hardware. Perhaps the results of caching? Joe Koberg Dan Naumov wrote: > I am wondering if the numbers I am seeing is something expected or is > something broken somewhere. 
Output of bonnie -s 1024: > > on UFS2 + SoftUpdates: > > -------Sequential Output-------- ---Sequential Input-- --Random-- > -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks--- > Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU > 1024 56431 94.5 88407 38.9 77357 53.3 64042 98.6 644511 98.6 23603.8 243.3 > > on ZFS: > > -------Sequential Output-------- ---Sequential Input-- --Random-- > -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks--- > Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU > 1024 22591 53.7 45602 35.1 14770 13.2 45007 83.8 94595 28.0 102.2 1.2 > > > atom# cat /boot/loader.conf > vm.kmem_size="1024M" > vm.kmem_size_max="1024M" > vfs.zfs.arc_max="96M" > > The test isn't completely fair in that the test on UFS2 is done on a > partition that resides on the first 16gb of a 2tb disk while the zfs > test is done on the enormous 1,9tb zfs pool that comes after that > partition (same disk). Can this difference in layout make up for the > huge difference in performance or is there something else in play? The > system is an Intel Atom 330 dualcore, 2gb ram, Western Digital Green > 2tb disk. Also what would be another good way to get good numbers for > comparing the performance of UFS2 vs ZFS on the same system. 
> > > Sincerely, > - Dan Naumov > _______________________________________________ > freebsd-stable@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" > > From owner-freebsd-fs@FreeBSD.ORG Wed Jun 17 17:15:10 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DD61F1065676 for ; Wed, 17 Jun 2009 17:15:10 +0000 (UTC) (envelope-from dan@dan.emsphone.com) Received: from email1.allantgroup.com (email1.emsphone.com [199.67.51.115]) by mx1.freebsd.org (Postfix) with ESMTP id EF8348FC1A for ; Wed, 17 Jun 2009 17:15:09 +0000 (UTC) (envelope-from dan@dan.emsphone.com) Received: from dan.emsphone.com (dan.emsphone.com [199.67.51.101]) by email1.allantgroup.com (8.14.0/8.14.0) with ESMTP id n5HGcm8Q020473 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Wed, 17 Jun 2009 11:38:48 -0500 (CDT) (envelope-from dan@dan.emsphone.com) Received: from dan.emsphone.com (smmsp@localhost [127.0.0.1]) by dan.emsphone.com (8.14.3/8.14.3) with ESMTP id n5HGclAl071779 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Wed, 17 Jun 2009 11:38:47 -0500 (CDT) (envelope-from dan@dan.emsphone.com) Received: (from dan@localhost) by dan.emsphone.com (8.14.3/8.14.3/Submit) id n5HGBAr6013809; Wed, 17 Jun 2009 11:11:10 -0500 (CDT) (envelope-from dan) Date: Wed, 17 Jun 2009 11:11:10 -0500 From: Dan Nelson To: Dan Naumov Message-ID: <20090617161109.GA12966@dan.emsphone.com> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-OS: FreeBSD 7.2-STABLE User-Agent: Mutt/1.5.19 (2009-01-05) X-Virus-Scanned: ClamAV version 0.94.1, clamav-milter version 0.94.1 on email1.allantgroup.com X-Virus-Status: Clean X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-2.0.2 
(email1.allantgroup.com [199.67.51.78]); Wed, 17 Jun 2009 11:38:48 -0500 (CDT) X-Scanned-By: MIMEDefang 2.45 Cc: freebsd-fs@freebsd.org, FreeBSD-STABLE Mailing List Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Jun 2009 17:15:11 -0000 In the last episode (Jun 17), Dan Naumov said: > I am wondering if the numbers I am seeing is something expected or is > something broken somewhere. Output of bonnie -s 1024: > > on UFS2 + SoftUpdates: > > -------Sequential Output-------- ---Sequential Input-- --Random-- > -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks--- > Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU > 1024 56431 94.5 88407 38.9 77357 53.3 64042 98.6 644511 98.6 23603.8 243.3 The insane sequential input K/sec and random seeks/sec values indicate that your entire test file was cached in memory. Try a larger file (at least 2x your installed RAM). 
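The "at least 2x your installed RAM" rule above can be made concrete with a tiny sketch; `min_bonnie_size` is a made-up helper name, not anything from bonnie itself:

```shell
#!/bin/sh
# Smallest bonnie -s value (in MB) that defeats the buffer cache,
# per the rule of thumb of at least twice physical RAM.
min_bonnie_size() {
    ram_mb=$1
    echo $((ram_mb * 2))
}

# The Atom box in this thread has 2 GB of RAM, so:
min_bonnie_size 2048    # prints 4096
```

On that machine, `bonnie -s 4096` (or larger) would keep the sequential-input and seek numbers honest.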
> on ZFS: > > -------Sequential Output-------- ---Sequential Input-- --Random-- > -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks--- > Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU > 1024 22591 53.7 45602 35.1 14770 13.2 45007 83.8 94595 28.0 102.2 1.2 > -- Dan Nelson dnelson@allantgroup.com From owner-freebsd-fs@FreeBSD.ORG Wed Jun 17 20:48:10 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BBCE0106564A for ; Wed, 17 Jun 2009 20:48:10 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from ciao.gmane.org (main.gmane.org [80.91.229.2]) by mx1.freebsd.org (Postfix) with ESMTP id 778FD8FC13 for ; Wed, 17 Jun 2009 20:48:10 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from list by ciao.gmane.org with local (Exim 4.43) id 1MH23E-0006ld-TN for freebsd-fs@freebsd.org; Wed, 17 Jun 2009 20:48:08 +0000 Received: from 67.177.235.141 ([67.177.235.141]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 17 Jun 2009 20:48:08 +0000 Received: from mlists by 67.177.235.141 with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 17 Jun 2009 20:48:08 +0000 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Peter Jones Date: Wed, 17 Jun 2009 14:47:54 -0600 Lines: 26 Message-ID: <8663eu8u4l.fsf@pmade.com> References: <86ljnxyy01.fsf@pmade.com> <4A32CF01.4010004@barryp.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Complaints-To: usenet@ger.gmane.org X-Gmane-NNTP-Posting-Host: 67.177.235.141 User-Agent: Gnus/5.110011 (No Gnus v0.11) Emacs/22.3 (darwin) Cancel-Lock: sha1:/5puGy6nLNtcIyGIbcThL8JEBdk= Sender: news Subject: Re: Logical Disk to Physical Drive Mapping X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Wed, 17 Jun 2009 20:48:11 -0000 Barry Pederson writes: > Peter Jones wrote: >> Given the situation where you have several identical physical drives, >> what is the best way to turn logical labels such as da5 into a physical >> identifier like "the drive in slot 4"? >> >> It looks like I could use dmesg, some assumptions, and glabel to label >> the logical disks. However, I plan to use ZFS and as far as I can tell >> glabel doesn't support ZFS. >> >> What is the de facto way of doing this? I'll be using FreeBSD-CURRENT >> for this, btw. > > I've glabeled disks and then added them to ZFS pools, seems to work > fine. Here's a raidz2 setup of 8 identical glabeled drives on 7.2 I'm not exactly sure how file system labels differ from disk labels, but the man page suggests that they both write meta-data to the last sector of the disk. Wouldn't that indicate that once ZFS wrote to the last sector of the disk you'd lose that meta-data? -- Peter Jones, http://pmade.com pmade inc. 
Louisville, CO US From owner-freebsd-fs@FreeBSD.ORG Wed Jun 17 20:58:41 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 14FDB106567A for ; Wed, 17 Jun 2009 20:58:41 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: from mail-yx0-f200.google.com (mail-yx0-f200.google.com [209.85.210.200]) by mx1.freebsd.org (Postfix) with ESMTP id BFF428FC1F for ; Wed, 17 Jun 2009 20:58:40 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: by yxe38 with SMTP id 38so818945yxe.3 for ; Wed, 17 Jun 2009 13:58:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=NsRph5QurfkNaFuRhythdVWD1uMffj+7cN0LjatZRrI=; b=t5QY4BYdpjFvmEzDek+u6G4Ty85cRu4ZU337H91vnddydb7hXRhvBkdxwXJa4l4kPG nIWuR5IkMoUnnCNqfEHB49SMeMmMOhMxcvm+qpyOy288+q+wfzMw935JRkv18x1O/Thk 1bhHQs6oVqlTAR0h/eV+NBbXaqOFlu7bc1YVw= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=MB8hi8lnZLiY36pfoI7M93sVFhOmaR8kGX0asIuhP3Ls2ecEkXiJSjHPIQCCAT1YLb snDoT9QrM9spxpngerV3+ZTJJif/IaI7RoGDRnJIRx5vhOYvm46FeBMxtTfHR3CXGBvp 78OKHHVZbWMwDXwSh+ftJc1BtnRJNjlQeoDLM= MIME-Version: 1.0 Received: by 10.100.141.15 with SMTP id o15mr899564and.20.1245272319987; Wed, 17 Jun 2009 13:58:39 -0700 (PDT) In-Reply-To: <8663eu8u4l.fsf@pmade.com> References: <86ljnxyy01.fsf@pmade.com> <4A32CF01.4010004@barryp.org> <8663eu8u4l.fsf@pmade.com> Date: Wed, 17 Jun 2009 23:58:39 +0300 Message-ID: From: Dan Naumov To: Peter Jones Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: Logical Disk to Physical Drive Mapping X-BeenThere: freebsd-fs@freebsd.org 
X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Jun 2009 20:58:43 -0000 You could use ZFS on a slice/partition taking up 99.9% of the disk's size to avoid this. Contrary to how it works in Solaris/OpenSolaris, in FreeBSD you don't lose the ability to use the write cache if you choose to use a slice or partition as a vdev for a ZFS pool instead of giving it the full disk. Additionally, you get some room to play with if one disk in your raidz drops dead and your replacement drive ends up being a few sectors smaller than the disk you are replacing. - Dan Naumov On Wed, Jun 17, 2009 at 11:47 PM, Peter Jones wrote: > I'm not exactly sure how file system labels differ from disk labels, but > the man page suggests that they both write meta-data to the last sector > of the disk. > > Wouldn't that indicate that once ZFS wrote to the last sector of the > disk you'd lose that meta-data? From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 00:07:41 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0E43910656A6 for ; Thu, 18 Jun 2009 00:07:41 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smtp-out1.tiscali.nl (smtp-out1.tiscali.nl [195.241.79.176]) by mx1.freebsd.org (Postfix) with ESMTP id EE7EE8FC18 for ; Thu, 18 Jun 2009 00:07:38 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from [212.123.145.58] (helo=sjakie.klop.ws) by smtp-out1.tiscali.nl with esmtp id 1MH4uR-0002L7-IM; Thu, 18 Jun 2009 01:51:15 +0200 Received: from 82-170-177-25.ip.telfort.nl (localhost [127.0.0.1]) by sjakie.klop.ws (Postfix) with ESMTP id EE3A75ABE; Thu, 18 Jun 2009 01:51:14 +0200 (CEST) Date: Thu, 18 Jun 2009 01:51:14 +0200 To: "Dan Naumov" , freebsd-fs@freebsd.org, "FreeBSD-STABLE Mailing List" From: "Ronald Klop" Content-Type: text/plain; 
format=flowed; delsp=yes; charset=us-ascii MIME-Version: 1.0 References: Content-Transfer-Encoding: 7bit Message-ID: In-Reply-To: User-Agent: Opera Mail/9.64 (FreeBSD) Cc: Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 00:07:41 -0000 On Wed, 17 Jun 2009 09:34:02 +0200, Dan Naumov wrote: > I am wondering if the numbers I am seeing is something expected or is > something broken somewhere. Output of bonnie -s 1024: > > on UFS2 + SoftUpdates: > > -------Sequential Output-------- ---Sequential Input-- > --Random-- > -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- > --Seeks--- > Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU > /sec %CPU > 1024 56431 94.5 88407 38.9 77357 53.3 64042 98.6 644511 98.6 > 23603.8 243.3 > > on ZFS: > > -------Sequential Output-------- ---Sequential Input-- > --Random-- > -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- > --Seeks--- > Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU > /sec %CPU > 1024 22591 53.7 45602 35.1 14770 13.2 45007 83.8 94595 28.0 > 102.2 1.2 > > > atom# cat /boot/loader.conf > vm.kmem_size="1024M" > vm.kmem_size_max="1024M" > vfs.zfs.arc_max="96M" Isn't 96M for ARC really small? Mine is 860M. vfs.zfs.arc_max: 860072960 kstat.zfs.misc.arcstats.size: 657383376 I think the UFS2 cache is much bigger which makes a difference in your test. Ronald. 
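For comparison with the loader.conf quoted in this thread, a larger-ARC variant might look like the fragment below. The 512M figure is purely an illustrative assumption for a 2 GB amd64 machine, not a recommendation made anywhere in this thread:

```shell
# /boot/loader.conf -- illustrative ARC sizing for a 2 GB amd64 box:
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"    # roughly half of kmem, instead of 96M
```

The trade-off is the one raised above: a small arc_max favors stability on constrained kmem, while a larger one lets ZFS cache competitively with UFS2 in benchmarks.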
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 00:07:56 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F3EC5106568E; Thu, 18 Jun 2009 00:07:55 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: from mail-yx0-f200.google.com (mail-yx0-f200.google.com [209.85.210.200]) by mx1.freebsd.org (Postfix) with ESMTP id 8EFDB8FC24; Thu, 18 Jun 2009 00:07:55 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: by yxe38 with SMTP id 38so985121yxe.3 for ; Wed, 17 Jun 2009 17:07:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=mrblN0JeWeMmZd2HDlpsgZ8/1eIOz5OAOKQM6WNtmdM=; b=nQD66o8PAsPZaxybWYIKocSJwtWfw35yOeNHS1XhC/V1/srzYLpZN+DMRFnDenXvtz Nc5mMdp5TaPyx03N3BjqjAqkJzaF2qZHYw577goUVdjHVUAVH5tCdORUYJlXpqXNGyek QsI0WbydaWEqZ0QtoC59l7OXPHXgcan/8Ac8Y= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=oxy6JrEz8HJMVF7Gr8hCzR4L+TqiWkSy9xlNnlQxAnDzuxta7MuHmEtF8NbT3O23vm u+ytEAjUdiA3S3ETQe8yS+Zkd7DuNZ8cR8zLK+AUU7sg/xpAD63z4J1s6CozMt39y+TV pw5NGjtvCy4guDv/nBjao3hlmXMHXLZKhwb1o= MIME-Version: 1.0 Received: by 10.100.41.9 with SMTP id o9mr1107355ano.155.1245283671367; Wed, 17 Jun 2009 17:07:51 -0700 (PDT) In-Reply-To: References: Date: Thu, 18 Jun 2009 03:07:51 +0300 Message-ID: From: Dan Naumov To: Ronald Klop Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, FreeBSD-STABLE Mailing List Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 00:07:56 -0000 All the ZFS tuning guides for FreeBSD (including one on the FreeBSD ZFS wiki) have recommended values between 64M and 128M to improve stability, so that's what I went with. How much of my max kmem is it safe to give to ZFS? - Dan Naumov On Thu, Jun 18, 2009 at 2:51 AM, Ronald Klop wrote: > Isn't 96M for ARC really small? > Mine is 860M. > vfs.zfs.arc_max: 860072960 > kstat.zfs.misc.arcstats.size: 657383376 > > I think the UFS2 cache is much bigger which makes a difference in your test. > > Ronald. > From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 04:57:26 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B4D2C106564A for ; Thu, 18 Jun 2009 04:57:26 +0000 (UTC) (envelope-from chengjin@fastsoft.com) Received: from HQ-ES.FASTSOFT.COM (hq-es.fastsoft.com [38.102.243.86]) by mx1.freebsd.org (Postfix) with ESMTP id 99DED8FC12 for ; Thu, 18 Jun 2009 04:57:26 +0000 (UTC) (envelope-from chengjin@fastsoft.com) X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Date: Wed, 17 Jun 2009 21:45:23 -0700 Message-ID: X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: 7.0 RELEASE panic: zero vnode ref count (with VFS_BIO_DEBUG on) Thread-Index: Acnvz5eY5Vn0O+sOT6iMgWZ9tReDhQ== From: "Cheng Jin" To: Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: 7.0 RELEASE panic: zero vnode ref count (with VFS_BIO_DEBUG on) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 04:57:26 -0000 All, While I was testing various kernel debug options, I ran into the kernel
panic. I am not much of a filesystem/vm person. I also didn't find any recent report of a similar crash so I am hoping one of you would provide some pointers on what this is. I am running 7.0-RELEASE on a Dell 860 with a WD 80G SATA disk using the following kernel config file. I do have a few other debug options turned on, but as far as fs is concerned, VFS_BIO_DEBUG is the only thing. I do have the core file (about 181 MB) so if anyone needs additional information, please let me know. cpu HAMMER ident NOFP options IPFIREWALL options DUMMYNET options IPFIREWALL_DEFAULT_TO_ACCEPT options KDB options KDB_TRACE options KDB_UNATTENDED options GDB options DDB options INVARIANTS options INVARIANT_SUPPORT options WITNESS options WITNESS_KDB options VFS_BIO_DEBUG options HZ=1000 makeoptions DEBUG=-g # Build kernel with gdb(1) debug symbols options SCHED_ULE # ULE scheduler options PREEMPTION # Enable kernel thread preemption options INET # InterNETworking options FFS # Berkeley Fast Filesystem options SOFTUPDATES # Enable FFS soft updates support options UFS_ACL # Support for access control lists options UFS_DIRHASH # Improve performance on big directories options UFS_GJOURNAL # Enable gjournal-based UFS journaling options MD_ROOT # MD is a potential root device options CD9660 # ISO 9660 Filesystem options PROCFS # Process filesystem (requires PSEUDOFS) options PSEUDOFS # Pseudo-filesystem framework options GEOM_PART_GPT # GUID Partition Tables. options GEOM_LABEL # Provides labelization options COMPAT_43TTY # BSD 4.3 TTY compat [KEEP THIS!]
options COMPAT_IA32 # Compatible with i386 binaries options COMPAT_FREEBSD4 # Compatible with FreeBSD4 options COMPAT_FREEBSD5 # Compatible with FreeBSD5 options COMPAT_FREEBSD6 # Compatible with FreeBSD6 options SCSI_DELAY=5000 # Delay (in ms) before probing SCSI options KTRACE # ktrace(1) support options SYSVSHM # SYSV-style shared memory options SYSVMSG # SYSV-style message queues options SYSVSEM # SYSV-style semaphores options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions options KBD_INSTALL_CDEV # install a CDEV entry in /dev options ADAPTIVE_GIANT # Giant mutex is adaptive. options STOP_NMI # Stop CPUS using NMI instead of IPI options AUDIT # Security event auditing # Make an SMP-capable kernel by default options SMP # Symmetric MultiProcessor Kernel # Bus support. device acpi device pci # Floppy drives #device fdc # ATA and ATAPI devices device ata device atadisk # ATA disk drives device ataraid # ATA RAID drives device atapicd # ATAPI CDROM drives #device atapifd # ATAPI floppy drives #device atapist # ATAPI tape drives options ATA_STATIC_ID # Static device numbering # SCSI Controllers device ahc # AHA2940 and onboard AIC7xxx devices options AHC_REG_PRETTY_PRINT # Print register bitfields in debug # output. Adds ~128k to driver. device ahd # AHA39320/29320 and onboard AIC79xx devices options AHD_REG_PRETTY_PRINT # Print register bitfields in debug # output. Adds ~215k to driver. device amd # AMD 53C974 (Tekram DC-390(T)) device hptiop # Highpoint RocketRaid 3xxx series device isp # Qlogic family device mpt # LSI-Logic MPT-Fusion device sym # NCR/Symbios Logic (newer chipsets + those of `ncr') device trm # Tekram DC395U/UW/F DC315U adapters device adv # Advansys SCSI adapters device adw # Advansys wide SCSI adapters device aic # Adaptec 15[012]x SCSI adapters, AIC-6[23]60.
device bt # Buslogic/Mylex MultiMaster SCSI adapters # SCSI peripherals device scbus # SCSI bus (required for SCSI) device ch # SCSI media changers device da # Direct Access (disks) device sa # Sequential Access (tape etc) device cd # CD device pass # Passthrough device (direct SCSI access) device ses # SCSI Environmental Services (and SAF-TE) # RAID controllers interfaced to the SCSI subsystem device amr # AMI MegaRAID device arcmsr # Areca SATA II RAID device ciss # Compaq Smart RAID 5* device dpt # DPT Smartcache III, IV - See NOTES for options device hptmv # Highpoint RocketRAID 182x device hptrr # Highpoint RocketRAID 17xx, 22xx, 23xx, 25xx device iir # Intel Integrated RAID device ips # IBM (Adaptec) ServeRAID device mly # Mylex AcceleRAID/eXtremeRAID device twa # 3ware 9000 series PATA/SATA RAID # RAID controllers device aac # Adaptec FSA RAID device aacp # SCSI passthrough for aac (requires CAM) device ida # Compaq Smart RAID device mfi # LSI MegaRAID SAS device mlx # Mylex DAC960 family device twe # 3ware ATA RAID # atkbdc0 controls both the keyboard and the PS/2 mouse device atkbdc # AT keyboard controller device atkbd # AT keyboard device psm # PS/2 mouse device kbdmux # keyboard multiplexer device vga # VGA video card driver device splash # Splash screen and screen saver support # syscons is the default console driver, resembling an SCO console device sc # IPMI support device ipmi # Serial (COM) ports device sio # 8250, 16[45]50 based serial ports # Parallel port device ppbus # Parallel port bus (required) # PCI Ethernet NICs that use the common MII bus controller code. # NOTE: Be sure to keep the 'device miibus' line in order to use these NICs! device miibus # MII bus support device bce # Broadcom BCM5706/BCM5708 Gigabit Ethernet device bge # Broadcom BCM570xx Gigabit Ethernet # Pseudo devices.
device loop # Network loopback device random # Entropy device device ether # Ethernet support device sl # Kernel SLIP device ppp # Kernel PPP device tun # Packet tunnel. device pty # Pseudo-ttys (telnet etc) device md # Memory "disks" device gif # IPv6 and IPv4 tunneling device faith # IPv6-to-IPv4 relaying (translation) device firmware # firmware assist module device if_bridge #Bridge interface # The `bpf' device enables the Berkeley Packet Filter. # Be aware of the administrative consequences of enabling this! # Note that 'bpf' is required for DHCP. device bpf # Berkeley packet filter # USB support device uhci # UHCI PCI->USB interface device ohci # OHCI PCI->USB interface device ehci # EHCI PCI->USB interface (USB 2.0) device usb # USB Bus (required) device ugen # Generic device ukbd # Keyboard The backtrace of the stack is the following: #0 doadump () at pcpu.h:194 194 __asm __volatile("movq %%gs:0,%0" : "=r" (td)); (kgdb) bt #0 doadump () at pcpu.h:194 #1 0xffffffff80351af5 in boot (howto=260) at ../../../kern/kern_shutdown.c:409 #2 0xffffffff80351f77 in panic (fmt=Variable "fmt" is not available.) at ../../../kern/kern_shutdown.c:563 #3 0xffffffff803baa00 in bufdone_finish (bp=0xffffffffa0209b00) at ../../../kern/vfs_bio.c:3202 #4 0xffffffff803baaa8 in bufdone (bp=0xffffffffa0209b00) at ../../../kern/vfs_bio.c:3173 #5 0xffffffff8030a2b1 in g_io_schedule_up (tp=Variable "tp" is not available.)
at ../../../geom/geom_io.c:587 #6 0xffffffff8030a99f in g_up_procbody () at ../../../geom/geom_kern.c:95 #7 0xffffffff80334dca in fork_exit (callout=0xffffffff8030a930 , arg=0x0, frame=0xffffffffab9fdc80) at ../../../kern/kern_fork.c:781 #8 0xffffffff804ee63e in fork_trampoline () at ../../../amd64/amd64/exception.S:415 From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 07:31:54 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 32FE21065674 for ; Thu, 18 Jun 2009 07:31:54 +0000 (UTC) (envelope-from numisemis@yahoo.com) Received: from web37305.mail.mud.yahoo.com (web37305.mail.mud.yahoo.com [209.191.90.248]) by mx1.freebsd.org (Postfix) with SMTP id E03C38FC16 for ; Thu, 18 Jun 2009 07:31:53 +0000 (UTC) (envelope-from numisemis@yahoo.com) Received: (qmail 98650 invoked by uid 60001); 18 Jun 2009 07:05:12 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1245308712; bh=PDY6oDEBeqoqsyeU/bASJdhGNMIbYhPZEVxhuezuYU8=; h=Message-ID:X-YMail-OSG:Received:X-Mailer:References:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=DvxHWb3OlQy+lorE5dq/V9YupYzLD0hN9kHQonMz4WXSnoodRzIjG19V+tQfvHl+GBuz4XvGUTf9mN10rUqauvFiXSXJ3FY5zrSfoDKXxJPrRrZsbzkzv+3u6tkzG8z+9lz0tjVqtaV8WIAh6E90/Bb9SaXyvuKKIl7307fg62w= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=Message-ID:X-YMail-OSG:Received:X-Mailer:References:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=Zb7OP+9DJwmRd2EoUGpWrLQQCC3MUsV63G8U36apq6f5jF0EXrOVNcaVyfKsPFHrCWp5/nCALmnY5tBo7gNZfMy5hoPFHUF/H7a/t71vynUjy5CveOiPlLAInEWp+rjPQLDYutFYuT+JRjt6hK0EA9P2U0XN35li1rkX66HG+zw=; Message-ID: <270394.95537.qm@web37305.mail.mud.yahoo.com> X-YMail-OSG:
q10L8GgVM1k_5k.BWoNgAeap6tex2ZMAwjNOMK94tHSZ5BXiQzpvg2lLqY7TFV1F3GodRHOQP7vor80mPs9.HTPePb0yBRGC__CO1P6E62JAlZL8nrqdOepM4C3QZ7NCYvTsRPX5gz7O6ri26CSGNVYcnnC_vck1Uil27hlDJLg6sd0s5i49ZctMewpCRR5wLPJJQGlHQ7IeyIci8NHm2KWbj.ThLViihYiwHhHwYdPSua5lkYRTu_fmk882_boMrVuEnCkTnRH0AokUyTFn7doRfkxLtTx69P2VWwqvo6zNWUh9GfJjivcS62MryfLjRiB72KSD Received: from [213.147.110.159] by web37305.mail.mud.yahoo.com via HTTP; Thu, 18 Jun 2009 00:05:12 PDT X-Mailer: YahooMailRC/1277.43 YahooMailWebService/0.7.289.15 References: Date: Thu, 18 Jun 2009 00:05:12 -0700 (PDT) From: Simun Mikecin To: Dan Naumov In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 07:31:54 -0000 Dan Naumov wrote: > All the ZFS tuning guides for FreeBSD (including one on the FreeBSD > ZFS wiki) have recommended values between 64M and 128M to improve > stability, so that's what I went with. How much of my max kmem is it > safe to give to ZFS? On amd64, since 7.2-RELEASE, manually adjusting the kmem map or ARC size is no longer necessary for stability (see /usr/src/UPDATING). But if you like you can still do it. If you want to use ZFS a lot, I suggest using the latest 7-STABLE (which has ZFS v13: more stable, more bugs resolved). amd64, of course. For i386 it would be better to use UFS+SU (for SCSI) or UFS+gjournal (for ATA). btw. turning on compression on ZFS filesystems might actually increase the performance seen by benchmark programs.
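For the i386 case this thread keeps coming back to, the rule-of-thumb tuning translates into /boot/loader.conf lines along these lines (values are illustrative, taken from the figures quoted in the thread for a 2 GB machine on 7.x; amd64 on 7.2-RELEASE and later tunes itself and should leave these unset):

```
# /boot/loader.conf -- illustrative i386 values only (7.x era)
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="128M"
```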
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 08:29:03 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CDB45106564A; Thu, 18 Jun 2009 08:29:03 +0000 (UTC) (envelope-from petefrench@ticketswitch.com) Received: from constantine.ticketswitch.com (constantine.ticketswitch.com [IPv6:2002:57e0:1d4e:1::3]) by mx1.freebsd.org (Postfix) with ESMTP id 930C28FC28; Thu, 18 Jun 2009 08:29:03 +0000 (UTC) (envelope-from petefrench@ticketswitch.com) Received: from dilbert.rattatosk ([10.64.50.6] helo=dilbert.ticketswitch.com) by constantine.ticketswitch.com with esmtps (TLSv1:AES256-SHA:256) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1MHCzU-0002iw-CZ; Thu, 18 Jun 2009 09:29:00 +0100 Received: from petefrench by dilbert.ticketswitch.com with local (Exim 4.69 (FreeBSD)) (envelope-from ) id 1MHCzU-000HX6-B9; Thu, 18 Jun 2009 09:29:00 +0100 To: dan.naumov@gmail.com, ronald-freebsd8@klop.yi.org In-Reply-To: Message-Id: From: Pete French Date: Thu, 18 Jun 2009 09:29:00 +0100 Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 08:29:04 -0000 > All the ZFS tuning guides for FreeBSD (including one on the FreeBSD > ZFS wiki) have recommended values between 64M and 128M to improve > stability, so that's what I went with. How much of my max kmem is it > safe to give to ZFS? If you are on amd64 then don't tune it, it will tune itself. If you are on i386 (or an earlier version of amd64) then 128M on a 2 gig machine should be OK, assuming you have kmem_size_max set to the full 1500 odd.
Those are numbers which come up time and time again - I ran reliably with them for ages, until the latest -STABLE. -pete. From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 15:47:25 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 023191065672 for ; Thu, 18 Jun 2009 15:47:25 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-yx0-f180.google.com (mail-yx0-f180.google.com [209.85.210.180]) by mx1.freebsd.org (Postfix) with ESMTP id AF9F68FC1E for ; Thu, 18 Jun 2009 15:47:24 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: by yxe10 with SMTP id 10so2137141yxe.7 for ; Thu, 18 Jun 2009 08:47:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:content-type; bh=4y+564LWvS1iAcbM110Gyh7+VKeLUc6/sQbUsdhKucw=; b=a/mZJ84rOeEBd8TGaB4fhQEiUtGPS4oi/lSm+16tX7rk1o5li/BYXh77uTG4Rh13PN oyDgEQEGaRJGKJE6sIlGDBCjM5QMUXJ6XMm7xoCLhvCrNttYKnIwhWXuFZ3lJSDcvU8v p6wuNWbXEqlgnzJMMSs1CHqNqFKA2JeThJdzg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; b=NPNXj2SyMEkz9Y0u1JQkQhYei8f/AH9DGPlTzLPAQ0OMDf21F3TyGs5YUgRJrUigcG bb9avwW0khYqCl+bKC45R9mBAmoSYALkgYeJN+BGKXFBp8KdwEx4Jk1M0b5xtK6xCdJn ybZEcotcoIDK0PKubCzMrxus0USrehOlHPruU= MIME-Version: 1.0 Received: by 10.150.124.11 with SMTP id w11mr3757247ybc.276.1245338749519; Thu, 18 Jun 2009 08:25:49 -0700 (PDT) In-Reply-To: References: Date: Thu, 18 Jun 2009 08:25:47 -0700 Message-ID: From: Freddie Cash To: freebsd-fs@freebsd.org, FreeBSD Stable Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: Subject: Re: ZFS performance on 7.2-release/amd64 low compared to UFS2 + SoftUpdates X-BeenThere: freebsd-fs@freebsd.org 
X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 15:47:25 -0000 On Thu, Jun 18, 2009 at 1:29 AM, Pete French wrote: > > All the ZFS tuning guides for FreeBSD (including one on the FreeBSD > > ZFS wiki) have recommended values between 64M and 128M to improve > > stability, so that what I went with. How much of my max kmem is it > > safe to give to ZFS? > > If you are on amd64 then don't tune it, it will tune itself. If you > are on i386 (or an earlier verions of amd64) then 128M on a 2 gig machine > should be OK, assuming you have kmem_size_max set to the full 1500 odd. > Those are numbers which come up time and time again - I ran reliably with > them for ages, until the latest -STABLE. > My "rule of thumb" for 32-bit i386 systems has been to: - assign half of RAM to kmem (up to the max of ~1500 on 7.0/7.1) - assign half of kmem to zfs_arc_max So far, for my workloads (nfs/cifs file servers, cups print servers, rsync servers, kde4 desktop), it's worked well. 
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 22:19:06 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4DCC61065677 for ; Thu, 18 Jun 2009 22:19:06 +0000 (UTC) (envelope-from randy@psg.com) Received: from ran.psg.com (ran.psg.com [IPv6:2001:418:1::36]) by mx1.freebsd.org (Postfix) with ESMTP id 2D7BA8FC14 for ; Thu, 18 Jun 2009 22:19:06 +0000 (UTC) (envelope-from randy@psg.com) Received: from localhost ([127.0.0.1] helo=rmac.psg.com) by ran.psg.com with esmtp (Exim 4.69 (FreeBSD)) (envelope-from ) id 1MHPwm-000G6c-Vv for freebsd-fs@freebsd.org; Thu, 18 Jun 2009 22:19:05 +0000 Received: from rmac.local.psg.com (localhost [127.0.0.1]) by rmac.psg.com (Postfix) with ESMTP id 8BE6D2304408 for ; Thu, 18 Jun 2009 15:19:02 -0700 (PDT) Date: Thu, 18 Jun 2009 15:19:00 -0700 Message-ID: From: Randy Bush To: freebsd-fs User-Agent: Wanderlust/2.15.5 (Almost Unreal) SEMI/1.14.6 (Maruoka) FLIM/1.14.9 (=?ISO-8859-4?Q?Goj=F2?=) APEL/10.7 Emacs/22.3 (i386-apple-darwin9.6.0) MULE/5.0 (SAKAKI) MIME-Version: 1.0 (generated by SEMI 1.14.6 - "Maruoka") Content-Type: text/plain; charset=US-ASCII Subject: adding drive to raidz1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 22:19:06 -0000 so, i made the jet-lagged mistake of saying # zpool add tank ad7s1 and got the following # zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 ad4s3 ONLINE 0 0 0 ad5s3 ONLINE 0 0 0 ad6s1 ONLINE 0 0 0 ad7s1 ONLINE 0 0 0 when i wanted to add it to the raidz1. 
# zpool remove tank ad7s1 cannot remove ad7s1: only inactive hot spares or cache devices can be removed # zpool offline tank ad7s1 cannot offline ad7s1: no valid replicas how do i pry it off of the pool and stick it into the raidz1? thanks randy From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 22:29:45 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 98DAC106567D for ; Thu, 18 Jun 2009 22:29:45 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id 5B5308FC17 for ; Thu, 18 Jun 2009 22:29:44 +0000 (UTC) (envelope-from andrew@modulus.org) Received: by email.octopus.com.au (Postfix, from userid 1002) id 27275174E1; Fri, 19 Jun 2009 08:30:06 +1000 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [10.1.50.60] (ppp121-44-41-14.lns10.syd7.internode.on.net [121.44.41.14]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id 1A28417212; Fri, 19 Jun 2009 08:30:02 +1000 (EST) Message-ID: <4A3ABF76.3020905@modulus.org> Date: Fri, 19 Jun 2009 08:28:06 +1000 From: Andrew Snow User-Agent: Thunderbird 2.0.0.14 (X11/20080523) MIME-Version: 1.0 To: Randy Bush , freebsd-fs@freebsd.org References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: Subject: Re: adding drive to raidz1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 22:29:46 -0000 
Randy Bush wrote: > so, i made the jet-lagged mistake of saying > > # zpool add tank ad7s1 > when i wanted to add it to the raidz1. > > # zpool remove tank ad7s1 > cannot remove ad7s1: only inactive hot spares or cache devices can be removed > # zpool offline tank ad7s1 > cannot offline ad7s1: no valid replicas > > how do i pry it off of the pool and stick it into the raidz1? *braces* You can't, without recreating the whole zpool. - Andrew From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 22:40:19 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4E4691065670 for ; Thu, 18 Jun 2009 22:40:19 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: from an-out-0708.google.com (an-out-0708.google.com [209.85.132.240]) by mx1.freebsd.org (Postfix) with ESMTP id 08CA18FC13 for ; Thu, 18 Jun 2009 22:40:18 +0000 (UTC) (envelope-from dan.naumov@gmail.com) Received: by an-out-0708.google.com with SMTP id c3so606920ana.13 for ; Thu, 18 Jun 2009 15:40:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=NLwSMy0zUwMzh/ySlhGGq+qruI8q5RLPR36cIydJihU=; b=RhT/HbEyiBvdjAUBdL3UIj9CNWPO8AmtOC3stAhUN5Kj8IzH5Ai9Ry4U0UzKn07w9u JJUR/IMTkG4aA2PkGe5m5EIogppyIguiry5vYo4Buj6BpiV3BOA0sIbeOJFljPzKf+Sw Yc+8pxNRA9zqd/gneeL+1qhhBWU7CeLr9T6R4= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=phs2cIaLHvt8Fx+r+8rXEOxuhvQN+BaKVOLbSR5Z/ckQYhjNS6hXQwKk4kp7qBJPG3 2iLTwrxMprVff0xCL7tZiZp6IkBQlwbST2Z8XouBpLfdn/vmyuXi7WbuTrzIlyqlYImQ u853yjpajgtwCJevRuZPvBExcJ074r491zzgo= MIME-Version: 1.0 Received: by 10.100.255.7 with SMTP id c7mr2821815ani.137.1245364818385; Thu, 18 Jun 2009 
15:40:18 -0700 (PDT) In-Reply-To: <4A3ABF76.3020905@modulus.org> References: <4A3ABF76.3020905@modulus.org> Date: Fri, 19 Jun 2009 01:40:18 +0300 Message-ID: From: Dan Naumov To: Andrew Snow Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: Randy Bush , freebsd-fs@freebsd.org Subject: Re: adding drive to raidz1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 22:40:19 -0000 To reiterate, you can't just add a single disk drive to a raidz1 or raidz2 pool. This is a known limitation (see Sun's ZFS docs). If you have an existing raidz and you MUST increase that particular pool's storage capacity, you have 3 options: 1) Add a raidz of the same configuration to the pool (think 3 disk raidz + 3 disk raidz or 5 + 5, for example) 2) Replace each (and every) disk in your raidz pool one by one, letting it resilver after inserting each upgraded disk 3) Back up your data, destroy your pool and create a new raidz pool with a bigger number of disks. - Dan Naumov On Fri, Jun 19, 2009 at 1:28 AM, Andrew Snow wrote: > Randy Bush wrote: >> >> so, i made the jet-lagged mistake of saying >> >> # zpool add tank ad7s1 >> when i wanted to add it to the raidz1. >> >> # zpool remove tank ad7s1 >> cannot remove ad7s1: only inactive hot spares or cache devices can be >> removed >> # zpool offline tank ad7s1 >> cannot offline ad7s1: no valid replicas >> >> how do i pry it off of the pool and stick it into the raidz1? > > > *braces* You can't, without recreating the whole zpool.
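Option 2 above, applied to the pool in this thread, would look roughly like the following (a sketch only; the device names are the ones from Randy's pool, and each replace must finish resilvering before the next disk is touched):

```
# Repeat for each raidz1 member (ad4s3, ad5s3, ad6s1), one at a time:
zpool replace tank ad4s3    # after physically swapping in the bigger disk
zpool status tank           # wait here until the resilver completes
# When every member has been replaced, the extra capacity becomes usable
# (on the ZFS versions of this era an export/import of the pool may be
# needed before the new size shows up).
```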
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 22:52:27 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9C6C7106567D for ; Thu, 18 Jun 2009 22:52:27 +0000 (UTC) (envelope-from randy@psg.com) Received: from ran.psg.com (ran.psg.com [IPv6:2001:418:1::36]) by mx1.freebsd.org (Postfix) with ESMTP id 7DCC88FC18 for ; Thu, 18 Jun 2009 22:52:27 +0000 (UTC) (envelope-from randy@psg.com) Received: from localhost ([127.0.0.1] helo=rmac.psg.com) by ran.psg.com with esmtp (Exim 4.69 (FreeBSD)) (envelope-from ) id 1MHQT3-000GAr-WC; Thu, 18 Jun 2009 22:52:26 +0000 Received: from rmac.local.psg.com (localhost [127.0.0.1]) by rmac.psg.com (Postfix) with ESMTP id C63A923046C5; Thu, 18 Jun 2009 15:52:25 -0700 (PDT) Date: Thu, 18 Jun 2009 15:52:25 -0700 Message-ID: From: Randy Bush To: Dan Naumov In-Reply-To: <4A3ABF76.3020905@modulus.org> References: <4A3ABF76.3020905@modulus.org> User-Agent: Wanderlust/2.15.5 (Almost Unreal) SEMI/1.14.6 (Maruoka) FLIM/1.14.9 (=?ISO-8859-4?Q?Goj=F2?=) APEL/10.7 Emacs/22.3 (i386-apple-darwin9.6.0) MULE/5.0 (SAKAKI) MIME-Version: 1.0 (generated by SEMI 1.14.6 - "Maruoka") Content-Type: text/plain; charset=US-ASCII Cc: freebsd-fs@freebsd.org Subject: Re: adding drive to raidz1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 22:52:28 -0000 > 2) Replace each (and every) disk in your raidz pool one by one, > letting it resilver after inserting each upgraded disk ok. given # zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 ad4s3 ONLINE 0 0 0 ad5s3 ONLINE 0 0 0 ad6s1 ONLINE 0 0 0 ad7s1 ONLINE 0 0 0 how do i get ad7s1 offline? i can't detach it. will using it in a replace do the trick? 
and then how do i replace the four slices one by one? sorry, but this is a distant system and after screwing up once, i am a bit cautious. just From owner-freebsd-fs@FreeBSD.ORG Thu Jun 18 22:56:20 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A14701065715 for ; Thu, 18 Jun 2009 22:56:20 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id 61DC18FC1A for ; Thu, 18 Jun 2009 22:56:20 +0000 (UTC) (envelope-from andrew@modulus.org) Received: by email.octopus.com.au (Postfix, from userid 1002) id E94F217409; Fri, 19 Jun 2009 08:56:42 +1000 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [10.1.50.60] (ppp121-44-41-14.lns10.syd7.internode.on.net [121.44.41.14]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id 0210417428; Fri, 19 Jun 2009 08:56:38 +1000 (EST) Message-ID: <4A3AC5B2.9010607@modulus.org> Date: Fri, 19 Jun 2009 08:54:42 +1000 From: Andrew Snow User-Agent: Thunderbird 2.0.0.14 (X11/20080523) MIME-Version: 1.0 To: Randy Bush References: <4A3ABF76.3020905@modulus.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: adding drive to raidz1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Jun 2009 22:56:21 -0000 > NAME STATE READ WRITE CKSUM > tank ONLINE 0 0 0 > raidz1 ONLINE 0 0 0 > 
ad4s3 ONLINE 0 0 0 > ad5s3 ONLINE 0 0 0 > ad6s1 ONLINE 0 0 0 > ad7s1 ONLINE 0 0 0 Here you have created a non-redundant stripe with two vdev members: 1. a 3-disk RAIDZ1 and 2. a single disk. So you can't ever remove the ad7s1 without data loss. If you haven't written anything to the pool since adding ad7s1, you can probably yank the disk out and ignore any errors, but the error messages will never go away until you recreate the whole pool from scratch. From owner-freebsd-fs@FreeBSD.ORG Fri Jun 19 00:45:06 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 78F241065672 for ; Fri, 19 Jun 2009 00:45:06 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id 15B2A8FC0A for ; Fri, 19 Jun 2009 00:45:06 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from volatile.chemikals.org (adsl-67-127-7.shv.bellsouth.net [98.67.127.7]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id 7A9B78CA7115; Thu, 18 Jun 2009 19:45:04 -0500 (CDT) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.3/8.14.3) with ESMTP id n5J0isRo043571; Thu, 18 Jun 2009 19:44:58 -0500 (CDT) (envelope-from morganw@chemikals.org) Date: Thu, 18 Jun 2009 19:44:54 -0500 (CDT) From: Wes Morgan To: Andrew Snow In-Reply-To: <4A3AC5B2.9010607@modulus.org> Message-ID: References: <4A3ABF76.3020905@modulus.org> <4A3AC5B2.9010607@modulus.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: Randy Bush , freebsd-fs@freebsd.org Subject: Re: adding drive to raidz1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems
On Fri, 19 Jun 2009, Andrew Snow wrote:

>>         NAME        STATE     READ WRITE CKSUM
>>         tank        ONLINE       0     0     0
>>           raidz1    ONLINE       0     0     0
>>             ad4s3   ONLINE       0     0     0
>>             ad5s3   ONLINE       0     0     0
>>             ad6s1   ONLINE       0     0     0
>>           ad7s1     ONLINE       0     0     0
>
> Here you have created a non-redundant stripe with two vdev members:
> 1. a 3-disk RAIDZ1, and
> 2. a single disk.
>
> So you can't ever remove ad7s1 without data loss.
>
> If you haven't written anything to the pool since adding ad7s1, you can
> probably yank the disk out and ignore any errors, but the error messages
> will never go away until you recreate the whole pool from scratch.

If you yank ad7s1 the pool will become unavailable. You could remove one
of the slices in the raidz, though.

The only way to "fix" this is just what everyone has said... Back up the
data, destroy the pool and recreate. When you do this, if you don't want
to be using slices, just "don't" -- use

  zpool create somethingbesidestankfortheloveofgod raidz ad4 ad5 ad6 ad7

And you'll be set. But you're using ad4s3 and ad5s3 -- are the first two
slices in use?
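Some back-of-the-envelope arithmetic makes the trade-off in this exchange concrete. This is only an illustrative sketch: the 500 GB provider size is hypothetical, chosen to make the numbers round; the raidz1 rule of thumb (one disk's worth of parity per vdev) is all it relies on.

```shell
# Hypothetical: assume four equally sized 500 GB providers.
disk_gb=500

# Accidental layout: a 3-disk raidz1 striped with one bare disk.
# The raidz1 vdev yields (3 - 1) disks of data; the bare vdev adds one
# more, but losing that one disk loses the whole pool.
accidental_gb=$(( (3 - 1) * disk_gb + disk_gb ))

# Clean rebuild: one 4-disk raidz1, (4 - 1) disks of data, all redundant.
clean_gb=$(( (4 - 1) * disk_gb ))

echo "accidental stripe: ${accidental_gb} GB (partly unprotected)"
echo "clean raidz1:      ${clean_gb} GB (single-disk fault tolerant)"
```

Under these assumptions the two layouts hold the same total, so the backup-destroy-recreate pass costs no capacity at all and buys back single-disk fault tolerance.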
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 19 01:04:21 2009
From: Randy Bush <randy@psg.com>
Date: Thu, 18 Jun 2009 18:04:17 -0700
To: Wes Morgan
Cc: freebsd-fs@freebsd.org
Subject: Re: adding drive to raidz1

> The only way to "fix" this is just what everyone has said... Back up the
> data, destroy the pool and recreate.

done. worked. luckily this was a system in build.

> When you do this, if you don't want to be using slices

i have nothing against slices

> zpool create somethingbesidestankfortheloveofgod raidz ad4 ad5 ad6 ad7

she really does not care about the pool name. :)

> And you'll be set.
> But you're using ad4s3 and ad5s3 -- are the first two
> slices in use?

on the two bootables

  s1 is a small gmirrored boot
  s2 is a non-mirrored swap
  s3 is pool

randy

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 19 01:10:37 2009
From: Wes Morgan <morganw@chemikals.org>
Date: Thu, 18 Jun 2009 20:10:30 -0500 (CDT)
To: Randy Bush
Cc: freebsd-fs@freebsd.org
Subject: Re: adding drive to raidz1

On Thu, 18 Jun 2009, Randy Bush wrote:

>> The only way to "fix" this is just what everyone has said... Back up the
>> data, destroy the pool and recreate.
>
> done. worked. luckily this was a system in build.
>> When you do this, if you don't want to be using slices
>
> i have nothing against slices
>
>> zpool create somethingbesidestankfortheloveofgod raidz ad4 ad5 ad6 ad7
>
> she really does not care about the pool name. :)
>
>> And you'll be set. But you're using ad4s3 and ad5s3 -- are the first two
>> slices in use?
>
> on the two bootables
> s1 is a small gmirrored boot
> s2 is a non-mirrored swap
> s3 is pool

Just out of sheer curiosity, are all the slices and devices in the raidz
the same size?

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 19 01:21:01 2009
From: Randy Bush <randy@psg.com>
Date: Thu, 18 Jun 2009 18:21:00 -0700
To: Wes Morgan
Cc: freebsd-fs@freebsd.org
Subject: Re: adding drive to raidz1
>> on the two bootables
>> s1 is a small gmirrored boot
>> s2 is a non-mirrored swap
>> s3 is pool
>
> Just out of sheer curiosity, are all the slices and devices in the raidz
> the same size?

no they were not. and now it is not a raidz, but rather

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad4s3   ONLINE       0     0     0
            ad5s3   ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad6s1   ONLINE       0     0     0
            ad7s1   ONLINE       0     0     0

randy

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 19 04:12:39 2009
From: "James R. Van Artsdalen" <james-freebsd-fs2@jrv.org>
Date: Thu, 18 Jun 2009 23:12:16 -0500
To: freebsd-fs
Cc: Randy Bush
Subject: Re: adding drive to raidz1

As a feature suggestion, why not reject a "zpool add" of a non-redundant
vdev to a pool of redundant vdevs unless -f is given? A command of that
sort is almost always a mistake, so requiring -f would seem no hardship
for anyone...

Randy Bush wrote:
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ad4s3   ONLINE       0     0     0
>             ad5s3   ONLINE       0     0     0
>             ad6s1   ONLINE       0     0     0
>           ad7s1     ONLINE       0     0     0

As was said, a vdev (ad7s1) cannot be removed from a pool, and a device
cannot be added to a raidz. However, I believe it is possible to attach
a device to a single-device vdev such as ad7s1 and turn that vdev into a
mirror, regaining redundancy without recreating the pool, perhaps
something like:

  # zpool attach tank ad7s1 ad8s1

to get

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad4s3   ONLINE       0     0     0
            ad5s3   ONLINE       0     0     0
            ad6s1   ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad7s1   ONLINE       0     0     0
            ad8s1   ONLINE       0     0     0

(hand edited, not actual zpool output)

Even if the pool is to be rebuilt, I suggest converting the naked vdevs
to mirrors in the meantime to avoid disaster...

PS. I prefer pools of mirrors over raidz anyway with such a small number
of devices, since it's easier to protect against many more kinds of
system faults (e.g., power supply, cable, device firmware, host
controller, driver, etc).
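James's attach suggestion could be scripted roughly as below. This is only a sketch: ad8s1 is his hypothetical spare device, and the commands are printed rather than executed, since attaching a device reshapes a live pool and should be done deliberately.

```shell
# Dry-run wrapper: print each command instead of running it.
# Change the body to '"$@"' to execute for real on the target host.
run() { echo "# $*"; }

run zpool attach tank ad7s1 ad8s1   # turn the naked ad7s1 vdev into a mirror
run zpool status tank               # then watch the resilver complete
```

Once the resilver finishes, the pool survives the loss of ad7s1 (or ad8s1) without becoming unavailable.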
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 19 09:43:19 2009
From: Jaakko Heinonen <jh@saunalahti.fi>
Date: Fri, 19 Jun 2009 12:43:16 +0300
To: freebsd-fs@FreeBSD.org
Subject: Re: VOP_WRITE & read-only file system

On 2009-05-27, Jaakko Heinonen wrote:
> I found a few ways to get VOP_WRITE called for a read-only system.
>
> Ways I found:
>
> 1) mmap(2)
>
> 2) ktrace(2)
>    - start ktracing a process
>    - remount file-system as read-only

While kib@ has a patch for mmap(2), I took a look at ktrace(2). ktrace
too has a problem with writecount. ktrace uses vn_open() to open the
trace file, but immediately after that it calls vn_close(), which
decreases the writecount.
As far as I can tell it does this because the same vnode may be
associated with several processes and there is no easy and efficient way
to know when it is disassociated from the last process. Ideas how to fix
it? Some thoughts:

- Fiddle with writecount. IMHO it wouldn't fix the real bug (write after
  vn_close()).
- Walk through all processes when disconnecting a vnode from a process
  to find out if it was the last process using the vnode. Inefficient.
- Keep track of vnodes which are used for tracing and have a reference
  count for them.

--
Jaakko

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 19 19:32:14 2009
From: Paul Schenkeveld <fb-fs@psconsult.nl>
Date: Fri, 19 Jun 2009 21:21:58 +0200
To: freebsd-fs@freebsd.org
Subject: Re: adding drive to raidz1
On Thu, Jun 18, 2009 at 06:04:17PM -0700, Randy Bush wrote:
> on the two bootables
> s1 is a small gmirrored boot
> s2 is a non-mirrored swap

If the system swaps, a read error on the swap device will panic the
system. Although swap data is always transient and after a reboot
generally not interesting anymore, I ALWAYS put swap on a
mirror/raid3/raid5/raidz just to make sure the system survives a read
error, especially with remote systems.

> s3 is pool

Paul Schenkeveld

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 04:54:07 2009
From: Peter Jeremy <peterjeremy@optushome.com.au>
Date: Sat, 20 Jun 2009 14:53:57 +1000
To: "James R. Van Artsdalen" <james-freebsd-fs2@jrv.org>
Cc: freebsd-fs, Randy Bush
Subject: Re: adding drive to raidz1

On 2009-Jun-18 23:12:16 -0500, "James R. Van Artsdalen" wrote:
> As a feature suggestion, why not reject a "zpool add" of a non-redundant
> vdev to a pool of redundant vdevs unless -f is given? A command of that
> sort is almost always a mistake, so requiring -f would seem no hardship
> for anyone...

Agreed.

> As was said, a vdev (ad7s1) cannot be removed from a pool, and a device
> cannot be added to a raidz.

Both these are unfortunate restrictions. I can understand that expanding
a RAIDZ would be a fairly complex operation, but it's probably the most
requested feature. I'm surprised that Sun don't allow removing vdevs
from a pool - it's orthogonal to adding a vdev to a pool and (eg) HP
AdvFS allows both.
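Paul's point above about mirrored swap could look roughly like this for the layout Randy described. A sketch only: the provider names ad4s2/ad5s2 and the mirror name "swap" are assumptions, and the administrative commands are printed rather than executed.

```shell
run() { echo "# $*"; }   # dry run; change the body to '"$@"' on a real system

# Mirror the two swap slices once (round-robin balancing is fine for swap):
run gmirror label -b round-robin swap /dev/ad4s2 /dev/ad5s2
run swapon /dev/mirror/swap

# Matching /etc/fstab entry, replacing the two per-disk swap lines:
echo '/dev/mirror/swap  none  swap  sw  0  0'
```

With this in place a read error on either disk's swap slice degrades the mirror instead of panicking the box.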
--
Peter Jeremy

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 05:43:31 2009
From: Kip Macy <mat.macy@gmail.com>
Date: Fri, 19 Jun 2009 22:43:30 -0700
To: Peter Jeremy
Cc: freebsd-fs, Randy Bush
Subject: Re: adding drive to raidz1

> Both these are unfortunate restrictions. I can understand that
> expanding a RAIDZ would be a fairly complex operation, but it's
> probably the most requested feature. I'm surprised that Sun don't
> allow removing vdevs from a pool - it's orthogonal to adding a vdev to
> a pool and (eg) HP AdvFS allows both.

http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z

This has a very good discussion of how it could be done along with why
it hasn't been done.
Cheers,
Kip
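The pull toward raidz expansion that Peter and Kip discuss is easy to quantify with a rough sketch (integer arithmetic only; real overhead also depends on recordsize and sector padding, which this ignores):

```shell
# One disk per raidz1 vdev goes to parity, so the parity fraction is
# roughly 1/width; widening the vdev is what shrinks it, hence the demand
# for in-place expansion.
for width in 3 4 5 8; do
    echo "raidz1 width ${width}: ~$(( 100 / width ))% of raw space is parity"
done
```

Going from a 3-wide to a 4-wide raidz1, for example, drops the parity share from about a third of raw space to a quarter.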
From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 19:32:33 2009
From: Kip Macy <mat.macy@gmail.com>
Date: Sat, 20 Jun 2009 12:32:32 -0700
To: mister.olli@googlemail.com
Cc: freebsd-fs@freebsd.org
Subject: Re: Unable to delete files on ZFS volume

This is a known issue with write-allocate file systems and snapshots. I
haven't seen this before on v13 without any snapshots. A few questions:

- How many file systems?
- How old are the file systems?
- How much churn has there been on the file system?
- Was this an upgraded v6 or created as v13?
- How many files on test?

... as well as any other things that occur to you to characterize the
file system.

Cheers,
Kip

On Sat, Jun 20, 2009 at 12:26 PM, Mister Olli wrote:
> Hi,
>
>> Do you have snapshots or run ZFS v6?
>
> neither one or the other. Here are my pool/ZFS details.
>
> [root@template-8_CURRENT /test/data2]# zpool get all test
> NAME  PROPERTY       VALUE                SOURCE
> test  size           2.98G                -
> test  used           2.94G                -
> test  available      47.9M                -
> test  capacity       98%                  -
> test  altroot        -                    default
> test  health         ONLINE               -
> test  guid           5305090209740383945  -
> test  version        13                   default
> test  bootfs         -                    default
> test  delegation     on                   default
> test  autoreplace    off                  default
> test  cachefile      -                    default
> test  failmode       wait                 default
> test  listsnapshots  off                  default
> [root@template-8_CURRENT /test/data2]# zfs get all test
> NAME  PROPERTY              VALUE                  SOURCE
> test  type                  filesystem             -
> test  creation              Fri Jun 19 21:01 2009  -
> test  used                  1.96G                  -
> test  available             0                      -
> test  referenced            26.6K                  -
> test  compressratio         1.00x                  -
> test  mounted               yes                    -
> test  quota                 none                   default
> test  reservation           none                   default
> test  recordsize            128K                   default
> test  mountpoint            /test                  default
> test  sharenfs              off                    default
> test  checksum              on                     default
> test  compression           off                    default
> test  atime                 on                     default
> test  devices               on                     default
> test  exec                  on                     default
> test  setuid                on                     default
> test  readonly              off                    default
> test  jailed                off                    default
> test  snapdir               hidden                 default
> test  aclmode               groupmask              default
> test  aclinherit            restricted             default
> test  canmount              on                     default
> test  shareiscsi            off                    default
> test  xattr                 off                    temporary
> test  copies                1                      default
> test  version               3                      -
> test  utf8only              off                    -
> test  normalization         none                   -
> test  casesensitivity       sensitive              -
> test  vscan                 off                    default
> test  nbmand                off                    default
> test  sharesmb              off                    default
> test  refquota              none                   default
> test  refreservation        none                   default
> test  primarycache          all                    default
> test  secondarycache        all                    default
> test  usedbysnapshots       0                      -
> test  usedbydataset         26.6K                  -
> test  usedbychildren        1.96G                  -
> test  usedbyrefreservation  0                      -
> [root@template-8_CURRENT /test/data2]# zfs list -t snapshot
> no datasets available
>
>> Confirm that you've deleted your snapshots and are running pool v13.
>>
>> Future ZFS mail should be directed to freebsd-fs@
>
> Sorry for that. fixed now ;-))
>
> Regards,
> ---
> Mr. Olli
>
>> On Sat, Jun 20, 2009 at 10:36 AM, Mister Olli wrote:
>> > Hi,
>> >
>> > after filling up a ZFS volume until the last byte, I'm unable to delete
>> > files, with error 'No space left on the device'.
>> > >> > >> > >> > [root@template-8_CURRENT /test/data2]# df -h >> > Filesystem =A0 =A0 Size =A0 =A0Used =A0 Avail Capacity =A0Mounted on >> > /dev/ad0s1a =A0 =A08.7G =A0 =A05.2G =A0 =A02.8G =A0 =A065% =A0 =A0/ >> > devfs =A0 =A0 =A0 =A0 =A01.0K =A0 =A01.0K =A0 =A0 =A00B =A0 100% =A0 = =A0/dev >> > test =A0 =A0 =A0 =A0 =A0 =A0 0B =A0 =A0 =A00B =A0 =A0 =A00B =A0 100% = =A0 =A0/test >> > test/data1 =A0 =A0 1.6G =A0 =A01.6G =A0 =A0 =A00B =A0 100% =A0 =A0/tes= t/data1 >> > test/data2 =A0 =A0 341M =A0 =A0341M =A0 =A0 =A00B =A0 100% =A0 =A0/tes= t/data2 >> > [root@template-8_CURRENT /test/data2]# zfs list >> > NAME =A0 =A0 =A0 =A0 USED =A0AVAIL =A0REFER =A0MOUNTPOINT >> > test =A0 =A0 =A0 =A01.96G =A0 =A0 =A00 =A026.6K =A0/test >> > test/data1 =A01.62G =A0 =A0 =A00 =A01.62G =A0/test/data1 >> > test/data2 =A0 341M =A0 =A0 =A00 =A0 341M =A0/test/data2 >> > [root@template-8_CURRENT /test/data2]# ls -l data1 |tail -n 20 =A0 =A0= =A0 =A0 =A0<-- there are quite a lot of files, so I truncated ;-)) >> > -rw-r--r-- =A01 root =A0wheel =A0 =A0 =A03072 Jun 20 17:13 20090620165= 743 >> > -rw-r--r-- =A01 root =A0wheel =A0 9771008 Jun 20 17:11 20090620165803 >> > -rw-r--r-- =A01 root =A0wheel =A0 =A0624640 Jun 20 17:12 2009062016580= 9 >> > -rw-r--r-- =A01 root =A0wheel =A0 1777664 Jun 20 17:14 20090620165810 >> > -rw-r--r-- =A01 root =A0wheel =A0 4059136 Jun 20 17:15 20090620165817 >> > -rw-r--r-- =A01 root =A0wheel =A023778304 Jun 20 17:13 20090620165925 >> > -rw-r--r-- =A01 root =A0wheel =A020318208 Jun 20 17:13 20090620165952 >> > -rw-r--r-- =A01 root =A0wheel =A028394496 Jun 20 17:10 20090620170013 >> > -rw-r--r-- =A01 root =A0wheel =A023698432 Jun 20 17:12 20090620170021 >> > -rw-r--r-- =A01 root =A0wheel =A026476544 Jun 20 17:19 20090620170100 >> > -rw-r--r-- =A01 root =A0wheel =A019904512 Jun 20 17:15 20090620170132 >> > -rw-r--r-- =A01 root =A0wheel =A023815168 Jun 20 17:14 20090620170142 >> > -rw-r--r-- =A01 root =A0wheel =A0 6683648 Jun 20 17:11 20090620170225 >> > 
-rw-r--r--  1 root  wheel  19619840 Jun 20 17:11 20090620170322
>> > -rw-r--r--  1 root  wheel  13902848 Jun 20 17:13 20090620170331
>> > -rw-r--r--  1 root  wheel  28981248 Jun 20 17:13 20090620170346
>> > -rw-r--r--  1 root  wheel  18287616 Jun 20 17:11 20090620170355
>> > -rw-r--r--  1 root  wheel  16762880 Jun 20 17:16 20090620170405
>> > -rw-r--r--  1 root  wheel  26966016 Jun 20 17:10 20090620170429
>> > -rw-r--r--  1 root  wheel   5252096 Jun 20 17:14 20090620170502
>> > [root@template-8_CURRENT /test/data2]# rm -rf data1
>> > rm: data1/20090620141524: No space left on device
>> > rm: data1/20090620025202: No space left on device
>> > rm: data1/20090620014926: No space left on device
>> > rm: data1/20090620075405: No space left on device
>> > rm: data1/20090620155124: No space left on device
>> > rm: data1/20090620105723: No space left on device
>> > rm: data1/20090620170100: No space left on device
>> > rm: data1/20090620040149: No space left on device
>> > rm: data1/20090620002512: No space left on device
>> > rm: data1/20090620052315: No space left on device
>> > rm: data1/20090620083750: No space left on device
>> > rm: data1/20090620063831: No space left on device
>> > rm: data1/20090620155029: No space left on device
>> > rm: data1/20090619234313: No space left on device
>> > rm: data1/20090620115346: No space left on device
>> > rm: data1/20090620075508: No space left on device
>> > rm: data1/20090620145541: No space left on device
>> > rm: data1/20090620093335: No space left on device
>> > rm: data1/20090620101846: No space left on device
>> > rm: data1/20090620132456: No space left on device
>> > rm: data1/20090620040044: No space left on device
>> > rm: data1/20090620091401: No space left on device
>> > rm: data1/20090620162251: No space left on device
>> > rm: data1/20090619220813: No space left on device
>> > rm: data1/20090620010643: No space left on device
>> > rm: data1/20090620052218: No space left
on device
>> >
>> >
>> >
>> >
>> >
>> > Regards,
>> > ---
>> > Mr. Olli
>> >
>> > _______________________________________________
>> > freebsd-current@freebsd.org mailing list
>> > http://lists.freebsd.org/mailman/listinfo/freebsd-current
>> > To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>> >
>>
>>
>
>
-- 
When bad men combine, the good must associate; else they will fall one by
one, an unpitied sacrifice in a contemptible struggle.

    Edmund Burke

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 19:42:41 2009
From: Dan Naumov <dan.naumov@gmail.com>
To: Kip Macy
Cc: freebsd-fs@freebsd.org
Date: Sat, 20 Jun 2009 22:42:37 +0300
In-Reply-To: <3c1674c90906201232x63ddee19yf91aeac30f3401bb@mail.gmail.com>
Subject: Re: Unable to delete files on ZFS volume

Hi.

As Kip pointed out, this is a known issue with write-allocate filesystems
in general (not just ZFS). This is one of several reasons why Sun
recommends that you not completely fill up a zpool (they actually
recommend staying at or below 80% utilization). I have a workaround for
you, however:

Pick a file you don't need on the filled-up ZFS volume and "empty" its
contents in a way of your choosing. This should free enough disk space for
"rm" to work again, letting you empty the filesystem further and restore
normal operation. It is a bit ugly, but it works.

- Sincerely,
Dan Naumov

>>> On Sat, Jun 20, 2009 at 10:36 AM, Mister Olli wrote:
>>> > Hi,
>>> >
>>> > after filling up a ZFS volume until the last byte, I'm unable to delete
>>> > files, with error 'No space left on the device'.
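[Editorial note: the workaround described above can be sketched in shell. This is an illustrative, hedged sketch, not a command sequence from the original mails; the victim file is created here only so the example is self-contained, whereas on a genuinely full dataset you would pick an existing file you can afford to lose.]

```shell
# Sketch of the "empty a file first, then rm it" workaround.
# The file is created here so the example runs anywhere; on a full
# dataset, substitute an existing expendable file.
f=$(mktemp /tmp/victim.XXXXXX)
dd if=/dev/zero of="$f" bs=1k count=16 2>/dev/null

# Step 1: truncate the file in place, releasing its data blocks.
: > "$f"                  # equivalent: truncate -s 0 "$f"

# Step 2: with some space freed up, rm can complete again.
rm "$f"
```

Whether truncation alone frees enough space will depend on the pool; the idea is that releasing data blocks first sidesteps the allocation that an unlink on a copy-on-write filesystem may itself require.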
>>> >
>>> >
>>> >
>>> > [root@template-8_CURRENT /test/data2]# df -h
>>> > Filesystem     Size    Used   Avail Capacity  Mounted on
>>> > /dev/ad0s1a    8.7G    5.2G    2.8G    65%    /
>>> > devfs          1.0K    1.0K      0B   100%    /dev
>>> > test             0B      0B      0B   100%    /test
>>> > test/data1     1.6G    1.6G      0B   100%    /test/data1
>>> > test/data2     341M    341M      0B   100%    /test/data2
>>> > [root@template-8_CURRENT /test/data2]# zfs list
>>> > NAME         USED  AVAIL  REFER  MOUNTPOINT
>>> > test        1.96G      0  26.6K  /test
>>> > test/data1  1.62G      0  1.62G  /test/data1
>>> > test/data2   341M      0   341M  /test/data2
>>> > [...]
>>> >
>>> >
>>> > Regards,
>>> > ---
>>> > Mr. Olli
>>> >
>>> > _______________________________________________
>>> > freebsd-current@freebsd.org mailing list
>>> > http://lists.freebsd.org/mailman/listinfo/freebsd-current
>>> > To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>>> >
>>>
>>>
>>
>>
>
>
> --
> When bad men combine, the good must associate; else they will fall one
> by one, an unpitied sacrifice in a contemptible struggle.
>
>    Edmund Burke
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 19:49:54 2009
From: Mister Olli <mister.olli@googlemail.com>
To: Kip Macy
Cc: freebsd-fs@freebsd.org
Date: Sat, 20 Jun 2009 21:49:41 +0200
In-Reply-To: <3c1674c90906201232x63ddee19yf91aeac30f3401bb@mail.gmail.com>
Message-Id: <1245527381.26909.82.camel@phoenix.blechhirn.net>
Subject: Re: Unable to delete files on ZFS volume

Hi,

> This is a known issue with write-allocate file systems and snapshots.
great, so it's not something completely unknown...

> I haven't seen this before on v13 without any snapshots.
Maybe I should mention that ZFS is running in a Xen domU with 786MB RAM,
on i386 (as I have already read that i386 can be troublesome)...

> A few questions:
some, yeah ;-))

> - How many file systems?
I'm not sure how to count correctly, but the 'zfs list' output is
complete, with filesystems
- test
- test/data1
- test/data2
nothing more.

> - How old are the file systems?
As in 'zpool get all': not older than 48 hours.

> - How much churn has there been on the file system?
I'm not sure what you mean by 'churn' (there seems to be no German
translation that makes sense ;-))

> - Was this an upgraded v6 or created as v13?
no.

> - How many files on test?
quite a lot, as I started with a bash loop that created 3k-sized files
for half a day, then switched to randomized sizes.
- test/data1 has 57228
- test/data2 has 9024
(measured with 'ls -l /test/data2/data1 | cat -n | tail -n 10', minus 1)

> ... as well as any other things that occur to you to characterize the
> file system.
All data on test/data1 was created with an endless bash loop, to test
whether the system crashes:

while ( true ) ; do dd if=/dev/random of=/test/data1/`date +%Y%m%d%H%M%S` bs=1k count=3 ; sleep 1s; done

where 'count=3' was replaced by 'count=$RANDOM' after approx. 16 hours.

test/data2 is a copy of test/data1; the copy started when data1 used
1.62GB and ran until all space in the pool was filled up, which left the
remaining copy processes aborting with a 'no space left on device'
failure.

As the directory listing of test/data1 is too long for the shell (sh/
bash), I did the copy like this:

cp -r /test/data1 /test/data2

That's pretty much everything I did. Let me know if you need further
details.

Regards,
---
Mr. Olli

>
> Cheers,
> Kip
>
>
> On Sat, Jun 20, 2009 at 12:26 PM, Mister Olli wrote:
> > Hi,
> >
> >> Do you have snapshots or run ZFS v6?
> > neither one nor the other. Here are my pool/ZFS details.
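[Editorial note: the fill loop described above can be sketched in a bounded, self-contained form. The target directory here is a temp dir standing in for /test/data1, and the loop stops after five files; the original ran forever with a one-second sleep until the pool hit 'no space left on device'.]

```shell
# Bounded version of the fill loop from the mail above. The original wrote
# one ~3 KB random file per second, named by timestamp, until the pool was
# full; this sketch writes five files into a temp directory instead.
target=$(mktemp -d)          # stand-in for /test/data1
i=0
while [ $i -lt 5 ]; do
    # the ".$i" suffix replaces the original 'sleep 1s', which is what
    # kept the per-second timestamp names unique
    dd if=/dev/urandom of="$target/$(date +%Y%m%d%H%M%S).$i" \
        bs=1k count=3 2>/dev/null    # the report later used count=$RANDOM
    i=$((i + 1))
done
ls "$target" | wc -l                 # five files, 3072 bytes each
```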
> >
> > [root@template-8_CURRENT /test/data2]# zpool get all test
> > NAME  PROPERTY   VALUE  SOURCE
> > test  size       2.98G  -
> > test  used       2.94G  -
> > test  available  47.9M  -
> > test  capacity   98%    -
> > [...]
> > [root@template-8_CURRENT /test/data2]# zfs list -t snapshot
> > no datasets available
> >
> >
> >> Confirm that you've deleted your snapshots and are running pool v13.
> >>
> >> Future ZFS mail should be directed to freebsd-fs@
> > Sorry for that. fixed now ;-))
> >
> > Regards,
> > ---
> > Mr. Olli
> >
> >
> >>
> >>
> >> On Sat, Jun 20, 2009 at 10:36 AM, Mister Olli wrote:
> >> > Hi,
> >> >
> >> > after filling up a ZFS volume until the last byte, I'm unable to delete
> >> > files, with error 'No space left on the device'.
> >> >
> >> > [root@template-8_CURRENT /test/data2]# df -h
> >> > Filesystem     Size    Used   Avail Capacity  Mounted on
> >> > /dev/ad0s1a    8.7G    5.2G    2.8G    65%    /
> >> > devfs          1.0K    1.0K      0B   100%    /dev
> >> > test             0B      0B      0B   100%    /test
> >> > test/data1     1.6G    1.6G      0B   100%    /test/data1
> >> > test/data2     341M    341M      0B   100%    /test/data2
> >> > [...]
> >> >
> >> > Regards,
> >> > ---
> >> > Mr. Olli

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 19:50:42 2009
From: Mister Olli <mister.olli@googlemail.com>
To: Kip Macy
Cc: freebsd-fs@freebsd.org
Date: Sat, 20 Jun 2009 21:26:05 +0200
In-Reply-To: <3c1674c90906201050w15e4cd5dpae76cd70d64b4e92@mail.gmail.com>
Message-Id: <1245525965.26909.69.camel@phoenix.blechhirn.net>
Subject: Re: Unable to delete files on ZFS volume

Hi,

> Do you have snapshots or run ZFS v6?
neither one nor the other. Here are my pool/ZFS details.
[root@template-8_CURRENT /test/data2]# zpool get all test
NAME  PROPERTY       VALUE                SOURCE
test  size           2.98G                -
test  used           2.94G                -
test  available      47.9M                -
test  capacity       98%                  -
test  altroot        -                    default
test  health         ONLINE               -
test  guid           5305090209740383945  -
test  version        13                   default
test  bootfs         -                    default
test  delegation     on                   default
test  autoreplace    off                  default
test  cachefile      -                    default
test  failmode       wait                 default
test  listsnapshots  off                  default
[root@template-8_CURRENT /test/data2]# zfs get all test
NAME  PROPERTY              VALUE                  SOURCE
test  type                  filesystem             -
test  creation              Fri Jun 19 21:01 2009  -
test  used                  1.96G                  -
test  available             0                      -
test  referenced            26.6K                  -
test  compressratio         1.00x                  -
test  mounted               yes                    -
test  quota                 none                   default
test  reservation           none                   default
test  recordsize            128K                   default
test  mountpoint            /test                  default
test  sharenfs              off                    default
test  checksum              on                     default
test  compression           off                    default
test  atime                 on                     default
test  devices               on                     default
test  exec                  on                     default
test  setuid                on                     default
test  readonly              off                    default
test  jailed                off                    default
test  snapdir               hidden                 default
test  aclmode               groupmask              default
test  aclinherit            restricted             default
test  canmount              on                     default
test  shareiscsi            off                    default
test  xattr                 off                    temporary
test  copies                1                      default
test  version               3                      -
test  utf8only              off                    -
test  normalization         none                   -
test  casesensitivity       sensitive              -
test  vscan                 off                    default
test  nbmand                off                    default
test  sharesmb              off                    default
test  refquota              none                   default
test  refreservation        none                   default
test  primarycache          all                    default
test  secondarycache        all                    default
test  usedbysnapshots       0                      -
test  usedbydataset         26.6K                  -
test  usedbychildren        1.96G                  -
test  usedbyrefreservation  0                      -
[root@template-8_CURRENT /test/data2]# zfs list -t snapshot
no datasets available

> Confirm that you've deleted your snapshots and are running pool v13.
>
> Future ZFS mail should be directed to freebsd-fs@
Sorry for that. fixed now ;-))

Regards,
---
Mr. Olli

>
>
> On Sat, Jun 20, 2009 at 10:36 AM, Mister Olli wrote:
> > Hi,
> >
> > after filling up a ZFS volume until the last byte, I'm unable to delete
> > files, with error 'No space left on the device'.
> >
> > [root@template-8_CURRENT /test/data2]# df -h
> > Filesystem     Size    Used   Avail Capacity  Mounted on
> > /dev/ad0s1a    8.7G    5.2G    2.8G    65%    /
> > devfs          1.0K    1.0K      0B   100%    /dev
> > test             0B      0B      0B   100%    /test
> > test/data1     1.6G    1.6G      0B   100%    /test/data1
> > test/data2     341M    341M      0B   100%    /test/data2
> > [...]
> >
> > Regards,
> > ---
> > Mr. Olli
> >
> > _______________________________________________
> > freebsd-current@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-current
> > To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
> >

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 19:55:12 2009
From: Mister Olli <mister.olli@googlemail.com>
To: Dan Naumov
Cc: freebsd-fs@freebsd.org
Date: Sat, 20 Jun 2009 21:55:00 +0200
Message-Id: <1245527700.26909.86.camel@phoenix.blechhirn.net>
Subject: Re: Unable to delete files on ZFS volume

Hi,

sounds like a great idea; I'm going to try that as soon as Kip Macy
doesn't need further information.

The reason I filled up the pool was that I had just gotten ZFS to work
and started playing around to see how stable it is. As I want to deploy
it on my home file server (which sees no heavy usage), I tried to
simulate some work on the filesystem and came up with the bash loops (as
described in my other mail). Filling up the pool happened 'accidentally'.

BTW, I'm pretty impressed by how well it works. From my reading I
expected the first crash within minutes. Great job.

Regards,
---
Mr. Olli

On Sat, 2009-06-20 at 22:42 +0300, Dan Naumov wrote:
> Hi.
>
> As Kip pointed out, this is a known issue with write allocate
> filesystems in general (not just ZFS). This is one of the several
> reasons why SUN recommends you do not completely fill up a zpool (they
> actually recommend to stay at or below 80% utilization). I have a
> workaround for you, however:
>
> Pick a file you don't need on the filled up ZFS volume.
> "Empty" the file contents in a way of your choosing. This should give
> you the bit of disk space needed to use "rm", let you further free up
> your filesystem, and allow for normal operation. This is a bit ugly,
> but it works.
> 
> - Sincerely,
> Dan Naumov
> 
> >>> On Sat, Jun 20, 2009 at 10:36 AM, Mister Olli wrote:
> >>> > Hi,
> >>> >
> >>> > after filling up a ZFS volume until the last byte, I'm unable to delete
> >>> > files, with error 'No space left on the device'.
> >>> >
> >>> > [root@template-8_CURRENT /test/data2]# df -h
> >>> > Filesystem     Size    Used   Avail  Capacity  Mounted on
> >>> > /dev/ad0s1a    8.7G    5.2G    2.8G     65%    /
> >>> > devfs          1.0K    1.0K      0B    100%    /dev
> >>> > test             0B      0B      0B    100%    /test
> >>> > test/data1     1.6G    1.6G      0B    100%    /test/data1
> >>> > test/data2     341M    341M      0B    100%    /test/data2
> >>> > [root@template-8_CURRENT /test/data2]# zfs list
> >>> > NAME         USED  AVAIL  REFER  MOUNTPOINT
> >>> > test        1.96G      0  26.6K  /test
> >>> > test/data1  1.62G      0  1.62G  /test/data1
> >>> > test/data2   341M      0   341M  /test/data2
> >>> > [root@template-8_CURRENT /test/data2]# ls -l data1 | tail -n 20   <-- there are quite a lot of files, so I truncated ;-))
> >>> > -rw-r--r--  1 root  wheel      3072 Jun 20 17:13 20090620165743
> >>> > -rw-r--r--  1 root  wheel   9771008 Jun 20 17:11 20090620165803
> >>> > -rw-r--r--  1 root  wheel    624640 Jun 20 17:12 20090620165809
> >>> > -rw-r--r--  1 root  wheel   1777664 Jun 20 17:14 20090620165810
> >>> > -rw-r--r--  1 root  wheel   4059136 Jun 20 17:15 20090620165817
> >>> > -rw-r--r--  1 root  wheel  23778304 Jun 20 17:13 20090620165925
> >>> > -rw-r--r--  1 root  wheel  20318208 Jun 20 17:13 20090620165952
> >>> > -rw-r--r--  1 root  wheel  28394496 Jun 20 17:10 20090620170013
> >>> > -rw-r--r--  1 root  wheel  23698432 Jun 20 17:12 20090620170021
> >>> > -rw-r--r--  1 root  wheel  26476544 Jun 20 17:19 20090620170100
> >>> > -rw-r--r--  1 root  wheel  19904512 Jun 20 17:15 20090620170132
> >>> > -rw-r--r--  1 root  wheel  23815168 Jun 20 17:14 20090620170142
> >>> > -rw-r--r--  1 root  wheel   6683648 Jun 20 17:11 20090620170225
> >>> > -rw-r--r--  1 root  wheel  19619840 Jun 20 17:11 20090620170322
> >>> > -rw-r--r--  1 root  wheel  13902848 Jun 20 17:13 20090620170331
> >>> > -rw-r--r--  1 root  wheel  28981248 Jun 20 17:13 20090620170346
> >>> > -rw-r--r--  1 root  wheel  18287616 Jun 20 17:11 20090620170355
> >>> > -rw-r--r--  1 root  wheel  16762880 Jun 20 17:16 20090620170405
> >>> > -rw-r--r--  1 root  wheel  26966016 Jun 20 17:10 20090620170429
> >>> > -rw-r--r--  1 root  wheel   5252096 Jun 20 17:14 20090620170502
> >>> > [root@template-8_CURRENT /test/data2]# rm -rf data1
> >>> > rm: data1/20090620141524: No space left on device
> >>> > rm: data1/20090620025202: No space left on device
> >>> > rm: data1/20090620014926: No space left on device
> >>> > rm: data1/20090620075405: No space left on device
> >>> > rm: data1/20090620155124: No space left on device
> >>> > rm: data1/20090620105723: No space left on device
> >>> > rm: data1/20090620170100: No space left on device
> >>> > rm: data1/20090620040149: No space left on device
> >>> > rm: data1/20090620002512: No space left on device
> >>> > rm: data1/20090620052315: No space left on device
> >>> > rm: data1/20090620083750: No space left on device
> >>> > rm: data1/20090620063831: No space left on device
> >>> > rm: data1/20090620155029: No space left on device
> >>> > rm: data1/20090619234313: No space left on device
> >>> > rm: data1/20090620115346: No space left on device
> >>> > rm: data1/20090620075508: No space left on device
> >>> > rm: data1/20090620145541: No space left on device
> >>> > rm: data1/20090620093335: No space left on device
> >>> > rm: data1/20090620101846: No space left on device
> >>> > rm: data1/20090620132456: No space left on device
> >>> > rm: data1/20090620040044: No space left on device
> >>> > rm: data1/20090620091401: No space left on device
> >>> > rm: data1/20090620162251: No space left on device
> >>> > rm: data1/20090619220813: No space left on device
> >>> > rm: data1/20090620010643: No space left on device
> >>> > rm: data1/20090620052218: No space left on device
> >>> >
> >>> > Regards,
> >>> > ---
> >>> > Mr. Olli
> >
> > --
> > When bad men combine, the good must associate; else they will fall one
> > by one, an unpitied sacrifice in a contemptible struggle.
> > 
> > Edmund Burke
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 21:29:27 2009
From: Dan Naumov <dan.naumov@gmail.com>
To: FreeBSD-STABLE Mailing List, freebsd-fs@freebsd.org
Date: Sun, 21 Jun 2009 00:29:26 +0300
Subject: ufs2 / softupdates / ZFS / disk write cache

I have the following setup:

A single consumer-grade 2tb SATA disk: Western Digital Green (model
WDC WD20EADS-00R6B0). This disk is set up like this:

a 16gb root partition with UFS2 + softupdates, containing mostly static
things: /bin /boot /etc /root /sbin /usr /var and such

a 1,9tb non-redundant zfs pool on top of a slice; it hosts things like
/DATA, /home, /usr/local, /var/log and such.

What should I do to ensure (as much as possible) filesystem consistency
of the root filesystem in the case of a power loss? I know there have
been a lot of discussions on the subject of consumer-level disks
literally lying about the state of files in transit (disks telling the
system that files have been written to disk while in reality they are
still in the disk's write cache), in turn throwing softupdates off
balance (since softupdates assumes the disks don't lie about such
things), and in turn sometimes resulting in severe data losses in the
case of a system power loss during heavy disk IO.
One of the solutions often brought up on the mailing lists is disabling
the actual disk write cache by adding hw.ata.wc=0 to /boot/loader.conf.
FreeBSD 4.3 actually shipped with this setting by default, but it was
reverted because some people reported a write performance regression to
the tune of becoming 4-6 times slower.

So what should I do in my case? Should I disable the disk write cache
via the hw.ata.wc tunable? As far as I know, ZFS has a write cache of
its own, and since the ufs2 root filesystem in my case is mostly static
data, I am guessing I "shouldn't" notice that big of a performance hit.
Or am I completely in the wrong here, and is setting hw.ata.wc=0 going
to adversely affect the write performance on both the root partition
AND the zfs pool, despite zfs using its own write cache?

Another thing I have been pondering: I do have 2gb of space on the
system (currently used as swap; I have 2 swap slices, one 1gb at the
very beginning of the disk, the other 2gb at the end) which I could
turn into a GJOURNAL for the root filesystem...
Sincerely,
- Dan Naumov

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 20 23:26:55 2009
From: Erik Trulsson
To: Dan Naumov
Date: Sun, 21 Jun 2009 01:11:30 +0200
Message-ID: <20090620231130.GA88907@owl.midgard.homeip.net>
Cc: freebsd-fs@freebsd.org, FreeBSD-STABLE Mailing List
Subject: Re: ufs2 / softupdates / ZFS / disk write cache

On Sun, Jun 21, 2009 at 12:29:26AM +0300, Dan Naumov wrote:
> I have the following setup:
> 
> A single consumer-grade 2tb SATA disk: Western Digital Green (model
> WDC WD20EADS-00R6B0). This disk is setup like this:
> 
> 16gb root partition with UFS2 + softupdates, containing mostly static things:
> /bin /boot /etc /root /sbin /usr /var and such
> 
> a 1,9tb non-redundant zfs pool on top of a slice, it hosts things like:
> /DATA, /home, /usr/local, /var/log and such.
> 
> What should I do to ensure (as much as possible) filesystem
> consistency of the root filesystem in the case of the power loss? I
> know there have been a lot of discussions on the subject of
> consumer-level disks literally lying about the state of files in
> transit (disks telling the system that files have been written to disk
> while in reality they are still in disk's write cache), in turn
> throwing softupdates off balance (since softupdates assumes the disks
> don't lie about such things), in turn sometimes resulting in severe
> data losses in the case of a system power loss during heavy disk IO.

Note that this is not something specific to softupdates, but applies
when you are not using softupdates as well.
> 
> One of the solutions that was often brought up in the mailing lists is
> disabling the actual disk write cache via adding hw.ata.wc=0 to
> /boot/loader.conf. FreeBSD 4.3 actually even had this setting by
> default, but this was apparently reverted because some people have
> reported a write performance regression to the tune of becoming 4-6
> times slower. So what should I do in my case? Should I disable the
> disk write cache via the hw.ata.wc tunable? As far as I know, ZFS has
> a write cache of its own and since the ufs2 root filesystem in my
> case is mostly static data, I am guessing I "shouldn't" notice that
> big of a performance hit. Or am I completely in the wrong here and
> setting hw.ata.wc=0 is going to adversely affect the write performance
> on both the root partition AND the zfs pool despite zfs using its own
> write cache?

Why don't you try it and see if you notice the performance hit? You
will almost certainly see some reduced write performance if you disable
the disk's cache, but how noticeable this will be for your setup and
your disk usage is something only you can answer. My guess is that it
will be quite noticeable, but that is only a guess.

(Keep in mind that UFS+softupdates does quite a bit of write-caching on
its own, so just switching to ZFS is unlikely to improve write
performance significantly compared to using UFS.)

-- 
Erik Trulsson
ertr1013@student.uu.se
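[Editor's note] Dan Naumov's "empty a file" workaround from the "Unable
to delete files on ZFS volume" thread above can be sketched with plain
shell tools. This is a minimal illustration against a throwaway
temporary file, not a real full pool; on an actual 100%-full pool you
would truncate a file you don't need inside the affected filesystem
(e.g. somewhere under /test/data1), and exact behavior may vary:

```shell
#!/bin/sh
# Sketch of the "empty a file" workaround for a 100%-full ZFS pool.
# On a completely full pool, rm can fail with "No space left on device"
# because ZFS is copy-on-write and needs a little free space to record
# the deletion. Truncating an existing file in place is the workaround
# suggested in the thread for getting that initial free space back.

# Create a throwaway ~1 MB file standing in for a file you don't need.
victim=$(mktemp) || exit 1
dd if=/dev/zero of="$victim" bs=4096 count=256 2>/dev/null

ls -l "$victim"    # about 1048576 bytes

# "Empty" the file contents: truncate it to zero length without rm.
: > "$victim"

ls -l "$victim"    # same file, now 0 bytes; its blocks are freed

# With some space available again, rm works normally.
rm "$victim"
```

The `: > file` redirection truncates in place using only the shell's
no-op builtin, so it works even when no new commands or files can be
created on the full filesystem.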