From owner-freebsd-fs@FreeBSD.ORG Sun Dec 14 00:15:30 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B5A9D106564A; Sun, 14 Dec 2008 00:15:30 +0000 (UTC) (envelope-from bms@incunabulum.net) Received: from out1.smtp.messagingengine.com (out1.smtp.messagingengine.com [66.111.4.25]) by mx1.freebsd.org (Postfix) with ESMTP id 855FB8FC24; Sun, 14 Dec 2008 00:15:30 +0000 (UTC) (envelope-from bms@incunabulum.net) Received: from compute1.internal (compute1.internal [10.202.2.41]) by out1.messagingengine.com (Postfix) with ESMTP id 01EB11E6649; Sat, 13 Dec 2008 19:15:30 -0500 (EST) Received: from heartbeat2.messagingengine.com ([10.202.2.161]) by compute1.internal (MEProxy); Sat, 13 Dec 2008 19:15:30 -0500 X-Sasl-enc: rPWoR1wW7Mn2y5wGqUpbKSgJpx+RtsefLoQrQWh8JCKK 1229213729 Received: from empiric.lon.incunabulum.net (82-35-112-254.cable.ubr07.dals.blueyonder.co.uk [82.35.112.254]) by mail.messagingengine.com (Postfix) with ESMTPSA id 24D49277C2; Sat, 13 Dec 2008 19:15:28 -0500 (EST) Message-ID: <4944501E.40900@incunabulum.net> Date: Sun, 14 Dec 2008 00:15:26 +0000 From: Bruce M Simpson User-Agent: Thunderbird 2.0.0.18 (X11/20081205) MIME-Version: 1.0 To: "Paul B. Mahol" References: <8cb6106e0811241129o642dcf28re4ae177c8ccbaa25@mail.gmail.com> <20081125150342.GL2042@deviant.kiev.zoral.com.ua> <8cb6106e0812031453j6dc2f2f4i374145823c084eaa@mail.gmail.com> <200812041747.09040.gnemmi@gmail.com> <4938FE44.9090608@FreeBSD.org> <4939133E.2000701@FreeBSD.org> <493CEE90.7050104@FreeBSD.org> <3a142e750812090553l564bff84pe1f02cd1b03090ff@mail.gmail.com> <4943F43B.4060105@incunabulum.net> <3a142e750812131403p31841403ub9d5693278c74111@mail.gmail.com> In-Reply-To: <3a142e750812131403p31841403ub9d5693278c74111@mail.gmail.com> X-Enigmail-Version: 0.95.6 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org, "Bruce M. Simpson" Subject: Re: ext2fuse: user-space ext2 implementation X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Dec 2008 00:15:30 -0000 Paul B. Mahol wrote: >> Can you please relay this feedback to the authors of ext2fuse? >> >> As mentioned earlier in the thread, the ext2fuse code could benefit from >> UBLIO-ization. Are you or any other volunteers happy to help out here? >> > > Well, first higher priority would be to fix existing bugs. It would be > very little > gain with user cache, because it is already too much IMHO slow and > adding user cache > will not make it faster, but that is not port problem. > I'm not aware of bugs with ext2fuse itself; my work on the port was merely to try to raise awareness that a user-space project for ext2 filesystem access existed. Can you elaborate further on your experience with ext2fuse which seems to you to be buggy, i.e. symptoms, root cause analysis etc. ? Have you reported these to the author(s)? Have you measured the performance? Is the performance sufficient for the needs of an occasional desktop user? 
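(A crude way to answer the performance question, assuming the ext2fuse volume is mounted at /mnt/ext2 and a large test file exists there -- both names hypothetical -- is to time a sequential read through the FUSE mount and compare it with the same read from a native filesystem:)

# 1 GB sequential read through the FUSE mount (path and file are hypothetical)
dd if=/mnt/ext2/testfile of=/dev/null bs=1m count=1024
# same-sized read from a file on a native UFS filesystem, for comparison
dd if=/usr/testfile of=/dev/null bs=1m count=1024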
I realise we are largely involved in content-free argument here, however the trade-off of ext2fuse vs ext2fs in the FreeBSD kernel source tree, is that of a hopefully more actively maintained implementation vs one which is not maintained at all, and any alternatives for FreeBSD users would be welcome. thanks BMS From owner-freebsd-fs@FreeBSD.ORG Sun Dec 14 15:29:09 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1EE521065670 for ; Sun, 14 Dec 2008 15:29:09 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id AB2E48FC08 for ; Sun, 14 Dec 2008 15:29:08 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from volatile.chemikals.org (unknown [74.193.182.107]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id A92B584CD702 for ; Sun, 14 Dec 2008 09:29:05 -0600 (CST) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.3/8.14.3) with ESMTP id mBEFT295067916 for ; Sun, 14 Dec 2008 09:29:02 -0600 (CST) (envelope-from morganw@chemikals.org) Date: Sun, 14 Dec 2008 09:29:02 -0600 (CST) From: Wes Morgan To: freebsd-fs@freebsd.org Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII Subject: SFF-8087 to fanout cable question X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Dec 2008 15:29:09 -0000 Like some of you who have built big ZFS arrays on multi-port controllers, I've got a mess of cables in my chassis... Now I have some SAS backplanes with mini-sas (SFF-8087) plugs. I know they work because I've used them with an Areca 1680 controller and some standard mini-sas cables. I decided that I wanted to go a different direction, and purchased an ASUS P5BV/SAS board that has a builtin 8-port LSI 1068-based SAS controller (and I highly recommend it). Now I have the 4x SAS to SFF-8087, and it doesn't want to work... But it worked going from the Areca controller to a regular 8-port SAS/SATA backplane. Are these cables only usable in one direction? In summary: mini-sas (controller) to mini-sas (backplane) - works mini-sas (controller) to 8-port SAS/SATA (backplane) - works 8-port SAS (controller) to mini-sas (backplane) - nada 8-port SAS (controller) to 8-port SAS/SATA (backplane) - works Any ideas? Sorry if this is just a "duh" question. 
I don't do this for a living, I'm just obsessive about my media server :) From owner-freebsd-fs@FreeBSD.ORG Sun Dec 14 15:47:29 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C1F7E1065670; Sun, 14 Dec 2008 15:47:29 +0000 (UTC) (envelope-from onemda@gmail.com) Received: from yx-out-2324.google.com (yx-out-2324.google.com [74.125.44.29]) by mx1.freebsd.org (Postfix) with ESMTP id 4B7A08FC21; Sun, 14 Dec 2008 15:47:29 +0000 (UTC) (envelope-from onemda@gmail.com) Received: by yx-out-2324.google.com with SMTP id 8so942238yxb.13 for ; Sun, 14 Dec 2008 07:47:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:cc:in-reply-to:mime-version:content-type :content-transfer-encoding:content-disposition:references; bh=5FFmonQ1Zuc0CjPKY3+ALKvpy2FVIzHg5DaZQOz3iSs=; b=E9WHhKL63uzEDX6KjpzB8nVkr7M0cgFMHNSUpUUHi0EIVs9W0OMF2dmDsmD87mURQQ ifQH0mKg0QIQtELnr/bj4EYZibbn5TpStm81sYgDsXv+Z0e0NE7ATz1i1JIxDR6ZMjjE IJgIbOoVFR8d3BMSCtpayyCBLAQzCnJuvVV5s= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version :content-type:content-transfer-encoding:content-disposition :references; b=Owibq86s8SiTwIA+N/StEzdDFLY0QIU0e1aMt0uuhkDAvwJ1U63ex+GmN5QD8rJ7vD W49NQpa7EBB80cL/Dx6wRtTKGtzcLBXVQIrAgpJXohBEYC226q8/FzD5HZP6f9Ij/QM+ hGiOrMtvqW9XAwdJ+LZOapkPANs1BWqSM2U/8= Received: by 10.231.18.130 with SMTP id w2mr68628iba.11.1229269648344; Sun, 14 Dec 2008 07:47:28 -0800 (PST) Received: by 10.231.11.72 with HTTP; Sun, 14 Dec 2008 07:47:28 -0800 (PST) Message-ID: <3a142e750812140747r2eb5ebadp7ac2b2c8ae357bae@mail.gmail.com> Date: Sun, 14 Dec 2008 16:47:28 +0100 From: "Paul B. Mahol" To: "Bruce M Simpson" In-Reply-To: <4944501E.40900@incunabulum.net> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <8cb6106e0811241129o642dcf28re4ae177c8ccbaa25@mail.gmail.com> <8cb6106e0812031453j6dc2f2f4i374145823c084eaa@mail.gmail.com> <200812041747.09040.gnemmi@gmail.com> <4938FE44.9090608@FreeBSD.org> <4939133E.2000701@FreeBSD.org> <493CEE90.7050104@FreeBSD.org> <3a142e750812090553l564bff84pe1f02cd1b03090ff@mail.gmail.com> <4943F43B.4060105@incunabulum.net> <3a142e750812131403p31841403ub9d5693278c74111@mail.gmail.com> <4944501E.40900@incunabulum.net> Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org, "Bruce M. Simpson" Subject: Re: ext2fuse: user-space ext2 implementation X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Dec 2008 15:47:29 -0000 On 12/14/08, Bruce M Simpson wrote: > Paul B. Mahol wrote: >>> Can you please relay this feedback to the authors of ext2fuse? >>> >>> As mentioned earlier in the thread, the ext2fuse code could benefit from >>> UBLIO-ization. Are you or any other volunteers happy to help out here? >>> >> >> Well, first higher priority would be to fix existing bugs. It would be >> very little >> gain with user cache, because it is already too much IMHO slow and >> adding user cache >> will not make it faster, but that is not port problem. 
>> > > I'm not aware of bugs with ext2fuse itself; my work on the port was > merely to try to raise awareness that a user-space project for ext2 > filesystem access existed. > > Can you elaborate further on your experience with ext2fuse which seems > to you to be buggy, i.e. symptoms, root cause analysis etc. ? Have you > reported these to the author(s)? I have read TODO. > Have you measured the performance? Is the performance sufficient for the > needs of an occasional desktop user? Performance was not sufficient, and adding user cache will not improve access speed on first read. After mounting ext2fs volume (via md(4)) created with e2fsprogs port and copying data from ufs to ext2, reading was quite slow. Also ext2fuse after mount doesnt exits it is still running displaying debug data - explaining why project itselfs is in alpha state. > I realise we are largely involved in content-free argument here, however > the trade-off of ext2fuse vs ext2fs in the FreeBSD kernel source tree, > is that of a hopefully more actively maintained implementation vs one > which is not maintained at all, and any alternatives for FreeBSD users > would be welcome. Project itself doesnt look very active, but I may be wrong. It is in alpha state as reported on SF. IMHO it is better to maintain our own because it is in better shape, but I'm not intersted in ext* as developer. -- Paul From owner-freebsd-fs@FreeBSD.ORG Sun Dec 14 16:33:55 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 86E601065686 for ; Sun, 14 Dec 2008 16:33:55 +0000 (UTC) (envelope-from michael@fuckner.net) Received: from dedihh.fuckner.net (dedihh.fuckner.net [81.209.183.161]) by mx1.freebsd.org (Postfix) with ESMTP id 42B568FC1F for ; Sun, 14 Dec 2008 16:33:55 +0000 (UTC) (envelope-from michael@fuckner.net) Received: from localhost (localhost [127.0.0.1]) by dedihh.fuckner.net (Postfix) with ESMTP id CE1CD61D07; Sun, 14 Dec 2008 17:33:53 +0100 (CET) X-Virus-Scanned: amavisd-new at fuckner.net Received: from dedihh.fuckner.net ([127.0.0.1]) by localhost (dedihh.fuckner.net [127.0.0.1]) (amavisd-new, port 10024) with SMTP id 5tLsT9j6id0r; Sun, 14 Dec 2008 17:33:49 +0100 (CET) Received: from dedihh.fuckner.net (localhost [127.0.0.1]) by dedihh.fuckner.net (Postfix) with ESMTP id A4BE061D01; Sun, 14 Dec 2008 17:33:48 +0100 (CET) Received: from 85.176.191.115 (SquirrelMail authenticated user molli123) by dedihh.fuckner.net with HTTP; Sun, 14 Dec 2008 17:33:48 +0100 (CET) Message-ID: <34a9ac6411eca56bca7766a5c745c694.squirrel@dedihh.fuckner.net> In-Reply-To: References: Date: Sun, 14 Dec 2008 17:33:48 +0100 (CET) From: "Michael Fuckner" To: "Wes Morgan" User-Agent: SquirrelMail/1.4.18 [SVN] MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 X-Priority: 3 (Normal) Importance: Normal Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: SFF-8087 to fanout cable question X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Dec 2008 16:33:55 -0000 > Like some of you who have built big ZFS arrays on multi-port controller= s, > I've got a mess of cables in my chassis... Now I have some SAS backplan= es > with mini-sas (SFF-8087) plugs. I know they work because I've used them > with an Areca 1680 controller and some standard mini-sas cables. 
I deci= ded > that I wanted to go a different direction, and purchased an ASUS P5BV/S= AS > board that has a builtin 8-port LSI 1068-based SAS controller (and I > highly recommend it). Now I have the 4x SAS to SFF-8087, and it doesn't > want to work... But it worked going from the Areca controller to a regu= lar > 8-port SAS/SATA backplane. > Are these cables only usable in one direction? Yes! Multilane-multilane cables are all the same (even if some vendors have different part numbers for different directions). If you connect discrete ports to a multilane-backplane you need a reverse break-out cable. See: http://3ware.com/products/pdf/3ware_Cable_Brochure.pdf In contrast you need forward breakout for multilane controllers and discrete backplanes. > In summary: > > mini-sas (controller) to mini-sas (backplane) - works > mini-sas (controller) to 8-port SAS/SATA (backplane) - works > 8-port SAS (controller) to mini-sas (backplane) - nada > 8-port SAS (controller) to 8-port SAS/SATA (backplane) - works > > Any ideas? Sorry if this is just a "duh" question. I don't do this for = a > living, I'm just obsessive about my media server :) Trust me, it took me quite a while to figure this out... Regards, Michael! From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 11:06:51 2008 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E278A1065673 for ; Mon, 15 Dec 2008 11:06:51 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D50658FC12 for ; Mon, 15 Dec 2008 11:06:51 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id mBFB6pLJ004320 for ; Mon, 15 Dec 2008 11:06:51 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id mBFB6pE4004316 for freebsd-fs@FreeBSD.org; Mon, 15 Dec 2008 11:06:51 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 15 Dec 2008 11:06:51 GMT Message-Id: <200812151106.mBFB6pE4004316@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2008 11:06:52 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description
--------------------------------------------------------------------------------
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129174 fs [nfs][zfs][panic] NFS v3 Panic when under high load ex
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/129084 fs [udf] [panic] udf panic: getblk: size(67584) > MAXBSIZ
f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE
o kern/128633 fs [zfs] [lor] lock order reversal in zfs
o kern/128514 fs [zfs] [mpt] problems with ZFS and LSILogic SAS/SATA Ad
o kern/128173 fs [ext2fs] ls gives "Input/output error" on mounted ext3
o kern/127420 fs [gjournal] [panic] Journal overflow on gmirrored gjour
o kern/127213 fs [tmpfs] sendfile on tmpfs data corruption
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125536 fs [ext2fs] ext 2 mounts cleanly but fails on commands li
o kern/125149 fs [nfs][panic] changing into .zfs dir from nfs client ca
o kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition
o kern/122888 fs [zfs] zfs hang w/ prefetch on, zil off while running t
o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o bin/118249 fs mv(1): moving a directory changes its mtime
o kern/116170 fs [panic] Kernel panic when mounting /tmp
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468 fs [patch] [request] add -d option to umount(8) to detach
o bin/113838 fs [patch] [request] mount(8): add support for relative p
o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
28 problems total.
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 17:05:09 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D885F106564A for ; Mon, 15 Dec 2008 17:05:09 +0000 (UTC) (envelope-from des@des.no) Received: from tim.des.no (tim.des.no [194.63.250.121]) by mx1.freebsd.org (Postfix) with ESMTP id 93E898FC0C for ; Mon, 15 Dec 2008 17:05:09 +0000 (UTC) (envelope-from des@des.no) Received: from ds4.des.no (des.no [84.49.246.2]) by smtp.des.no (Postfix) with ESMTP id 465906D43F; Mon, 15 Dec 2008 17:05:08 +0000 (UTC) Received: by ds4.des.no (Postfix, from userid 1001) id 18C42844AD; Mon, 15 Dec 2008 18:05:08 +0100 (CET) From: =?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?= To: rick-freebsd2008@kiwi-computer.com References: <20081213173902.GA96883@keira.kiwi-computer.com> <20081213183058.GA20992@a91-153-125-115.elisa-laajakaista.fi> <20081213192320.GA97766@keira.kiwi-computer.com> Date: Mon, 15 Dec 2008 18:05:07 +0100 In-Reply-To: <20081213192320.GA97766@keira.kiwi-computer.com> (Rick C.
Petty's message of "Sat, 13 Dec 2008 13:23:20 -0600") Message-ID: <86y6yh5pz0.fsf@ds4.des.no> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.60 (berkeley-unix) MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: UFS label limitations X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2008 17:05:09 -0000 "Rick C. Petty" writes: > Well at the very least can we allow all characters between 0x20 and 0x7e > except for: "&/<>\ Stick to the POSIX portable file name character set: [A-Za-z0-9._-] DES --=20 Dag-Erling Sm=C3=B8rgrav - des@des.no From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 19:55:41 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E1EA3106564A for ; Mon, 15 Dec 2008 19:55:41 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from yx-out-2324.google.com (yx-out-2324.google.com [74.125.44.29]) by mx1.freebsd.org (Postfix) with ESMTP id 95B6D8FC13 for ; Mon, 15 Dec 2008 19:55:41 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: by yx-out-2324.google.com with SMTP id 8so1322388yxb.13 for ; Mon, 15 Dec 2008 11:55:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:cc:in-reply-to:mime-version:content-type:references; bh=9ttu1cbQsUjVRnHj1AhhMnmtiH0uD16urCiBQ4vCSWQ=; b=vT1G+Da0iScY1CtKEgrPd2nVgHheao5KulNKmb1fVRdvRJJHqgt9MCxwsL3XGT+OiW Mfk/e/yb6EWD76rKvdFzUgd1tQzwiQNp0MobNWuLoiVEyzT4AeAaEAoKpSOj8hf6fFkW d7epwP+GsjkBNUKdxXNJ8FwwFimQ4DxWhSUEU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version :content-type:references; b=FBCR2s9M4iGYQj8Z9fJF2auJMDbgFALmCTO58e3KdT2IYn224kRDPmmmUrrPgmz/gn 4YTJP9tVRF/mvv1eAiDx18exGWsGWyuVXgStSqdWl2G+LcR5UiZPhncacNq+AAy3KHre dR/GaTUHBrvWTOPI2FbU9F1Fp3+96xZdQUTqU= Received: by 10.151.13.7 with SMTP id q7mr3976362ybi.180.1229370940893; Mon, 15 Dec 2008 11:55:40 -0800 (PST) Received: by 10.151.130.10 with HTTP; Mon, 15 Dec 2008 11:55:40 -0800 (PST) Message-ID: <5f67a8c40812151155o166b96b1meef07e685307c9ba@mail.gmail.com> Date: Mon, 15 Dec 2008 14:55:40 -0500 From: "Zaphod Beeblebrox" To: "Christopher Arnold" In-Reply-To: MIME-Version: 1.0 References: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS and other filesystem semantics. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2008 19:55:42 -0000 On Tue, Dec 9, 2008 at 9:01 AM, Christopher Arnold wrote: > i have been thinking a bit about filesytem semantics lately. Mainly about > open files. > > Classicly if a file is open the filedescriptor continues accessing the same > file regardless if it is deleted or someone did a mv and replaced it. > > But what happens in ZFS? > > * delete file in ZFS > I guess this is a no brainer, standard unix way of accessing the old file. When all references to data are freed, the data is freed. 
directory entries and open files are both references. > * The fs get snapshotted and file deleted > Same as above i guess. A snapshot counts as a reference > * The fs gets snapshotted and later the snapshot get deleted... > What happens here? A snapshot is a reference. When the file is "deleted" the snapshot still references the data. When the snapshot is deleted, if the data has no other references, it is freed. > Or maybe even: > * The fs gets snapshotted, file deleted, then snapshot deleted. > > (These questions are actually just a sidestep from the issue im trying to > figure out right no. But i guess they are nevertheless interesting.) > > The reason i have been thinking about this is that i'm implementing a > remote RO filesystem with local caching. And to reduce latency i download > chunks of the files and cache these chunks. I'm trying to keep the > filesystem stateless, but my issue is that if the file get changed under our > feet the resulting chunks would be from different files. > > Have anyone seen a nice solution to this issue? > > Does anyone have any ideas of how to implement unix like semantics over a > stateless procotol without to much magic? The semantics you desire are basically reference counting. From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 20:39:42 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 146701065679 for ; Mon, 15 Dec 2008 20:39:42 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from mail-gx0-f10.google.com (mail-gx0-f10.google.com [209.85.217.10]) by mx1.freebsd.org (Postfix) with ESMTP id 701A28FC14 for ; Mon, 15 Dec 2008 20:39:41 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: by gxk3 with SMTP id 3so2285251gxk.12 for ; Mon, 15 Dec 2008 12:39:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:cc:in-reply-to:mime-version:content-type:references; bh=d3jxlC3/OvT1M3VFwkEK2zuHF1f6fO65yKmpRLpIHYs=; b=HsaJ+/bW0s4c/ven3odI7Kn/C4V2o1x1+lbW84xPQcDwNOMPxZ9lswFHGcrUd32mmY A05dL3PIKRTqWQovdCtRdsI7gTk5nvyHfi5h5oxz6FwZBL9afW3j/GgWVpgrsxtZeThg kC6kE1axc+kzy6MmErtYJ6ajR8nRpT851i+l0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version :content-type:references; b=nziRhlfH/IzLTEg8yrUqtYK5ZZ27+BRzx6ndnBs+GvL/1fLwYXQDebdCif7YjIUr3p r/jhfFXsk6ozt6YiaMFIu81QeZtsDivZa8cACgGq6mgMZNQr5brCDAJ6q4BJwQdWC/vY NgaeSh/d19k4yEeNWXRAQoZGm5X3bZeMa1B3w= Received: by 10.150.177.20 with SMTP id z20mr13247463ybe.193.1229373580513; Mon, 15 Dec 2008 12:39:40 -0800 (PST) Received: by 10.151.130.10 with HTTP; Mon, 15 Dec 2008 12:39:40 -0800 (PST) Message-ID: <5f67a8c40812151239o2b1f1f4cje7170cb1221133cd@mail.gmail.com> Date: Mon, 15 Dec 2008 15:39:40 -0500 From: "Zaphod Beeblebrox" To: "Bryan Alves" In-Reply-To: <92f477740812090804k102dcb62qcd893b3263da56a9@mail.gmail.com> MIME-Version: 1.0 References: <92f477740812082155y3365bec7v5574206dd1a98e26@mail.gmail.com> <493E2AD2.8070704@jrv.org> <92f477740812090804k102dcb62qcd893b3263da56a9@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS resize disk vdev X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2008 20:39:42 -0000 On Tue, Dec 9, 2008 at 11:04 AM, Bryan Alves wrote: > On Tue, Dec 9, 2008 at 3:22 AM, James R. Van Artsdalen < > james-freebsd-fs2@jrv.org> wrote: > > > > I'm not sure how ZFS reacts to an existing disk drive suddenly becoming > > larger. Real disk drives don't do that and ZFS is intended to use real > > disks. There are some uberblocks (pool superblocks) at the end of the > > disk and ZFS probably won't be able to find them if the uberblocks at > > the front of the disk are clobbered and the "end of the disk" has moved > > out away from the remaining uberblocks. > Very well, in fact. In fact, one way to "grow" a RAID Z1 or Z2 pool is to replace each disk with a larger one. When the last one is finished resilvering, you will have more space. My reason for wanting to use my hardware controller isn't for speed, it's > for the ability to migrate in place. I'm currently using 5 750GB drives, > and I would like the flexibility to be able to purchase a 6th and grow my > array by 750GB in place. If I could achieve something, anything, similar > in > ZFS (namely, buy an amount of disks smaller than the number of total disks > in the array and see a gain in storage capacity), I would use ZFS. You can't add one disk... but you can add several (easily). There are two ways ZFS grows and both are well documented. The first is add another set of disks (at least 2 for mirroring, 3 for Z1 and 4 for z2). ZFS recomends not more than 9 disks per RAID group anyways. In my case, I have 6 750G drives in my array. They're pretty much full... so I'm looking at adding another 6 1T drives shortly. This is transparent and the "industry" would call this RAID50... that is two raid 5 (Z1) groups striped together. The second way to add space is to replace disks with larger ones (one-by-one). Lets say, down the road, that my disks are full again and 4T disks are common and cheap. I replace each 750G disk with a 4T disk and let things resilver. My array would have been 8.75 gig (3.75T from the 750's and 5T from the 1T drives) and it would suddenly be 25T (20 from the 4T drives and 5T from the 1T drives). This increase in space occurs when the last drive is resilvered. This last step is good because at some point drives are not worth the power to run. I turned off my array of 18G SCSI drives a couple of years ago --- it wasn't worth the power. In the ZFS realm... instead of transferring the data and turning off the system, you upgrade. 
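(In zpool terms, the two growth paths described above look roughly like the sketch below; the pool and device names are hypothetical, and note that adding a vdev is a one-way step -- it cannot be removed later.)

# path 1: stripe a second raidz1 vdev onto the existing pool
zpool add tank raidz ad14 ad16 ad18 ad20 ad22 ad24
# path 2: replace each member disk with a larger one, letting each
# resilver finish before doing the next; the extra capacity appears
# once the last replacement completes
zpool replace tank ad4 ad12
zpool status tank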
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 21:12:09 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 387581065672 for ; Mon, 15 Dec 2008 21:12:09 +0000 (UTC) (envelope-from bryanalves@gmail.com) Received: from wa-out-1112.google.com (wa-out-1112.google.com [209.85.146.183]) by mx1.freebsd.org (Postfix) with ESMTP id 068EC8FC1A for ; Mon, 15 Dec 2008 21:12:08 +0000 (UTC) (envelope-from bryanalves@gmail.com) Received: by wa-out-1112.google.com with SMTP id m34so1384203wag.27 for ; Mon, 15 Dec 2008 13:12:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:cc:in-reply-to:mime-version:content-type:references; bh=rHn9XaBRCsVcKttsN+JskJKAOxsxVdoFk2Odb9ClEF8=; b=puKoWMjwqNYtR/m/WqJWP5zXEHr4MDYjPq4eOACgBwcKOFz/UzYZor4WZgbbMljGLV gxfmFadWhYrLdYX4Q5vC6thhqT1XPxIBqt96RQtWIirxFia4vtIaWkms65eqfin47dmN Fajxg9siHr7YAy0U+r79hBi7vNMpmRQuI4GxQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version :content-type:references; b=GlBHB50PNly7bJ+EwxD0Jw+IUZdhX9Quhe5tIIJ9sJXnB3a7XcVrq97zKrtpNriFPC UrOoRtkKMBJ6EeduhYlPwnQs7zXm1Q3Q16JXd3pLHJUzQicnFMegZ5IYlceCR3vxEJlJ LFm5oiDQUmWiuh4X+rG7VSbhZoRXeOoZ2Ob0Y= Received: by 10.115.94.1 with SMTP id w1mr5198491wal.177.1229375528568; Mon, 15 Dec 2008 13:12:08 -0800 (PST) Received: by 10.114.103.20 with HTTP; Mon, 15 Dec 2008 13:12:08 -0800 (PST) Message-ID: <92f477740812151312vccef91eu171062a50eb46ca1@mail.gmail.com> Date: Mon, 15 Dec 2008 16:12:08 -0500 From: "Bryan Alves" To: "Zaphod Beeblebrox" In-Reply-To: <5f67a8c40812151239o2b1f1f4cje7170cb1221133cd@mail.gmail.com> MIME-Version: 1.0 References: <92f477740812082155y3365bec7v5574206dd1a98e26@mail.gmail.com> <493E2AD2.8070704@jrv.org> <92f477740812090804k102dcb62qcd893b3263da56a9@mail.gmail.com> <5f67a8c40812151239o2b1f1f4cje7170cb1221133cd@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS resize disk vdev X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2008 21:12:09 -0000 On Mon, Dec 15, 2008 at 3:39 PM, Zaphod Beeblebrox wrote: > On Tue, Dec 9, 2008 at 11:04 AM, Bryan Alves wrote: > >> On Tue, Dec 9, 2008 at 3:22 AM, James R. Van Artsdalen < >> james-freebsd-fs2@jrv.org> wrote: >> > > >> >> > I'm not sure how ZFS reacts to an existing disk drive suddenly becoming >> > larger. Real disk drives don't do that and ZFS is intended to use real >> > disks. There are some uberblocks (pool superblocks) at the end of the >> > disk and ZFS probably won't be able to find them if the uberblocks at >> > the front of the disk are clobbered and the "end of the disk" has moved >> > out away from the remaining uberblocks. >> > > Very well, in fact. In fact, one way to "grow" a RAID Z1 or Z2 pool is to > replace each disk with a larger one. When the last one is finished > resilvering, you will have more space. > > My reason for wanting to use my hardware controller isn't for speed, it's >> for the ability to migrate in place. 
I'm currently using 5 750GB drives, >> and I would like the flexibility to be able to purchase a 6th and grow my >> array by 750GB in place. If I could achieve something, anything, similar >> in >> ZFS (namely, buy an amount of disks smaller than the number of total disks >> in the array and see a gain in storage capacity), I would use ZFS. > > > You can't add one disk... but you can add several (easily). There are two > ways ZFS grows and both are well documented. > > The first is add another set of disks (at least 2 for mirroring, 3 for Z1 > and 4 for z2). ZFS recomends not more than 9 disks per RAID group anyways. > In my case, I have 6 750G drives in my array. They're pretty much full... > so I'm looking at adding another 6 1T drives shortly. This is transparent > and the "industry" would call this RAID50... that is two raid 5 (Z1) groups > striped together. > > The second way to add space is to replace disks with larger ones > (one-by-one). Lets say, down the road, that my disks are full again and 4T > disks are common and cheap. I replace each 750G disk with a 4T disk and let > things resilver. My array would have been 8.75 gig (3.75T from the 750's > and 5T from the 1T drives) and it would suddenly be 25T (20 from the 4T > drives and 5T from the 1T drives). This increase in space occurs when the > last drive is resilvered. > > This last step is good because at some point drives are not worth the power > to run. I turned off my array of 18G SCSI drives a couple of years ago --- > it wasn't worth the power. In the ZFS realm... instead of transferring the > data and turning off the system, you upgrade. > In the case of option one, after this stripe of 2 raidz's is created though, those old drives can't be pulled from the array can they? More specifically, after we "upgrade" to what would be termed Raid50, we can't "downgrade" back to Raid5, right? 
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 22:09:23 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C5B5A1065675 for ; Mon, 15 Dec 2008 22:09:23 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from rn-out-0910.google.com (rn-out-0910.google.com [64.233.170.191]) by mx1.freebsd.org (Postfix) with ESMTP id 67BEE8FC2C for ; Mon, 15 Dec 2008 22:09:23 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: by rn-out-0910.google.com with SMTP id j71so3323341rne.12 for ; Mon, 15 Dec 2008 14:09:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:cc:in-reply-to:mime-version:content-type:references; bh=iY31tRdzFAM2YMcyIDhOIfGfuBKd1aLWd29HKJJ19sw=; b=YIZfKCIpzP6xsPHYylOCfMeEfpfxrZdSUDodunsytmijI/nC9bTOzEEW+f2XEFsIjL 1te9ChbIgC39H/I//sJCDTix9rOaCnvttdMByjgVyK4NiTYeTzqVR58HAmNWTsG+6a0M Fd0Inz9w+WPZ5asfboHuoP2FsK/TmU/U6BWZc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version :content-type:references; b=ZNF5BdRH3Zb3+LgrSdkakTmW5ShF9o6kfmA8rqqB3ZFwGASTSmy36iQvlq9/NOE+A0 1pmHxp0gJwJauKyk2YPNTSJYsnD4t1NLhw3NOQt00t3KD3o1C5zrBNgSemcxexvopEk7 BskK0NFzPX/XHmA83MUcIrKhq6bte7jJapggE= Received: by 10.150.216.3 with SMTP id o3mr13396789ybg.113.1229378962535; Mon, 15 Dec 2008 14:09:22 -0800 (PST) Received: by 10.151.130.10 with HTTP; Mon, 15 Dec 2008 14:09:22 -0800 (PST) Message-ID: <5f67a8c40812151409g665b81f2v261a8aa035db679b@mail.gmail.com> Date: Mon, 15 Dec 2008 17:09:22 -0500 From: "Zaphod Beeblebrox" To: "Bryan Alves" In-Reply-To: <92f477740812151312vccef91eu171062a50eb46ca1@mail.gmail.com> MIME-Version: 1.0 References: <92f477740812082155y3365bec7v5574206dd1a98e26@mail.gmail.com> <493E2AD2.8070704@jrv.org> <92f477740812090804k102dcb62qcd893b3263da56a9@mail.gmail.com> <5f67a8c40812151239o2b1f1f4cje7170cb1221133cd@mail.gmail.com> <92f477740812151312vccef91eu171062a50eb46ca1@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS resize disk vdev X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2008 22:09:23 -0000 On Mon, Dec 15, 2008 at 4:12 PM, Bryan Alves wrote: > In the case of option one, after this stripe of 2 raidz's is created > though, those old drives can't be pulled from the array can they? More > specifically, after we "upgrade" to what would be termed Raid50, we can't > "downgrade" back to Raid5, right? > According to the ZFS website, the ability of removing vdevs is planned but not yet implemented. You can also not shink a vdev... so you're not losing any functionality there. Even if your hardware raid supported shrinking the array, ZFS would not. 
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 15 23:48:11 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 41FC11065676 for ; Mon, 15 Dec 2008 23:48:11 +0000 (UTC) (envelope-from rick@kiwi-computer.com) Received: from kiwi-computer.com (keira.kiwi-computer.com [63.224.10.3]) by mx1.freebsd.org (Postfix) with SMTP id DE2BB8FC19 for ; Mon, 15 Dec 2008 23:48:10 +0000 (UTC) (envelope-from rick@kiwi-computer.com) Received: (qmail 24565 invoked by uid 2001); 15 Dec 2008 23:48:09 -0000 Date: Mon, 15 Dec 2008 17:48:09 -0600 From: "Rick C. Petty" To: Dag-Erling =?iso-8859-1?Q?Sm=F8rgrav?= Message-ID: <20081215234809.GA24403@keira.kiwi-computer.com> References: <20081213173902.GA96883@keira.kiwi-computer.com> <20081213183058.GA20992@a91-153-125-115.elisa-laajakaista.fi> <20081213192320.GA97766@keira.kiwi-computer.com> <86y6yh5pz0.fsf@ds4.des.no> Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <86y6yh5pz0.fsf@ds4.des.no> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@freebsd.org Subject: Re: UFS label limitations X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: rick-freebsd2008@kiwi-computer.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Dec 2008 23:48:11 -0000 On Mon, Dec 15, 2008 at 06:05:07PM +0100, Dag-Erling Smørgrav wrote: > "Rick C. Petty" writes: > > Well at the very least can we allow all characters between 0x20 and 0x7e > > except for: "&/<>\ > > Stick to the POSIX portable file name character set: [A-Za-z0-9._-] Good idea. It gives me the separators I need. Would a committer be willing to review and commit the attached (inline) patch? -- Rick C. Petty --- src/sbin/newfs/newfs.c.orig 2007-03-02 14:07:59.000000000 -0600 +++ src/sbin/newfs/newfs.c 2008-12-15 17:29:26.000000000 -0600 @@ -168,11 +168,15 @@ case 'L': volumelabel = optarg; i = -1; - while (isalnum(volumelabel[++i])); - if (volumelabel[i] != '\0') { - errx(1, "bad volume label. Valid characters are alphanumerics."); - } - if (strlen(volumelabel) >= MAXVOLLEN) { + while ((ch = volumelabel[++i]) != '\0') + if (ch != '-' && ch != '.' && ch != '_' && + (ch < '0' || ch > '9') && + (ch < 'A' || ch > 'Z') && + (ch < 'a' || ch > 'z')) + errx(1, + "bad volume label. Valid characters are " + "[0-9A-Za-z._-]."); + if (i >= MAXVOLLEN) { errx(1, "bad volume label. Length is longer than %d.", MAXVOLLEN); } --- src/sbin/tunefs/tunefs.c.orig 2008-02-26 14:25:35.000000000 -0600 +++ src/sbin/tunefs/tunefs.c 2008-12-15 17:27:58.000000000 -0600 @@ -153,13 +153,16 @@ name = "volume label"; Lvalue = optarg; i = -1; - while (isalnum(Lvalue[++i])); - if (Lvalue[i] != '\0') { + while ((ch = Lvalue[++i]) != '\0') + if (ch != '-' && ch != '.' && ch != '_' && + (ch < '0' || ch > '9') && + (ch < 'A' || ch > 'Z') && + (ch < 'a' || ch > 'z')) errx(10, - "bad %s. Valid characters are alphanumerics.", + "bad %s. Valid characters are " + "[0-9A-Za-z._-].", name); - } - if (strlen(Lvalue) >= MAXVOLLEN) { + if (i >= MAXVOLLEN) { errx(10, "bad %s. 
Length is longer than %d.", name, MAXVOLLEN - 1); } From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 13:29:53 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B14FF1065670 for ; Tue, 16 Dec 2008 13:29:53 +0000 (UTC) (envelope-from des@des.no) Received: from tim.des.no (tim.des.no [194.63.250.121]) by mx1.freebsd.org (Postfix) with ESMTP id 789298FC1B for ; Tue, 16 Dec 2008 13:29:53 +0000 (UTC) (envelope-from des@des.no) Received: from ds4.des.no (des.no [84.49.246.2]) by smtp.des.no (Postfix) with ESMTP id 628C86D43F; Tue, 16 Dec 2008 13:29:52 +0000 (UTC) Received: by ds4.des.no (Postfix, from userid 1001) id 429E3844BA; Tue, 16 Dec 2008 14:29:52 +0100 (CET) From: =?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?= To: rick-freebsd2008@kiwi-computer.com References: <20081213173902.GA96883@keira.kiwi-computer.com> <20081213183058.GA20992@a91-153-125-115.elisa-laajakaista.fi> <20081213192320.GA97766@keira.kiwi-computer.com> <86y6yh5pz0.fsf@ds4.des.no> <20081215234809.GA24403@keira.kiwi-computer.com> Date: Tue, 16 Dec 2008 14:29:52 +0100 In-Reply-To: <20081215234809.GA24403@keira.kiwi-computer.com> (Rick C. Petty's message of "Mon, 15 Dec 2008 17:48:09 -0600") Message-ID: <8663lk5ju7.fsf@ds4.des.no> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.60 (berkeley-unix) MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: UFS label limitations X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2008 13:29:53 -0000 "Rick C. Petty" writes: > Dag-Erling Sm=C3=B8rgrav writes: > > Stick to the POSIX portable file name character set: [A-Za-z0-9._-] > Good idea. It gives me the separators I need. Would a committer be > willing to review and commit the attached (inline) patch? Take a look at strspn(3). DES --=20 Dag-Erling Sm=C3=B8rgrav - des@des.no From owner-freebsd-fs@FreeBSD.ORG Tue Dec 16 20:10:48 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 500471065674 for ; Tue, 16 Dec 2008 20:10:48 +0000 (UTC) (envelope-from rick@kiwi-computer.com) Received: from kiwi-computer.com (keira.kiwi-computer.com [63.224.10.3]) by mx1.freebsd.org (Postfix) with SMTP id DC1C48FC30 for ; Tue, 16 Dec 2008 20:10:47 +0000 (UTC) (envelope-from rick@kiwi-computer.com) Received: (qmail 34863 invoked by uid 2001); 16 Dec 2008 20:10:46 -0000 Date: Tue, 16 Dec 2008 14:10:46 -0600 From: "Rick C. 
Petty" To: Dag-Erling =?iso-8859-1?Q?Sm=F8rgrav?= Message-ID: <20081216201046.GA34809@keira.kiwi-computer.com> References: <20081213173902.GA96883@keira.kiwi-computer.com> <20081213183058.GA20992@a91-153-125-115.elisa-laajakaista.fi> <20081213192320.GA97766@keira.kiwi-computer.com> <86y6yh5pz0.fsf@ds4.des.no> <20081215234809.GA24403@keira.kiwi-computer.com> <8663lk5ju7.fsf@ds4.des.no> Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <8663lk5ju7.fsf@ds4.des.no> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@freebsd.org Subject: Re: UFS label limitations X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: rick-freebsd2008@kiwi-computer.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 16 Dec 2008 20:10:48 -0000 On Tue, Dec 16, 2008 at 02:29:52PM +0100, Dag-Erling Smørgrav wrote: > "Rick C. Petty" writes: > > Dag-Erling Smørgrav writes: > > > Stick to the POSIX portable file name character set: [A-Za-z0-9._-] > > Good idea. It gives me the separators I need. Would a committer be > > willing to review and commit the attached (inline) patch? > > Take a look at strspn(3). You think it's better to use strspn than a couple range checks? I would have thought the character lookup with strspn would be slower and more ghastly to look at. -- Rick C. Petty From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 07:18:18 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 343031065672 for ; Wed, 17 Dec 2008 07:18:18 +0000 (UTC) (envelope-from osharoiko@gmail.com) Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.169]) by mx1.freebsd.org (Postfix) with ESMTP id C11068FC12 for ; Wed, 17 Dec 2008 07:18:17 +0000 (UTC) (envelope-from osharoiko@gmail.com) Received: by ug-out-1314.google.com with SMTP id 30so298930ugs.39 for ; Tue, 16 Dec 2008 23:18:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:subject:from:to:content-type :date:message-id:mime-version:x-mailer:content-transfer-encoding; bh=8/E8QtFD3ifAWFgdv/GSxjmj9cGNlBOUBYfERLjJ16c=; b=g5JkbxvlrCy9c92/AoWk5zXPpDJ2S/SKoxowKHn2O1yjTI/gaGLNDjedblVPX5TyX6 xu2dK89hyngCaJKZkWOoiPFk/kRFf1RWL8vMluyXZRVPLdE4sxZfDVV6WiIgu2Q6z5RC avFgUXL1akXg9+rdHL0IdbhH1Yo9cAs4L6pp0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=subject:from:to:content-type:date:message-id:mime-version:x-mailer :content-transfer-encoding; b=EaQ7JW/MLrEFwKbRv87lKdNJ20Nsvc3YPALIv6q1aTM84aBylJMJaclgnahesXSSFn J5mGFDvGfV0p3mK1pS6BArHTwZmms2w66Rd2Nfh38udpKlxfE5XcLK0Dv0KSLua6Dp96 gYFgo956iujnGDH/SQi1GBgn2P/PdaJTgUrWc= Received: by 10.67.123.8 with SMTP id a8mr3220322ugn.74.1229497080619; Tue, 16 Dec 2008 22:58:00 -0800 (PST) Received: from ?195.208.252.154? 
(brain.cc.rsu.ru [195.208.252.154]) by mx.google.com with ESMTPS id l20sm283623uga.14.2008.12.16.22.57.59 (version=TLSv1/SSLv3 cipher=RC4-MD5); Tue, 16 Dec 2008 22:58:00 -0800 (PST) From: Oleg Sharoyko To: freebsd-fs@freebsd.org Content-Type: text/plain Date: Wed, 17 Dec 2008 09:57:55 +0300 Message-Id: <1229497075.1182.9.camel@brain.cc.rsu.ru> Mime-Version: 1.0 X-Mailer: Evolution 2.22.3.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit Subject: Strange behaviour with unionfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2008 07:18:18 -0000 Hi! Could please someone check the following sequence of commands in recent -CURRENT: cd /tmp mkdir sandbox cd sandbox/ mkdir -p 1 mkdir -p 2/2 mkdir -p 3/3 echo Test > 3/3/test.txt mount -t unionfs 2 1 mount -t nullfs 3 1/2 cat 1/2/3/test.txt test -d 1/2 cat 1/2/3/test.txt I'm running -STABLE with patch for unix sockets (which I converted from -CURRENT) and it gives me really strange results: hetzner-srv1, /tmp # cd /tmp hetzner-srv1, /tmp # mkdir sandbox hetzner-srv1, /tmp # cd sandbox/ hetzner-srv1, /tmp/sandbox # mkdir -p 1 hetzner-srv1, /tmp/sandbox # mkdir -p 2/2 hetzner-srv1, /tmp/sandbox # mkdir -p 3/3 hetzner-srv1, /tmp/sandbox # echo Test > 3/3/test.txt hetzner-srv1, /tmp/sandbox # mount -t unionfs 2 1 hetzner-srv1, /tmp/sandbox # mount -t nullfs 3 1/2 hetzner-srv1, /tmp/sandbox # cat 1/2/3/test.txt cat: 1/2/3/test.txt: No such file or directory hetzner-srv1, /tmp/sandbox # test -d 1/2 hetzner-srv1, /tmp/sandbox # cat 1/2/3/test.txt Test hetzner-srv1, /tmp/sandbox # It looks like files in subdirectories of filesystems mounted on top of unionfs are not visible until I somehow test the mountpoint. -- Oleg Sharoyko. Software and Network Engineer Computer Center of Rostov State University. From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 18:25:54 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4559C1065674 for ; Wed, 17 Dec 2008 18:25:54 +0000 (UTC) (envelope-from dfr@rabson.org) Received: from itchy.rabson.org (unknown [IPv6:2002:50b1:e8f2:1::143]) by mx1.freebsd.org (Postfix) with ESMTP id 024D78FC1C for ; Wed, 17 Dec 2008 18:25:54 +0000 (UTC) (envelope-from dfr@rabson.org) Received: from [IPv6:2001:470:909f:1:21b:63ff:feb8:5abc] (unknown [IPv6:2001:470:909f:1:21b:63ff:feb8:5abc]) by itchy.rabson.org (Postfix) with ESMTP id 8E7A93FB7 for ; Wed, 17 Dec 2008 18:25:25 +0000 (GMT) Message-Id: <9461581F-F354-486D-961D-3FD5B1EF007C@rabson.org> From: Doug Rabson To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Wed, 17 Dec 2008 18:25:51 +0000 X-Mailer: Apple Mail (2.930.3) Subject: Booting from ZFS raidz X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2008 18:25:54 -0000 I've been working on adding raidz and raidz2 support to the boot code and I have a patch which could use some testing if anyone here is interested. This http://people.freebsd.org/~dfr/ raidzboot-17122008.diff adds support for raidz and raidz2. 
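(As a rough sketch of the per-disk preparation described in the next paragraph -- device names, labels and the boot partition size are hypothetical, and the bootcode step assumes a gpart(8) recent enough to have the bootcode verb:)

# repeat for each disk that will be a member of the raidz pool
gpart create -s gpt ad4
gpart add -b 34 -s 128 -t freebsd-boot ad4
gpart add -t freebsd-zfs -l disk0 ad4
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
# then build the pool from the freebsd-zfs partitions
zpool create tank raidz gpt/disk0 gpt/disk1 gpt/disk2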
The easiest way to prepare a bootable pool is to put a GPT boot partition on each disk that will make up the raidz pool and install gptzfsboot on the boot partition of every drive. You can boot from any of the drives and as long as the BIOS can see enough drives you should be able to boot. The boot code supports booting from degraded pools and pools where some of the data is corrupt (as long as it has enough data available to repair the problem). Currently the ZFS kernel code refuses to allow you to set the bootfs pool property on raidz pools (because Solaris can't boot from them). This means that you are limited to booting from the root filesystem of the pool for now (it shouldn't be hard to relax this restriction). The root filesystem of the pool should contain a directory /boot with the usual contents which must include a /boot/loader which was built with the 'LOADER_ZFS_SUPPORT' make option. From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 21:51:02 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 259411065673 for ; Wed, 17 Dec 2008 21:51:02 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from rn-out-0910.google.com (rn-out-0910.google.com [64.233.170.188]) by mx1.freebsd.org (Postfix) with ESMTP id BFFB98FC20 for ; Wed, 17 Dec 2008 21:51:01 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: by rn-out-0910.google.com with SMTP id j71so171002rne.12 for ; Wed, 17 Dec 2008 13:51:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:mime-version:content-type; bh=GPXZKKUYx6NzckyY2NtZTiEVLqFH8kKW+hcKpfqANS8=; b=YlKuRzUF9YT+ibcE43UWxqew6PwhsVxQVRzFLd9HKtXgtwmiCINccDBkR8iF84TDXx GLcDzgzr3D8eGFB4j4xfQKllC2cX/+Yc8Mr68qVwokaVtmcbhBD3Mm47fPOT2sVtdU6H hh7rlLYbjulDa5HORy/BGcKgOS3V/j7QVQo88= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:mime-version:content-type; b=T1g6a4x99jjXXJ5pD8GIjJoJ75gfiAi2En54Xv/XwTKdDXfDaLkVrTl61vf9c2QLW6 +EQobaHqhZ4z1wqx4p5UKRr6lYZ3/P9uz4sHbUygJjjzBDqMNVdBTn9N/aiFX9qC2kdG 0+yISfuQLw3TR+p2CUVBAtoYls5N8cGnT03E0= Received: by 10.151.143.3 with SMTP id v3mr2118728ybn.101.1229550660515; Wed, 17 Dec 2008 13:51:00 -0800 (PST) Received: by 10.151.130.10 with HTTP; Wed, 17 Dec 2008 13:51:00 -0800 (PST) Message-ID: <5f67a8c40812171351j66dc5484pee631198030a5739@mail.gmail.com> Date: Wed, 17 Dec 2008 16:51:00 -0500 From: "Zaphod Beeblebrox" To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: More on ZFS filesystem sizes. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2008 21:51:02 -0000 So... I posted before about the widly different sizes reported by zfs list and du -h for my ports repository. Nobody explained this to any satisfying degree. I now have another quandry. I have ZFS on my laptop (two drives, mirrored) and I "zfs send" backups to my big array (6 drives, raid-Z1). 
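(The backup pipeline being described is roughly of the following shape; the dataset and snapshot names are taken from the listings below, while the host name and the use of ssh as the transport are assumptions.)

# initial full replication of a snapshot from the laptop pool to the array
zfs send canoe/64/usr@20080307-1541 | ssh array zfs receive vr2/backup/canoe/64/usr
# later snapshots go across as incrementals against the previous one
zfs send -i canoe/64/usr@20080307-1541 canoe/64/usr@20080309-1443 | ssh array zfs receive vr2/backup/canoe/64/usr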
The problem is that they don't match up: On the 6 drive array: vr2/backup/canoe/64/usr@20080307-1541 746M - 4.82G - vr2/backup/canoe/64/usr@20080309-1443 221M - 4.79G - vr2/backup/canoe/64/usr@20080319-1722 334M - 4.97G - vr2/backup/canoe/64/usr@20080329-0041 27.8M - 5.24G - vr2/backup/canoe/64/usr@20080402-2300 21.9M - 5.27G - vr2/backup/canoe/64/usr@20080416-0223 18.5M - 5.29G - vr2/backup/canoe/64/usr@20080417-0117 18.6M - 5.29G - On the 2 drive laptop: canoe/64/usr@20080307-1541 738M - 4.76G - canoe/64/usr@20080309-1443 217M - 4.73G - canoe/64/usr@20080319-1722 330M - 4.90G - canoe/64/usr@20080329-0041 26.7M - 5.17G - canoe/64/usr@20080402-2300 20.6M - 5.20G - canoe/64/usr@20080416-0223 17.5M - 5.22G - canoe/64/usr@20080417-0117 17.5M - 5.22G - ... note that the snapshot sizes differ by many megabytes ... and not seemingly any fixed amount, either. From owner-freebsd-fs@FreeBSD.ORG Wed Dec 17 23:17:13 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 29A73106564A for ; Wed, 17 Dec 2008 23:17:13 +0000 (UTC) (envelope-from brooks@lor.one-eyed-alien.net) Received: from lor.one-eyed-alien.net (cl-162.ewr-01.us.sixxs.net [IPv6:2001:4830:1200:a1::2]) by mx1.freebsd.org (Postfix) with ESMTP id 827EC8FC1F for ; Wed, 17 Dec 2008 23:17:12 +0000 (UTC) (envelope-from brooks@lor.one-eyed-alien.net) Received: from lor.one-eyed-alien.net (localhost [127.0.0.1]) by lor.one-eyed-alien.net (8.14.3/8.14.2) with ESMTP id mBHNHvwV030105; Wed, 17 Dec 2008 17:17:57 -0600 (CST) (envelope-from brooks@lor.one-eyed-alien.net) Received: (from brooks@localhost) by lor.one-eyed-alien.net (8.14.3/8.14.3/Submit) id mBHNHvF8030104; Wed, 17 Dec 2008 17:17:57 -0600 (CST) (envelope-from brooks) Date: Wed, 17 Dec 2008 17:17:57 -0600 From: Brooks Davis To: Zaphod Beeblebrox Message-ID: <20081217231757.GE27041@lor.one-eyed-alien.net> References: <5f67a8c40812171351j66dc5484pee631198030a5739@mail.gmail.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="imjhCm/Pyz7Rq5F2" Content-Disposition: inline In-Reply-To: <5f67a8c40812171351j66dc5484pee631198030a5739@mail.gmail.com> User-Agent: Mutt/1.5.17 (2007-11-01) X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-3.0 (lor.one-eyed-alien.net [127.0.0.1]); Wed, 17 Dec 2008 17:17:57 -0600 (CST) Cc: freebsd-fs@freebsd.org Subject: Re: More on ZFS filesystem sizes. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 17 Dec 2008 23:17:13 -0000 --imjhCm/Pyz7Rq5F2 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Wed, Dec 17, 2008 at 04:51:00PM -0500, Zaphod Beeblebrox wrote: > So... I posted before about the widly different sizes reported by zfs list > and du -h for my ports repository. Nobody explained this to any satisfyi= ng > degree. >=20 > I now have another quandry. I have ZFS on my laptop (two drives, mirrore= d) > and I "zfs send" backups to my big array (6 drives, raid-Z1). 
The problem > is that they don't match up: >=20 > On the 6 drive array: >=20 > vr2/backup/canoe/64/usr@20080307-1541 746M - 4.82G - > vr2/backup/canoe/64/usr@20080309-1443 221M - 4.79G - > vr2/backup/canoe/64/usr@20080319-1722 334M - 4.97G - > vr2/backup/canoe/64/usr@20080329-0041 27.8M - 5.24G - > vr2/backup/canoe/64/usr@20080402-2300 21.9M - 5.27G - > vr2/backup/canoe/64/usr@20080416-0223 18.5M - 5.29G - > vr2/backup/canoe/64/usr@20080417-0117 18.6M - 5.29G - >=20 > On the 2 drive laptop: >=20 > canoe/64/usr@20080307-1541 738M - 4.76G - > canoe/64/usr@20080309-1443 217M - 4.73G - > canoe/64/usr@20080319-1722 330M - 4.90G - > canoe/64/usr@20080329-0041 26.7M - 5.17G - > canoe/64/usr@20080402-2300 20.6M - 5.20G - > canoe/64/usr@20080416-0223 17.5M - 5.22G - > canoe/64/usr@20080417-0117 17.5M - 5.22G - >=20 > ... note that the snapshot sizes differ by many megabytes ... and not > seemingly any fixed amount, either. Have you tried asking the zfs developers? I'd tend to assume zfs is reporting the amount of space it thinks it's using and that as long as the numbers are close to expected it's not likely to be a FreeBSD issue. It might well be the case that a given bit of data takes different amounts of space when stored on different pool types due to needing different meta data. -- Brooks --imjhCm/Pyz7Rq5F2 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (FreeBSD) iD8DBQFJSYikXY6L6fI4GtQRAjecAJ98uV9dDJUF8oe3yHArF2AmzyHpGwCeJGPv yuuAa+FxYg2FPEmJuKYGRv8= =H5XJ -----END PGP SIGNATURE----- --imjhCm/Pyz7Rq5F2-- From owner-freebsd-fs@FreeBSD.ORG Thu Dec 18 00:10:57 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D61681065675 for ; Thu, 18 Dec 2008 00:10:57 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id 8EBF38FC12 for ; Thu, 18 Dec 2008 00:10:57 +0000 (UTC) (envelope-from andrew@modulus.org) Received: by email.octopus.com.au (Postfix, from userid 1002) id AE8EB17E55; Thu, 18 Dec 2008 11:10:54 +1100 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=0.1 required=10.0 tests=ALL_TRUSTED,MISSING_HEADERS autolearn=no version=3.2.3 Received: from [10.1.50.60] (ppp121-44-3-41.lns10.syd7.internode.on.net [121.44.3.41]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id B427017E49; Thu, 18 Dec 2008 11:10:49 +1100 (EST) Message-ID: <494994F9.4010105@modulus.org> Date: Thu, 18 Dec 2008 11:10:33 +1100 From: Andrew Snow User-Agent: Thunderbird 2.0.0.14 (X11/20080523) MIME-Version: 1.0 References: <5f67a8c40812171351j66dc5484pee631198030a5739@mail.gmail.com> <20081217231757.GE27041@lor.one-eyed-alien.net> In-Reply-To: <20081217231757.GE27041@lor.one-eyed-alien.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: More on ZFS filesystem sizes. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Dec 2008 00:10:57 -0000 > I now have another quandry. 
I have ZFS on my laptop (two drives, mirrored) > and I "zfs send" backups to my big array (6 drives, raid-Z1). The problem > is that they don't match up As you know, ZFS has variable block sizes from 512 bytes to 128kb with every power of 2 in between. Each block has a fair chunk of meta-data to go with it (those 128 bit pointers aren't very space efficient!) I suppose what you're seeing is due to fragmentation, since with copy-on-write for snapshots, big blocks can be replaced with smaller ones when a file is partially updated, but these can be written more efficiently during the send/receive process, as only the actually referenced data needs to be stored. Given all of that, your numbers are only out by 1 to 1.5%, so is it really that surprising? Regarding du on ZFS, it calculates the result based on the number of blocks consumed by the file, excluding metadata and parity and checksums, and after compression. /usr/ports will be full of tiny, compressable files resulting in a large ratio of metadata to actual file data. "zfs list" returns the space consumed including metadata, parity, and checksums. (Also, filesystem metadata is stored twice by default, or three times optionally, in addition to whatever RAID you are using.) So it is weird, but I believe what you're seeing is normal. Maybe you need special ZFS sunglasses which black out whenever you start trying to look at what ZFS is doing to your files :-) - Andrew From owner-freebsd-fs@FreeBSD.ORG Thu Dec 18 15:44:34 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 45845106567D; Thu, 18 Dec 2008 15:44:34 +0000 (UTC) (envelope-from bms@incunabulum.net) Received: from out1.smtp.messagingengine.com (out1.smtp.messagingengine.com [66.111.4.25]) by mx1.freebsd.org (Postfix) with ESMTP id 13E948FC1B; Thu, 18 Dec 2008 15:44:34 +0000 (UTC) (envelope-from bms@incunabulum.net) Received: from compute1.internal (compute1.internal [10.202.2.41]) by out1.messagingengine.com (Postfix) with ESMTP id AE0111E7A45; Thu, 18 Dec 2008 10:44:33 -0500 (EST) Received: from heartbeat1.messagingengine.com ([10.202.2.160]) by compute1.internal (MEProxy); Thu, 18 Dec 2008 10:44:33 -0500 X-Sasl-enc: R9WdMZmryzBAEhBpnub089UsRhfpjcKr1Qfjm+Rk04oB 1229615073 Received: from anglepoise.lon.incunabulum.net (82-35-112-254.cable.ubr07.dals.blueyonder.co.uk [82.35.112.254]) by mail.messagingengine.com (Postfix) with ESMTPSA id 0BCC33297C; Thu, 18 Dec 2008 10:44:32 -0500 (EST) Message-ID: <494A6FDF.8030103@incunabulum.net> Date: Thu, 18 Dec 2008 15:44:31 +0000 From: Bruce Simpson User-Agent: Thunderbird 2.0.0.18 (X11/20081204) MIME-Version: 1.0 To: "Paul B. Mahol" References: <8cb6106e0811241129o642dcf28re4ae177c8ccbaa25@mail.gmail.com> <8cb6106e0812031453j6dc2f2f4i374145823c084eaa@mail.gmail.com> <200812041747.09040.gnemmi@gmail.com> <4938FE44.9090608@FreeBSD.org> <4939133E.2000701@FreeBSD.org> <493CEE90.7050104@FreeBSD.org> <3a142e750812090553l564bff84pe1f02cd1b03090ff@mail.gmail.com> <4943F43B.4060105@incunabulum.net> <3a142e750812131403p31841403ub9d5693278c74111@mail.gmail.com> <4944501E.40900@incunabulum.net> <3a142e750812140747r2eb5ebadp7ac2b2c8ae357bae@mail.gmail.com> In-Reply-To: <3a142e750812140747r2eb5ebadp7ac2b2c8ae357bae@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org, "Bruce M. 
Simpson" Subject: Re: ext2fuse: user-space ext2 implementation X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Dec 2008 15:44:34 -0000 Paul B. Mahol wrote: > Project itself doesnt look very active, but I may be wrong. It is in alpha state > as reported on SF. > IMHO it is better to maintain our own because it is in better shape, but I'm not > intersted in ext* as developer. > Shelved due to lack of interest, then... others can feel free to pick up. thanks BMS From owner-freebsd-fs@FreeBSD.ORG Thu Dec 18 18:19:37 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 637D6106567E for ; Thu, 18 Dec 2008 18:19:37 +0000 (UTC) (envelope-from matt@corp.spry.com) Received: from yx-out-2324.google.com (yx-out-2324.google.com [74.125.44.28]) by mx1.freebsd.org (Postfix) with ESMTP id 28F0E8FC0C for ; Thu, 18 Dec 2008 18:19:36 +0000 (UTC) (envelope-from matt@corp.spry.com) Received: by yx-out-2324.google.com with SMTP id 8so727674yxb.13 for ; Thu, 18 Dec 2008 10:19:36 -0800 (PST) Received: by 10.142.143.14 with SMTP id q14mr918668wfd.66.1229624375117; Thu, 18 Dec 2008 10:19:35 -0800 (PST) Received: from ?10.0.1.193? (c-67-168-10-190.hsd1.wa.comcast.net [67.168.10.190]) by mx.google.com with ESMTPS id 22sm18166382wfd.53.2008.12.18.10.19.33 (version=TLSv1/SSLv3 cipher=RC4-MD5); Thu, 18 Dec 2008 10:19:34 -0800 (PST) Message-Id: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Thu, 18 Dec 2008 10:19:42 -0800 X-Mailer: Apple Mail (2.930.3) From: Matt Simerson Subject: ZFS performance gains real or imaginary? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 18 Dec 2008 18:19:37 -0000 Did I miss some major ZFS performance enhancements? I upgraded the disks in my home file server to 1.5TB disks. Rather than using gmirror as I did last time, I decided to use ZFS to mirror them. The file server was running 7.0 and booted off a CF card so it was simply a matter of adding in the extra disks, configuring them with ZFS, and copying all the data over. [root@storage] ~ # zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 mirror ONLINE 0 0 0 ad11 ONLINE 0 0 0 ad13 ONLINE 0 0 0 ZFS under FreeBSD 7 is horrendously slow. It took almost two days to copy 600GB of data (a bunch of MP3s, movies, and UFS backups of my servers in data centers) to the ZFS volume. Once completed, I removed the old disks. The file system performance after switching to ZFS is quite underwhelming. I notice it when doing any sort of writes to it. This echoes my experience with ZFS on my production backup servers at work. (all systems are multi-core Intel with 4GB+ RAM). 
$ ssh back01 uname -a FreeBSD back01.int.spry.com 8.0-CURRENT FreeBSD 8.0-CURRENT #0: Fri Aug 15 16:42:36 PDT 2008 root@back01.int.spry.com:/usr/obj/usr/src/sys/BACK01 amd64 $ ssh back02 uname -a FreeBSD back02.int.spry.com 8.0-CURRENT FreeBSD 8.0-CURRENT #1: Wed Aug 13 13:57:19 PDT 2008 root@back02.int.spry.com:/usr/obj/usr/src/sys/BACK02-HEAD amd64 On the two systems above (amd64 with 16GB of RAM and 24 1TB disks) I get about 30 days of uptime before the system hangs with a ZFS error. They write backups to disk 24x7 and never stop. I could not get anything near that level of stability with back03 (below), which was much older hardware maxed out at 4GB of RAM. I finally resolved the stability issues on back03 by ditching ZFS and using geom_stripe across the two hardware RAID arrays. $ ssh back03 uname -a FreeBSD back03.int.spry.com 8.0-CURRENT FreeBSD 8.0-CURRENT #0: Tue Oct 28 16:54:22 PDT 2008 root@back03.int.spry.com:/usr/obj/usr/src/sys/GENERIC amd64 Yesterday I did a cvsup to 8-HEAD and built a new kernel and world. I installed the new kernel, and then panicked slightly when I booted off the new kernel and the ZFS utilities proved completely worthless in attempts to get /usr and /var mounted (which are both on ZFS). It took a quick Google search to remember the solution: mount -t zfs tank/usr /usr mount -t zfs tank/var /var After installing world and rebooting, the system is positively snappy. File system interaction, which is lethargic on every ZFS system I've installed, seems to be much faster. I haven't benchmarked the IO performance but something definitely changed. It's almost like the latency has decreased. Would changes committed between mid-August (when I built my last ZFS servers from -HEAD + the patch) and now explain this? If so, then I really should be upgrading my production ZFS servers to the latest -HEAD. Matt PS: I am using compression and getting the following results: [root@storage] ~ # zfs get compressratio NAME PROPERTY VALUE SOURCE tank compressratio 1.12x - tank/usr compressratio 1.12x - tank/usr/.snapshots compressratio 2.09x - tank/var compressratio 2.13x - In retrospect, I wouldn't bother with compression on /usr. But /usr/.snapshots is my rsnapshot-based backups of my servers sitting in remote data centers. Since the majority of changes between snapshots are log files, the data is quite compressible and ZFS compression is quite effective. It's also quite effective on /var, as is shown. ZFS compression is effectively getting me 1/3 more disk space off my 1.5TB disks.
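For anyone reconstructing a similar setup: the mirrored pool in the zpool status output above and the datasets in this compressratio listing would typically have been created and tuned along the following lines. This is only a sketch -- the exact commands Matt ran are not in his message, and enabling compression at the top of the pool so the children inherit it is an assumption consistent with his numbers, not something he states.

# zpool create tank mirror ad11 ad13
# zfs create tank/usr
# zfs create tank/usr/.snapshots
# zfs create tank/var
# zfs set compression=on tank
# zfs get -r compression,compressratio tank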
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 00:13:03 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6EBA8106568A for ; Fri, 19 Dec 2008 00:13:03 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id 341A28FC25 for ; Fri, 19 Dec 2008 00:13:03 +0000 (UTC) (envelope-from andrew@modulus.org) Received: by email.octopus.com.au (Postfix, from userid 1002) id 2074B17E56; Fri, 19 Dec 2008 11:13:01 +1100 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [10.1.50.60] (ppp121-44-3-41.lns10.syd7.internode.on.net [121.44.3.41]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id 2F2E617D9C; Fri, 19 Dec 2008 11:12:57 +1100 (EST) Message-ID: <494AE6F4.30506@modulus.org> Date: Fri, 19 Dec 2008 11:12:36 +1100 From: Andrew Snow User-Agent: Thunderbird 2.0.0.14 (X11/20080523) MIME-Version: 1.0 To: Matt Simerson References: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> In-Reply-To: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS performance gains real or imaginary? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 00:13:03 -0000 > Did I miss some major ZFS performance enhancements? ZFS under 7 is almost completely useless, since I can make it crash reliably by running "rsync", there's not alot of point talking about its speed! Would changes committed since mid-August (when I > built my last ZFS servers from -HEAD + the patch) and now explain this? Yes. > If so, then I really should be upgrading my production ZFS servers to > the latest -HEAD. Thats correct, that is the only way to get the best working version of ZFS. Of course, then everything is unstable and broken - eg. SMBFS became unusable for me and would crash the server. . 
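(The rebuild cycle being referred to here -- Matt's "cvsup to 8-HEAD and built a new kernel and world" -- is the stock FreeBSD source upgrade. Roughly, and only as a sketch: the standard example supfile still needs its host line edited first, and the single-user steps are the usual ones rather than anything stated in this thread.)

# csup /usr/share/examples/cvsup/standard-supfile
# cd /usr/src && make buildworld buildkernel
# shutdown now
# make installkernel installworld
# mergemaster
# reboot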
ZFS > compression is effectively getting me 1/3 more disk space off my 1.5TB > disks You should try gzip-9 compression mode, it saves almost that much space again all over :-) - Andrew From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 00:48:21 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6F89B1065675 for ; Fri, 19 Dec 2008 00:48:21 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id 8F1658FC28 for ; Fri, 19 Dec 2008 00:48:20 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from volatile.chemikals.org (unknown [74.193.182.107]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id 44D86A309C9D; Thu, 18 Dec 2008 18:48:19 -0600 (CST) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.3/8.14.3) with ESMTP id mBJ0mGUp094889; Thu, 18 Dec 2008 18:48:16 -0600 (CST) (envelope-from morganw@chemikals.org) Date: Thu, 18 Dec 2008 18:48:16 -0600 (CST) From: Wes Morgan To: Matt Simerson In-Reply-To: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> Message-ID: References: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII Cc: freebsd-fs@freebsd.org Subject: Re: ZFS performance gains real or imaginary? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 00:48:21 -0000 On Thu, 18 Dec 2008, Matt Simerson wrote: > ZFS under FreeBSD 7 is horrendously slow. It took almost two days to copy > 600GB of data (a bunch of MP3s, movies, and UFS backups of my servers in data > centers) to the ZFS volume. Once completed, I removed the old disks. The file > system performance after switching to ZFS is quite underwhelming. I notice it > when doing any sort of writes to it. This echoes my experience with ZFS on > my production backup servers at work. (all systems are multi-core Intel with > 4GB+ RAM). That sounds completely contrary to my experience. I was able to migrate a 1.3 TB 6-disk raidz to a 8-disk raidz2, so the data had to come off and go back on. Took about 12-14 hours in total. My original setup included an SiS 2-port PCI SATA controller, which was a dog. Upgrading to a better setup improved the write performance drastically. But I don't think I load my systems down quite as much. I did have to upgrade to -current once I went to a board with higher throughput, as -stable would eventually deadlock each pool. > > On the two systems above (amd64 with 16GB of RAM and 24 1TB disks) I get > about 30 days of uptime before the system hangs with a ZFS error. They write > backups to disk 24x7 and never stop. I could not anything near that level of > stability with back03 (below) which was much older hardware maxed out at 4GB > of RAM. I finally resolved the stability issues on back03 by ditching ZFS > and using geom_stripe across the two hardware RAID arrays. Were you doing a zfs mirror across two hardware raid arrays? The performance of that type of setup would probably be sub-optimal versus a zpool with two raidz volumes. 
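(Concretely, the alternative layout being suggested -- one pool built from two raidz vdevs rather than a ZFS mirror sitting on two hardware arrays -- is created in a single command; the pool name and the da device names below are placeholders, not details from the thread.)

# zpool create backup raidz da0 da1 da2 da3 da4 da5 raidz da6 da7 da8 da9 da10 da11
# zpool status backup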
> Yesterday I did a cvsup to 8-HEAD and built a new kernel and world. I > installed the new kernel, and then paniced slightly when I booted off the new > kernel and the ZFS utilities proved completely worthless in attempts to get > /usr and /var mounted (which are both on ZFS). It took a quick Google search > to remember the solution: *cough* ABI compatibility isn't always preserved across releases. The best way to go from 7 to 8 is usually to perform the buildworld and buildkernel, drop into single user mode and install them both, then reboot. However, you're likely to run into problems that would require to to export/import your pools. > After installing world and rebooting, the system is positively snappy. File > system interaction, which is lethargic on every ZFS system I've installed > seems to be much faster. I haven't benchmarked the IO performance but > something definitely changed. It's almost like the latency has decreased. > Would changes committed since mid-August (when I built my last ZFS servers > from -HEAD + the patch) and now explain this? > > If so, then I really should be upgrading my production ZFS servers to the > latest -HEAD. > > Matt > > PS: I am using compression and getting the following results: > > [root@storage] ~ # zfs get compressratio > NAME PROPERTY VALUE SOURCE > tank compressratio 1.12x - > tank/usr compressratio 1.12x - > tank/usr/.snapshots compressratio 2.09x - > tank/var compressratio 2.13x - > > In retrospect, I wouldn't bother with compression on /usr. But, > /usr/.snapshots is my rsnapshot based backups of my servers sitting in remote > data centers. Since the majority of changes between snapshots is log files, > the data is quite compressible and ZFS compressions is quite effective. It's > also quite effective on /var, as is shown. ZFS compression is effectively > getting me 1/3 more disk space off my 1.5TB > disks._______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 05:37:42 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 665871065679 for ; Fri, 19 Dec 2008 05:37:42 +0000 (UTC) (envelope-from matt@corp.spry.com) Received: from wf-out-1314.google.com (wf-out-1314.google.com [209.85.200.175]) by mx1.freebsd.org (Postfix) with ESMTP id 440518FC21 for ; Fri, 19 Dec 2008 05:37:42 +0000 (UTC) (envelope-from matt@corp.spry.com) Received: by wf-out-1314.google.com with SMTP id 24so1125473wfg.7 for ; Thu, 18 Dec 2008 21:37:41 -0800 (PST) Received: by 10.142.125.9 with SMTP id x9mr1154440wfc.236.1229665061840; Thu, 18 Dec 2008 21:37:41 -0800 (PST) Received: from imac24.simerson.net (c-67-168-10-190.hsd1.wa.comcast.net [67.168.10.190]) by mx.google.com with ESMTPS id 32sm15184476wfc.39.2008.12.18.21.37.40 (version=TLSv1/SSLv3 cipher=RC4-MD5); Thu, 18 Dec 2008 21:37:41 -0800 (PST) Message-Id: From: Matt Simerson To: freebsd-fs@freebsd.org In-Reply-To: Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Thu, 18 Dec 2008 21:37:39 -0800 References: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> X-Mailer: Apple Mail (2.930.3) Subject: Re: ZFS performance gains real or imaginary? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 05:37:42 -0000 On Dec 18, 2008, at 4:48 PM, Wes Morgan wrote: >> On the two systems above (amd64 with 16GB of RAM and 24 1TB disks) >> I get about 30 days of uptime before the system hangs with a ZFS >> error. They write backups to disk 24x7 and never stop. I could not >> anything near that level of stability with back03 (below) which was >> much older hardware maxed out at 4GB of RAM. I finally resolved >> the stability issues on back03 by ditching ZFS and using >> geom_stripe across the two hardware RAID arrays. > > Were you doing a zfs mirror across two hardware raid arrays? The > performance of that type of setup would probably be sub-optimal > versus a zpool with two raidz volumes. I haven't benchmarked it with -HEAD but with FreeBSD 7, using a ZFS mirror across two 12-disk hardware RAID arrays (Areca 1231ML) was significantly (not quite double) faster than using JBOD and raidz. I tested a few variations (four disk pools, six disk zpools, 8 disk zpools, etc). I'll be getting another 24 disk system to add to my backup pool in a month or two. When it arrives, I'll run some additional benchmarks with -HEAD and see where the numbers fall. I'll be quite surprised if raidz can outrun a hardware RAID controller with 512MB of BBWC. Matt From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 09:09:10 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EFAF9106564A for ; Fri, 19 Dec 2008 09:09:10 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org) Received: from mail.jrv.org (adsl-70-243-84-13.dsl.austtx.swbell.net [70.243.84.13]) by mx1.freebsd.org (Postfix) with ESMTP id CEE3E8FC17 for ; Fri, 19 Dec 2008 09:09:09 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org) Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.13.1) with ESMTP id mBJ8vRkJ041939; Fri, 19 Dec 2008 02:57:30 -0600 (CST) (envelope-from james-freebsd-fs2@jrv.org) Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:to:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=EwjN7PrcokQBfHZg/T7uQOJVIherP1AHj6WObH0A6l5z5owoALJ48CCT7hjzUpNHw D2svVE0pfqV/dj8YzblePgb9q+N2b0Ixjxulw0qHXClIZKbkOcWwRfWLLd7wJ3mFGPI TaPbpK31ERr+6BmNUXX6QmZ7e0g1+bimAqp7HRc= Message-ID: <494B61F7.3030904@jrv.org> Date: Fri, 19 Dec 2008 02:57:27 -0600 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.18 (Macintosh/20081105) MIME-Version: 1.0 To: Matt Simerson References: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS performance gains real or imaginary? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 09:09:11 -0000 Matt Simerson wrote: > I haven't benchmarked it with -HEAD but with FreeBSD 7, using a ZFS > mirror across two 12-disk hardware RAID arrays (Areca 1231ML) was > significantly (not quite double) faster than using JBOD and raidz. I > tested a few variations (four disk pools, six disk zpools, 8 disk > zpools, etc). A backup server is a *highly* specialized type of server. It's likely that data is only rarely updated, meaning that there are very few partial parity-stripe writes for the Areca to deal with. A database server receiving many updates would have an entirely different pattern of write I/O, possibly forcing many partial stripe updates. Since ZFS (almost?) never does partial stripe writes in a RAIDZ the performance comparison between ZFS with JBOD and your hardware setup might change considerably with a database workload. Not to mention the dominance of sequential I/O in a backup server, etc. For a backup server ZFS has other advantages. A client's backup server recently ran low on space so I took over another 4x1GB enclosure and added it to the pool with no downtime: there were a couple of large file writes to that pool running when I arrived that were still going when I left. There's also the issue of cost: once SATA port multiplier support works in FreeBSD it will be very practical to build cheap ~15TB servers for a small business using ZFS. From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 12:44:52 2008 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 59E7F1065674; Fri, 19 Dec 2008 12:44:52 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 314738FC12; Fri, 19 Dec 2008 12:44:52 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (linimon@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id mBJCiq8C075371; Fri, 19 Dec 2008 12:44:52 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id mBJCiqe4075367; Fri, 19 Dec 2008 12:44:52 GMT (envelope-from linimon) Date: Fri, 19 Dec 2008 12:44:52 GMT Message-Id: <200812191244.mBJCiqe4075367@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/129760: [nfs] after 'umount -f' of a stale NFS share FreeBSD locks up X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 12:44:52 -0000 Old Synopsis: after 'umount -f' of a stale NFS share FreeBSD locks up New Synopsis: [nfs] after 'umount -f' of a stale NFS share FreeBSD locks up Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Dec 19 12:44:33 UTC 2008 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=129760 From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 17:30:29 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F348A1065674 for ; Fri, 19 Dec 2008 17:30:28 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: from yx-out-2324.google.com (yx-out-2324.google.com [74.125.44.28]) by mx1.freebsd.org (Postfix) with ESMTP id A570B8FC1E for ; Fri, 19 Dec 2008 17:30:28 +0000 (UTC) (envelope-from zbeeble@gmail.com) Received: by yx-out-2324.google.com with SMTP id 8so1091446yxb.13 for ; Fri, 19 Dec 2008 09:30:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:cc:in-reply-to:mime-version:content-type:references; bh=YxbULAtCe5fcqIMtR0tdtplkv0zcm6W4kllfOOBZc8Q=; b=ioI4QzblXy4xG+KoGoRRZOcW3n7d9lOst1Y4uTF8SUeNlh1PdZF+OIKLp87v7NDlUv PuDJSYjQsA96ezkZnDf+j068483Opjo5RcoM/9OWPjyK4ZSc4QPizVSJ+MEWseGGo6ih R2Z26agGYoo1Hamt2q4N9x6NeYoh3WW52nRbU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version :content-type:references; b=xsQBExTmWtEoJDyjQS8ckwZK09hpaCdF2OuePY8fs8gKOCC/yYdR9fNVBkYib0RutT 79h0aaKwc17dXLxG7EQNVimUE40Aov64CJ8OJSRc9YClBKwS1yWaQE1k26bJzXLwgR0h mkM0FvwqIaFPC4WG52Rvpdk5UloMjMdRGwF18= Received: by 10.151.108.15 with SMTP id k15mr5937822ybm.179.1229707827794; Fri, 19 Dec 2008 09:30:27 -0800 (PST) Received: by 10.151.130.10 with HTTP; Fri, 19 Dec 2008 09:30:27 -0800 (PST) Message-ID: <5f67a8c40812190930s51353898w2c8479b6afc25c8b@mail.gmail.com> Date: Fri, 19 Dec 2008 12:30:27 -0500 From: "Zaphod Beeblebrox" To: "James R. Van Artsdalen" In-Reply-To: <494B61F7.3030904@jrv.org> MIME-Version: 1.0 References: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com> <494B61F7.3030904@jrv.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS performance gains real or imaginary? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 17:30:29 -0000 On Fri, Dec 19, 2008 at 3:57 AM, James R. Van Artsdalen < james-freebsd-fs2@jrv.org> wrote: > There's also the issue of cost: once SATA port multiplier support works > in FreeBSD it will be very practical to build cheap ~15TB servers for a > small business using ZFS. It's certainly not bad already. There are consumer cases that will take 15 to 18 hard drives internally. There are motherboards with 6 or 8 SATA ports. And there are simple SATA cards that are cheap enough these days. I think I got a 4 port for $40 for my machine. I see "buy it now"'s for 8 port cards around $100 on eBay. 16 ports * 1T drives is ~15TB. Make it 1.5T drives and RAID-Z2 and you have more protection and a bit more space. 
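Some rough numbers for the box sketched above, plus the commands it implies (device and pool names are placeholders): a single raidz2 vdev over 16 x 1.5 TB drives gives up two drives to parity, leaving roughly 14 x 1.5 = 21 TB of raw capacity before filesystem overhead; with 1 TB drives the same layout yields about 14 TB. Growing the pool later, the no-downtime expansion James described, is just a matter of adding another vdev.

# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 da14 da15
# zpool add tank raidz2 da16 da17 da18 da19
# zpool list tank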
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 22:18:07 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4CE2F1065674 for ; Fri, 19 Dec 2008 22:18:06 +0000 (UTC) (envelope-from bahamasfranks@gmail.com) Received: from yw-out-2324.google.com (yw-out-2324.google.com [74.125.46.28]) by mx1.freebsd.org (Postfix) with ESMTP id EA0CB8FC22 for ; Fri, 19 Dec 2008 22:18:05 +0000 (UTC) (envelope-from bahamasfranks@gmail.com) Received: by yw-out-2324.google.com with SMTP id 9so1138261ywe.13 for ; Fri, 19 Dec 2008 14:18:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:to :subject:cc:in-reply-to:mime-version:content-type :content-transfer-encoding:content-disposition:references; bh=CbFMCaXSnqCyQsMJFg75av8iS/VfrRdzfoCF5WpVPzg=; b=Wuy3add+RzKKcEWelBZc2U7jokIZ+L3xsBttQTruD6Qg8k4xsXUUeHC0fRNyOLxdqd ZCh6GBEJfp4RLnSJ7UHjMjhquYNu1Cs6we5g3J1KRhDiP/lFYwE+oIKOPHGR95hB+sY6 3bR5rhgqsd3G+MC2DtKQOXSaap9422So1DxzU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:to:subject:cc:in-reply-to:mime-version :content-type:content-transfer-encoding:content-disposition :references; b=IF+JLheD6QD7ktu81lulkaBLIo9B3LQpXT3ETxhuV56aCeTeXZLrM6MHWwaQlzPT02 oLvacQ3xs67EaOwJN+EeVX0FtuPlfhF37s8L6Xw62UxD5mzj/B0NGP6LSi3KkhK0JGnO bnBmyVAVp/A8QGjf/uke4u0PTmjdPnDrXHi5g= Received: by 10.100.128.2 with SMTP id a2mr2526692and.158.1229723514081; Fri, 19 Dec 2008 13:51:54 -0800 (PST) Received: by 10.100.4.14 with HTTP; Fri, 19 Dec 2008 13:51:54 -0800 (PST) Message-ID: <539c60b90812191351i6090f24ejb9006471f74f01b9@mail.gmail.com> Date: Fri, 19 Dec 2008 14:51:54 -0700 From: "Steve Franks" To: "Paul B. Mahol" In-Reply-To: <3a142e750812140747r2eb5ebadp7ac2b2c8ae357bae@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <8cb6106e0811241129o642dcf28re4ae177c8ccbaa25@mail.gmail.com> <200812041747.09040.gnemmi@gmail.com> <4938FE44.9090608@FreeBSD.org> <4939133E.2000701@FreeBSD.org> <493CEE90.7050104@FreeBSD.org> <3a142e750812090553l564bff84pe1f02cd1b03090ff@mail.gmail.com> <4943F43B.4060105@incunabulum.net> <3a142e750812131403p31841403ub9d5693278c74111@mail.gmail.com> <4944501E.40900@incunabulum.net> <3a142e750812140747r2eb5ebadp7ac2b2c8ae357bae@mail.gmail.com> Cc: freebsd-fs@freebsd.org, "Bruce M. Simpson" , Bruce M Simpson , freebsd-stable@freebsd.org Subject: Re: ext2fuse: user-space ext2 implementation X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 22:18:07 -0000 On Sun, Dec 14, 2008 at 8:47 AM, Paul B. Mahol wrote: > On 12/14/08, Bruce M Simpson wrote: >> Paul B. Mahol wrote: >>>> Can you please relay this feedback to the authors of ext2fuse? >>>> >>>> As mentioned earlier in the thread, the ext2fuse code could benefit from >>>> UBLIO-ization. Are you or any other volunteers happy to help out here? >>>> >>> >>> Well, first higher priority would be to fix existing bugs. It would be >>> very little >>> gain with user cache, because it is already too much IMHO slow and >>> adding user cache >>> will not make it faster, but that is not port problem. 
>>> >> >> I'm not aware of bugs with ext2fuse itself; my work on the port was >> merely to try to raise awareness that a user-space project for ext2 >> filesystem access existed. >> >> Can you elaborate further on your experience with ext2fuse which seems >> to you to be buggy, i.e. symptoms, root cause analysis etc. ? Have you >> reported these to the author(s)? > > I have read TODO. > >> Have you measured the performance? Is the performance sufficient for the >> needs of an occasional desktop user? > > Performance was not sufficient, and adding user cache will not improve access > speed on first read. > After mounting ext2fs volume (via md(4)) created with e2fsprogs port > and copying data > from ufs to ext2, reading was quite slow. Also ext2fuse after mount > doesnt exits it > is still running displaying debug data - explaining why project > itselfs is in alpha > state. > >> I realise we are largely involved in content-free argument here, however >> the trade-off of ext2fuse vs ext2fs in the FreeBSD kernel source tree, >> is that of a hopefully more actively maintained implementation vs one >> which is not maintained at all, and any alternatives for FreeBSD users >> would be welcome. > > Project itself doesnt look very active, but I may be wrong. It is in alpha state > as reported on SF. > IMHO it is better to maintain our own because it is in better shape, but I'm not > intersted in ext* as developer. AFAIK our ext* either barfs or corrupts ext3, and since linux is pretty much all using ext3 these days, we're stuck in read-only for ext3, which is rather undesirable, methinks (seems everyone's using fuse's ntfs for this same reason [which is stable, however]). Which is not to say stealing the ext3 (journal?) implementation and putting it in our code isn't a better choice, I'm just pointing out there is no good choice right now... Steve From owner-freebsd-fs@FreeBSD.ORG Fri Dec 19 22:23:27 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B80231065677 for ; Fri, 19 Dec 2008 22:23:27 +0000 (UTC) (envelope-from stb@lassitu.de) Received: from koef.zs64.net (koef.zs64.net [212.12.50.230]) by mx1.freebsd.org (Postfix) with ESMTP id 5FC838FC25 for ; Fri, 19 Dec 2008 22:23:27 +0000 (UTC) (envelope-from stb@lassitu.de) Received: from localhost by koef.zs64.net (8.14.3/8.14.3) with ESMTP id mBJLkbnU089166 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Fri, 19 Dec 2008 22:46:37 +0100 (CET) (envelope-from stb@lassitu.de) (authenticated as stb) Message-Id: From: Stefan Bethke To: Doug Rabson In-Reply-To: <9461581F-F354-486D-961D-3FD5B1EF007C@rabson.org> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Fri, 19 Dec 2008 22:46:37 +0100 References: <9461581F-F354-486D-961D-3FD5B1EF007C@rabson.org> X-Mailer: Apple Mail (2.930.3) Cc: freebsd-fs@freebsd.org Subject: Re: Booting from ZFS raidz X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 19 Dec 2008 22:23:27 -0000 Am 17.12.2008 um 19:25 schrieb Doug Rabson: > I've been working on adding raidz and raidz2 support to the boot > code and I have a patch which could use some testing if anyone here > is interested. 
> This http://people.freebsd.org/~dfr/raidzboot-17122008.diff > adds support for raidz and raidz2. The easiest way to prepare a > bootable pool is to put a GPT boot partition on each disk that will > make up the raidz pool and install gptzfsboot on the boot partition > of every drive. Not sure I did things the right way, and it doesn't appear to be working correctly. I'm trying this in VMware Fusion, with three SCSI disks, which I configured like this: Updated sources yesterday, then applied the patch and added LOADER_ZFS_SUPPORT?=YES to make.conf, then make buildworld buildkernel. Created a GPT label and one partition on each of the three drives: gpart create -s gpt $1 gpart add -b 34 -s 128 -t freebsd-boot $1 gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $1 gpart add -b 512 -s 41900000 -t freebsd-zfs $1 gpart list $1 (The disks are 20GB each) root@freebsd-current:~# gpart list da3 Geom name: da3 fwheads: 255 fwsectors: 63 last: 41943006 first: 34 entries: 128 scheme: GPT Providers: 1. Name: da3p1 Mediasize: 65536 (64K) Sectorsize: 512 Mode: r0w0e0 rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: (null) length: 65536 offset: 17408 type: freebsd-boot index: 1 2. Name: da3p2 Mediasize: 21452800000 (20G) Sectorsize: 512 Mode: r1w1e1 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: (null) length: 21452800000 offset: 262144 type: freebsd-zfs index: 2 Consumers: 1. Name: da3 Mediasize: 21474836480 (20G) Sectorsize: 512 Mode: r1w1e2 Created a raidz pool: # zpool create tank raidz da1p2 da2p2 da3p2 Populated the filesystem with # cd /usr/src && make installworld installkernel distribution DESTDIR=/tank Added zfs_load="YES" and vfs.root.mountfrom="zfs:tank" to loader.conf When trying to boot, I get a number of "error 4 lba xxx", then "ZFS: i/o error - all block copies are unavailable". The loader starts up, but cannot load /boot/loader.conf or /boot/device.hints. The LBA blocks are all towards the end of the disks, in the 4294626000 and up range. Booted again from a different disk and ran zpool scrub; waited for that to complete without errors. Next boot try now gives me (transcribed by hand): ZFS: i/o error - all block copies unavailable ZFS: can't read MOS ZFS: unexpected object set type lld ZFS: unexpected object set type lld FreeBSD/i386 boot Default: tank:/boot/kernel/kernel boot: ZFS: unexpected object set type lld FreeBSD/i386 boot Default: tank:/boot/kernel/kernel boot: Booting again from a different disk, running zpool status reveals no errors. Running scrub again, then next boot try. root@freebsd-current:~# zpool scrub tank root@freebsd-current:~# zpool status pool: tank state: ONLINE scrub: scrub in progress for 0h0m, 11.18% done, 0h0m to go config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 da1p2 ONLINE 0 0 0 da2p2 ONLINE 0 0 0 da3p2 ONLINE 0 0 0 errors: No known data errors root@freebsd-current:~# zpool status pool: tank state: ONLINE scrub: scrub completed after 0h0m with 0 errors on Fri Dec 19 22:40:18 2008 config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 da1p2 ONLINE 0 0 0 da2p2 ONLINE 0 0 0 da3p2 ONLINE 0 0 0 errors: No known data errors On the third boot try, same errors as on the second one.
Stefan -- Stefan Bethke Fon +49 170 346 0140 From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 03:40:04 2008 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CD7071065673 for ; Sat, 20 Dec 2008 03:40:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A0CBE8FC0C for ; Sat, 20 Dec 2008 03:40:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id mBK3e3cg039992 for ; Sat, 20 Dec 2008 03:40:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id mBK3e3Fm039991; Sat, 20 Dec 2008 03:40:03 GMT (envelope-from gnats) Date: Sat, 20 Dec 2008 03:40:03 GMT Message-Id: <200812200340.mBK3e3Fm039991@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: "Garrett Cooper" Cc: Subject: Re: bin/129760: after 'umount -f' of a stale NFS share FreeBSD locks up X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Garrett Cooper List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2008 03:40:04 -0000 The following reply was made to PR kern/129760; it has been noted by GNATS. From: "Garrett Cooper" To: "Eugene M. Zheganin" Cc: freebsd-gnats-submit@freebsd.org Subject: Re: bin/129760: after 'umount -f' of a stale NFS share FreeBSD locks up Date: Fri, 19 Dec 2008 19:31:38 -0800 This has been an outstanding issue with FreeBSD that's only been fixed recently in OSX. Maybe it deserves a backport? -Garrett From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 14:23:10 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D6D581065675 for ; Sat, 20 Dec 2008 14:23:10 +0000 (UTC) (envelope-from dfr@rabson.org) Received: from itchy.rabson.org (unknown [IPv6:2002:50b1:e8f2:1::143]) by mx1.freebsd.org (Postfix) with ESMTP id 914AF8FC2C for ; Sat, 20 Dec 2008 14:23:10 +0000 (UTC) (envelope-from dfr@rabson.org) Received: from [IPv6:2001:470:909f:1:21b:63ff:feb8:5abc] (unknown [IPv6:2001:470:909f:1:21b:63ff:feb8:5abc]) by itchy.rabson.org (Postfix) with ESMTP id 0405A3FA9; Sat, 20 Dec 2008 14:23:08 +0000 (GMT) Message-Id: <2F0DF92C-4240-48D4-9A5F-8B826D6D6E95@rabson.org> From: Doug Rabson To: Stefan Bethke In-Reply-To: Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Sat, 20 Dec 2008 14:23:08 +0000 References: <9461581F-F354-486D-961D-3FD5B1EF007C@rabson.org> X-Mailer: Apple Mail (2.930.3) Cc: freebsd-fs@freebsd.org Subject: Re: Booting from ZFS raidz X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2008 14:23:10 -0000 On 19 Dec 2008, at 21:46, Stefan Bethke wrote: > Am 17.12.2008 um 19:25 schrieb Doug Rabson: > >> I've been working on adding raidz and raidz2 support to the boot >> code and I have a patch which could use some testing if anyone here >> is interested. 
>> This http://people.freebsd.org/~dfr/raidzboot-17122008.diff >> adds support for raidz and raidz2. The easiest way to prepare a >> bootable pool is to put a GPT boot partition on each disk that will >> make up the raidz pool and install gptzfsboot on the boot partition >> of every drive. > > Not sure I did things the right way, and it doesn't appear to be > working correctly. I'm trying this in VMware Fusion, with three SCSI > disks, which I configured like this: > > Updated sources yesterday, then applied the patch and added > LOADER_ZFS_SUPPORT?=YES to make.conf, then make buildworld > buildkernel. > > Created a GPT label and one partition on each of the three drives: > > gpart create -s gpt $1 > gpart add -b 34 -s 128 -t freebsd-boot $1 > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $1 > gpart add -b 512 -s 41900000 -t freebsd-zfs $1 > gpart list $1 > > (The disks are 20GB each) > > root@freebsd-current:~# gpart list da3 > ... > > Created a raidz pool: > # zpool create tank raidz da1p2 da2p2 da3p2 > > Populated the filesystem with > # cd /usr/src && make installworld installkernel distribution > DESTDIR=/tank > > Added zfs_load="YES" and vfs.root.mountfrom="zfs:tank" to loader.conf > > > When trying to boot, I get a number of "error 4 lba xxx", then "ZFS: > i/o error - all block copies are unavailable". The loader starts up, > but cannot load /boot/loader.conf or /boot/device.hints. The LBA > blocks are all towards the end of the disks, in the 4294626000 and > up range. I did my testing in vmware too with a slightly different configuration (4x2G virtual disks in various arrangements). I just tried to reproduce your exact sequence of steps and it worked fine up to the mountroot prompt. I don't think ZFS likes having the root filesystem at the root of the pool. A few things to check: 1. Are you absolutely sure you are using gptzfsboot built with the patch - the steps you list above show you building it but not installing it on the system which is initialising the pool. 2. Do you have the changes from r186243? This might cause something like your problem - there was an overflow in the code which looked up a ZFS object from an inode number. 3. I'm a little confused as to how you are getting LBA numbers above 4G - a 20G virtual disk should only have 40 million 512-byte blocks.
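For what it's worth, the figures in the report point the same way as item 2: a 20G virtual disk has only about 42 million 512-byte sectors, while the reported LBAs sit just under 2^32, which looks like a 32-bit truncation rather than a real sector number. Plain arithmetic, using nothing from the thread beyond the two quoted numbers:

$ echo '20 * 1024^3 / 512' | bc
41943040
$ echo '2^32 - 4294626000' | bc
341296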
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 15:01:28 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8E914106564A for ; Sat, 20 Dec 2008 15:01:28 +0000 (UTC) (envelope-from stb@lassitu.de) Received: from koef.zs64.net (koef.zs64.net [212.12.50.230]) by mx1.freebsd.org (Postfix) with ESMTP id 2B0858FC21 for ; Sat, 20 Dec 2008 15:01:27 +0000 (UTC) (envelope-from stb@lassitu.de) Received: from localhost by koef.zs64.net (8.14.3/8.14.3) with ESMTP id mBKF1PZ7021553 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Sat, 20 Dec 2008 16:01:26 +0100 (CET) (envelope-from stb@lassitu.de) (authenticated as stb) Message-Id: <87E89284-D3BF-4A5A-B6F7-C30709A3F2D9@lassitu.de> From: Stefan Bethke To: Doug Rabson In-Reply-To: <2F0DF92C-4240-48D4-9A5F-8B826D6D6E95@rabson.org> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Sat, 20 Dec 2008 16:01:25 +0100 References: <9461581F-F354-486D-961D-3FD5B1EF007C@rabson.org> <2F0DF92C-4240-48D4-9A5F-8B826D6D6E95@rabson.org> X-Mailer: Apple Mail (2.930.3) Cc: freebsd-fs@freebsd.org Subject: Re: Booting from ZFS raidz X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2008 15:01:28 -0000 Am 20.12.2008 um 15:23 schrieb Doug Rabson: > On 19 Dec 2008, at 21:46, Stefan Bethke wrote: > >> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $1 > 1. Are you absolutely sure you are using gptzfsboot built with the > patch - the steps you list above show you building it but not > installing it on the system which is initialising the pool. Ugh, sorry. That is in fact the old version from before the patch. I will try again tonight, with updated sources and the right gptzfsboot. 
Thanks, Stefan -- Stefan Bethke Fon +49 170 346 0140 From owner-freebsd-fs@FreeBSD.ORG Sat Dec 20 15:06:30 2008 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AC9B21065673 for ; Sat, 20 Dec 2008 15:06:30 +0000 (UTC) (envelope-from dfr@rabson.org) Received: from itchy.rabson.org (unknown [IPv6:2002:50b1:e8f2:1::143]) by mx1.freebsd.org (Postfix) with ESMTP id 58AB58FC12 for ; Sat, 20 Dec 2008 15:06:30 +0000 (UTC) (envelope-from dfr@rabson.org) Received: from [IPv6:2001:470:909f:1:21b:63ff:feb8:5abc] (unknown [IPv6:2001:470:909f:1:21b:63ff:feb8:5abc]) by itchy.rabson.org (Postfix) with ESMTP id 3A0F93F8F; Sat, 20 Dec 2008 15:06:29 +0000 (GMT) Message-Id: <4AC3BEB2-B47E-4280-85E1-C72891412D09@rabson.org> From: Doug Rabson To: Stefan Bethke In-Reply-To: <87E89284-D3BF-4A5A-B6F7-C30709A3F2D9@lassitu.de> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes Content-Transfer-Encoding: 7bit Mime-Version: 1.0 (Apple Message framework v930.3) Date: Sat, 20 Dec 2008 15:06:28 +0000 References: <9461581F-F354-486D-961D-3FD5B1EF007C@rabson.org> <2F0DF92C-4240-48D4-9A5F-8B826D6D6E95@rabson.org> <87E89284-D3BF-4A5A-B6F7-C30709A3F2D9@lassitu.de> X-Mailer: Apple Mail (2.930.3) Cc: freebsd-fs@freebsd.org Subject: Re: Booting from ZFS raidz X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 20 Dec 2008 15:06:30 -0000 On 20 Dec 2008, at 15:01, Stefan Bethke wrote: > Am 20.12.2008 um 15:23 schrieb Doug Rabson: > >> On 19 Dec 2008, at 21:46, Stefan Bethke wrote: >> >>> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $1 > >> 1. Are you absolutely sure you are using gptzfsboot built with the >> patch - the steps you list above show you building it but not >> installing it on the system which is initialising the pool. > > Ugh, sorry. That is in fact the old version from before the patch. I > will try again tonight, with updated sources and the right gptzfsboot. You should be able to re-install gptzfsboot without changing anything else using something like: # dd if=/boot/gptzfsboot of=dap1 conv=osync
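An equivalent way to refresh the boot code, reusing the gpart invocation from earlier in the thread rather than raw dd -- a sketch only, with da1 standing in for whichever disk is being updated:

# gpart bootcode -p /boot/gptzfsboot -i 1 da1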