From: Taylor
Date: Wed, 4 Apr 2012 20:42:52 -0700
To: Alexander Leidinger
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS extra space overhead for ashift=12 vs ashift=9 raidz2 pool?
Message-Id: <88EC48E8-6E77-417B-9CC1-1812617A57D1@enone.net>
In-Reply-To: <20120402133721.Horde.KOqoS5jmRSRPeY9xDWLhHWA@webmail.leidinger.net>
References: <45654FDD-A20A-47C8-B3B5-F9B0B71CC38B@enone.net> <20120324174218.00005f63@unknown> <20120402133721.Horde.KOqoS5jmRSRPeY9xDWLhHWA@webmail.leidinger.net>

Alex,

I think you are correct. It occurred to me some time after reading your
original email that the sector-size problem applies to the filesystem's
metadata as well as to the data. As I previously stated, the overhead of
the filesystem goes from 2.59% to 8.06% when increasing the sector size
from 512 B to 4 KiB, an increase of 3.11x, which fits within your 8x
worst-case observation (presumably not every metadata record is small
enough to suffer the full 8x). Likewise, this thread also seems to
confirm that much of the metadata takes up < 512 B and that there is no
real attempt to optimize it for a 4K sector size:

http://mail.opensolaris.org/pipermail/zfs-discuss/2011-October/049959.html
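
To make the arithmetic concrete, here is the rough back-of-the-envelope
model I have in mind, as a small Python sketch. The record sizes are made
up for illustration (I have not pulled real block sizes out of zdb); the
point is just that a record only pays the full 8x penalty when it would
have fit in a single 512 B sector:

#!/usr/bin/env python
# Toy model: every record is rounded up to a whole number of sectors.
# Record sizes below are hypothetical, for illustration only.

def allocated(record_size, sector_size):
    """Bytes actually consumed once record_size is rounded up to whole sectors."""
    sectors = (record_size + sector_size - 1) // sector_size
    return sectors * sector_size

for record in (512, 1536, 4096, 131072):          # 512 B .. 128 KiB records
    ratio = allocated(record, 4096) / float(allocated(record, 512))
    print("%6d-byte record: ashift=12 uses %.2fx the space of ashift=9" % (record, ratio))

# Output:
#    512-byte record: ashift=12 uses 8.00x the space of ashift=9
#   1536-byte record: ashift=12 uses 2.67x the space of ashift=9
#   4096-byte record: ashift=12 uses 1.00x the space of ashift=9
# 131072-byte record: ashift=12 uses 1.00x the space of ashift=9

Since the real pool holds large data blocks plus metadata of mixed sizes,
an overall figure landing around 3x rather than the full 8x seems
plausible to me.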

I ended up using the 512 B sector size for the array, since I valued the
extra space more than the extra bandwidth. :)

Thanks again for your response,
-Taylor

On Apr 2, 2012, at 4:37 AM, Alexander Leidinger wrote:

> Quoting Taylor (from Sat, 24 Mar 2012 11:41:20 -0700):
>
>> Alex,
>>
>> Thank you for your response. I'm not particularly concerned about the
>> overhead of file fragmentation, as most of the space will be taken by
>> fairly large files (tens of GiB).
>>
>> My original question concerned the amount of space reported available
>> by zfs for a freshly-created *empty* raidz2 filesystem.
>>
>> To re-iterate, I find 2.79 TiB more space available with ashift=9
>> (49.62 TiB) vs ashift=12 (46.83 TiB) for a new 16-disk raidz2 pool of
>> 3.64 TiB drives.
>
> I do not know about the actual amount, but at least some overhead is
> not surprising to me.
>
> You have some meta data in ZFS (file permissions, ACLs, checksums, ...).
> This meta data is often much less than 4k in size, but you need to
> allocate at least one block for it. If we assume (worst case) that the
> meta data would mostly fit into 512 bytes but you always use a 4k
> sector, it should be clear that each meta data unit uses 8 times more
> space on disk than necessary.
>
> Bye,
> Alexander.
>
> --
> Let me put it this way: today is going to be a learning experience.
>
> http://www.Leidinger.net    Alexander @ Leidinger.net: PGP ID = B0063FE7
> http://www.FreeBSD.org       netchild @ FreeBSD.org  : PGP ID = 72077137