From: Marco van Lienen <marco+freebsd-current@lordsith.net>
To: freebsd-current@FreeBSD.org
Date: Sat, 17 Jul 2010 12:14:59 +0200
Subject: Re: [HEADSUP] ZFS version 15 committed to head

On Tue, Jul 13, 2010 at 04:02:42PM +0200, you (Martin Matuska) sent the following to the -current list:
> Dear community,
>
> Feel free to test everything and don't forget to report any bugs found.

When I create a raidz pool out of 3 equally sized hard drives (3x 2 TB WD Caviar Green drives), the available space reported by zpool and by zfs is VERY different (not just the known differences).

On a 9.0-CURRENT amd64 box:

# uname -a
FreeBSD trinity.lordsith.net 9.0-CURRENT FreeBSD 9.0-CURRENT #1: Tue Jul 13 21:58:14 UTC 2010     root@trinity.lordsith.net:/usr/obj/usr/src/sys/trinity  amd64
# zpool create pool1 raidz ada2 ada3 ada4
# zpool list pool1
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool1  5.44T   147K  5.44T     0%  ONLINE  -

dmesg output for the ada drives:

ada2 at ahcich4 bus 0 scbus5 target 0 lun 0
ada2: ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3 at ahcich5 bus 0 scbus6 target 0 lun 0
ada3: ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada4 at ahcich6 bus 0 scbus7 target 0 lun 0
ada4: ATA-8 SATA 2.x device
ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada4: Command Queueing enabled
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)

zfs list, however, only shows:

# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  91.9K  3.56T  28.0K  /pool1

I just lost the space of an entire hdd!

To rule out a possible drive issue I also created a raidz pool backed by three 65 MB files; the commands and output follow below.
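First, to put a number on "an entire hdd": here is my back-of-the-envelope arithmetic based on the 1907729 MB drive size from the dmesg output above. This is a hand calculation with bc(1); the results in the comments are mine, not zpool/zfs output:

# raw capacity of all three drives together, in TiB
echo '3 * 1907729 / 1024 / 1024' | bc -l    # ~5.46, close to the 5.44T from zpool list
# capacity of only two of the three drives, in TiB
echo '2 * 1907729 / 1024 / 1024' | bc -l    # ~3.64, close to the 3.56T from zfs list

So zpool list appears to count all three drives while zfs list reports roughly one full drive less, and I would expect the two tools to agree. Now the file-based test: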
# dd if=/dev/zero of=/file1 bs=1m count=65
# dd if=/dev/zero of=/file2 bs=1m count=65
# dd if=/dev/zero of=/file3 bs=1m count=65
# zpool create test raidz /file1 /file2 /file3
# zpool list test
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test   181M   147K   181M     0%  ONLINE  -
# zfs list test
NAME    USED  AVAIL  REFER  MOUNTPOINT
test   91.9K  88.5M  28.0K  /test

When I create a non-redundant storage pool using the same 3 files or 3 drives, the available space reported by zfs is what I expect to see, so it looks like only raidz storage pools show this very odd behavior.

This doesn't have much to do with the ZFS v15 bits committed to -HEAD, since I see the exact same behavior on an 8.0-RELEASE-p2 i386 box with ZFS v14.

A friend of mine is running OpenSolaris build 117, although he created his raidz pool on an even older build. His raidz pool also uses 3 equally sized drives (3x 2 TB) and shows:

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2
% df -h pool2
Filesystem   size   used  avail capacity  Mounted on
pool2        5.4T   3.2T   2.1T    61%    /export/pool2

To run further tests he also created a test raidz pool using three 65 MB files:

% zfs list test2
NAME    USED  AVAIL  REFER  MOUNTPOINT
test2  73.5K   149M    21K  /test2

So on OpenSolaris build 117 the available space is what I'm expecting to see, whereas on FreeBSD 9.0-CURRENT amd64 and 8.0-RELEASE-p2 i386 it is not.

Is anyone else seeing the same issue?

Cheers,
marco
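P.S. For completeness, the non-redundant comparison mentioned above was along these lines (commands reconstructed from memory; the pool name here is made up):

# zpool create teststripe /file1 /file2 /file3    # plain striped pool, no raidz keyword
# zfs list teststripe                             # AVAIL came out close to the full 3x 65 MB
# zpool destroy teststripe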