From: Dan Langille <dan@langille.org>
Organization: The FreeBSD Diary
Date: Mon, 02 Aug 2010 19:27:27 -0400
To: freebsd-stable@freebsd.org
Subject: Re: Where's the space? raidz2
Message-ID: <4C57545F.2050907@langille.org>
In-Reply-To: <4C5750A4.7050104@langille.org>

On 8/2/2010 7:11 PM, Dan Langille wrote:
> I recently altered an existing raidz2 pool, growing its 7 vdevs from
> about 931 GB each to 1.81 TB each. The existing pool had used only
> half of each HDD; I wanted to go to using [almost] all of each HDD.
>
> I offline'd each vdev, adjusted the HDD partitions using gpart, then
> replaced the vdev. After letting the resilver finish, I did the next
> vdev.
>
> The space available after this process did not go up as I expected. I
> have about 4 TB in the pool, not the 8 or 9 TB I expected.

This fixed it:

# df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
/dev/mirror/gm0s1a    989M    508M    402M    56%    /
devfs                 1.0K    1.0K      0B   100%    /dev
/dev/mirror/gm0s1e    3.9G    500K    3.6G     0%    /tmp
/dev/mirror/gm0s1f     58G    4.6G     48G     9%    /usr
/dev/mirror/gm0s1d    3.9G    156M    3.4G     4%    /var
storage               512G    1.7G    510G     0%    /storage
storage/pgsql         512G    1.7G    510G     0%    /storage/pgsql
storage/bacula        3.7T    3.2T    510G    87%    /storage/bacula
storage/Retored       510G     39K    510G     0%    /storage/Retored

# zpool export storage
# zpool import storage

# df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
/dev/mirror/gm0s1a    989M    508M    402M    56%    /
devfs                 1.0K    1.0K      0B   100%    /dev
/dev/mirror/gm0s1e    3.9G    500K    3.6G     0%    /tmp
/dev/mirror/gm0s1f     58G    4.6G     48G     9%    /usr
/dev/mirror/gm0s1d    3.9G    156M    3.4G     4%    /var
storage               5.0T    1.7G    5.0T     0%    /storage
storage/Retored       5.0T     39K    5.0T     0%    /storage/Retored
storage/bacula        8.2T    3.2T    5.0T    39%    /storage/bacula
storage/pgsql         5.0T    1.7G    5.0T     0%    /storage/pgsql

-- 
Dan Langille - http://langille.org/
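
For reference, the per-disk step quoted above looks roughly like the
following. This is only a sketch: the disk name (ada1), partition index
(2), and GPT label (gpt/disk01) are placeholders, not the names actually
used on this pool.

# zpool offline storage gpt/disk01            (gpt/disk01 is a placeholder label)
# gpart delete -i 2 ada1                      (drop the old half-size partition)
# gpart add -t freebsd-zfs -l disk01 ada1     (recreate it using the rest of the disk)
# zpool replace storage gpt/disk01
# zpool status storage                        (wait for the resilver to finish)

Repeat for each of the seven disks, one at a time, so the raidz2 never
has more than one member out of the pool while resilvering.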
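
The export/import forces ZFS to re-read the (now larger) partition sizes
and grow the pool. Depending on the zpool version, the same result should
be obtainable without exporting: either set the autoexpand property
before doing the replacements, or expand each device in place afterwards,
for example (again with a placeholder label):

# zpool set autoexpand=on storage
# zpool online -e storage gpt/disk01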