From owner-freebsd-stable@FreeBSD.ORG Fri Jan 11 15:32:29 2013
Date: Fri, 11 Jan 2013 16:32:19 +0100
From: "Ronald Klop" <ronald-freebsd8@klop.yi.org>
To: freebsd-stable@freebsd.org
Subject: Re: Deleting the top-level ZFS file system (without affecting its children)
List-Id: Production branch of FreeBSD source code
Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes

On Fri, 11 Jan 2013 16:11:32 +0100, xenophon+freebsd wrote:

> When I originally set up ZFS on my server, I used the topmost file
> system for the root file system. Last night, I used "zfs send" and "zfs
> recv" to create a new root file system named "zroot/root". Then, I
> adjusted the mount points in single-user mode. Based on my reading of
> the contents of src/sys/boot/zfs/ and src/sys/boot/i386/zfsboot/
> (specifically the zfs_mount() and zfs_get_root() functions in
> zfsimpl.c), I ran "zpool set bootfs=zroot/root zroot". This should
> allow the boot program to find the new root file system.
>
> Now, I'd like to delete the old root file system and return its storage
> to the pool. Clearly, "rm -rf /oldroot/*" wouldn't return the space
> already allocated to the old root file system, but I don't want to run
> "zfs destroy zroot", as that will probably affect its children (the
> whole rest of the pool). At this point, I suspect that I'd have to
> re-create the pool to get the desired configuration.
>
> Is my understanding correct?
>
> Right now, the pool's datasets look something like the following:
>
> xenophon@cinep001bsdgw:~> zfs list
> NAME         USED  AVAIL  REFER  MOUNTPOINT
> zroot       75.5G   143G  1.04G  /oldroot
> zroot/root  1.04G   143G  1.03G  /
> zroot/usr   28.6G   143G  10.2G  /usr
> (etc.)
>
> Best wishes,
> Matthew

Why would "rm -rf /oldroot/*" not return all the allocated space? I can
only think of snapshots keeping the space allocated, but you can remove
those too. Can you elaborate on that?

Ronald.
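For reference, the cleanup discussed in this thread could be sketched roughly as below. This is a hedged sketch, not a tested recipe: the dataset names match the listing quoted above, the snapshot name is purely illustrative, and the commands should only be run against a live pool after double-checking what they will touch.

```shell
# Snapshots of the top-level dataset pin blocks even after the live
# files are deleted, so check for them first.
zfs list -t snapshot -r zroot

# Remove the files belonging to the old root dataset itself. Child
# datasets (zroot/root, zroot/usr, ...) are separate filesystems and
# are not reached by this rm.
rm -rf /oldroot/*

# Destroy any lingering snapshots of the top-level dataset that still
# hold space (snapshot name below is hypothetical).
zfs destroy zroot@old-root-snapshot

# Verify that USED on the top-level dataset has dropped. Note that the
# pool's top-level dataset cannot itself be destroyed without
# destroying the whole pool, so emptying it is the alternative to
# re-creating the pool.
zfs list -o name,used,avail,refer,mountpoint -r zroot
```

The design point is that "rm -rf" plus snapshot removal reclaims the space referenced by the old root without touching its children, which is what the reply above is getting at.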