Date:      Tue, 2 Aug 2011 01:01:35 +1000
From:      andrew clarke <>
To:        Dan Nelson <>
Cc:        Dick Hoogendijk <>, FreeBSD Questions <>
Subject:   Re: larger disk for a zfs pool
Message-ID:  <>
In-Reply-To: <>
References:  <> <>

On Mon 2011-08-01 09:37:55 UTC-0500, Dan Nelson wrote:

> In the last episode (Aug 01), Dick Hoogendijk said:
> > OK, my FreeBSD system boots from ZFS.  With Solaris, getting larger
> > disks for a pool was quite easy: simply replace one disk of a mirror
> > with a larger one, wait for the resilvering, and after this replace the
> > second one with a larger disk and wait for the resilvering again.
> > That's it.  Been there, done that.  But my feeling tells me it is not
> > that simple for a FreeBSD ZFS root system, or is it?
> Should be the same procedure.  Make sure you either use "zpool online -e"
> when swapping in the new disks, or that you have the zpool autoexpand=on
> attribute set.
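
For reference, the procedure Dan describes might look like the
following on ZFS v28 (a sketch only -- the pool name "tank" and the
device names ada1/ada2/ada3/ada4 are made up for illustration):

  # Optionally let the pool grow automatically when its devices do:
  zpool set autoexpand=on tank

  # Swap the first mirror member for a larger disk and wait for
  # the resilver to finish before touching the second one:
  zpool replace tank ada1 ada3
  zpool status tank

  # Then swap the second member the same way:
  zpool replace tank ada2 ada4
  zpool status tank

  # Without autoexpand=on, claim the new space explicitly:
  zpool online -e tank ada3
  zpool online -e tank ada4
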

On my FreeBSD 8.2-RELEASE machine, "-e" is an "invalid option" and
"autoexpand" an "invalid property".  I suspect these are ZFS v28
features and are not available in the ZFS v15 that ships with FreeBSD
8.2-RELEASE.

Judging from the behaviour I saw while experimenting with ZFS in a
virtual machine running 8.2-RELEASE, it was possible to replace all
drives in a ZFS mirror with larger ones and increase the size of the
pool, but (after resilvering) it required either a reboot or (if I
recall correctly):
  zpool export tank
  zpool import tank

for the increased size to become available.  So I assume "autoexpand"
behaviour was effectively implicit in ZFS v15.

However, this was not with FreeBSD booting from 'tank'.  Running
"zpool export tank" will fail with a "Device busy" error if the "tank"
pool is the boot device.
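
For a busy boot pool you can at least check whether the pool actually
grew without exporting it, and fall back to a reboot if it did not
(again a sketch; the pool name "tank" is assumed):

  # Compare the SIZE column before and after the disk swap:
  zpool list tank

  # If the extra space has not appeared and "zpool export tank"
  # fails with "Device busy" because tank is the boot pool,
  # a reboot is the remaining way to re-taste the devices:
  shutdown -r now
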

It might be worthwhile experimenting on a spare (or virtual) machine
to get a definitive answer, especially since there seem to be
differences depending on the FreeBSD version.

