Date:      Tue, 5 Jun 2012 13:26:08 +0200
From:      "Patrick M. Hausen" <hausen@punkt.de>
To:        "freebsd-stable@freebsd.org Mailing List" <freebsd-stable@freebsd.org>
Subject:   ZFS autoexpand when there are 2 raidz2 vdevs
Message-ID:  <3CC42829-EED0-4F40-A046-7191498B1850@punkt.de>

Hi, all,

during the last couple of years I have occasionally increased the capacity
of raidz2-based zpools by replacing one disk at a time and resilvering
after each replacement. After replacing the final disk and rebooting (I guess
zpool export & zpool import would have done the trick, too), the capacity
of the filesystem on top of the pool grew according to the size of the
new disks. All of these systems had a pool built on one single vdev.

Last week I exchanged all disks of one vdev that is part of a two-vdev
zpool. According to the Solaris documentation I found, that should be
possible. I had always assumed vdevs were more or less independent of
each other.

My observations:

During resilvering the activity LEDs of all 12 disks were showing heavy
load, not only those of the 6 disks that are part of the vdev in
question.

After exchanging all 6 disks, the capacity stayed the same. I tried

zpool export, zpool import
reboot
zpool scrub

to no avail.
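
Spelled out, with the actual pool name, that was roughly:

	zpool export sx40 && zpool import sx40
	reboot
	zpool scrub sx40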

datatomb2# zpool status sx40
  pool: sx40
 state: ONLINE
 scan: scrub repaired 0 in 3h32m with 0 errors on Sat Jun  2 00:41:38 2012
config:

	NAME                 STATE     READ WRITE CKSUM
	sx40                 ONLINE       0     0     0
	  raidz2-0           ONLINE       0     0     0
	    gpt/sx40-disk0   ONLINE       0     0     0
	    gpt/sx40-disk1   ONLINE       0     0     0
	    gpt/sx40-disk2   ONLINE       0     0     0
	    gpt/sx40-disk3   ONLINE       0     0     0
	    gpt/sx40-disk4   ONLINE       0     0     0
	    gpt/sx40-disk5   ONLINE       0     0     0
	  raidz2-1           ONLINE       0     0     0
	    gpt/sx40-disk6   ONLINE       0     0     0
	    gpt/sx40-disk7   ONLINE       0     0     0
	    gpt/sx40-disk8   ONLINE       0     0     0
	    gpt/sx40-disk9   ONLINE       0     0     0
	    gpt/sx40-disk10  ONLINE       0     0     0
	    gpt/sx40-disk11  ONLINE       0     0     0

errors: No known data errors

datatomb2# zpool get all sx40
NAME  PROPERTY       VALUE       SOURCE
sx40  size           10.9T       -
sx40  capacity       78%         -
sx40  altroot        -           default
sx40  health         ONLINE      -
sx40  guid           1478259715706579670  default
sx40  version        28          default
sx40  bootfs         -           default
sx40  delegation     on          default
sx40  autoreplace    off         default
sx40  cachefile      -           default
sx40  failmode       wait        default
sx40  listsnapshots  off         default
sx40  autoexpand     on          local
sx40  dedupditto     0           default
sx40  dedupratio     1.00x       -
sx40  free           2.31T       -
sx40  allocated      8.57T       -
sx40  readonly       off         -

The first 6 disks, which make up raidz2-0, are now 2 TB disks, not 1 TB.
The gpt partitions *are* about 2 TB in size.
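
For completeness, this is the kind of check I mean (da0 is just an
example device name, the actual providers on this box differ):

	gpart show -p da0     # per-disk view with provider names and sizes
	gpart show -l         # show the GPT labels instead of partition types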

What am I missing? Any hints welcome. I do have the hardware
to build another enclosure with six 2 TB and six 1 TB drives,
which I planned to hook up to another server. Of course I could
connect it to this one first, build a second pool, and copy over the
data ... but that is what I was trying to avoid in the first place ;-)

Thanks in advance,
Patrick
-- 
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
info@punkt.de       http://www.punkt.de
Gf: Jürgen Egeling      AG Mannheim 108285