Date:      Tue, 7 Jul 2009 12:56:14 -0700
From:      "Mahlon E. Smith" <mahlon@martini.nu>
To:        freebsd-stable@freebsd.org
Subject:   ZFS: drive replacement performance
Message-ID:  <20090707195614.GA24326@martini.nu>




I've got a 9-drive SATA raidz1 array, started at version 6 and upgraded
to version 13.  I had an apparent drive failure, and then at some point
a kernel panic (unrelated to ZFS).  The reboot caused the device numbers
to shuffle, so I did an 'export/import' to re-read the metadata and get
the array back up.

Once I swapped drives, I issued a 'zpool replace'.
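
For reference, that sequence was roughly the following -- the replace
arguments are reconstructed from the status output below (old device by
its GUID, new disk da8), so the exact invocation may have differed:

% zpool export store
% zpool import store
% zpool replace store 2025342973333799752 da8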

That was 4 days ago now.  The progress in a 'zpool status' looks like
this, as of right now:

 scrub: resilver in progress for 0h0m, 0.00% done, 2251h0m to go

... which is a little concerning, since a) it appears to have not moved
since I started it, and b) I'm in a DEGRADED state until it finishes...
if it finishes.

So, I reach out to the list!

 - Is the resilver progress notification in a known weird state under
   FreeBSD?

 - Anything I can do to kick this in the pants?  Tuning params?  (See
   the sysctl sketch just after this list.)

 - This was my first drive failure under ZFS -- anything I should have
   done differently?  Such as NOT doing the export/import? (Not sure
   what else I could have done there.)
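
In case it helps with the tuning question: the knobs this build actually
exposes can be listed straight from sysctl.  Treat the commands below as
a sketch -- the specific tunables vary quite a bit between ZFS versions,
and the second one assumes the ARC kstats are exported on 7.2:

% sysctl vfs.zfs
% sysctl kstat.zfs.misc.arcstats.size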


Some additional info is below.  Drives are at about 20% busy, according
to vmstat, so there seems to be bandwidth to spare.
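
For a second opinion on per-device utilization, gstat or 'iostat -x' can
be run alongside the resilver, e.g.:

% gstat
% iostat -x -w 5 da0 da2 da8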

This is a FreeBSD 7.2-STABLE system from the end of May -- 32-bit, 2G of
RAM.  I have the luxury of this being a test machine (for exactly this
sort of thing), so I'm willing to try whatever without worrying about
production data or SLAs.  :)

--
Mahlon E. Smith
http://www.martini.nu/contact.html



-----------------------------------------------------------------------

% zfs list store
NAME    USED  AVAIL  REFER  MOUNTPOINT
store  1.22T  2.36T  32.0K  none

-----------------------------------------------------------------------

% cat /boot/loader.conf
vm.kmem_size_max="768M"
vm.kmem_size="768M"
vfs.zfs.arc_max="256M"
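
Those should be visible at runtime via sysctl, if anyone wants to confirm
the loader tunables actually took effect:

% sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max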

-----------------------------------------------------------------------

% zpool status store
  pool: store
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.00% done, 2251h0m to go
config:

        NAME                       STATE     READ WRITE CKSUM
        store                      DEGRADED     0     0     0
          raidz1                   DEGRADED     0     0     0
            da0                    ONLINE       0     0     0  274K resilvered
            da1                    ONLINE       0     0     0  282K resilvered
            replacing              DEGRADED     0     0     0
              2025342973333799752  UNAVAIL      3 4.11K     0  was /dev/da2
              da8                  ONLINE       0     0     0  418K resilvered
            da2                    ONLINE       0     0     0  280K resilvered
            da3                    ONLINE       0     0     0  269K resilvered
            da4                    ONLINE       0     0     0  266K resilvered
            da5                    ONLINE       0     0     0  270K resilvered
            da6                    ONLINE       0     0     0  270K resilvered
            da7                    ONLINE       0     0     0  267K resilvered

errors: No known data errors
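
If the counter ever does start moving, a plain sh loop makes it easy to
watch without babysitting the console:

% sh -c 'while :; do zpool status store | grep scrub; sleep 600; done'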


-----------------------------------------------------------------------


% zpool iostat -v
                              capacity     operations    bandwidth
pool                        used  avail   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
store                      1.37T  2.72T     49    106   138K   543K
  raidz1                   1.37T  2.72T     49    106   138K   543K
    da0                        -      -     15     62  1017K  79.9K
    da1                        -      -     15     62  1020K  80.3K
    replacing                  -      -      0    103      0  88.3K
      2025342973333799752      -      -      0      0  1.45K    261
      da8                      -      -      0     79  1.45K  98.2K
    da2                        -      -     14     62   948K  80.3K
    da3                        -      -     13     62   894K  80.0K
    da4                        -      -     14     63   942K  80.3K
    da5                        -      -     15     62   992K  80.4K
    da6                        -      -     15     62  1000K  80.1K
    da7                        -      -     15     62  1022K  80.1K
-------------------------  -----  -----  -----  -----  -----  -----
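
'zpool iostat' also takes an interval argument, for a live view of
whether the replacing vdev is actually seeing writes:

% zpool iostat -v store 5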




