Date:      Tue, 28 Feb 2012 12:00:31 -0500 (EST)
From:      Randy Schultz <schulra@earlham.edu>
To:        Matthew Seaman <m.seaman@infracaninophile.co.uk>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: zpool not grabbing hot spare
Message-ID:  <alpine.BSF.2.00.1202281135370.90975@tdream.lly.earlham.edu>
In-Reply-To: <4F4CFFCF.4010207@infracaninophile.co.uk>
References:  <alpine.BSF.2.00.1202281021000.90975@tdream.lly.earlham.edu> <4F4CFFCF.4010207@infracaninophile.co.uk>

On Tue, 28 Feb 2012, Matthew Seaman spaketh thusly:

-}
-}Yes.  That's the generally accepted meaning of the concept of a 'hot
-}spare.'  The fact that the spare hasn't been automatically brought
-}on-line in this case is a bug.  There's an open PR on the subject:
-}
-}http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/134491

Tnx for the pointer!


-}
-}That seems to suggest the problem was known to be solved at some point
-}in 2011, but it was not necessarily propagated to all stable branches.
-}However, given your experience perhaps that is not the case.

Yeah, the current kernel sources (8.2-STABLE) were sup'd and rebuilt Dec 22.


-}
-}You should be able to use zfs commands manually to sub-in the spare
-}drive and get it resilvered.
-}
-}As an aside -- you've got a pretty odd setup there: 41 drives all in one
-}big RAIDZ2 vdev?  Standard practice would be to create something like 5
-}RAIDZ2 vdevs of 8 drives each (Or maybe 6 vdevs of 7 drives apiece: 6--9
-}drives is about the sweet spot for a RAIDZ2) and then stripe those vdevs
-}together to create your zpool.
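[For reference, the manual substitution Matthew mentions would look something
like the following sketch. The device names here are hypothetical -- take the
actual failed disk and spare from `zpool status data` before running anything:]

```shell
# Swap the (hypothetical) failed disk da8 for the hot spare da48;
# this kicks off a resilver onto the spare automatically.
zpool replace data da8 da48

# Watch resilver progress.
zpool status data

# After the resilver completes, detach the old device if it is
# still listed in the pool.
zpool detach data da8
```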

We looked at doing things this way, especially since it gives much better
performance.  However, performance was less important than maximizing storage.
Over the last 9 weeks we have averaged (including nightly backups):

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        1.41T  8.34T     47     29  2.82M  1.31M
  raidz2    1.41T  8.34T     47     27  2.82M  1.17M
    da2         -      -     20      2  69.3K  30.1K
    da3         -      -     20      2  69.3K  30.1K
    da4         -      -     20      2  69.3K  30.1K
    da5         -      -     20      2  69.3K  30.1K
    da6         -      -     20      2  69.3K  30.1K
    da7         -      -     20      2  69.3K  30.1K
    da9         -      -     20      2  69.3K  30.1K
    da10        -      -     20      2  69.3K  30.1K
    da11        -      -     20      2  69.3K  30.1K
    da12        -      -     20      2  69.3K  30.1K
    da13        -      -     20      2  69.3K  30.1K
    da14        -      -     20      2  69.3K  30.1K
    da15        -      -     20      2  69.3K  30.1K
    da17        -      -     20      2  69.3K  30.1K
    da18        -      -     20      2  69.3K  30.1K
    da19        -      -     20      2  69.3K  30.1K
    da20        -      -     20      2  69.3K  30.1K
    da21        -      -     20      2  69.3K  30.1K
    da22        -      -     20      2  69.3K  30.1K
    da23        -      -     20      2  69.3K  30.1K
    da25        -      -     20      2  69.3K  30.1K
    da26        -      -     20      2  69.3K  30.1K
    da27        -      -     20      2  69.3K  30.1K
    da28        -      -     20      2  69.4K  30.1K
    da29        -      -     20      2  69.2K  30.1K
    da30        -      -     20      2  67.6K  29.9K
    da31        -      -     20      2  69.2K  30.1K
    da32        -      -     20      2  69.3K  30.1K
    da33        -      -     20      2  69.3K  30.1K
    da34        -      -     20      2  69.3K  30.1K
    da35        -      -     20      2  69.3K  30.1K
    da36        -      -     20      2  69.3K  30.1K
    da37        -      -     20      2  69.3K  30.1K
    da38        -      -     20      2  69.3K  30.1K
    da39        -      -     20      2  69.3K  30.1K
    da40        -      -     20      2  69.3K  30.1K
    da41        -      -     20      2  69.3K  30.1K
    da42        -      -     20      2  69.3K  30.1K
    da43        -      -     20      2  69.3K  30.1K
    da44        -      -     20      2  69.3K  30.1K
    da45        -      -     20      2  69.3K  30.1K
    da46        -      -     20      2  69.3K  30.1K
    da47        -      -     20      2  69.3K  30.1K
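[For comparison, the multi-vdev layout Matthew suggests would be created with
something like the sketch below. It is illustrative only -- three 7-disk
RAIDZ2 vdevs striped into one pool plus a hot spare, with example device
names, not this system's full 41-disk layout:]

```shell
# Three RAIDZ2 vdevs of 7 disks each, striped together into one pool,
# with da48 as a hot spare (device names are examples only).
zpool create tank \
    raidz2 da2  da3  da4  da5  da6  da7  da9  \
    raidz2 da10 da11 da12 da13 da14 da15 da17 \
    raidz2 da18 da19 da20 da21 da22 da23 da25 \
    spare  da48
```

[Each RAIDZ2 vdev costs two disks of parity, so this trades usable capacity
for faster resilvers and better IOPS than a single wide vdev.]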


--
 Randy    (schulra@earlham.edu)      765.983.1283         <*>

nosce te ipsum



