Date:      Thu, 12 Dec 2019 12:42:45 +0000
From:      Norman Gray <Norman.Gray@glasgow.ac.uk>
To:        David Christensen <dpchrist@holgerdanske.com>
Cc:        "freebsd-questions@freebsd.org" <freebsd-questions@freebsd.org>
Subject:   Re: Adding to a zpool -- different redundancies and risks
Message-ID:  <5A01F7F7-9326-47E2-BA6E-79A7D3F0889A@glasgow.ac.uk>
In-Reply-To: <09b11639-3303-df6b-f70c-6722caaacee7@holgerdanske.com>
References:  <6104097C-009B-4E9C-A1D8-A2D0E5FECADF@glasgow.ac.uk> <09b11639-3303-df6b-f70c-6722caaacee7@holgerdanske.com>


David, hello.

On 12 Dec 2019, at 5:11, David Christensen wrote:

> Please post:
>
> 1.  The 'zpool create ...' command you used to create the existing pool.

I don't have a note of the exact command, but it would have been
something like

     zpool create pool raidz2 da{0,1,2,3,4,5,6,7,8} raidz2 da9 da1{0,1,2,3,4,5,6,7}

> 2.  The output of 'zpool status' for the existing pool.

# zpool status pool
   pool: pool
  state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
   scan: none requested
config:

	NAME             STATE     READ WRITE CKSUM
	pool             ONLINE       0     0     0
	  raidz2-0       ONLINE       0     0     0
	    label/zd032  ONLINE       0     0     0
	    label/zd033  ONLINE       0     0     0
	    label/zd034  ONLINE       0     0     0
	    label/zd035  ONLINE       0     0     0
	    label/zd036  ONLINE       0     0     0
	    label/zd037  ONLINE       0     0     0
	    label/zd038  ONLINE       0     0     0
	    label/zd039  ONLINE       0     0     0
	    label/zd040  ONLINE       0     0     0
	  raidz2-1       ONLINE       0     0     0
	    label/zd041  ONLINE       0     0     0
	    label/zd042  ONLINE       0     0     0
	    label/zd043  ONLINE       0     0     0
	    label/zd044  ONLINE       0     0     0
	    label/zd045  ONLINE       0     0     0
	    label/zd046  ONLINE       0     0     0
	    label/zd047  ONLINE       0     0     0
	    label/zd048  ONLINE       0     0     0
	    label/zd049  ONLINE       0     0     0

errors: No known data errors
#

(Note: since creating the pool, I realised that gpart labels were a Good
Thing, so I exported, labelled, and reimported the pool; hence the
difference from the da* device names used at creation.)
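
I don't have a note of those exact commands either, but it was along
the lines of the sketch below.  The device and label names are purely
illustrative, and I'm assuming GEOM labels here only because that is
what the label/* paths in the status output correspond to:

     zpool export pool
     glabel label zd032 /dev/da0     # repeated for each of the 18 disks
     zpool import pool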

> 3.  The output of 'zpool list' for the existing pool.

# zpool list pool
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool    98T  75.2T  22.8T        -         -    29%    76%  1.00x  ONLINE  -

> 4.  The 'zpool add ...' command you are contemplating.

# zpool add -n pool raidz2 label/zd05{0,1,2,3,4,5}
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses 9-way raidz and new vdev uses 6-way raidz

The six new disks are 12TB; the 18 original ones 5.5TB.
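
For completeness, my understanding from that error message is that the
same command with -f would accept the mismatched vdev widths.  This is
only a sketch; I have not yet run it against the live pool:

     zpool add -f pool raidz2 label/zd05{0,1,2,3,4,5}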

> So, you have 24 drives in a 24 drive cage?

That's correct -- the maximum the chassis will take.

> What are your space and performance goals?

Not very explicit: TB per currency-unit as high as possible.  Performance:
the bottlenecks are likely to be elsewhere (network, processing power), so
no stringent requirements.  Though this is a fairly general-purpose data
store, a large fraction of the datasets on the machine consist of single
files of around 10GB each, served via NFS.

> What are your sustainability goals as drives and/or VDEV's fail?

It doesn't have to be highly available, so if I have a drive failure I
can consider shutting the machine down until a replacement disk arrives
and has been resilvered in.  This is a mirror of data whose masters are
elsewhere on the planet, so this machine is 'reliable storage but not
backed up' (and the users know this).  Thus if I do decide to keep
running with one failed disk in one VDEV, and the worst comes to the
worst and the whole thing explodes... the world won't end.  I will be
cross, and users will moan, in either case, but they know this is a
problem that can fundamentally be solved with more money.
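
For my own notes, a single-disk replacement would go roughly as follows.
The device name is purely illustrative, and this assumes the replacement
disk is given the same label as the one it replaces:

     zpool offline pool label/zd035
     # physically swap the disk and label the replacement, then:
     zpool replace pool label/zd035
     zpool status pool    # watch the resilver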

I'm sure I could be more sophisticated about this (and any suggestions
are welcome), but unfortunately I don't have as much time to spend on
storage problems as I'd like, so I want to avoid creating a setup that
is smarter than I'm able to fix!

Best wishes,

Norman


--
Norman Gray  :  https://nxg.me.uk
SUPA School of Physics and Astronomy, University of Glasgow, UK


