Date:      Sat, 21 Nov 2020 19:50:36 -0800
From:      David Christensen <dpchrist@holgerdanske.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: [SOLVED] Re: "zpool attach" problem
Message-ID:  <f4e8236f-b1c3-87c4-332e-9aadb93eddaa@holgerdanske.com>
In-Reply-To: <202011212233.0ALMXfvE022876@sdf.org>
References:  <202011212233.0ALMXfvE022876@sdf.org>

On 2020-11-21 14:33, Scott Bennett via freebsd-questions wrote:
> Hi David,
>       Thanks for your reply.  I was about to respond to my own message to say that the
> issue has been resolved, but I saw your reply first.  However, I respond below to
> your comments and questions, as well as stating what the problem turned out to be.

<snip>


I suspect that we all have similar stories.  :-)


It sounds like we both run small SOHO networks.  My comments below are 
written with that in mind.


Spreading two ZFS pools and one GEOM RAID (?) across six HDDs is not 
something that I would do or recommend.  I also avoid raidzN.  I suggest 
that you back up, wipe the drives, create one pool using mirrors, and 
restore.
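
A migration along those lines might look roughly like this (pool and 
dataset names are hypothetical, and "backup" stands for any pool or 
remote machine with enough space to hold a full copy):

    # take a recursive snapshot of one of the existing pools
    zfs snapshot -r oldpool@migrate
    # replicate it, with properties and child datasets, to the backup
    zfs send -R oldpool@migrate | zfs receive -u backup/oldpool
    # only after verifying the copy, destroy and rebuild (see below)
    zpool destroy oldpool

Verify the backup before destroying anything, and keep a second copy 
if at all possible.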


Apply the GPT partitioning scheme and create one large partition with 1 
MiB alignment on each of the six data drives.
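
For each data drive, that is something like the following (ada1 is a 
hypothetical device name; repeat for the other five):

    # wipe any old partitioning, then apply a GPT scheme
    gpart destroy -F ada1
    gpart create -s gpt ada1
    # one large freebsd-zfs partition, aligned to 1 MiB
    gpart add -t freebsd-zfs -a 1m ada1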


When partitioning, some people recommend leaving a non-trivial amount of 
unused space at the end of the drive -- say 2% to 5% -- to facilitate 
replacing failed drives with somewhat smaller drives.  I prefer to use 
100% and will buy a pair of identical replacements if faced with that 
situation.
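
If you do want that headroom, the -s flag caps the partition size; for 
example, on a 2 TB drive (the number is only illustrative):

    # leave roughly 2% unused at the end of the drive
    gpart add -t freebsd-zfs -a 1m -s 1830g ada1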


Label your partitions with names that correlate with your ZFS storage 
architecture [3].  Always use the labels for administrative commands; 
never use raw device nodes.
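
For example (the label names here are hypothetical; pick names that 
identify each drive's place in the pool):

    # label partition 1 on ada1; it then appears as /dev/gpt/mirror0-a
    gpart modify -i 1 -l mirror0-a ada1
    # confirm the labels
    gpart show -l ada1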


(I encrypt the partitions with GELI and use the resulting 
/dev/gpt/<label>.eli device nodes when creating the pool, below.)
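
A minimal GELI setup on top of those labels might be (passphrase-only, 
4 KiB sectors; keyfiles and rc.conf wiring left out):

    # initialize and attach; the pool is then built on the .eli nodes
    geli init -s 4096 /dev/gpt/mirror0-a
    geli attach /dev/gpt/mirror0-a
    # this produces /dev/gpt/mirror0-a.eli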


Create a zpool with three mirrors -- a first mirror of two 2 TB drive 
partitions, a second mirror of two 2 TB drive partitions, and a third 
mirror of the 3 TB drive and the 4 TB drive partitions.  That should 
give you about the same available space as your existing raidz2.
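
Using the hypothetical labels from above ("tank" is a placeholder pool 
name; drop the .eli suffixes if you skip GELI), pool creation would be 
roughly:

    zpool create tank \
        mirror gpt/mirror0-a.eli gpt/mirror0-b.eli \
        mirror gpt/mirror1-a.eli gpt/mirror1-b.eli \
        mirror gpt/mirror2-a.eli gpt/mirror2-b.eli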


Consider buying a spare 4 TB drive (or two) and putting it on the 
shelf.  Better yet, connect it to the machine and tell ZFS to use it as 
a hot spare.  Buy 4 TB drives going forward.
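
With the spare connected, partitioned, and labeled like the others, 
adding it is one command (names hypothetical):

    zpool add tank spare gpt/spare0.eli

With zfsd(8) running, ZFS can then bring the spare in automatically 
when a member drive fails.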


Adding a solid-state cache device or partition can noticeably improve 
read responsiveness (both sequential and random read latency).  After 
the initial cache misses, both my Samba and CVS services are snappy.
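
Adding an L2ARC device is also a single command, and the device can be 
removed again later without risk to the pool (partition name 
hypothetical):

    # a whole SSD or just a partition works; no redundancy is needed
    zpool add tank cache gpt/cache0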


I expect a solid-state log device would similarly help synchronous 
write performance, but I have not tried one yet.
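
If you try it, a mirrored pair is the usual advice, since losing a 
single log device can cost the most recent synchronous writes (names 
hypothetical):

    zpool add tank log mirror gpt/log0 gpt/log1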


David


[3] https://b3n.org/zfs-hierarchy/


