Date:      Thu, 15 Dec 2016 15:05:12 +0000
From:      Matthew Seaman <matthew@FreeBSD.org>
To:        freebsd-questions@freebsd.org
Subject:   Re: gmirror/gstripe or ZFS?
Message-ID:  <8129aba8-acdc-c921-62ce-ed0f994cf2af@FreeBSD.org>
In-Reply-To: <20161215111048.541d6745@Papi>
References:  <20161215111048.541d6745@Papi>


On 2016/12/15 14:10, Mario Lobo wrote:
> I'll be building an area on an existing server that runs 10.3-STABLE
> (will upgrade to 11-STABLE), which is going to be used basically as a
> work/storage area for graphic design files (lots and lots of image
> editing, etc ...) that are extremely critical for the company and need
> to be up and ready all the time.
>
> A backup system is already in place and running.
>=20
> The OS runs off of its own UFS-formatted drive and I acquired 4x 4TB
> drives (SATA), which I plan to gmirror 1&2/3&4, stripe the two mirrors
> into an 8TB volume, and share it via samba. Network is Gbit.
>
> It comes to mind doing the same thing through ZFS. I've never used it
> before, which is the opposite of gmirror/gstripe, which I have used
> plenty.
>
> Given what this volume is going to be used for, in terms of
> performance/reliability/sharing, which one is best?
>
> I have replaced defective drives in gmirror many times without any
> problem. Is that just as easy with ZFS?
>
> Is sharing a dataset through samba as straightforward as sharing a
> gmirror/gstripe?
>
> I am reading as much as I can about ZFS but most of what I found is
> mainly technical implementation, not so much about how the user is
> experiencing it compared to other options.

If your data is at all important to you and you aren't constrained by
running on tiny little devices with very limited system resources, then
it's a no-brainer: use ZFS.

Creating a ZFS pool striped over two mirrored vdevs is not particularly
difficult and gives a result roughly equivalent to RAID10:

  zpool create -m /somewhere tank mirror ada0p3 ada1p3 mirror ada2p3 ada3p3

will create a new zpool called 'tank' and mount it at /somewhere.
There are a number of properties to fiddle with for tuning purposes, and
you'll want to create a hierarchy of ZFS datasets under the pool to suit
your purposes, but otherwise that's about it.
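For instance, a hypothetical dataset layout for a graphics share might
look like this (the names 'projects' and 'design' are my invention, not
anything prescribed):

```shell
# Datasets nest, and properties inherit downwards from parent to child.
zfs create tank/projects                 # one dataset per broad area
zfs create tank/projects/design          # child dataset for the design team
zfs set quota=2T tank/projects/design    # optional: cap this subtree's usage
zfs list -r tank                         # show the hierarchy and space used
```

Separate datasets cost nothing and let you snapshot, quota, and tune each
area independently.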

Replacing drives in a ZFS pool is no harder than replacing them in a
gmirror / gstripe setup.  Swap out the physical device, create an
appropriate partitioning scheme on the new disk if needed[*], then run
'zpool replace tank device-name' and wait for the pool to resilver.
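As a sketch, assuming the pool is 'tank', the failed disk was ada1, and
the surviving mirror partner is ada0 (substitute your own device names):

```shell
# Copy the partition layout from a surviving disk onto the replacement:
gpart backup ada0 | gpart restore -F ada1

# Kick off the resilver onto the new partition, then watch its progress:
zpool replace tank ada1p3
zpool status tank
```

The pool stays online and serving data throughout the resilver.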

There are only two commands you need to achieve some familiarity with in
order to manage a ZFS setup -- zfs(8) and zpool(8).  Don't be put off by
the length of the man pages: generally it's pretty obvious what
subcommand you need and you can just jump to that point in the manual to
find your answers.
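For day-to-day monitoring, a couple of subcommands cover most of it:

```shell
zpool status -x      # terse health check: "all pools are healthy" when OK
zfs list -o name,used,avail,compressratio   # the properties you'll consult most
```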

[*] The installer will create a zpool by using gpart partitions, so it
can also add bootcode and a swap area to each disk.  If you're not going
to be booting off this pool and you have swap supplied elsewhere, then
all that is unnecessary. You can just tell ZFS to use the raw disk devices.

Problems you may run into:

* Not having enough RAM -- ZFS eats RAM like there's no tomorrow.
That's because of the aggressive caching it employs: many IO requests
will be served out of RAM rather than having to go all the way to disk.
Sprinkling RAM liberally into your server will help performance.
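If ZFS does end up competing with other memory consumers, you can cap the
ARC via a loader tunable -- the value below is only an example, tune it to
your workload:

```shell
# /boot/loader.conf
vfs.zfs.arc_max="8G"
```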

* Do turn on compression, and use the lz4 algorithm.  Compression is a
win in general due to reducing the size of IO requests, which gains more
than you lose in the extra work to compress and decompress the data.
lz4 is preferred because it gives pretty good compression for
compressible data, but can detect and bail out early for incompressible
data, like many image formats (JPG, PNG, GIF) -- in which case the data
is simply stored without compression at the ZFS level.
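Enabling it is a one-liner, and you can check how much it's buying you
afterwards:

```shell
zfs set compression=lz4 tank              # applies to newly written data only
zfs get compression,compressratio tank    # verify the setting and the ratio
```

Set it on the top-level dataset before loading data; existing blocks are
not rewritten retroactively.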

* Don't enable deduplication.  It sounds really attractive, but for
almost all cases it leads to vastly increased memory requirements,
performance slowing to a near crawl, wailing, and gnashing of teeth.  If
you have to ask, then you *don't* want it.

* ZFS does a lot more processing than most filesystems -- calculating
all of those checksums, and doing all those copy-on-writes takes its
toll.  It's the price you pay for being confident your data is
uncorrupted, but it does mean ZFS is harder on the system than many
other FSes.  For a modern server, the extra processing cost is generally
not a problem, and swallowed in the time it takes to access the spinning
rust.  It will hurt you if your IO characteristics are a lot of small
reads / writes randomly scattered around your storage, typical of e.g.
an RDBMS.

* You can add fast auxiliary devices to improve performance -- typically
SSDs, and they don't have to be particularly big.  A 'SLOG' (Separate
LOG) device moves the ZIL (the intent log for synchronous writes) off the
main drives, while a 'cache' device extends the ARC read cache onto SSD
(the so-called L2ARC).  Either can be added on the fly without any
interruption of service, so I'd recommend starting without and only
adding one if it seems you need it.
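Adding them later might look like this, assuming two spare SSD partitions
(device names are hypothetical):

```shell
zpool add tank log ada4p1      # SLOG: accelerates synchronous writes
zpool add tank cache ada4p2    # cache vdev: extends the read cache onto SSD
```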

* Having both UFS and ZFS on the same machine.  This is not
insurmountably bad, but the different memory requirements of the two
filesystems can lead to performance trouble.  It depends on what your
server load levels are like.  If it's lightly loaded, then no problem.

	Cheers,

	Matthew
