Date:      Thu, 15 Dec 2016 15:47:18 -0300
From:      Mario Lobo <lobo@bsd.com.br>
To:        Matthew Seaman <matthew@freebsd.org>
Cc:        freebsd-questions <freebsd-questions@freebsd.org>
Subject:   Re: gmirror/gstripe or ZFS?
Message-ID:  <CA+yoEx84DDXzRf9VxjcDxdi81_FvUv0xF7CEHF+ZepjLAKF2Fg@mail.gmail.com>
In-Reply-To: <8129aba8-acdc-c921-62ce-ed0f994cf2af@FreeBSD.org>
References:  <20161215111048.541d6745@Papi> <8129aba8-acdc-c921-62ce-ed0f994cf2af@FreeBSD.org>

Thanks for replying, Matthew!

Comments interspersed below:

2016-12-15 12:05 GMT-03:00 Matthew Seaman <matthew@freebsd.org>:

> If your data is at all important to you and you aren't constrained by
> running on tiny little devices with very limited system resources, then
> it's a no-brainer: use ZFS.
>
> Creating a ZFS pool striped over two mirrored vdevs is not particularly
> difficult and gives a result roughly equivalent to RAID10:
>
>   zpool create tank -m /somewhere mirror ada0p3 ada1p3 mirror ada2p3 ada3p3
>
> will create a new zpool called 'tank' and mount it at /somewhere.
> There are a number of properties to fiddle with for tuning purposes, and
> you'll want to create a hierarchy of ZFS datasets under the pool to suit
> your purposes, but otherwise that's about it.
>
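(Just to check my understanding: once the pool exists, carving it up would
be something like the below? The dataset names are only examples for my
graphics share.)

    # create a dataset hierarchy under the pool; names are illustrative
    zfs create tank/graphics
    zfs create tank/graphics/archive
    # children inherit the mountpoint, e.g. /somewhere/graphics/archive
    zfs list -r tank
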
> Replacing drives in a ZFS is about as hard as replacing them in a
> gmirror / gstripe setup.  Swap out the physical device, create an
> appropriate partitioning scheme on the new disk if needed[*], then run
> 'zpool replace poolname device-name' and wait for the pool to resilver.
>
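(So for, say, a dead ada2 the rough sequence would be something like the
below? Device names are only an example, and cloning the partition table
with gpart backup/restore is just one way of recreating the layout.)

    # clone the partition layout from a surviving disk, then replace
    gpart backup ada0 | gpart restore -F ada2
    zpool replace tank ada2p3
    zpool status tank    # watch the resilver progress
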
> There are only two commands you need to achieve some familiarity with in
> order to manage a ZFS setup -- zfs(8) and zpool(8).  Don't be put off by
> the length of the man pages: generally it's pretty obvious what
> subcommand you need and you can just jump to that point in the manual to
> find your answers.
>
> [*] The installer will create a zpool by using gpart partitions, so it
> can also add bootcode and a swap area to each disk.  If you're not going
> to be booting off this pool and you have swap supplied elsewhere, then
> all that is unnecessary. You can just tell ZFS to use the raw disk devices.
>
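(So if the pool is purely for data, something as simple as this should do?
Just a sketch, with example device names.)

    # whole-disk pool: no gpart, no bootcode, no swap partitions
    zpool create tank -m /somewhere mirror ada0 ada1 mirror ada2 ada3
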
> Problems you may run into:
>
> * Not having enough RAM -- ZFS eats RAM like there's no tomorrow.
> That's because of the aggressive caching it employs: many IO requests
> will be served out of RAM rather than having to go all the way to disk.
> Sprinkling RAM liberally into your server will help performance.
>
>
This could be a problem. I also run a couple of VMs on that server.
It's a server with 16 GB of RAM, but each VM uses 4 GB. How much RAM would
be a good amount for ZFS? Can I limit its memory?
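
(I did find mention of the vfs.zfs.arc_max loader tunable; I assume capping
it, say at 4 GB, would keep the ARC away from the VMs' memory?)

    # /boot/loader.conf -- the 4 GB figure is only an assumption, not a recommendation
    vfs.zfs.arc_max="4294967296"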

> * Do turn on compression, and use the lz4 algorithm.  Compression is a
> win in general due to reducing the size of IO requests, which gains more
> than you lose in the extra work to compress and decompress the data.
> lz4 is preferred because it gives pretty good compression for
> compressible data, but can detect and bail out early for incompressible
> data, like many image formats (JPG, PNG, GIF) -- in which case the data
> is simply stored without compression at the ZFS level.
>
Ok.
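I take it that's just a property on the pool's root dataset, something like
this, with the children inheriting it?

    zfs set compression=lz4 tank
    zfs get compressratio tank    # check how much it actually saves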


> * Don't enable deduplication.  It sounds really attractive, but for
> almost all cases it leads to vastly increased memory requirements,
> performance slowing to a near crawl, wailing, and gnashing of teeth.  If
> you have to ask, then you *don't* want it.
>
>
Yes! I heard about that too!


> * ZFS does a lot more processing than most filesystems -- calculating
> all of those checksums, and doing all those copy-on-writes takes its
> toll.  It's the price you pay for being confident your data is
> uncorrupted, but it does mean ZFS is harder on the system than many
> other FSes.  For a modern server, the extra processing cost is generally
> not a problem, and swallowed in the time it takes to access the spinning
> rust.  It will hurt you if your IO characteristics are a lot of small
> reads / writes randomly scattered around your storage, typical of e.g.
> an RDBMS.
>
>
Well, the VMs run from the separate UFS drive, not from the pool. The
ZFS pool will be used for graphic files only.
But the extra load that ZFS puts on the system is exactly my main concern.
I've run gmirror/gstripe on other systems and the load
was practically unaffected.
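
(One related thing I was wondering about: since the graphics files are
large and mostly read sequentially, I assume the default 128K recordsize
is fine, or could even be raised on that dataset? Purely an assumption on
my part.)

    # only an assumption for large image files; needs the large_blocks pool feature
    zfs set recordsize=1M tank/graphics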


> * You can add fast SSD devices to improve performance -- a separate log
> device ('SLOG') to hold the ZIL (speeding up synchronous writes), and/or
> a cache device (L2ARC) to extend the ARC (speeding up reads).  Neither
> has to be particularly big: all they do is move some particularly hot IO
> off the main drives onto the faster hardware.  You can add these on the
> fly without any interruption of service, so I'd recommend starting
> without and only adding one if it seems you need it.
>
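(And if it does turn out to be needed later, adding it looks like a
one-liner each, if I read zpool(8) right? Device names are just an
example.)

    # add a separate log device (ZIL) and a cache device (L2ARC) on the fly
    zpool add tank log ada4p1
    zpool add tank cache ada4p2
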
> * Having both UFS and ZFS on the same machine.  This is not
> insurmountably bad, but the different memory requirements of the two
> filesystems can lead to performance trouble.  It depends on what your
> server load levels are like.  If it's lightly loaded, then no problem.
>
Exactly my concern. This is not a file-server-only machine. It has other
roles too.


>         Cheers,
>
>         Matthew
>
>
>
For these reasons, I am leaning toward the gmirror/gstripe solution.

-- 
Mario Lobo
http://www.mallavoodoo.com.br
FreeBSD since version 2.2.8 [not Pro-Audio.... YET!!] (99,7% winfoes FREE)


