Date:      Tue, 16 Jul 2013 10:49:42 -0700
From:      aurfalien <aurfalien@gmail.com>
To:        Shane Ambler <FreeBSD@ShaneWare.Biz>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: to gmirror or to ZFS
Message-ID:  <C13CC733-E366-4B54-8991-0ED229932787@gmail.com>
In-Reply-To: <51E51558.50302@ShaneWare.Biz>
References:  <4DFBC539-3CCC-4B9B-AB62-7BB846F18530@gmail.com> <alpine.BSF.2.00.1307152211180.74094@wonkity.com> <976836C5-F790-4D55-A80C-5944E8BC2575@gmail.com> <51E51558.50302@ShaneWare.Biz>


On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:

> On 16/07/2013 14:41, aurfalien wrote:
>>
>> On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
>>
>>> On Mon, 15 Jul 2013, aurfalien wrote:
>>>
>>>> ... that's the question :)
>>>>
>>>> At any rate, I'm building a rather large 100+TB NAS using ZFS.
>>>>
>>>> However for my OS, should I also use ZFS, or simply gmirror, as I've a
>>>> dedicated pair of 256GB SSD drives for it.  I didn't ask for SSD
>>>> sys drives, this system just came with em.
>>>>
>>>> This is more of a best practices q.
>>>
>>> ZFS has data integrity checking, gmirror has low RAM overhead.
>>> gmirror is, at present, restricted to MBR partitioning due to
>>> metadata conflicts with GPT, so 2TB is the maximum size.
>>>
>>> Best practices... depends on your use.  gmirror for the system
>>> leaves more RAM for ZFS.
>>
>> Perfect, thanks Warren.
>>
>> Just what I was looking for.
>
> I doubt that you would save any RAM by having the OS on a non-ZFS
> drive. Since you will already be using ZFS, chances are that non-ZFS
> drives would only increase RAM usage by adding a second cache. ZFS
> uses its own cache system and isn't going to share its cache with
> other system-managed drives. I'm not actually certain whether the
> system cache still sits above the ZFS cache or not; I think I read it
> bypasses the traditional drive cache.
>
> For the ZFS cache you can set the maximum usage by adjusting
> vfs.zfs.arc_max; that is a system-wide setting and isn't going to
> increase if you have two zpools.
>
> Tip: set the arc_max value - by default ZFS will use all physical RAM
> for cache; set it to be sure you have enough RAM left for any
> services you want running.
>
> Have you considered using one or both SSD drives with zfs? They can be
> added as cache or log devices to help performance.
> See man zpool under Intent Log and Cache Devices.
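For anyone following along, the two suggestions above can be sketched roughly as follows. This is an untested sketch; the pool name (tank), the device names (ada2 through ada5), and the 16G cap are all placeholder values, not anything from this thread:

```shell
# Cap the ARC so services keep some RAM; 16G is an arbitrary example
# value. This is a boot-time loader tunable on FreeBSD.
echo 'vfs.zfs.arc_max="16G"' >> /boot/loader.conf

# Add a mirrored pair of SSDs as an intent log (SLOG) device:
zpool add tank log mirror ada2 ada3

# Add one or more SSDs as L2ARC (cache) devices:
zpool add tank cache ada4 ada5
```

As a rule of thumb, log devices mainly help synchronous write loads (NFS, databases), while cache devices help workloads that re-read more data than fits in RAM.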
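Warren's earlier gmirror suggestion for the system disks, sketched under the same caveat (gm0 and ada0/ada1 are placeholder names; read gmirror(8) before running anything against real disks):

```shell
# Load the mirror class and label a new mirror across both system SSDs.
gmirror load
gmirror label -v gm0 ada0 ada1

# Make sure the module loads at boot so the mirror is found early.
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# The mirrored provider then shows up as /dev/mirror/gm0,
# which can be partitioned with MBR as Warren describes.
```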

This is a very interesting point.

In terms of SSDs for cache, I was planning on using a pair of Samsung
Pro 512GB SSDs for this purpose (which I haven't bought yet).

But I tire of buying stuff, so I have a pair of 40GB Intel SSDs for use
as sys disks and several Intel 160GB SSDs lying around that I can
combine with the existing 256GB SSDs for a cache.

Then use my 36x3TB drives for the beasty NAS.
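As a sketch only: one common way to lay out 36 x 3TB drives is six 6-disk raidz2 vdevs, which gives roughly 72TB usable out of 108TB raw (6 vdevs x 4 data disks x 3TB). The pool name (tank) and device names (da0 through da35) are placeholders:

```shell
# Hypothetical layout: 36 x 3TB drives as six 6-disk raidz2 vdevs.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  \
  raidz2 da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23 \
  raidz2 da24 da25 da26 da27 da28 da29 \
  raidz2 da30 da31 da32 da33 da34 da35
```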

- aurf




