Date:      Fri, 19 Jul 2013 11:25:28 -0700
From:      aurfalien <aurfalien@gmail.com>
To:        Warren Block <wblock@wonkity.com>
Cc:        freebsd-questions@freebsd.org, Shane Ambler <FreeBSD@ShaneWare.Biz>
Subject:   Re: to gmirror or to ZFS
Message-ID:  <069F4A27-A7A2-4215-A815-468F436B331F@gmail.com>
In-Reply-To: <alpine.BSF.2.00.1307161239580.82091@wonkity.com>
References:  <4DFBC539-3CCC-4B9B-AB62-7BB846F18530@gmail.com> <alpine.BSF.2.00.1307152211180.74094@wonkity.com> <976836C5-F790-4D55-A80C-5944E8BC2575@gmail.com> <51E51558.50302@ShaneWare.Biz> <C13CC733-E366-4B54-8991-0ED229932787@gmail.com> <alpine.BSF.2.00.1307161239580.82091@wonkity.com>


On Jul 16, 2013, at 11:42 AM, Warren Block wrote:

> On Tue, 16 Jul 2013, aurfalien wrote:
>> On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:
>>>
>>> I doubt that you would save any RAM by having the OS on a non-ZFS
>>> drive; since you will already be using ZFS, chances are that non-ZFS
>>> drives would only increase RAM usage by adding a second cache. ZFS
>>> uses its own cache system and isn't going to share its cache with
>>> other system-managed drives. I'm not actually certain whether the
>>> system cache still sits above the ZFS cache or not; I think I read
>>> that it bypasses the traditional drive cache.
>>>
>>> For the ZFS cache you can set the maximum usage by adjusting
>>> vfs.zfs.arc_max. That is a system-wide setting and isn't going to
>>> increase if you have two zpools.
>>>
>>> Tip: set the arc_max value - by default ZFS will use all physical RAM
>>> for cache, so set it to be sure you have enough RAM left for any
>>> services you want running.
>>>
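For what it's worth, a minimal sketch of doing exactly that, assuming a
32GB machine where roughly 24GB should be left to the ARC (the figure
is only an example). The tunable goes in /boot/loader.conf and takes
effect on the next boot:

  # /boot/loader.conf
  # Cap the ZFS ARC so other services keep enough free RAM (example value).
  vfs.zfs.arc_max="24G"

The value currently in effect can be read back with
"sysctl vfs.zfs.arc_max".
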
>>> Have you considered using one or both SSD drives with ZFS? They can
>>> be added as cache or log devices to help performance.
>>> See man zpool under Intent Log and Cache Devices.
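For reference, a rough sketch of how such devices are attached,
assuming a data pool named "tank" (a made-up name) with the SSDs
showing up as da2, da3 and da4. Log devices are usually mirrored;
cache devices need no redundancy because L2ARC contents are
disposable:

  # Add a mirrored ZFS intent log (SLOG) on two SSDs.
  zpool add tank log mirror da2 da3
  # Add another SSD as an L2ARC cache device.
  zpool add tank cache da4

zpool(8) has the details and caveats under "Intent Log" and "Cache
Devices".
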
>>
>> This is a very interesting point.
>>
>> In terms of SSDs for cache, I was planning on using a pair of Samsung
>> Pro 512GB SSDs for this purpose (which I haven't bought yet).
>>
>> But I tire of buying stuff, so I have a pair of 40GB Intel SSDs for
>> use as sys disks and several Intel 160GB SSDs lying around that I can
>> combine with the existing 256GB SSDs for a cache.
>>
>> Then use my 36x3TB for the beasty NAS.
>
> Agreed that 256G mirrored SSDs are kind of wasted as system drives.
> The 40G mirror sounds ideal.


Update:

I went with ZFS, as I didn't want to confuse the toolset needed to
support this server.  Although gmirror is not hard to figure out, I
wanted consistency across systems.

So I now have a booted 9.1-RELEASE system using a mirrored ZFS system disk.
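
For anyone following along, the health of that mirror can be checked
at any time; the pool name "zroot" below is an assumption (whatever
name was chosen at install time goes there):

  # Show pool health and the disks that make up the mirror vdev.
  zpool status zroot
  # List the mounted ZFS filesystems, including the root dataset.
  mount | grep zfs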

The drives do support TRIM, but I am unsure how this plays with ZFS.  I did
the standard partition scheme of:

root@kronos:/root # gpart show
=>      34  78165293  da0  GPT  (37G)
        34       128    1  freebsd-boot  (64k)
       162         6       - free -  (3.0k)
       168   8388608    2  freebsd-swap  (4.0G)
   8388776  69776544    3  freebsd-zfs  (33G)
  78165320         7       - free -  (3.5k)

=>      34  78165293  da1  GPT  (37G)
        34       128    1  freebsd-boot  (64k)
       162         6       - free -  (3.0k)
       168   8388608    2  freebsd-swap  (4.0G)
   8388776  69776544    3  freebsd-zfs  (33G)
  78165320         7       - free -  (3.5k)
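
For completeness, a sketch of the commands that produce a layout like
the one above and turn the pair into a bootable mirrored pool. The
pool name "zroot" and the 4k alignment flag are assumptions, not taken
from the output above, and da1 gets the same partitioning as da0:

  # Partition the first disk: boot code, swap and a ZFS slice.
  gpart create -s gpt da0
  gpart add -t freebsd-boot -s 64k da0
  gpart add -t freebsd-swap -s 4g -a 4k da0
  gpart add -t freebsd-zfs -a 4k da0
  # Install the GPT+ZFS boot code so either disk can boot the machine.
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
  # ...repeat the gpart steps for da1, then mirror the third partitions:
  zpool create zroot mirror da0p3 da1p3

(The rest of the root-on-ZFS setup - datasets, bootfs, loader.conf -
is omitted here.)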

At any rate, thank you for the replies; I very much appreciate it.

Especially since I am building a rather large production-worthy NAS
without knowing a lick of FreeBSD.

The reasons for going with FreeBSD are twofold:

ZFS stability; it seems a better marriage than ZoL (ZFS on Linux).
It correctly provides NFS pre-op attributes (mtime) in the write reply;
Linux does not.

While it's a steep learning curve, the two points above require the use of
FreeBSD or something like it.

- aurf


