Date:      Mon, 3 May 2010 22:16:57 -0400 (EDT)
From:      Charles Sprickman <spork@bway.net>
To:        Wes Morgan <morganw@chemikals.org>
Cc:        Eric Damien <jafa82@gmail.com>, freebsd-stable@freebsd.org
Subject:   Re: ZFS: separate pools
Message-ID:  <alpine.OSX.2.00.1005032211590.35361@hotlap.local>
In-Reply-To: <alpine.BSF.2.00.1005022124090.80153@ibyngvyr>
References:  <201005021536.05389.jafa82@gmail.com> <alpine.BSF.2.00.1005022124090.80153@ibyngvyr>

On Sun, 2 May 2010, Wes Morgan wrote:

> On Sun, 2 May 2010, Eric Damien wrote:
>
>> Hello list.
>>
>> I am taking my first steps with ZFS. In the past, I used to have two UFS
>> slices: one dedicated to the OS partitions, and the second to data (/home,
>> etc.). I read that it was possible to recreate that layout with ZFS, using
>> separate pools.
>>
>> Considering the example of
>>    http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot,
>> any idea how I can adapt that to my needs? I am concerned about all the
>> different mountpoints.
>
> Well, you need not create all those filesystems if you don't want them.
> The pool and FreeBSD will function just fine.
>
> However, as far as storage is concerned, there is no disadvantage to
> having additional mount points. The only limits each filesystem will have
> are the ones you explicitly impose. There are many advantages, though.
> Some datasets are inherently compressible or incompressible. Other
> datasets you may not want to schedule for snapshots, or you may want
> per-dataset control over whether files can be executed or run setuid,
> checksumming, block sizes, you name it (as the examples in the wiki
> demonstrate).
>
> Furthermore, each pool requires its own vdev. If you create slices on a
> drive and then make each slice its own pool, I wonder whether ZFS's
> internal queuing would understand the topology and be able to work as
> efficiently. Just a thought, though.
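
Just to illustrate the sort of per-dataset tuning described above, it ends 
up looking something like this (the pool and dataset names below are made 
up, and the property values are only examples):

    # zfs create -o compression=lzjb -o exec=off -o setuid=off tank/log
    # zfs create -o checksum=off -o recordsize=8k tank/db
    # zfs get compression,exec,setuid tank/log

Properties are inherited from the parent dataset unless you override them, 
so you only set the handful you actually care about on each filesystem.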

I have two boxes set up where ZFS sits on top of slices like that.  One has 
a small zpool across three disks; the remainder of those disks, plus three 
other disks of the same size, make up a second zpool.  The hardware is old 
(an 8-port 3Ware PATA card), so performance just is not spectacular.  I 
can't tell whether this config contributes to the somewhat anemic (by 
today's standards) read/write speeds or not.
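
If it helps picture it, that first box is roughly the following shape 
(device names and the vdev type here are just for illustration):

    # zpool create syspool  raidz ad4s1 ad6s1 ad8s1
    # zpool create datapool raidz ad4s2 ad6s2 ad8s2 ad10 ad12 ad14

i.e. a small slice on each of the first three disks forms one pool, and the 
remaining slices plus the other three disks form the second.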

Another has 4 drives with a gmirror setup on two of the drives for the OS 
(20G out of 1TB).  This box performs extremely well (bonnie++ shows 
123MB/s writes, 142MB/s reads).
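
That box is laid out more or less like this (again, device names and the 
exact pool layout are approximate, not a literal transcript):

    # gmirror label -v gm0 ad4s1 ad6s1          # ~20G OS slice on two drives
    # zpool create tank ad4s2 ad6s2 ad8 ad10    # rest of those two + the other pair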

Just some random data points.  When I was reading about ZFS I did come 
across the vague notion that ZFS wants the entire drive so it can better 
deal with queueing; I'm not sure whether that came from official Sun docs 
or some random blog, though...
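
For comparison, the whole-drive case that advice was pointing at is simply

    # zpool create tank raidz ad4 ad6 ad8 ad10

with no slicing at all, so ZFS sees the raw disks end to end (which is 
presumably what the queueing comment was about).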

Charles



