Date:      Mon, 27 Jan 2014 07:43:40 +0000
From:      Kym Crox <kymcrox21@gmail.com>
To:        freebsd-questions@freebsd.org
Subject:   Important Information For Your Website: Freebsd.org :ZS
Message-ID:  <20cf30363713c7e54c04f0eeddb1@google.com>

Hi Freebsd.org Team,

Hope you are doing great and that everything is fine at your end.

I thought you might like to know some of the most important factors
affecting your website, and the reasons it lacks enough organic
traffic; most often you stick to AdWords to get more traffic, which is
quite expensive, and the chances of getting spam traffic are high as
well.

*Some of the major factors which can be overcome for your website to
rank well in the SERPs organically and to increase your social media
presence are:*

1. Your website seems to carry a lot of technical errors, which
prevents search engines from crawling and indexing it properly.

2. Your website seems to need proper keyword selection, from which you
might get a proper position, with good traffic, in the search engines.

3. Your website should lean more towards social media promotion, with
regular updates on the major social networks for brand awareness.

4. Quality web and promotional content (articles, blogs etc.) is
missing, which prevents your website from gaining more authority and
ranking in the web market.

In the present-day scenario it's very essential to take proper care of
your website and keep it updated with fresh and original content. There
are many additional improvements which can help your website gain more
traffic and visibility. If you are interested in learning more, and
curious to know how we can help you improve your website to get higher
traffic, then I would be glad to provide you with a detailed proposal
for your website.

Successful Search Engine Optimization requires a comprehensive,
customized approach based on a site's unique characteristics. The
Search Engine Optimization project will need to strike a true balance
between website functionality, the searching behaviours of the target
audiences, and the algorithms used by search engines to find results.

This email tells you only a fraction of the things we do; our
optimization process involves many other technical factors, which can
be sent to you on request. If you would like to know more about our
services then please write back, or ask us to call you and we will get
back to you at a time that suits you.

Let me know your thoughts; looking forward to working together.

*Best Regards,*

*Kym Crox*
Senior SEO Advisor
Skype: webmarketing.sales

*Note: *We are not spammers. We just want to know your interest in the
better performance of your website, and in enhancing your business in
the web market. If you are interested, we will communicate with you
directly through our corporate Id.

If you think this is unnecessary for you, please email us back asking
to be removed and we will unsubscribe you. Hope you will co-operate.
From owner-freebsd-questions@FreeBSD.ORG  Mon Jan 27 09:08:30 2014
Date:      Mon, 27 Jan 2014 10:08:23 +0100 (CET)
From:      Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
To:        Kaya Saman <kayasaman@gmail.com>
Cc:        freebsd-questions <freebsd-questions@freebsd.org>
Subject:   Re: ZFS confusion
Message-ID: <alpine.BSF.2.00.1401270944100.4811@mail.fig.ol.no>
In-Reply-To: <52E40C82.7050302@gmail.com>

On Sat, 25 Jan 2014 19:12-0000, Kaya Saman wrote:

> Hi,
> 
> I'm really confused about something so I hope someone can help me clear the
> fog up....
> 
> basically I'm about to set up a ZFS RAIDZ3 pool, and having discovered this
> site:
> 
> https://calomel.org/zfs_raid_speed_capacity.html
> 
> as a reference for disk quantity, I got totally confused.

Dead link as far as I can tell.

> Though in addition have checked out these sites too:
> 
> https://blogs.oracle.com/ahl/entry/triple_parity_raid_z
> 
> http://www.zfsbuild.com/2010/06/03/howto-create-raidz2-pool/
> 
> http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/
> 
> http://www.linux.org/threads/zettabyte-file-system-zfs.4619/
> 
> 
> Implementing a test ZFS pool on my old FreeBSD 8.3 box using dd-derived
> vdevs, coupled with reading the man page for zpool, I found that raidz3
> needs a minimum of 4 disks to work.
> 
> However, according to the above mentioned site for triple parity one should
> use 5 disks in 2+3 format.
> 
> My confusion is this: does the 2+3 mean 2 disks in the pool with 3 hot spares
> or does it mean 5 disks in the pool?

No one's answered this, so I'll just give you my 2 cents.

Triple parity means you're using the storage capacity equivalent of
three drives for parity alone. If you use five drives in total, this
gives you 2 drives' worth of real data and 3 drives' worth of parity.
In other words, you should really consider using a lot more drives when
using triple parity, say nine drives.
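
For example, a single nine-drive raidz3 vdev, sketched here with
placeholder disk names, leaves you six drives' worth of data and three
of parity:

zpool create <pool_name> raidz3 disk1 disk2 disk3 disk4 disk5 disk6 \
    disk7 disk8 disk9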

> As in:
> 
> zpool create <pool_name> raidz3 disk1 disk2 disk3 disk4 disk5

No spares are configured. You should consider something like this:

zpool create <pool_name> raidz3 disk1 disk2 disk3 disk4 disk5 spare disk6 disk7 disk8
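
Afterwards, zpool status <pool_name> should list disk6 through disk8 in
their own "spares" section rather than inside the raidz3 vdev, so you
can verify they really came in as hot spares.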

> In addition to my testing I was looking at ease of expansion... ie. growing
> the pool, so is doing something like this:
> 
> zpool create <pool_name> raidz3 disk1 disk2 disk3 disk4
> 
> Then when I needed to expand just do:
> 
> zpool add <pool_name> raidz3 disk5 disk6 disk7 disk8

You should do some further experimentation and consider the effects of 
the zpool attach command.
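
A hypothetical sketch (made-up device names): attaching turns a disk
into one half of a mirror, and, as far as I know, it only applies to
single-disk and mirror vdevs; it will not widen a raidz vdev.

# turn the single-disk vdev disk1 into a two-way mirror:
zpool attach <pool_name> disk1 disk2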

> which gets:
> 
>   pool: testpool
>  state: ONLINE
> status: The pool is formatted using a legacy on-disk format.  The pool can
>     still be used, but some features are unavailable.
> action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
>     pool will no longer be accessible on software that does not support
> feature
>     flags.
>   scan: none requested
> config:
> 
>     NAME            STATE     READ WRITE CKSUM
>     testpool        ONLINE       0     0     0
>       raidz3-0      ONLINE       0     0     0
>         /tmp/disk1  ONLINE       0     0     0
>         /tmp/disk2  ONLINE       0     0     0
>         /tmp/disk3  ONLINE       0     0     0
>         /tmp/disk4  ONLINE       0     0     0
>       raidz3-1      ONLINE       0     0     0
>         /tmp/disk5  ONLINE       0     0     0
>         /tmp/disk6  ONLINE       0     0     0
>         /tmp/disk7  ONLINE       0     0     0
>         /tmp/disk8  ONLINE       0     0     0

This is an unclever setup. Of the eight drives configured, you'll only
be allowed to use the real storage capacity equivalent of two of the
drives. Maybe you're aiming for redundancy rather than storage
capacity.
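
The arithmetic, assuming equally sized drives: each 4-disk raidz3 vdev
yields 4 - 3 = 1 drive's worth of data, so the two vdevs together give
2 drives' worth of data against 6 drives' worth of parity.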

> ----------
> 
> The same as this:
> 
> ----------
> 
>   pool: testpool
>  state: ONLINE
> status: The pool is formatted using a legacy on-disk format.  The pool can
>     still be used, but some features are unavailable.
> action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
>     pool will no longer be accessible on software that does not support
> feature
>     flags.
>   scan: none requested
> config:
> 
>     NAME            STATE     READ WRITE CKSUM
>     testpool        ONLINE       0     0     0
>       raidz3-0      ONLINE       0     0     0
>         /tmp/disk1  ONLINE       0     0     0
>         /tmp/disk2  ONLINE       0     0     0
>         /tmp/disk3  ONLINE       0     0     0
>         /tmp/disk4  ONLINE       0     0     0
>         /tmp/disk5  ONLINE       0     0     0
>         /tmp/disk6  ONLINE       0     0     0
>         /tmp/disk7  ONLINE       0     0     0
>         /tmp/disk8  ONLINE       0     0     0

This setup is a bit more clever, as you'll get the storage capacity
equivalent of five drives for real data and the storage capacity
equivalent of three drives for parity.
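
With 1 TB drives, say, that's roughly 5 TB usable from these eight
drives, versus 2 TB in the two-vdev layout above.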

> ?? Of course using the 1st method there is extra metadata involved, but not
> too much, especially with TB drives.
> 
> Having created a zfs filesystem on top of both setups: in the 1st scenario
> the fs will grow to utilize disks 5 through 8 once they are added, while of
> course with the second setup the filesystem is already created over all 8
> disks.
> 
> 
> In a real situation however, the above would certainly be 5 disks at a time to
> gain the triple parity, with ZIL and L2ARC on SSD's and hot swap spares.
> 
> 
> The reason am asking the above is that I've got a new enclosure with up to 26
> disk capacity and need to create a stable environment and make best use of the
> space. So in other words, maximum redundancy with max capacity allowed per
> method: which would be raidz1..3, and of course raidz3 offers the best
> redundancy yet has much more capacity than a raid1+0 setup.

I'm a bit unsure as to whether it's better to simply attach new disks 
to the existing raidz3 vdev rather than adding entire new raidz3 vdevs 
to the pool. Once you do either, there's no going back unless you are 
prepared to recreate the entire pool. Maybe someone else can chime in 
on this.
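
One growth path I'm fairly confident in, sketched with hypothetical
device names: replace every member of the vdev with a larger drive, one
at a time, letting each resilver complete; with autoexpand on, the vdev
grows once the last member has been replaced.

zpool set autoexpand=on <pool_name>
zpool replace <pool_name> disk1 bigdisk1   # repeat for each member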

> My intention was to grab 5 disks to start with then expand as necessary plus 2
> SSDs for ZIL+L2ARC using (raid0 striping and raid1 mirroring respectively)
> and then 3x hot swap spares and use lz4 compression on the filesystem. With
> FreeBSD 10.0 as base OS... my current 8.3 must be EOL now though on a
> different box so no matter :-)

Spare drives can be added at any time (zpool add <pool_name> spare
diskN) and removed (zpool remove <pool_name> diskN), unless the spare
is currently in use by some pool.
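
For instance, with placeholder names:

zpool add <pool_name> spare disk9      # register disk9 as a hot spare
zpool remove <pool_name> disk9         # take it back out while idle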

> Hopefully someone can help me understand the above.
> 
> 
> Many thanks.
> 
> 
> Regards,
> 
> 
> Kaya

-- 
+-------------------------------+------------------------------------+
| Vennlig hilsen,               | Best regards,                      |
| Trond Endrestøl,              | Trond Endrestøl,                   |
| IT-ansvarlig,                 | System administrator,              |
| Fagskolen Innlandet,          | Gjøvik Technical College, Norway,  |
| tlf. mob.   952 62 567,       | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.      | Switchboard: +47 61 14 54 00.      |
+-------------------------------+------------------------------------+


