From: John Nielsen <lists@jnielsen.net>
To: freebsd-questions@freebsd.org
Date: Sun, 6 Jul 2008 02:22:39 -0400
In-Reply-To: <20080612132527.K5722@wojtek.tensor.gdynia.pl>
Message-Id: <200807060222.40004.lists@jnielsen.net>
Subject: Re: FreeBSD + ZFS on a production server?
I'm behind on my mailing list reading and don't really want to prolong or resurrect this thread unduly, but I do want to respond to this point:

On Thursday 12 June 2008 07:37:06 am Wojciech Puchar wrote:
> you must have disks dedicated for raidz, disks dedicated for mirrored
> storage and disks dedicated for unprotected storage. it's inflexible
> and not much usable.
>
> actually - much less usable than "legacy"
> gmirror/gstripe/gconcat+bsdlabel.

ZFS on FreeBSD is GEOM-ified. While I believe what Wojciech said about
needing a full disk is correct under Solaris, it's not the case on FreeBSD.
Any GEOM provider can be added to a zpool: a disk, a slice, a partition, a
gmirror, a gstripe, an md device, etc.

I just added some storage to a personal server and redid the layout using
ZFS. My zpool (raidz) is made up of two partitions and one gstripe,
spanning a total of four disks. I haven't had any issues with it at all
(7-STABLE i386, 1.5GB RAM, no tuning other than the kmem size and
MAXPAGES). All of the disks also carry other small partitions: two for a
gmirrored root and three for swap.

I think FreeBSD is a great storage/fileserver platform exactly _because_
there are so many options. UFS is great, gmirror and gstripe and friends
are fantastic, and ZFS is yet another powerful tool in the arsenal. In my
case ZFS was the best meeting point of space vs. redundancy vs.
performance. Not having "real" RAID hardware, my other candidates were
graid3, graid5 and gvinum. ZFS is much easier to configure than gvinum,
much more proven and stable than graid5 (which isn't even in the tree
yet), and ought to perform better than graid3. I didn't do any testing to
verify that last assertion since this is just a personal box, but I have
no complaints about performance.
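[For readers who want to try this: a sketch of how such a mixed-provider
pool could be assembled. All device names, partition letters, and sizes
below are hypothetical, not the actual layout of the box described above.]

```shell
# Hypothetical four-disk box: ad0..ad3, each already bsdlabel'ed so that
# the 'd' partition is the large one destined for the pool.

# Stripe two of the large partitions into a single GEOM provider:
gstripe label -v st0 /dev/ad2s1d /dev/ad3s1d

# Build a raidz vdev from two plain partitions plus the stripe --
# any GEOM provider is acceptable, not just whole disks:
zpool create tank raidz /dev/ad0s1d /dev/ad1s1d /dev/stripe/st0

# Verify:
zpool status tank

# Illustrative /boot/loader.conf tuning for i386 with limited RAM
# (values are examples only; tune for your machine):
#   vm.kmem_size="512M"
#   vm.kmem_size_max="512M"
#   vfs.zfs.arc_max="256M"
```

Note that the three members of a raidz vdev should be roughly the same
size, since the vdev's capacity is limited by its smallest member.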
JN

> one of my systems have 8 disks. 80% of data doesn't need any
> protection, it's just a need for a lot of space, other 20 needs to be
> mirrored. this 80% of data is used in high bandwidth/low seeks style
> (only big files).
>
> i simply partitioned every disk on 2 partitions, every first is used to
> make gmirror+gstripe device, every second is used to make gconcat
> device, and i have what i need WITH BALANCED LOAD.
>
> with ZFS i would have to make first 2 drives as mirror, another 6 for
> unprotected storage, having LOTS of seeks on first 2 drives and very
> little seeks on other 6 drives. the system would be unable to support
> the load.
>
> to say more: zfs set copies could be usable to selectively mirror given
> data while not mirroring other (using unprotected storage for ZFS).
> but it's broken. it writes N copies under write, but don't remake
> copies in case of failure!