From: Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
Date: Thu, 12 Jun 2008 13:37:06 +0200 (CEST)
To: Daniel Eriksson
Cc: Anders Häggström, freebsd-questions@freebsd.org
Subject: RE: FreeBSD + ZFS on a production server?

> > ZFS is very nice, but slightly over-hyped imho.

Not just slightly, and not only over-hyped. It is definitely far from being
for storage what "VM is for memory".
For example, you can't select per file (or at least per pseudo-filesystem)
whether you want no protection, mirroring, or raidz. You must have disks
dedicated to raidz, disks dedicated to mirrored storage, and disks dedicated
to unprotected storage. That is inflexible and not very usable; actually,
much less usable than "legacy" gmirror/gstripe/gconcat+bsdlabel.

One of my systems has 8 disks. 80% of the data doesn't need any protection;
it just needs a lot of space. The other 20% needs to be mirrored. That 80%
is accessed in a high-bandwidth, low-seek style (only big files). So I
simply split every disk into two partitions: the first partition of every
disk goes into a gmirror+gstripe device, the second into a gconcat device,
and I have what I need WITH BALANCED LOAD.

With ZFS I would have to dedicate the first 2 drives to a mirror and the
other 6 to unprotected storage, giving LOTS of seeks on the first 2 drives
and very few seeks on the other 6. The system would be unable to support
the load.

One more thing: "zfs set copies" could in principle be used to selectively
mirror some data while leaving the rest unprotected (on an otherwise
unprotected pool). But it's broken: it writes N copies on write, yet it
does not re-create the copies after a failure!
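For the curious, the geom layout described above looks roughly like this. This is a sketch only: the device names (da0..da7, partitions p1/p2) and label names are assumptions, not the exact commands from my system.

```shell
# Each of the 8 disks carries two partitions:
#   p1 -> mirrored + striped storage (the 20% that needs protection)
#   p2 -> unprotected concatenated storage (the 80% of big files)

# Build four 2-way mirrors from the first partitions:
gmirror label m0 /dev/da0p1 /dev/da1p1
gmirror label m1 /dev/da2p1 /dev/da3p1
gmirror label m2 /dev/da4p1 /dev/da5p1
gmirror label m3 /dev/da6p1 /dev/da7p1

# Stripe across the four mirrors, so the mirrored data
# spreads its seeks over all 8 spindles:
gstripe label st0 /dev/mirror/m0 /dev/mirror/m1 \
                  /dev/mirror/m2 /dev/mirror/m3

# Concatenate the second partitions into one big unprotected volume:
gconcat label big /dev/da0p2 /dev/da1p2 /dev/da2p2 /dev/da3p2 \
                  /dev/da4p2 /dev/da5p2 /dev/da6p2 /dev/da7p2

# Put filesystems on the resulting devices:
newfs /dev/stripe/st0
newfs /dev/concat/big
```

Every disk serves both workloads, which is what balances the load.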
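The "zfs set copies" approach I mean would look something like the sketch below (pool and dataset names are made up for illustration):

```shell
# One big pool with no vdev-level redundancy:
zpool create tank da0 da1 da2 da3 da4 da5 da6 da7

# Ask for extra block copies only on the dataset that matters:
zfs create tank/important
zfs set copies=2 tank/important

# New writes to tank/important get two copies, but existing blocks
# are not re-replicated after a disk failure, and the copies are not
# guaranteed to land on different disks. That is why I call it broken
# as a selective-mirroring substitute.
```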