Date: Wed, 23 Jan 2013 23:52:32 +0100 (CET)
From: Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
To: Steven Chamberlain
Cc: freebsd-fs, Mark Felder, Chris Rees
Subject: Re: ZFS regimen: scrub, scrub, scrub and scrub again.
In-Reply-To: <510067DC.7030707@pyro.eu.org>
References: <20130122073641.GH30633@server.rulingia.com> <510067DC.7030707@pyro.eu.org>

>> unless your work is serving movies it doesn't matter.
>
> That's why I find it really interesting the Netflix Open Connect
> appliance didn't use ZFS - it would have seemed perfect for that

"Seems perfect" only to ZFS marketers and their victims. In practice it is
at most usable, and dangerous.

> application.

Because doing it with UFS is ACTUALLY perfect.

Large parallel transfers are great with UFS: >95% of platter speed is
normal, with near-zero CPU load. The amount of metadata is minimal and
doesn't matter for performance or fsck time (and +J would make it even
smaller). Getting ca. 90% of platter speed under a multitasking load is
possible with a proper setup.

> http://lists.freebsd.org/pipermail/freebsd-stable/2012-June/068129.html
>
> Instead there are plain UFS+J filesystems on some 36 disks and no RAID -
> it tries to handle almost everything at the application layer instead.

This is exactly the kind of setup I would do in their case. They can
restore all data, since the master movie storage is not on the appliance.
If 2 drives fail at the same time, they only have to restore those 2
drives, not all 36 :)

The "application layer" is quite trivial - just store where each movie is.
Such a setup could easily handle two 10Gb/s cards, or more if the load is
spread over the drives.
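
To make the "just store where each movie is" point concrete, here is a
minimal sketch of such a catalog in Python. The mount points (/disk0 ..
/disk35), the Catalog class and the least-full placement policy are my own
illustration of the idea, not anything taken from the Netflix post:

#!/usr/bin/env python
# Minimal sketch of an application-layer catalog: each movie lives on
# exactly one UFS filesystem, and a plain dict remembers which one.
# Assumes /disk0 .. /disk35 are mounted UFS+J filesystems (illustrative).

import os

DISKS = ["/disk%d" % i for i in range(36)]   # one UFS+J filesystem per drive

class Catalog(object):
    def __init__(self):
        self.location = {}              # movie name -> mount point

    def place(self, movie):
        """Pick the drive with the most free space and remember it."""
        best = max(DISKS, key=self._free_bytes)
        self.location[movie] = best
        return os.path.join(best, movie)

    def path(self, movie):
        """Return the full path of a stored movie, or None if unknown."""
        disk = self.location.get(movie)
        return os.path.join(disk, movie) if disk else None

    @staticmethod
    def _free_bytes(mountpoint):
        st = os.statvfs(mountpoint)
        return st.f_bavail * st.f_frsize

if __name__ == "__main__":
    cat = Catalog()
    print(cat.place("some_movie.mp4"))   # e.g. /disk17/some_movie.mp4
    print(cat.path("some_movie.mp4"))

Recovery after a drive failure then reduces to newfs'ing the replacement
and re-fetching from the master store only the movies whose catalog
entries pointed at that drive.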