From: Ivan Voras
To: freebsd-stable@freebsd.org
Date: Mon, 03 Jan 2011 14:17:57 +0100
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
In-Reply-To: <4D1C6F90.3080206@my.gd>

On 12/30/10 12:40, Damien Fleuriot wrote:
> I am concerned that in the event a drive fails, I won't be able to
> repair the disks in time before another
> actually fails.

An old trick to avoid that is to buy drives from different series or manufacturers (the theory being that identical drives tend to fail around the same time), but this may not be practical when you have 5 drives in a volume :) Still, you can try playing with RAIDZ levels and failure probabilities.
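For example, here is a rough, back-of-the-envelope sketch (not from the original thread) comparing the chance of losing a 5-disk RAIDZ1 vs RAIDZ2 pool while a resilver is running. The annual failure rate and resilver window are made-up illustrative numbers, and it assumes drive failures are independent, which is exactly the assumption the "buy different drives" trick is meant to shore up:

```python
# Hypothetical comparison of RAIDZ1 vs RAIDZ2 survival during a resilver.
# Assumes independent failures; AFR and resilver time are invented numbers.
from math import comb

def p_fail_window(afr, hours):
    """Probability a single drive fails within a window of `hours`,
    derived from an annual failure rate (AFR)."""
    return 1 - (1 - afr) ** (hours / (365 * 24))

def p_pool_loss(n_remaining, tolerable, p):
    """Probability that more than `tolerable` of the remaining drives
    fail during the window (simple binomial model)."""
    return sum(comb(n_remaining, k) * p**k * (1 - p)**(n_remaining - k)
               for k in range(tolerable + 1, n_remaining + 1))

# One drive of five has already failed; 4 remain during the resilver.
p = p_fail_window(afr=0.05, hours=24)                  # 5% AFR, 24 h resilver
raidz1 = p_pool_loss(n_remaining=4, tolerable=0, p=p)  # no further failures OK
raidz2 = p_pool_loss(n_remaining=4, tolerable=1, p=p)  # one further failure OK

print(f"RAIDZ1 loss probability: {raidz1:.2e}")
print(f"RAIDZ2 loss probability: {raidz2:.2e}")
```

Under this toy model RAIDZ2 comes out orders of magnitude safer, because losing the pool now requires two further failures in the same window rather than one; correlated failures from identical drives would erode exactly that advantage.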