From: ian j hart <ianjhart@ntlworld.com>
To: freebsd-current@freebsd.org
Cc: Freddie Cash
Date: Sun, 14 Jun 2009 14:27:08 +0100
Subject: Re: zpool scrub errors on 3ware 9550SXU
Message-Id: <200906141427.08397.ianjhart@ntlworld.com>
References: <200906132311.15359.ianjhart@ntlworld.com>

On Sunday 14 June 2009 09:27:22 Freddie Cash wrote:
> On Sat, Jun 13, 2009 at 3:11 PM, ian j hart wrote:
> > [long post with long lines, sorry]
> >
> > I have the following old hardware which I'm trying to make into a
> > storage server (back story elided).
> >
> > Tyan Thunder K8WE with dual Opteron 270
> > 8GB REG ECC RAM
> > 3ware/AMCC 9550SXU-16 SATA controller
> > Adaptec 29160 SCSI card -> Quantum LTO3 tape
> > ChenBro case and backplanes.
> > 'don't remember' PSU. I do remember paying £98 3 years ago, so not cheap!
> > floppy
> >
> > Some Seagate Barracuda drives. Two old 500GB for the O/S and 14 new
> > 1.5TB for data (plus some spares).
> >
> > Astute readers will know that the 1.5TB units have a chequered history.
> >
> > I went to considerable effort to avoid being stuck with a bricked unit,
> > so imagine my dismay when, just before I was about to post this, I
> > discovered there's a new issue with these drives where they reallocate
> > sectors, from new.
> >
> > I don't want to get sucked into a discussion about whether these disks
> > are faulty or not. I want to examine what seems to be a regression
> > between 7.2-RELEASE and 8-CURRENT. If you can't resist, start a thread
> > in chat and CC me.
> >
> > Anyway, here's the full story (from memory, I'm afraid).
> >
> > All disks exported as single drives (no JBOD anymore).
> > Install current snapshot on da0 and gmirror with da1, both 500GB disks.
> > Create a pool with the 14 1.5TB disks. Raidz2.
>
> Are you using a single raidz2 vdev spanning all 14 drives? If so, that's
> probably (one of) the sources of the issues. You really shouldn't use
> more than 8 or 9 drives in a single raidz vdev. Bad things happen,
> especially during resilvers and scrubs. We learned this the hard way,
> trying to replace a drive in a 24-drive raidz2 vdev.
>
> If possible, try to rebuild the pool using multiple, smaller raidz
> (1 or 2) vdevs.

Did you post this issue to the list or open a PR? This is not listed in
ZFSKnownProblems. Does OpenSolaris have this issue?

Cheers

-- 
ian j hart
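
[For reference, a rough sketch of the two pool layouts being discussed.
The thread only names da0 and da1, so the device names da2-da15 for the
1.5TB disks behind the 3ware controller and the pool name "tank" are
assumptions, not taken from the original posts:]

  # OS disks: one way to mirror the two 500GB drives with gmirror
  gmirror label -v -b round-robin gm0 /dev/da0 /dev/da1

  # Original layout: a single 14-disk raidz2 vdev
  # (12 disks of usable capacity, 2 of parity)
  zpool create tank raidz2 da2 da3 da4 da5 da6 da7 da8 \
                           da9 da10 da11 da12 da13 da14 da15

  # Layout along the lines Freddie suggests: two 7-disk raidz2 vdevs
  # (10 disks of usable capacity, but each vdev is much narrower)
  zpool create tank \
      raidz2 da2 da3 da4 da5 da6 da7 da8 \
      raidz2 da9 da10 da11 da12 da13 da14 da15

The split layout gives up two disks of capacity to the extra parity, in
exchange for smaller vdevs to resilver and scrub, which is the trade-off
being argued for above.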