From owner-freebsd-current@FreeBSD.ORG Mon Jun 15 02:12:42 2009
From: Freddie Cash <fjwcash@gmail.com>
To: current@freebsd.org
Date: Sun, 14 Jun 2009 19:12:41 -0700
Subject: Re: zpool scrub errors on 3ware 9550SXU
In-Reply-To: <200906141427.08397.ianjhart@ntlworld.com>
References: <200906132311.15359.ianjhart@ntlworld.com>
 <200906141427.08397.ianjhart@ntlworld.com>

On Sun, Jun 14, 2009 at 6:27 AM, ian j hart wrote:
> On Sunday 14 June 2009 09:27:22 Freddie Cash wrote:
> > On Sat, Jun 13, 2009 at 3:11 PM, ian j hart wrote:
> > > [long post with long lines, sorry]
> > >
> > > I have the following old hardware which I'm trying to make into a
> > > storage server (back story elided).
> > >
> > > Tyan Thunder K8WE with dual Opteron 270
> > > 8GB REG ECC RAM
> > > 3ware/AMCC 9550SXU-16 SATA controller
> > > Adaptec 29160 SCSI card -> Quantum LTO3 tape
> > > ChenBro case and backplanes.
> > > 'don't remember' PSU. I do remember paying £98 3 years ago, so not
> > > cheap!
> > > floppy
> > >
> > > Some Seagate Barracuda drives. Two old 500GB for the O/S and 14 new
> > > 1.5TB for data (plus some spares).
> > >
> > > Astute readers will know that the 1.5TB units have a chequered
> > > history.
> > >
> > > I went to considerable effort to avoid being stuck with a bricked
> > > unit, so imagine my dismay when, just before I was about to post
> > > this, I discovered there's a new issue with these drives where they
> > > reallocate sectors, from new.
> > >
> > > I don't want to get sucked into a discussion about whether these
> > > disks are faulty or not. I want to examine what seems to be a
> > > regression between 7.2-RELEASE and 8-CURRENT. If you can't resist,
> > > start a thread in chat and CC me.
> > >
> > > Anyway, here's the full story (from memory I'm afraid).
> > >
> > > All disks exported as single drives (no JBOD anymore).
> > > Install current snapshot on da0 and gmirror with da1, both 500GB
> > > disks.
> > > Create a pool with the 14 1.5TB disks. Raidz2.
> >
> > Are you using a single raidz2 vdev using all 14 drives? If so, that's
> > probably (one of) the source of the issues. You really shouldn't use
> > more than 8 or 9 drives in a single raidz vdev. Bad things happen.
> > Especially during resilvers and scrubs. We learned this the hard way,
> > trying to replace a drive in a 24-drive raidz2 vdev.
> >
> > If possible, try to rebuild the pool using multiple, smaller raidz
> > (1 or 2) vdevs.
>
> Did you post this issue to the list or open a PR?

No, as it's a known issue with ZFS itself, and not just the FreeBSD port.

> This is not listed in zfsknownproblems.

It's listed in the OpenSolaris/Solaris documentation, best practices
guides, blog posts, and wiki entries.

> Does opensolaris have this issue?

Yes.
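For the archives, here's a rough sketch of the difference, assuming the
14 data disks show up as da2 through da15 and using "storage" as a
stand-in pool name (your device names and pool name will differ):

  # One wide vdev: all 14 drives in a single raidz2.
  zpool create storage raidz2 \
      da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 da14 da15

  # The same 14 drives as two 7-disk raidz2 vdevs in one pool.  ZFS
  # stripes writes across the two vdevs, and a resilver only has to
  # read the disks in the vdev that lost a drive.
  zpool create storage \
      raidz2 da2 da3 da4 da5 da6 da7 da8 \
      raidz2 da9 da10 da11 da12 da13 da14 da15

You give up two more disks to parity that way, and you can't convert in
place (raidz vdevs can't be reshaped), so it means copying the data off,
destroying the pool, and recreating it.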
-- 
Freddie Cash
fjwcash@gmail.com