Date:      Sun, 14 Jun 2009 14:27:08 +0100
From:      ian j hart <ianjhart@ntlworld.com>
To:        freebsd-current@freebsd.org
Cc:        Freddie Cash <fjwcash@gmail.com>
Subject:   Re: zpool scrub errors on 3ware 9550SXU
Message-ID:  <200906141427.08397.ianjhart@ntlworld.com>
In-Reply-To: <b269bc570906140127l32ea7ff9p88fc6b0ad96d7f23@mail.gmail.com>
References:  <200906132311.15359.ianjhart@ntlworld.com> <b269bc570906140127l32ea7ff9p88fc6b0ad96d7f23@mail.gmail.com>

On Sunday 14 June 2009 09:27:22 Freddie Cash wrote:
> On Sat, Jun 13, 2009 at 3:11 PM, ian j hart <ianjhart@ntlworld.com> wrote:
> > [long post with long lines, sorry]
> >
> > I have the following old hardware which I'm trying to make into a storage
> > server (back story elided).
> >
> > Tyan Thunder K8WE with dual Opteron 270
> > 8GB REG ECC RAM
> > 3ware/AMCC 9550SXU-16 SATA controller
> > Adaptec 29160 SCSI card -> Quantum LTO3 tape
> > ChenBro case and backplanes.
> > 'don't remember' PSU. I do remember paying £98 3 years ago, so not cheap!
> > floppy
> >
> > Some Seagate Barracuda drives. Two old 500GB for the O/S and 14 new 1.5TB for
> > data (plus some spares).
> >
> > Astute readers will know that the 1.5TB units have a chequered history.
> >
> > I went to considerable effort to avoid being stuck with a bricked unit,
> > so imagine my dismay when, just before I was about to post this, I
> > discovered there's a new issue with these drives where they reallocate
> > sectors, from new.
> >
> > I don't want to get sucked into a discussion about whether these disks
> > are faulty or not. I want to examine what seems to be a regression
> > between 7.2-RELEASE and 8-CURRENT. If you can't resist, start a thread in
> > chat and CC me.
> >
> > Anyway, here's the full story (from memory I'm afraid).
> >
> > All disks exported as single drives (no JBOD anymore).
> > Install current snapshot on da0 and gmirror with da1, both 500GB disks.
> > Create a pool with the 14 1.5TB disks. Raidz2.
>
> Are you using a single raidz2 vdev with all 14 drives?  If so, that's
> probably (one of) the sources of the issues.  You really shouldn't use more
> than 8 or 9 drives in a single raidz vdev.  Bad things happen.  Especially
> during resilvers and scrubs.  We learned this the hard way, trying to
> replace a drive in a 24-drive raidz2 vdev.
>
> If possible, try to rebuild the pool using multiple, smaller raidz (1 or 2)
> vdevs.
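For reference, a rough sketch of the two layouts being discussed, assuming the
14 data disks show up as da2 through da15 and the pool is called "tank" (both
are assumptions; substitute whatever the system actually uses). The single
14-drive raidz2 vdev would have been created along these lines:

    # one raidz2 vdev spanning all 14 data disks
    zpool create tank raidz2 da2 da3 da4 da5 da6 da7 da8 \
        da9 da10 da11 da12 da13 da14 da15

The suggestion above amounts to splitting the same disks into smaller vdevs,
e.g. two 7-drive raidz2 vdevs that ZFS stripes across:

    # two raidz2 vdevs of 7 disks each; ZFS stripes writes across them
    zpool create tank \
        raidz2 da2 da3 da4 da5 da6 da7 da8 \
        raidz2 da9 da10 da11 da12 da13 da14 da15

    # scrub and check for errors afterwards
    zpool scrub tank
    zpool status -v tank

The trade-off is two extra disks' worth of parity (4 parity disks instead of
2), in exchange for keeping each vdev within the commonly recommended size.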

Did you post this issue to the list or open a PR?

This is not listed in zfsknownproblems.

Does opensolaris have this issue?

Cheers

-- 
ian j hart


