Date:      Sun, 14 Jun 2009 19:12:41 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        current@freebsd.org
Subject:   Re: zpool scrub errors on 3ware 9550SXU
Message-ID:  <b269bc570906141912s2d75e370s456ede1d460f6c33@mail.gmail.com>
In-Reply-To: <200906141427.08397.ianjhart@ntlworld.com>
References:  <200906132311.15359.ianjhart@ntlworld.com> <b269bc570906140127l32ea7ff9p88fc6b0ad96d7f23@mail.gmail.com> <200906141427.08397.ianjhart@ntlworld.com>

On Sun, Jun 14, 2009 at 6:27 AM, ian j hart <ianjhart@ntlworld.com> wrote:

> On Sunday 14 June 2009 09:27:22 Freddie Cash wrote:
> > On Sat, Jun 13, 2009 at 3:11 PM, ian j hart <ianjhart@ntlworld.com> wrote:
> > > [long post with long lines, sorry]
> > >
> > > I have the following old hardware which I'm trying to make into a storage
> > > server (back story elided).
> > >
> > > Tyan Thunder K8WE with dual Opteron 270
> > > 8GB REG ECC RAM
> > > 3ware/AMCC 9550SXU-16 SATA controller
> > > Adaptec 29160 SCSI card -> Quantum LTO3 tape
> > > ChenBro case and backplanes.
> > > 'don't remember' PSU. I do remember paying £98 3 years ago, so not cheap!
> > > floppy
> > >
> > > Some Seagate Barracuda drives. Two old 500GB for the O/S and 14 new 1.5TB
> > > for data (plus some spares).
> > >
> > > Astute readers will know that the 1.5TB units have a chequered history.
> > >
> > > I went to considerable effort to avoid being stuck with a bricked unit,
> > > so imagine my dismay when, just before I was about to post this, I
> > > discovered there's a new issue with these drives where they reallocate
> > > sectors, from new.
> > >
> > > I don't want to get sucked into a discussion about whether these disks
> > > are faulty or not. I want to examine what seems to be a regression
> > > between 7.2-RELEASE and 8-CURRENT. If you can't resist, start a thread in
> > > chat and CC me.
> > >
> > > Anyway, here's the full story (from memory I'm afraid).
> > >
> > > All disks exported as single drives (no JBOD anymore).
> > > Install current snapshot on da0 and gmirror with da1, both 500GB disks.
> > > Create a pool with the 14 1.5TB disks. Raidz2.
> >
> > Are you using a single raidz2 vdev across all 14 drives?  If so, that's
> > probably (one of) the sources of the issue.  You really shouldn't use more
> > than 8 or 9 drives in a single raidz vdev.  Bad things happen, especially
> > during resilvers and scrubs.  We learned this the hard way, trying to
> > replace a drive in a 24-drive raidz2 vdev.
> >
> > If possible, try to rebuild the pool using multiple, smaller raidz (1 or 2)
> > vdevs.
>
> Did you post this issue to the list or open a PR?


No, as it's a known issue with ZFS itself, and not just the FreeBSD port.
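
For illustration, here is a rough sketch of the kind of layout I mean: the same
14 drives split into two 7-disk raidz2 vdevs in a single pool. The pool name
and the da2 through da15 device names are only placeholders; substitute
whatever your 3ware units actually attach as.

    # Hypothetical sketch: one pool built from two 7-disk raidz2 vdevs
    # instead of a single 14-disk raidz2 vdev.  Device names are assumed.
    zpool create tank \
        raidz2 da2 da3 da4 da5 da6 da7 da8 \
        raidz2 da9 da10 da11 da12 da13 da14 da15

Each vdev then only has seven members to walk during a resilver or scrub, at
the cost of two more disks' worth of space going to parity.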


>
> This is not listed in zfsknownproblems.


It's listed in the OpenSolaris/Solaris documentation, best practices guides,
blog posts, and wiki entries.

>
> Does opensolaris have this issue?
>

Yes.

--
Freddie Cash
fjwcash@gmail.com


