Date:      Sun, 14 Jun 2009 01:27:22 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        freebsd-current@freebsd.org
Subject:   Re: zpool scrub errors on 3ware 9550SXU
Message-ID:  <b269bc570906140127l32ea7ff9p88fc6b0ad96d7f23@mail.gmail.com>
In-Reply-To: <200906132311.15359.ianjhart@ntlworld.com>
References:  <200906132311.15359.ianjhart@ntlworld.com>

On Sat, Jun 13, 2009 at 3:11 PM, ian j hart <ianjhart@ntlworld.com> wrote:

> [long post with long lines, sorry]
>
> I have the following old hardware which I'm trying to make into a storage
> server (back story elided).
>
> Tyan Thunder K8WE with dual Opteron 270
> 8GB REG ECC RAM
> 3ware/AMCC 9550SXU-16 SATA controller
> Adaptec 29160 SCSI card -> Quantum LTO3 tape
> ChenBro case and backplanes.
> 'don't remember' PSU. I do remember paying £98 3 years ago, so not cheap!
> floppy
>
> Some Seagate Barracuda drives. Two old 500GB for the O/S and 14 new 1.5TB
> for
> data (plus some spares).
>
> Astute readers will know that the 1.5TB units have a chequered history.
>
> I went to considerable effort to avoid being stuck with a bricked unit, so
> imagine my dismay when, just before I was about to post this, I discovered
> there's a new issue with these drives where they reallocate sectors, from
> new.
>
> I don't want to get sucked into a discussion about whether these disks are
> faulty or not. I want to examine what seems to be a regression between
> 7.2-RELEASE and 8-CURRENT. If you can't resist, start a thread in chat and
> CC me.
>
> Anyway, here's the full story (from memory I'm afraid).
>
> All disks exported as single drives (no JBOD anymore).
> Install current snapshot on da0 and gmirror with da1, both 500GB disks.
> Create a pool with the 14 1.5TB disks. Raidz2.
>

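For reference, creating a pool as a single 14-drive raidz2 vdev would look
something like this (a sketch; the pool name "tank" and the da2 through da15
device names are assumptions, on the basis that da0/da1 are the gmirrored
O/S disks):

  # assumed layout: one raidz2 vdev spanning all 14 data disks
  zpool create tank raidz2 da2 da3 da4 da5 da6 da7 da8 \
      da9 da10 da11 da12 da13 da14 da15
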
Are you using a single raidz2 vdev across all 14 drives?  If so, that's
probably (one of) the sources of the issue.  You really shouldn't use more
than 8 or 9 drives in a single raidz vdev; bad things happen, especially
during resilvers and scrubs.  We learned this the hard way, trying to
replace a drive in a 24-drive raidz2 vdev.

If possible, try to rebuild the pool using multiple, smaller raidz (1 or 2)
vdevs.
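
For example, splitting the same 14 drives into two 7-drive raidz2 vdevs in
one pool would look something like this (again a sketch, with the same
assumed device names):

  # two smaller raidz2 vdevs instead of one wide one
  zpool create tank \
      raidz2 da2 da3 da4 da5 da6 da7 da8 \
      raidz2 da9 da10 da11 da12 da13 da14 da15

Each raidz2 vdev gives up two drives to parity, so this trades some usable
capacity for much faster, better-behaved resilvers and scrubs.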

-- 
Freddie Cash
fjwcash@gmail.com


