Date:      Mon, 8 Feb 2010 16:35:46 -0800
From:      Freddie Cash <fjwcash@gmail.com>
To:        freebsd-stable@freebsd.org
Subject:   Re: one more load-cycle-count problem
Message-ID:  <b269bc571002081635p7bac2de6j210f21af7bdf1810@mail.gmail.com>
In-Reply-To: <201002091059.28625.doconnor@gsoft.com.au>
References:  <cf9b1ee01002080543m7a403a6ej1f25b88c47f18c68@mail.gmail.com> <201002091059.28625.doconnor@gsoft.com.au>

On Mon, Feb 8, 2010 at 4:29 PM, Daniel O'Connor <doconnor@gsoft.com.au> wrote:

> On Tue, 9 Feb 2010, Dan Naumov wrote:
> > which essentially solves the problem. Note that going this route will
> > probably involve rebuilding your entire array from scratch, because
> > applying WDIDLE3 to the disk is likely to very slightly affect disk
> > geometry, but just enough for hardware raid or ZFS or whatever to
> > bark at you and refuse to continue using the drive in an existing
> > pool (the affected disk can become very slightly smaller in
> > capacity). Backup data, apply WDIDLE3 to all disks. Recreate the
> > pool, restore backups. This will also void your warranty if used on
> > the new WD drives, although it will still work just fine.
>
> Errm.. Why would it change the geometry?
>
> I have used this tool to change the settings on all my disks and it did
> not in any way cause a problem booting later.
>
> My disks are WDC WD10EADS-00L5B1/01.01A01
>
> (1Tb "green" disks)
>
> I just did this to 8 of the 1.5 TB Caviar Green disks, without ZFS
complaining in any way.
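
For what it's worth, here's roughly how to double-check afterwards that
nothing shifted (a quick sketch; the device name "ada1" and the pool name
"tank" are just examples):

  # confirm the drive's reported capacity is unchanged
  diskinfo -v /dev/ada1 | grep mediasize
  # confirm ZFS is still happy with every vdev
  zpool status tank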

I did test it on a spare drive before doing it to the 7 live drives.  And I
did replace them while the server was turned off, just to be safe (and to
prevent a resilver from occurring).
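
If you want to be extra careful, a conservative sequence might look
something like this (pool name "tank" is an example):

  zpool scrub tank    # verify the pool is clean before touching anything
  # wait for the scrub to finish, then...
  shutdown -p now     # power off before running wdidle3 on the drives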

wdidle3 doesn't actually disable the idle timeout on these drives.  Using /d
just sets the timeout to 62 minutes.  Effectively the same result, but don't
be surprised when it continues to report "idle3 available and enabled".  :)
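
For reference, the invocations I mean look roughly like this, run from a DOS
boot disk (hedging a little on the exact flags; check wdidle3's own help
output first):

  WDIDLE3 /R   (report the current idle3 timer setting)
  WDIDLE3 /D   ("disable" -- on these drives it really sets the timer to 62 min)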

So far, things are running much smoother.  Load_Cycle_Count has stopped
increasing (it had hit 50,000 in 8 weeks on one drive).  Resilver throughput
for these drives has jumped from 7 MBps to over 40 MBps (the pool is 90%
full, so it's slower than normal right now, which is why we're swapping
these drives into the pool).
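
If anyone wants to watch their own drives, smartmontools makes it easy (the
device name is an example; Load_Cycle_Count is SMART attribute 193 on these
WD drives):

  smartctl -A /dev/ada0 | grep -i load_cycle

Check it a day apart; if the raw value stays flat, the fix took.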

-- 
Freddie Cash
fjwcash@gmail.com


