Date:      Tue, 9 Feb 2010 08:49:25 -0800
From:      Freddie Cash <fjwcash@gmail.com>
To:        freebsd-stable@freebsd.org
Subject:   Re: one more load-cycle-count problem
Message-ID:  <b269bc571002090849p577a6d40je720ad506ec37a1@mail.gmail.com>
In-Reply-To: <201002091231.17551.doconnor@gsoft.com.au>
References:  <cf9b1ee01002080543m7a403a6ej1f25b88c47f18c68@mail.gmail.com> <201002091059.28625.doconnor@gsoft.com.au> <b269bc571002081635p7bac2de6j210f21af7bdf1810@mail.gmail.com> <201002091231.17551.doconnor@gsoft.com.au>

On Mon, Feb 8, 2010 at 6:01 PM, Daniel O'Connor <doconnor@gsoft.com.au> wrote:

> On Tue, 9 Feb 2010, Freddie Cash wrote:
> > I just did this to 8 of the 1.5 TB Caviar Green disks, without ZFS
> > complaining in any way.
> >
> > I did test it on a spare drive before doing it to the 7 live drives.
> > And I did replace them while the server was turned off, just to be
> > safe (and to prevent a resilver from occurring).
> >
> > wdidle3 doesn't actually disable the idle timeout on these drives.
> > Using /d just sets the timeout to 62 minutes.  Effectively the same,
> > but don't be surprised when it continues to say "idle3 available and
> > enabled".  :)
>
> /d sets it (for me) to 6300 milliseconds (6.3 seconds). I took this as a
> special value that disabled it entirely (no idea why they didn't use 0
> or 255..)
>
I've seen reports of the same on various hardware forums.  Not sure if
it's due to different firmware, or different drive models.

You should still be able to list the timeout value explicitly (instead of
using /d).  According to the help output, you can use either 25.5 seconds or
3000-something seconds as the max value (depends on the drive).
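For reference, the invocations usually look like this (run from a DOS
boot disk; the exact flags and limits vary by wdidle3 version and drive
firmware, so check the tool's own /? help output first):

    wdidle3 /r       report the current idle3 timer setting
    wdidle3 /s300    set the timer to an explicit value (here 300 seconds)
    wdidle3 /d       "disable" -- on some firmware this really just sets a
                     long timeout (62 min here, 6.3 s in Daniel's case)

Setting an explicit long value with /s avoids depending on whatever
special meaning /d has on a given firmware revision.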

-- 
Freddie Cash
fjwcash@gmail.com


