Date:      Wed, 14 May 2003 11:53:08 +0930
From:      Greg 'groggy' Lehey <grog@FreeBSD.org>
To:        "Marc G. Fournier" <scrappy@hub.org>
Cc:        freebsd-scsi@freebsd.org
Subject:   Re: RAID5 capacities / usable drive space ...
Message-ID:  <20030514022308.GA70087@wantadilla.lemis.com>
In-Reply-To: <20030513220947.L3557@hub.org>
References:  <20030509222154.N728@hub.org> <20030514005737.GA68496@wantadilla.lemis.com> <20030513220947.L3557@hub.org>

On Tuesday, 13 May 2003 at 22:13:45 -0300, Marc G. Fournier wrote:
> On Wed, 14 May 2003, Greg 'groggy' Lehey wrote:
>
>> On Friday,  9 May 2003 at 22:25:51 -0300, The Hermit Hacker wrote:
>>>
>>> Someone is telling me something I'd never heard before, and I find
>>> it difficult to believe ...
>>>
>>> Apparently, he is under the impression that although a file system
>>> shows a capacity of, say, 100G, its usable space is around 50% of
>>> that ... anything higher than that and you risk problems ...
>>> (significantly reduced MTBF of the drives, degraded performance,
>>> etc.) ...
>>>
>>> His opinion seems to be based on some talks he had with people at
>>> IBM and Seagate way back in '89, but he still seems to feel they
>>> are applicable today ...
>>>
>>> Is there any fact behind his opinion?
>>
>> It's difficult to say, since he hasn't given his reasons.
>>
>> I can think of a couple of possibilities.  One is, of course, that
>> RAID-5 always has overhead for parity; the other is the fact that
>> file system performance deteriorates as the file system fills up
>> (thus the 10% minfree reserve in UFS).  Neither sounds like a good
>> reason, though (a worked example follows this quote).  MTBF depends
>> on drive activity, not on what kind of data (allocated or
>> unallocated) is on the drives.
>
> 'K ... I'm going to be setting up a server to test my knowledge here,
> but I've had someone tell me: "the fact that you need a minimum of
> three drives in RAID-5 means a three-drive RAID-5 configuration is
> not hot-swappable, nor will it boot with fewer than three working
> drives."
> ....
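
On the parity-overhead point in the quote above: a quick
back-of-the-envelope sketch, with made-up drive counts and sizes
(nothing here comes from Marc's actual setup):

    # RAID-5 usable capacity: one drive's worth of space goes to
    # parity, no matter how many drives are in the array.
    def raid5_usable_gb(n_drives, drive_gb):
        if n_drives < 3:
            raise ValueError("RAID-5 needs at least three drives")
        return (n_drives - 1) * drive_gb

    # Three 100 GB drives: 200 GB usable, 33% overhead.
    # Six 100 GB drives:   500 GB usable, 17% overhead.
    for n in (3, 6):
        print(n, "drives:", raid5_usable_gb(n, 100), "GB usable,",
              round(100 / n), "% parity overhead")

Nothing in that arithmetic halves the usable space; a 50% figure
sounds like mirroring (RAID-1), not RAID-5.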

Hmm.  You know some interesting someones.  Yes, it doesn't make sense
to have a RAID-5 volume with fewer than three drives, but in degraded
mode it will run with two.  Or it should, barring implementation
constraints.  And in principle you could hot-swap them.
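
A minimal sketch of why two of three drives are enough: RAID-5 parity
is a plain XOR across the stripe, so a missing block can be recomputed
from the survivors.  This illustrates the idea only, not the internals
of any particular implementation:

    # RAID-5 parity is the XOR of the data blocks in a stripe.
    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    data0 = b"block on drive 0"
    data1 = b"block on drive 1"
    parity = xor_blocks(data0, data1)   # stored on drive 2

    # Drive 1 dies: its block is recoverable from drive 0 and parity.
    assert xor_blocks(data0, parity) == data1

Every read of the dead drive's data costs a read of all surviving
members, which is why degraded mode is slow but still functional.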

> My understanding was that if I had three drives in a RAID-5
> configuration, and one died, the file system would still function
> with the two remaining drives ...

Yes, that's the intention.
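
For the rebuild after a hot swap, the same XOR generalizes to any
drive count: each of the failed drive's blocks is recomputed from all
surviving members and written to the replacement.  A toy sketch (four
drives, parity pinned to the last column for readability; real RAID-5
rotates parity across the drives):

    from functools import reduce

    def xor_blocks(*blocks):
        # XOR byte-by-byte across any number of equal-sized blocks.
        return bytes(reduce(lambda x, y: x ^ y, col)
                     for col in zip(*blocks))

    # Two stripes of three data blocks each, plus computed parity.
    stripes = [[b"aaaa", b"bbbb", b"cccc"],
               [b"dddd", b"eeee", b"ffff"]]
    array = [row + [xor_blocks(*row)] for row in stripes]

    failed = 1  # hot-pull drive 1
    replacement = [
        xor_blocks(*(blk for i, blk in enumerate(row) if i != failed))
        for row in array]
    assert replacement == [row[failed] for row in array]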

Greg
--
See complete headers for address and phone numbers
