Date:      Tue, 27 Oct 1998 23:00:28 -0600 (CST)
From:      Joe Greco <jgreco@solaria.sol.net>
To:        david@sparks.net
Cc:        hackers@FreeBSD.ORG, mlnn4@oaks.com.au
Subject:   Re: Multi-terabyte disk farm
Message-ID:  <199810280500.XAA10438@aurora.sol.net>
In-Reply-To: <Pine.BSI.3.96.981027231302.23783B-100000@sparks.net> from "david@sparks.net" at "Oct 27, 98 11:23:17 pm"

> On Thu, 22 Oct 1998, Joe Greco wrote:
> 
> > If you do the raid5 thing, I would recommend clustering drives in a four-
> > plus-one pattern that gives you three filesystems of 72 or 168GB each per
> > SCSI chain.  This effectively limits the amount of data you need to fsck, 
> > and the amount of potential loss in case of catastrophic failure.  It
> > does drive the price up a bit.
> 
> Why four + 1?  Maybe performance isn't an issue, but don't you stand a
> real chance of lots of superblocks being on a single drive?
> 
> 7 + 1 would be a nice number with most rack mount chassis which hold 8
> drives.

An F/W SCSI bus handles 15 devices.  7+1 means that you could fit only
1 and 7/8 RAID filesystems on a single chain (15/8).  4+1 means that
you could fit 3 RAID filesystems on a single chain (15/5).  It is
usually easier for humans to work with simple patterns like that.

A 4+1 RAID5 of 18GB drives gets you a 72GB filesystem that will already
take a long time to fsck.  Going to 7+1 adds almost double the capacity
(126GB), and since fsck time grows roughly with filesystem size, it
just about doubles the fsck time as well.
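
For concreteness, here is a quick back-of-the-envelope sketch of that
arithmetic (my own illustration, in Python, assuming 18GB drives and
the 15-device bus limit above):

    # RAID5 group arithmetic for one Fast/Wide SCSI chain, assuming
    # 18GB drives and the 15-device-per-bus limit mentioned above.
    BUS_DEVICES = 15            # addressable devices on one F/W bus
    DRIVE_GB = 18               # assumed drive size

    for data, parity in ((4, 1), (7, 1), (14, 1)):
        group = data + parity
        print("%2d+%d: %.3f groups/bus, %3dGB filesystem"
              % (data, parity, float(BUS_DEVICES) / group,
                 data * DRIVE_GB))

    #  4+1: 3.000 groups/bus,  72GB filesystem
    #  7+1: 1.875 groups/bus, 126GB filesystem
    # 14+1: 1.000 groups/bus, 252GB filesystem
    # fsck time grows roughly with filesystem size, so the 126GB
    # filesystem takes about 1.75x as long to check as the 72GB one.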

Writing to a RAID5 set involves a performance penalty, since every
drive in the set has to participate in the write.  Involving only five
drives instead of eight means that the other three can be doing
something else.

If speed is not an issue and horrible write performance is acceptable,
the other sensible option would be to do 14+1 RAID5.  There's really
very little reason for any middle ground.

The aesthetics of filling an eight-wide rack mount chassis are of no
concern to me; that is simply a trivial exercise in cabling.  Those
units are always a problem anyway, since the only way you can put 8
drives in them without a cabling issue is to run an F/W SCSI bus to
each chassis and forgo the other 7 devices that the bus would allow.
Since we were explicitly discussing an _inexpensive_ storage solution,
it stands to reason that you would want to maximize both the number of
SCSI busses and the number of SCSI devices per bus, to minimize the
number of disk fileservers you need.
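
As a rough illustration of how that drives the fileserver count (the
four-busses-per-server figure is a hypothetical assumption of mine,
not something from this thread):

    import math

    DRIVE_GB = 18
    BUSSES_PER_SERVER = 4      # hypothetical host configuration

    def servers_needed(target_tb, data_drives_per_group, groups_per_bus):
        usable_gb_per_bus = groups_per_bus * data_drives_per_group * DRIVE_GB
        busses = math.ceil(target_tb * 1000.0 / usable_gb_per_bus)
        return int(math.ceil(busses / float(BUSSES_PER_SERVER)))

    # 4+1 groups, three per bus: 216GB/bus -> 38 busses -> 10 servers
    print(servers_needed(8, 4, 3))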

Does this adequately answer your question?

> Something I did when hooking up something like this at a previous job was
> a four + 1 setup (the mylex raid controllers had 5 channels, I didn't have
> any choice:) where each of the five channels was an independent channel on
> an external RAID controller.  Each channel went to a separate rack mount
> chassis, so even if I lost a chassis cable/power supply the thing was
> still running OK.
> 
> In that installation performance was an issue, so I hooked 20 drives each
> up to two raid controllers on two separate wide scsi busses.  However,
> there's no reason why 40 drives couldn't be connected to a single raid
> controller (2,500-4,000 or so), for a total of 576 effective GB.  With CCD
> they could even be configured as a single disk drive.  Or maybe not, any
> ufs experts around?
> 
> > You may not want to start out using vinum, which is relatively new and
> > untested.
> 
> I love watching the development, but it's a little new for me to bet *my*
> job on it:)
> 
> Besides, external RAID is easily cost justified in a project like this.

You clearly didn't read the design requirements.  To be cost-competitive
with a _tape_array_, it is necessary to go for the least expensive storage
option.  Even the RAID thing is questionable, because it will add to the
overall cost.

Getting 4-8 terabytes of storage out of hard drives at this point, with
18GB drives running about $1000 apiece, would cost $222K-$444K _just_
for the hard drives, without any RAID, any servers, any anything else.
They can get an 8TB tape robot for $100K.  The only way to beat that is
to bank on the fact that hard drive prices are dropping rapidly (and
maybe on the hope that they did not factor in the cost of tapes).
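
Spelling that out (same assumed drive size and price as above):

    DRIVE_GB, DRIVE_COST = 18, 1000     # dollars per drive, as above

    for target_tb in (4, 8):
        drives = target_tb * 1000.0 / DRIVE_GB
        print("%dTB: ~%.0f drives, ~$%.0fK"
              % (target_tb, drives, drives * DRIVE_COST / 1000.0))

    # 4TB: ~222 drives, ~$222K
    # 8TB: ~444 drives, ~$444K
    # versus the $100K quoted for an 8TB tape robot.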

Besides, if they were willing to settle for tape, and are able to restore
crashed drives from backup tapes, it is quite likely that external RAID is
NOT justifiable, particularly on a cost basis.

... Joe

-------------------------------------------------------------------------------
Joe Greco - Systems Administrator			      jgreco@ns.sol.net
Solaria Public Access UNIX - Milwaukee, WI			   414/342-4847



