From owner-freebsd-hardware Sat Jul 26 05:15:46 1997
Return-Path:
Received: (from root@localhost) by hub.freebsd.org (8.8.5/8.8.5) id FAA28002 for hardware-outgoing; Sat, 26 Jul 1997 05:15:46 -0700 (PDT)
Received: from shadows.aeon.net (bsdhw@shadows.aeon.net [194.100.41.1]) by hub.freebsd.org (8.8.5/8.8.5) with ESMTP id FAA27997 for ; Sat, 26 Jul 1997 05:15:42 -0700 (PDT)
Received: (from bsdhw@localhost) by shadows.aeon.net (8.8.6/8.8.3) id PAA02085 for hardware@freebsd.org; Sat, 26 Jul 1997 15:15:14 +0300 (EET DST)
From: mika ruohotie
Message-Id: <199707261215.PAA02085@shadows.aeon.net>
Subject: Re: building RAID systems
In-Reply-To: <199707231558.BAA09859@genesis.atrad.adelaide.edu.au> from Michael Smith at "Jul 24, 97 01:28:43 am"
To: hardware@freebsd.org
Date: Sat, 26 Jul 1997 15:15:14 +0300 (EET DST)
X-Mailer: ELM [version 2.4ME+ PL31 (25)]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-hardware@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

> > I wish. SCSI is hopeless these days. About 3-5x the price
> > of IDE.
> Actually, you should go back and look at what SCSI drive prices are
> doing at the moment. I can get a 5400 RPM 4GB ultra scsi IBM disk for

right... just a thought here though... i worked at a pc hardware distributor for 19 months, just switched to another place last week...

the reason current eide drives cost what they do is rather simple: the manufacturer no longer cares at all whether the drive works when it ships, nor whether the drives last. they save money by letting the _customers_ do the reliability testing. sure, giving out a 3-year warranty keeps the customer happy, right up until the drive suddenly blows up and data is lost. many drives are DOA (dead on arrival) too, far too many in some shipments.
(quality differences are "noticeable")

modern eide drives seem to have about a 10-20% failure rate. the worst shipment i've seen had an 80% failure rate; that is, 80% of the drives blew up within one year (seagate medalist 1080), almost all of them in the first few months... in fact, i doubt that _any_ modern eide drive lasts to the end of the typical 3-year warranty period. my always-on machines seem to wear those things out in about 12-18 months, 18 if i cool them well.

i'd _hate_ to see the same happen with scsi drives, but it seems to be what the customers want: cheap drives, no reliability, who cares if it breaks as long as the warranty is long...

(oh, in the servers at work i blew up far too many _cheap_ scsi drives too, in the 6 months i was running them)

actually, it seems that some drives come cheap and some cost more, and i personally would buy the slightly more expensive ones, assuming those are still tested to be reliable. (for example, the seagate barracuda) at least i'm under the impression that those quality ($$$) scsi drives still have a failure rate much lower than 20%... (from what i've seen/heard)

sure, if the data is well backed up and a blow-up doesn't cause too much hassle, cheap (and unreliable) scsi drives are a good solution. to me, they're not; i don't want to replace every 5th drive every 6 months.

oh, and this was about raid... if one needs a serious raid solution, i'd think about using, say, 2-3 3-channel cards with 64 megs of cache on each card, and connecting each channel with a UW scsi-scsi link to a standalone raid box, each box filled with appropriately sized drives, 4-5 in each. of course, i'd expect to max out the PCI bus with that setup.

oh yeah, of course, ccd on top of that.

mickey
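(a rough sketch of that last step, striping the hardware raid boxes together with ccd on a freebsd box. the device names, partition letters, and interleave here are just assumptions for illustration, not a tested recipe -- use whatever your controllers actually expose:)

```shell
# back-of-envelope bandwidth check for the setup above:
#   3 cards x 3 channels x 40 MB/s (ultra-wide scsi) = 360 MB/s aggregate,
#   vs roughly 132 MB/s peak for a 32-bit/33 MHz PCI bus,
# so the PCI bus does saturate long before the disks do.

# stripe the LUNs presented by the raid boxes into one ccd volume.
# 128-sector interleave is a common starting point; tune for your workload.
ccdconfig ccd0 128 none /dev/sd0e /dev/sd1e /dev/sd2e /dev/sd3e /dev/sd4e /dev/sd5e

# then make a filesystem on the concatenated device and mount it
newfs /dev/rccd0c
mount /dev/ccd0c /raid
```

(ccdconfig takes the ccd device, the interleave in sectors, a flags word -- "none" here, since the mirroring is already done in the hardware boxes -- and the component partitions.)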