Date:      Sat, 10 Feb 2001 17:40:26 -0800
From:      "Ted Mittelstaedt" <tedm@toybox.placo.com>
To:        "Mike Meyer" <mwm@mired.org>
Cc:        <questions@freebsd.org>
Subject:   RE: Problems installing 4.x on large disks
Message-ID:  <002d01c093cb$9ba1f2e0$1401a8c0@tedm.placo.com>
In-Reply-To: <14981.53225.50061.220090@guru.mired.org>


> -----Original Message-----
> From: Mike Meyer [mailto:mwm@mired.org]
> Sent: Saturday, February 10, 2001 3:34 PM
> To: Ted Mittelstaedt
> Cc: questions@freebsd.org
> Subject: RE: Problems installing 4.x on large disks
>
>
> Yup. And the last price/performance study I saw for that came down in
> favor of a custom RAID box that had a bunch of UDMA drives in it and a
> SCSI port to talk to the system. UDMA controllers are cheap enough you
> can throw one in with the HDA and still beat SCSI prices.
>

Yes, I've seen more of those kinds of things, as well as the IDE RAID cards.
While they aren't yet as standard as a SCSI setup, who knows, we may see
them in wide use one day.

Certainly it's to the drive manufacturers' advantage to kill off one
specification in favor of another, so that eventually all disk drives are of
a single type.

> Of course, you're not talking about a typical Unix workstation at that
> point, either.
>
> > > Of course, SCSI *is* a much better protocol. Two SCSI disks on one
> > > controller take up one IRQ, and will perform much better than two IDE
> > > disks on one controller, and slightly better than two IDE disks on two
> > > controllers - which takes up two IRQs.
> > > And it's more than just "a few extra gig". 50G scsi drives are around
> > > $500. 60G UDMA drives are under $200. You're paying 2x to 4x more per
> > > gigabyte for SCSI than IDE - and you only get extra performance in
> > > multi-drive systems.
> > And boy the quality of those $200 drives is right up there with the
> > $500 ones - NOT!
>
> Well, I haven't checked those. The SCSI drives were Seagates; the UDMA
> drives were Maxtors, and those happen to be the drives I did my
> testing on. The Maxtor was slightly faster than the Seagate once you
> got the UDMA stuff turned on.
>
> > The wide price differences of SCSI is only present in the largest
> > drives.  If you really want 50GB your speediest performance is to
> > take 3 20GB SCSI disks and stripe them.
>
> Yeah - obsolete drives tend to bottom out in price. The last time I
> bought a SCSI drive (about a year ago), the 10GB SCSI drives cost
> slightly more than 2x what the 10GB IDE drives did. That's no longer
> true, of course. On the other hand, to get the best performance, you
> don't buy obsolete drives.
>

It depends on what you define as obsolete.  Sure, 500MB is obsolete!  But
9GB vs. 20GB?  There are plenty of 9GB models that were manufactured to the
same specs as 20GB drives; the only difference was capacity.
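
As an aside on the "stripe three drives" idea quoted above: on FreeBSD 4.x
the stock way to do that is the ccd(4) concatenated-disk driver.  Here's a
minimal /etc/ccd.conf sketch (the da* device names and the 64-sector
interleave are illustrative assumptions, not figures from this thread):

```
# ccd	ileave	flags	component devices
ccd0	64	none	/dev/da0s1e /dev/da1s1e /dev/da2s1e
```

`ccdconfig -C` assembles the array from that file; after that you'd
disklabel and newfs the ccd0 device as usual.  vinum(8) is the other
period option if you want fancier software RAID.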

>
> Well, if I spent more on the controller than I did on the drives, I'd
> certainly expect it to contribute something to the performance. On the
> other hand, the same $100 spent on another hundred or so megabytes of
> RAM will mean most workstation users don't page, which is liable to
> make even more difference.
>

I'm going to assume that, with workstations and RAM as cheap as they are,
anyone building a UNIX system today from new hardware is going to get
enough RAM to keep it from swapping.

> > > Bottom line: if you only have one drive, the extra cost of a SCSI
> > > drive would be better put into more RAM. For low-end servers, I buy
> > say rather "if the drive has to be as large as possible" and I'll agree
> > with you.
>
> Nope. My tests were on 10GB IDE & SCSI drives, and the IDE drive
> delivered better throughput with less CPU load at a lower price. If
> you say rather "if cost is no object", I'll agree with you.
>

What all this boils down to is what you are going to spend.  Starting out
with a budget of $5.00, you can get some nice 486/33s, which will give you
one level of performance.  As you raise the dollar amount you can get
better and better hardware.

Now, if you can only spend, say, $500 on a new system that has IDE, and for
an extra $300 you can make it SCSI, then you probably are going to be
making those IDE/SCSI/RAM tradeoffs.  But if you're going to be spending
$1500 on a system, in my opinion you have gotten to the point where an
extra $300 for the reliability and performance of SCSI is nothing.  Keep
going up the price scale, to say $5K, and the cost difference becomes
negligible.
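
The back-of-the-envelope arithmetic behind the prices in this thread looks
like this (all dollar figures are the ones quoted in the discussion, circa
early 2001, not current data):

```python
# Per-gigabyte cost from the drive prices quoted earlier in the thread.
scsi_per_gb = 500 / 50        # $500 for a 50GB SCSI drive -> $10.00/GB
ide_per_gb = 200 / 60         # $200 for a 60GB UDMA drive -> ~$3.33/GB
ratio = scsi_per_gb / ide_per_gb
print(f"SCSI: ${scsi_per_gb:.2f}/GB, IDE: ${ide_per_gb:.2f}/GB, "
      f"ratio {ratio:.1f}x")  # ~3x, inside the "2x to 4x" claim

# The $300 SCSI premium as a share of total system cost.
for system_cost in (500, 1500, 5000):
    share = 300 / system_cost
    print(f"${system_cost} system: SCSI premium is {share:.0%} of the price")
```

At $500 the premium is 60% of the system; at $5K it's 6%, which is why the
difference stops mattering as the budget goes up.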

But if I were only given $500 to put together a low-cost server, I wouldn't
go out and buy brand-new hardware with IDE and a cheap motherboard and
case.  I'd buy an older server-class box from eBay or a used-equipment
dealer.  Many of them come with RAID arrays, multiple CPU slots, and a lot
of other assorted goodies, and their reliability is going to be immensely
better than a clone box slapped together from components designed for the
home user.  I'll always trade off performance for reliability in a
tight-cost situation, and let me point out that even an older Pentium Pro
200 with a RAID array of slower disk drives is perfectly able to totally
saturate Ethernet if you're just using it as a fileserver.  There's a point
in server work where more performance on the server doesn't buy you
anything, depending, of course, on what you're doing on the server.
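
The Ethernet-saturation claim is easy to sanity-check.  A quick sketch, with
an assumed ~5 MB/s sustained rate for a slow drive of that era (a rough
period figure, not a measurement from this thread):

```python
# Can an old server with slow striped disks saturate Ethernet?
ethernet_mbit = {"10BASE-T": 10, "100BASE-TX": 100}
slow_drive_mb_s = 5          # assumed sustained MB/s for a late-90s drive
stripe_width = 3             # three such drives striped together

array_mb_s = slow_drive_mb_s * stripe_width
for name, mbit in ethernet_mbit.items():
    wire_mb_s = mbit / 8     # Mbit/s -> MB/s, ignoring protocol overhead
    print(f"{name}: wire {wire_mb_s:.2f} MB/s vs. array {array_mb_s} MB/s, "
          f"network-bound: {array_mb_s > wire_mb_s}")
```

Even generously ignoring protocol overhead, 10 Mbit Ethernet tops out at
1.25 MB/s and 100 Mbit at 12.5 MB/s, so the slow array is still the faster
side of the pipe; the network is the bottleneck in both cases.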

> Again, if you don't believe my numbers, run your own tests and tell us
> about it.
>

There's that saying about lies, damn lies, and benchmarks.  If you're
seeing better performance from a 10GB IDE disk than a 10GB SCSI disk, for
that statement to have any validity you really need to post your setup,
with model numbers, adapter cards and all that.  Do I believe that you saw
that difference?  Yes, because it's certainly possible to put a SCSI disk
with a slower seek time against an IDE disk and get better numbers from the
IDE.  It also depends a lot on the test, too: what kind of data was being
written back and forth and all that.

I'm a SCSI bigot because in all the servers I've worked on, I can count the
failures of SCSI disks that weren't full-height on the fingers of one hand,
while I've lost count of the IDE disks I've sent back for replacement.  In
all the workstations I've worked on, the SCSI ones were always faster than
IDE.  And I've never had as many incompatibility problems with SCSI systems
as with IDE.  Now, maybe I need to come into the 21st century, but I'll
always choose SCSI over IDE if I have a choice.

> 	<mike
> --
> Mike Meyer <mwm@mired.org>			http://www.mired.org/home/mwm/
> Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.






