Date:      Tue, 4 Jan 2000 09:27:15 -0800 (PST)
From:      Matthew Jacob <mjacob@feral.com>
To:        Mitch Collinsworth <mkc@Graphics.Cornell.EDU>
Cc:        hardware@FreeBSD.ORG
Subject:   Re: differences between SCSI and EIDE [was: wanna buy an EIDE harddisk ... 5400 or 7200 for home use (noise)]
Message-ID:  <Pine.BSF.4.10.10001040917470.4553-100000@beppo.feral.com>
In-Reply-To: <200001041610.LAA15549@benge.graphics.cornell.edu>

On Tue, 4 Jan 2000, Mitch Collinsworth wrote:

> 
> >anyway, thank you all for responding and shedding light on my
> >confusion. i'd always thought that scsi was the better way to
> >go, either for the 'commercial' environment or the ever more
> >demanding 'home' environment.
> 
> Well, this is actually an interesting question.  My salesman says the
> HDAs are the same in SCSI and EIDE drives, so reliability-wise there
> should be no difference.

Yes, but unless we have tools that turn on bad block sparing, or
otherwise do defect management, there's a massive difference from a
manageability point of view.
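
To make that concrete: at the SCSI level, "turning on bad block sparing"
means setting the AWRE/ARRE bits in the read-write error recovery mode
page (page 0x01), read with MODE SENSE and written back with MODE SELECT.
A minimal sketch of the bit-twiddling (not code from any of our drivers,
just the byte layout from the spec):

    #include <stdint.h>

    #define RW_ERR_RECOVERY_PAGE  0x01  /* read-write error recovery mode page */
    #define AWRE                  0x80  /* automatic write reallocation enabled */
    #define ARRE                  0x40  /* automatic read reallocation enabled */

    /*
     * 'page' points at the mode page data returned by MODE SENSE:
     * byte 0 is the page code, byte 1 the page length, byte 2 the
     * error recovery flags.  Set the bits, hand the page back to the
     * drive with MODE SELECT, and it spares bad blocks on its own as
     * it finds them.
     */
    static void
    enable_auto_reallocation(uint8_t *page)
    {
            page[2] |= (AWRE | ARRE);
    }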

> like to better understand what is being said here.  Do SCSI device
> drivers typically initiate multiple commands from separate processes to
> the drive without waiting for the previous command to complete?  In other

Absolutely. The limitation here is probably the filesystem and load
mix you're using; under heavy multiuser load I've seen all 256 tags used
up.
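
You can see the effect from userland, too. Here's a rough illustration
(my sketch, nothing from the tree; assumes a raw disk you can read at
/dev/da0 and a kernel with the aio syscalls compiled in, i.e. options
VFS_AIO) that fires off 16 overlapping reads with POSIX AIO. On a tagged
SCSI disk they can all be outstanding at once; a plain EIDE drive has to
take them one command at a time:

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define NREQ   16
    #define BLKSZ  65536

    int
    main(void)
    {
            static char bufs[NREQ][BLKSZ];
            struct aiocb cbs[NREQ];
            const struct aiocb *list[NREQ];
            int fd, i;

            fd = open("/dev/da0", O_RDONLY);        /* hypothetical device node */
            if (fd < 0) {
                    perror("open");
                    exit(1);
            }

            /* queue all NREQ reads before waiting on any of them */
            for (i = 0; i < NREQ; i++) {
                    memset(&cbs[i], 0, sizeof (cbs[i]));
                    cbs[i].aio_fildes = fd;
                    cbs[i].aio_buf = bufs[i];
                    cbs[i].aio_nbytes = BLKSZ;
                    /* scatter the offsets so the drive has something to reorder */
                    cbs[i].aio_offset = (off_t)i * 1024 * 1024;
                    if (aio_read(&cbs[i]) != 0) {
                            perror("aio_read");
                            exit(1);
                    }
                    list[i] = &cbs[i];
            }

            /* now reap them; they were all in flight at once */
            for (i = 0; i < NREQ; i++) {
                    while (aio_error(&cbs[i]) == EINPROGRESS)
                            aio_suspend(list, NREQ, NULL);
                    if (aio_return(&cbs[i]) == -1)
                            perror("aio_return");
            }
            close(fd);
            return (0);
    }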


> words the drive logic has its own queue management?  And EIDE drives
> require their device drivers to perform all queue management and only
> initiate a command after the previous one has completed?
> 
> Is the bottom line result of this that the SCSI drive has a much greater
> chance of servicing multiple processes during a single media revolution
> while the EIDE will frequently take multiple revolutions to service the
> same queue?

On the other hand, the newer, bigger drives are becoming able to consume
most of the available bus bandwidth. If the numbers I've seen recently,
with drives able to do ~24MB/s off the platter, are indicative of things
to come, then another reason for using SCSI (shared, interleaved use of
an I/O bus) is going away, because the limit is moving from the primary
PCI bus to the secondary I/O bus. And if you can fit 4 drives of ~20MB/s
or better into a system (consuming most of the usable PCI bus bandwidth
while you're at it) at a fraction of the cost of an Ultra2 LVD bus
(which maxes out at 80MB/s), then indeed, why bother with SCSI?
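
The back-of-the-envelope numbers I'm working from (my arithmetic, not
vendor spec sheets):

    #include <stdio.h>

    int
    main(void)
    {
            double pci_peak = 33.0 * 4;  /* 33MHz x 4 bytes wide: 32-bit PCI peak */
            double eide_agg = 4 * 20.0;  /* four ~20MB/s drives, one per channel  */
            double ultra2   = 80.0;      /* one Ultra2 LVD bus, shared by all drives */

            printf("PCI peak:        %3.0f MB/s\n", pci_peak);
            printf("4 EIDE drives:   %3.0f MB/s aggregate\n", eide_agg);
            printf("Ultra2 LVD bus:  %3.0f MB/s ceiling\n", ultra2);
            return (0);
    }

Four fast drives alone fill an entire Ultra2 bus, while the same four on
independent EIDE channels still fit under the PCI ceiling.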

This scenario changes slightly with Fibre Channel: the command
processing overhead that you can't get away from in parallel SCSI goes
away, as does the tag limit, so you can run a higher command load per
spindle. But FC is definitely very expensive and fragile (from a
programming point of view). It'll be interesting to see what Ultra3
brings to the party in all of this.

At any rate, as long as most systems are single 33MHz PCI bus systems,
I'm rather annoyed to find that EIDE has snuck in and gotten good enough
to be more than just your dopey local root disk.

-matt