Date:      Sun, 7 Sep 1997 00:12:32 -0500 (EST)
From:      "John S. Dyson" <toor@dyson.iquest.net>
To:        michaelv@MindBender.serv.net (Michael L. VanLoon -- HeadCandy.com)
Cc:        sos@sos.freebsd.dk, mal@algonet.se, current@FreeBSD.ORG
Subject:   Re: lousy disk perf. under cpu load (was IDE vs SCSI)
Message-ID:  <199709070512.AAA00465@dyson.iquest.net>
In-Reply-To: <199709070407.VAA04801@MindBender.serv.net> from "Michael L. VanLoon -- HeadCandy.com" at "Sep 6, 97 09:07:02 pm"

Michael L. VanLoon -- HeadCandy.com said:
> 
> Why is it necessary to bring this up over and over again?
>
Because the world does change from time to time, and people sometimes
forget that.  SCSI is certainly great in large high-end systems, but
EIDE is no longer the joke that IDE was 4-5 years ago.

> 
> It's an Asus Triton-1 board (P55TP4N) with a Cyrix 6x86 P166+, 64MB
> RAM, running NetBSD-1.2.1.  "No load" means all the standard system
> processes are running, along with a few X apps, but nothing using any
> real CPU time.  Loadit was a simple program I wrote (appended at the
> bottom), which simply generated a constant load of one process, each.
>
Yep, I don't understand the fall-off that others have seen.  In fact,
we get more complaints about I/O not being counted properly in the
system load, and about tasks doing I/O appearing to have too high a
priority.  We have some mods that help the situation, but there are
tradeoffs.

> 
> I don't mean this in a condescending way, but I'd really like to see
> this same kind of test run against four ccd-striped EIDE drives,
> running in both PIO and DMA mode.  Anyone have four drives they
> could test it with?  I only have a couple, and one is committed
> elsewhere.
> 
> Here are the results from the tests I ran:
> 
> time dd if=/dev/rccd0f of=/dev/null bs=64k count=4096, no load:
> 
>     268435456 bytes transferred in 35 secs (7669584 bytes/sec)
>     0.026u 1.726s 0:35.27 4.9% 0+0k 6+1io 10pf+0w
> 
> time dd if=/dev/rccd0f of=/dev/null bs=64k count=4096, 1 loadit:
> 
>     268435456 bytes transferred in 35 secs (7669584 bytes/sec)
>     0.021u 1.727s 0:35.46 4.9% 0+0k 0+1io 0pf+0w
> 
> time dd if=/dev/rccd0f of=/dev/null bs=64k count=4096, 4 loadits:
> 
>     268435456 bytes transferred in 35 secs (7669584 bytes/sec)
>     0.021u 1.715s 0:35.17 4.9% 0+0k 0+1io 0pf+0w
> 
> 

Here are my dd results for my EIDE 4GB Caviar drive -- NO STRIPING, running
FreeBSD-current.  I wonder how it would do with a Promise EIDE controller
added, running one drive per EIDE interface?  I do have one of those Promise
controllers, and will probably add support for it in FreeBSD soon.  Maybe
I'll try ccd then :-).  Sorry that I didn't have time for a scientific
measurement, but I would be interested in running some packaged benchmarks.
(BTW, the command overhead for my 4GB Caviar is also about 80-100 usecs;
older Caviars get about 200-400 usecs, and my 2GB Hawk on an NCR controller
gets about 800 usecs.)  IDE isn't a toy any more, but it isn't for every
application either.
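For reference, the four-drive ccd stripe Michael asks about would be set up
roughly like this.  This is only a sketch -- the device names, partition
letters, and interleave are assumptions, not a tested configuration; see
ccdconfig(8) for the real details:

```shell
# Hypothetical four-drive ccd stripe, one drive per EIDE channel.
# wd0e..wd3e and the 64-sector interleave are assumptions.
ccdconfig ccd0 64 none /dev/wd0e /dev/wd1e /dev/wd2e /dev/wd3e

# Label the concatenated device, then read it sequentially through
# the raw device, the same way as the tests in this thread:
disklabel ccd0
dd if=/dev/rccd0f of=/dev/null bs=64k count=4096
```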

No load:

dd if=/dev/rwd1 of=/dev/null bs=64k count=1600
1600+0 records in
1600+0 records out
104857600 bytes transferred in 10.848874 secs (9665298 bytes/sec)

One loadit:
./loadit &
dd if=/dev/rwd1 of=/dev/null bs=64k count=1600
1600+0 records in
1600+0 records out
104857600 bytes transferred in 10.864632 secs (9651279 bytes/sec)

Two loadits (the first is still running):
./loadit &
dd if=/dev/rwd1 of=/dev/null bs=64k count=1600
1600+0 records in
1600+0 records out
104857600 bytes transferred in 10.841087 secs (9672240 bytes/sec)

Four loadits (two more, on top of the two still running):
./loadit &
./loadit &
dd if=/dev/rwd1 of=/dev/null bs=64k count=1600
1600+0 records in
1600+0 records out
104857600 bytes transferred in 10.835611 secs (9677129 bytes/sec)



