Date:      Fri, 12 Nov 1999 00:17:56 -0500
From:      Simon Shapiro <shimon@simon-shapiro.org>
To:        "Kenneth D. Merry" <ken@kdm.org>
Cc:        Randell Jesup <rjesup@wgate.com>, freebsd-arch@freebsd.org
Subject:   Re: I/O Evaluation Questions (Long but interesting!)
Message-ID:  <382BA304.EE2F0D66@simon-shapiro.org>
References:  <199911120444.VAA32051@panzer.kdm.org>

"Kenneth D. Merry" wrote:
> 
> Simon Shapiro wrote...
> > "Kenneth D. Merry" wrote:
> > >
> > > Simon Shapiro wrote...
> > > > "Kenneth D. Merry" wrote:
> > > > >
> > > > > [ Simon:  the "charset = " (i.e. nothing) line in your mail makes my
> > > > > mailer barf.  You may want to adjust your character set. ]
> > > >   [  Am using Netscape Messenger.  Know not how to do that
> > > >      (no relevant preference found :-( ]
> > >
> > > My best guess is, go to:
> > >
> > > Edit -> Preferences -> Navigator -> Languages
> > >
> > > And make sure you at least have English defined there.  Also, go to:
> > >
> > > View -> Character Set
> > >
> > > And make sure you've got Western (ISO-8859-1) defined.
> >
> > How's that?  (sorry for the spam...)
> 
> Much better, thanks.
> 
> > > > > How can you get speeds like that with just a 32-bit PCI bus?  The specs for
> > > > > the PowerEdge 1300 say it has 5 32-bit PCI slots:
> > > >
> > > > These numbers are for block devices.  The kernel obviously
> > > > caches some of this.  I should look next time at memory usage;
> > > > The machine has 1GB of memory. The dataset is about 15GB per
> > > > array.
> > >
> > > Is that for random or sequential I/O?  With sequential I/O, you would
> > > probably blow away any caching effects.  With random I/O, though, you might
> > > get significant help from the cache, especially with that much RAM.
> >
> > Random, of course.
> 
> Okay, that fits the results.
> 
> > To stay architecturally minded, please consider these thoughts:
> >
> > Increasing the worker load in this test increases measured
> > throughput (which is to be expected).  However, past about
> > 400 concurrent workers, performance declines rapidly.
> > At about 600 the system simply goes nuts.  Processes exit
> > or hang solidly without any warning.
> > There must be some resource limits to be increased.  How is the
> > ftp.cdrom.com kernel configured?  That may help me.
> 
> wcarchive's configuration might help somewhat (whatever it is), but it is
> operating with a very different load than the one you're using.  It has
> ~5000 users, and pushes out I think somewhere in the neighborhood of
> 100-150Mbits/sec of data.  (DG would know for sure.)  And it's almost all
> reads.
> 
> You're pushing 130-170Mbytes/sec of data, which is about 8 times more, with
> a fraction of the processes.
> 
> You may be running into context switch overhead, or who knows what else.
> The hangs, though, are not good.
> 
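
(To be concrete about the sort of resource knobs I mean: something like
the following 3.x-style kernel config fragment.  The values are guesses
on my part, not anyone's actual settings.)

	# Illustrative resource bumps only -- not wcarchive's real config.
	maxusers	512			# scales maxproc, open files, buffers
	options		NMBCLUSTERS=8192	# more network mbuf clusters
	options		"MAXMEM=(1024*1024)"	# use the full 1GB (value in KB)
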
> > > > Raw disk performance is totally throttled by physics;
> > > > We are running at about 200% of Seagate specs.
> > >
> > > How can you run at 200% of the spec?  Most of the time disk manufacturers
> > > are even a little optimistic about their high end performance.
> >
> > I suspect caching on the disk.  I also know the DPT
> > firmware, while claiming not to do READ caching, does some
> > very interesting things with sorting, queuing, tagging, etc.
> > That probably accounts for the difference, more or less.
> >
> > BTW, I am not looking at claimed benchmarks from the mfgs.
> > I am looking at what tends to be accurately reported:
> > seek times, internal transfer rates, data sheet timing
> > specs, etc.
> 
> I've found that the transfer rates are sometimes accurate.  For instance,
> I've got an IBM Ultrastar 9ZX, which IBM claims can do 10-17MB/sec:
> 
> http://www.storage.ibm.com/techsup/hddtech/fedspd.pdf
> 
> That's about right, from what I've seen.  The low end may even be a little
> lower than the actual performance.
> 
> Another thing you can do is benchmark one disk, and then compare that with
> the throughput you get from the array.

Done that.  A single disk is exactly on the nose with the specs for
sequential I/O, and at the 200% (or less) I quoted for random I/O.
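
(For scale: a late-90s 7200rpm drive with roughly 8ms average seek and
4ms rotational latency should top out around 80 random ops/sec per
spindle, so per-disk numbers much beyond that point to caching or queue
sorting.  The single-disk check used a test program along these lines;
this is a sketch, not the exact harness, and the device name and sizes
are placeholders:)

	/*
	 * Minimal random-read timing sketch (illustrative only; the real
	 * harness differs).  Reads NREADS blocks of BLKSZ bytes at random
	 * offsets from a device and reports the aggregate transfer rate.
	 */
	#include <sys/types.h>
	#include <sys/time.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define	BLKSZ	(64 * 1024)			/* bytes per read */
	#define	NREADS	1000				/* random reads to issue */
	#define	DEVSIZE	(15ULL * 1024 * 1024 * 1024)	/* ~15GB dataset */

	int
	main(void)
	{
		static char buf[BLKSZ];
		struct timeval t0, t1;
		double secs;
		off_t off;
		int fd, i;

		/* "/dev/rda0" is a placeholder; name the device under test. */
		if ((fd = open("/dev/rda0", O_RDONLY)) < 0) {
			perror("open");
			return (1);
		}
		gettimeofday(&t0, NULL);
		for (i = 0; i < NREADS; i++) {
			/* Pick a block-aligned offset within the dataset. */
			off = (off_t)(random() % (DEVSIZE / BLKSZ)) * BLKSZ;
			if (lseek(fd, off, SEEK_SET) == -1 ||
			    read(fd, buf, BLKSZ) != BLKSZ) {
				perror("read");
				return (1);
			}
		}
		gettimeofday(&t1, NULL);
		secs = (t1.tv_sec - t0.tv_sec) +
		    (t1.tv_usec - t0.tv_usec) / 1e6;
		printf("%d reads, %.1f KB/sec\n",
		    NREADS, NREADS * (BLKSZ / 1024.0) / secs);
		close(fd);
		return (0);
	}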

> It could be that the combination of the DPT controller's 256MB cache and
> fancy queueing, and your 1GB of RAM is causing the amazingly fast disk speeds.

These DPTs seem to be optimal for RAID-5, very good at RAID-0,
and nothing exciting for single disks.  I have some FC-AL
gear on order.

What worries me is not the performance, but the corruption
of the stack that I see.

For example, I can run the same 400 processes against the
raw device all day and all night without a hitch.
Run them against a block device and something bizarre
happens: a filesystem gets corrupted, the Adaptec driver
times out, tsleep segfaults, something.  At times I can
catch the error in the driver, but then it makes no sense
either.  There are tons of self-checks and state
verifications in the code.  None trip, or when they do,
they are as illogical as the null pointer inside tsleep.
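
(To be concrete about the kind of self-check I mean, here is a sketch
with made-up names; it is not the actual driver code:)

	/*
	 * Illustrative only: hypothetical softc and function names.  The
	 * idea is to verify the per-controller state's invariants before
	 * sleeping, so corruption trips an assertion at the point of
	 * damage instead of surfacing later as a null pointer in tsleep().
	 */
	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/kernel.h>

	#define	SC_MAGIC	0x44505421	/* "DPT!" */

	struct sc_softc {
		u_int32_t	magic;		/* set at attach time */
		int		queued;		/* commands outstanding */
		/* ... */
	};

	static int
	sc_wait(struct sc_softc *sc)
	{
		KASSERT(sc != NULL, ("sc_wait: null softc"));
		KASSERT(sc->magic == SC_MAGIC, ("sc_wait: softc trashed"));
		KASSERT(sc->queued >= 0, ("sc_wait: negative queue depth"));

		/* Sleep until the interrupt handler wakes us, 30s timeout. */
		return (tsleep(sc, PRIBIO, "dptwt", hz * 30));
	}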

> Ken
> --
> Kenneth Merry
> ken@kdm.org
> 

-- 


Sincerely Yours,                 Shimon@Simon-Shapiro.ORG
                                             404.664.6401
Simon Shapiro

Unwritten code has no bugs and executes at twice the speed of mouth



