Date:      Tue, 16 Feb 1999 13:07:30 -0800
From:      "Bryn Wm. Moslow" <bryn@spacemonster.org>
To:        Greg Lehey <grog@lemis.com>
Cc:        freebsd-isp@FreeBSD.ORG
Subject:   Re: DPT 3334UW RAID-5 Slowness / Weird FS problems
Message-ID:  <36C9DE12.78E77F53@spacemonster.org>
References:  <36C88CC6.E1621F6F@spacemonster.org> <19990216105959.P2207@lemis.com>

Greg Lehey wrote:
 
> I don't know the DPT controllers, but 32 kB stripes are far too small.
> For better performance, you should increase them to between 256 kB and
> 512 kB.  Small stripe sizes create many more I/O requests at the drive
> level.

I was actually hoping to get quicker read times out of the spool by
potentially having more heads at the disposal of each seek, given that
the vast majority of files on the filesystem are either directories or
zero-length. Perhaps I came up on the short end of the stick on this
one. Good suggestion. For anyone else reading this: 32K was the
default, and you have to change it on the DPT in the Storage Manager
software.
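For the curious, the back-of-envelope math behind Greg's point, using
a 256K sequential transfer purely for illustration (alignment and
RAID-5 parity ignored):

    # per-drive requests ~= transfer size / stripe size (rounded up)
    expr 256 / 32      # 32K stripes  -> 8 requests for one 256K transfer
    expr 256 / 256     # 256K stripes -> 1 request

Eight times the requests the controller has to dispatch and complete
for the same amount of data.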

> 
> > My first problem was that I initially tried to do 16K per inode to
> > speed things up a little bit (also, I didn't need the millions of
> > inodes that came with a default newfs on 43GB =).)
> 
> I might be missing something, but I don't see any performance
> improvement by changing the number of inodes.

I've seen significant (if not earth-shaking) performance increases
using this technique. It's taken from many a "Unix Gurus Down From the
Mountain to Ram Knowledge into Your Skull" book and a couple of UFS/FFS
optimization guides I've read around the web. In my own testing,
systems that do large numbers of opens and seeks within files (anything
where you're moving a pointer) are sped up by reducing the number of
inodes on a huge filesystem. It's just a little tweak, but it can
really contribute to performance - I've seen it, and I wasn't
hallucinating at the time <g>. Also, try formatting a 43 gig filesystem
sometime, do a df -i, and look at how much space you lose at the
default. I weep like a baby =). It makes me squirm in my sleep to see
80% usage on a fs while inode usage sits at 1%, but I'm high strung and
need a vacation...
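For anyone who wants to try this at home, it's the -i flag to newfs
(bytes per inode - a bigger number means fewer inodes). The device
name below is made up, so substitute your own:

    # -i 16384 = one inode per 16K of data space instead of the default
    # /dev/da0s1e is just an example device, not our actual array
    newfs -i 16384 /dev/da0s1e
    df -i              # then eyeball iused/ifree against a default newfs

Just don't get greedy - run out of inodes on a spool full of tiny
files and you'll be reformatting again.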

> What hung?  The FreeBSD system or the DPT subsystem?

The whole dog and pony show... You know - like Windows 98 =) kinda
freeze. Solid, solid as a rock. <G>

> You should consider that once you have set the stripe size, you're
> stuck with it.  Unless the DPTs have a good reason (like "not
> supported"), take a 256 kB stripe size.

I'm definitely going to try this. I've got a 108 gig array that I'll be
playing with this week, so I'll have more time to actually mess with
the stripe size. Thanks for the suggestion.

> > The user directories for delivery are broken out into 1st letter, 1st
> > two letters, username (i.e.: /home/u/us/username) to speed up dir
> > lookups already.
> 
> I'd guess that these would end up in cache anyway, so you shouldn't
> see much improvement with this technique.

I actually found this one when I had the mail spool on one disk eons
ago, and it does indeed help. With 13,000+ entries in /var/mail,
directory lookups - anything requiring vnode access (heh, what doesn't?
Just do a "man -k dir" and start poking) - could take up to a second or
two, especially when getting hardcore spammed or something - an
eternity when you're firing off mail.local and popper every other
nanosecond. Most of the system utils (ls, anything that takes the dir
contents as args) break on dirs this large (changed in 3.x? - dunno)
and you find yourself writing for and while loops just to do everyday
stuff (like ls); see the sketch below. Yes, I know, that's why I have
the source - too bad it doesn't come with time to rewrite that whole
part of the system <G>.

Also, even though a directory may be in cache, access/modification
times and such still have to be updated, and writes run almost even
with reads on this particular filesystem. Shutting down atime on the fs
would lose us a first-tier diagnostic tool, and I don't want to run the
fs async <shudder>.

One last note - the hardware cache on the DPT isn't particularly useful
for the majority of what goes on with this fs, or at least it isn't,
judging from their theory of caching and the implementation described
in their manual. The disk access pattern is just too random. It does
help, very much so, but doesn't come close to the ideal... (mmm...
momentary solid state disk fantasy... shall we all pause?)
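In case the layout isn't obvious, here's roughly how the delivery
paths get built - a minimal sh sketch, with "username" and the /home
prefix as placeholders rather than our real setup:

    #!/bin/sh
    # split "username" into the /home/u/us/username layout
    user=username
    first=`echo $user | cut -c1`      # u
    two=`echo $user | cut -c1-2`      # us
    mkdir -p /home/$first/$two/$user

    # and the usual workaround when the arg list blows up on a huge dir
    # (a for loop globs inside the shell, so it dodges the exec limit):
    for f in /var/mail/*; do
        echo $f
    done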

> There's a Compaq driver out there.  It seems to have some
> strangenesses which suggest that it'll need a lot of work before it
> can be incorporated into the source tree.

Strangenesses, hehe - I like that one, can I use it? All the Compaq
controllers I've looked at are made by DPT. Is there a new one?

> > or the FreeBSD serial code is "broken".
>
> I'm not sure what relation this has with the DPT controller.
> 
> I'm copying Shimon Shapiro on this reply.  He's the author of the DPT
> driver, and he may have more insight.

One of my coworkers emailed Simon and Simon was the one who told him
that it was an issue with the serial code. Didn't quite get it myself
but who am I to argue with a man who has a domain named after him =).
I'd like to hear more myself.

> 
> Greg

Thanks, Greg, I appreciate that you put some time and thought into this.
Mighty nice of you...

Bryn

