Date:      Tue, 7 Mar 2000 23:17:28 -0600 (CST)
From:      Kevin Day <toasty@dragondata.com>
To:        grog@lemis.com (Greg Lehey)
Cc:        webweaver@rmci.net (Ken), toasty@dragondata.com (Kevin Day), freebsd-isp@FreeBSD.ORG
Subject:   Re: Vinum Stripe Size
Message-ID:  <200003080517.XAA94623@celery.dragondata.com>
In-Reply-To: <20000308153114.A52208@freebie.lemis.com> from "Greg Lehey" at Mar 08, 2000 03:31:14 PM

> >> Actually, I discovered that with 4 drives, you're much better off using an
> >> odd stripe size (not a power of two). This is because of how the cylinder
> >> groups are laid out: otherwise they'll all end up on one drive.
> >>
> >> You may want to ask Greg Lehey (grog@lemis.com) for more info about this, as
> >> I can't remember exactly what he came up with for an optimum stripe size.
> >
> > I figured he'd end up seeing this, and I didn't want to bug him at
> > his private address. ;)  The docs suggest 256K to 512K.  I think I
> > read somewhere to use 512K with larger drives, but I can't recall
> > precisely where.  BTW, this is for a web hosting box.  I could
> > probably get by with RAID-1, but figured I might as well go with
> > RAID-10 since I have the drives to spare.
> 
> Indeed, Kevin is right.  At the FreeBSDCon he showed me some
> interesting results running ufs on striped plexes with different
> stripe sizes.
> 
> 2.  You want to avoid putting all your superblocks on the same disk.
>     Nowadays cylinder groups are almost always 32 MB, so any power of
>     2 stripe size will give rise to this undesirable situation.  Kevin
>     gave me some interesting input on the effect this has on
>     performance, but it's quite a difficult problem.  I'm planning to
>     provide a program to work out optimal stripe sizes, but it's not
>     ready yet.  The principle is simple, though: spread the
>     superblocks equally across all disks.


When I used a stripe size of 256k or 512k, I ended up with about 90% of my
disk accesses going to drive 0 and the remaining 10% spread across the
other three drives. I'm guessing this was mostly because my application
created HUGE directories with tons of very small files.
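
For the curious, here's a quick C sketch of where each cylinder group's
superblock lands under plain striping (my own toy, not Greg's planned
tool). It assumes 32 MB cylinder groups and ignores the superblock's
small fixed offset within its group:

  #include <stdio.h>

  /*
   * Map each cylinder group's superblock to a drive: byte offset b
   * lives on drive (b / stripesize) % numdrives under simple striping.
   */
  int
  main(void)
  {
      const long long cgsize = 32LL * 1024 * 1024;  /* 32 MB groups */
      const long long stripesize = 256 * 1024;      /* try 509 * 512 too */
      const int numdrives = 4;
      int counts[4] = { 0, 0, 0, 0 };
      long long offset;
      int i;

      for (i = 0; i < 64; i++) {
          offset = (long long)i * cgsize;
          counts[(offset / stripesize) % numdrives]++;
      }
      for (i = 0; i < numdrives; i++)
          printf("drive %d: %d superblocks\n", i, counts[i]);
      return (0);
  }

With the 256k stripe it reports all 64 superblocks on drive 0; switch
stripesize to an oddball value like 509 * 512 and they spread across all
four drives, which matches the lopsided access pattern I saw.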

After playing with things for far too long, I ended up with a very
unusual number that seemed to work best... It probably works for the
wrong reasons: a co-worker suggested that I try (a prime number * 512)
that ended up being close to 256k. I think what helped was just that it
was an oddball number; the fact that it was prime has nothing to do
with it. :)

In short, picking a stripe size for which

  ((32M / stripesize) / numdrives)

doesn't come out to an integer was as close as I came to figuring out
how to keep it from beating one drive to death. :)
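
That test is easy enough to mechanize. Here's a sketch that scans the
multiples of 512 near 256k and flags the offenders; the 4-drive count is
my setup, and 509 * 512 = 260608 is just one prime-times-512 candidate
in that neighborhood (I don't remember the exact number I settled on):

  #include <stdio.h>

  /*
   * Flag stripe sizes where ((32M / stripesize) / numdrives) comes out
   * an integer; those put every cylinder group boundary on one drive.
   */
  int
  main(void)
  {
      const long long cg = 32LL * 1024 * 1024;
      const int numdrives = 4;
      long long s;
      int bad;

      for (s = 254 * 1024; s <= 258 * 1024; s += 512) {
          bad = (cg % s == 0) && ((cg / s) % numdrives == 0);
          printf("%lld %s\n", s, bad ? "beats on one drive" : "ok");
      }
      return (0);
  }

In that window only 262144 (256k exactly) gets flagged, which is really
the point: any nearby multiple of 512 that isn't a power of two passes.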


Kevin

