Date:      Wed, 11 Nov 1998 19:41:57 +0100
From:      Bernd Walter <ticso@cicely.de>
To:        Greg Lehey <grog@lemis.com>, Mike Smith <mike@smith.net.au>, hackers@FreeBSD.ORG
Subject:   Re: [Vinum] Stupid benchmark: newfsstone
Message-ID:  <19981111194157.06719@cicely.de>
In-Reply-To: <19981111183546.D20849@freebie.lemis.com>; from Greg Lehey on Wed, Nov 11, 1998 at 06:35:46PM +1030
References:  <199811100638.WAA00637@dingo.cdrom.com> <19981111103028.L18183@freebie.lemis.com> <19981111040654.07145@cicely.de> <19981111134546.D20374@freebie.lemis.com> <19981111085152.55040@cicely.de> <19981111183546.D20849@freebie.lemis.com>

On Wed, Nov 11, 1998 at 06:35:46PM +1030, Greg Lehey wrote:
> On Wednesday, 11 November 1998 at  8:51:52 +0100, Bernd Walter wrote:
> > On Wed, Nov 11, 1998 at 01:45:46PM +1030, Greg Lehey wrote:
> >> On Wednesday, 11 November 1998 at  4:06:54 +0100, Bernd Walter wrote:
> >>> On Wed, Nov 11, 1998 at 10:30:28AM +1030, Greg Lehey wrote:
> >>>> On Monday,  9 November 1998 at 22:38:04 -0800, Mike Smith wrote:
> >>> [...]
> >>> One point is that it doesn't aggregate transactions to the lower drivers.
> >>> When using stripes of one sector it does no more than single-sector
> >>> transactions to the HDDs, so at least with the old SCSI driver there's no
> >>> linear performance increase with it. That's the same with ccd.
> >>
> >> Correct, at least as far as Vinum goes.  The rationale for this is
> >> that, with significant extra code, Vinum could aggregate transfers
> >> *from a single user request* in this manner.  But any request that
> >> gets this far (in other words, runs for more than a complete stripe)
> >> is going to convert one user request into n disk requests.  There's no
> >> good reason to do this, and the significant extra code would just chop
> >> off the tip of the iceberg.  The solution is in the hands of the user:
> >> don't use small stripe sizes.  I recommend a stripe of between 256 and
> >> 512 kB.
> >
> > That's good for a random-access performance increase - but for linear access a smaller
> > stripe size is the only way to get the maximum performance of all
> > disks together.
> 
> No, the kind of stripe size you're thinking about will almost always
> degrade performance.  If you're accessing large quantities of data in
> a linear fashion, you'll be reading 60 kB at a time.  If each of these
> reads requires accessing more than one disk, you'll kill performance.
> Try it: I have.
With aggregation?
Say you read the volume linearly without any other activity on the disks.
If you have a stripe size of 60k and read in 60k chunks, each read
will read 60k from only one disk - assuming all transactions are stripe-aligned.
The only things which will increase performance are the read-ahead abilities
of the fs driver and the disks themselves - at least if I haven't missed any.
If you use 512-byte stripes and read 60k chunks, the current situation is that
each drive gets single-sector transactions, which is often slower than a single disk.
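To put numbers on it: a 60k read over 512-byte stripes on four drives becomes
60k / 512 = 120 single-sector transactions, 30 per drive, where four transfers
of 15k each would move the same data.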
What I expect from aggregation is that such a 60k chunk access on the volume is
split into only one transaction per drive - so you can read from all the
drives at the same time and get a bandwidth increase.
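A minimal sketch of the aggregation I mean (again my own illustration with
invented names, not a patch): walk the stripes of the request, credit each
drive with its share, and issue one transaction per drive instead of one per
sector. With round-robin striping and a request covering whole stripe rows,
each drive's share is contiguous on that drive:

    /*
     * Sketch only: coalesce a 60k volume read over 512-byte stripes
     * into one transfer per drive instead of one per sector.
     * NDISK and SSIZE are invented for the example.
     */
    #include <stdio.h>

    #define NDISK 4
    #define SSIZE 512L

    int
    main(void)
    {
        long off = 0, len = 60 * 1024L;     /* one 60k read on the volume */
        long perdisk[NDISK];
        long o;
        int i;

        for (i = 0; i < NDISK; i++)
            perdisk[i] = 0;
        for (o = off; o < off + len; o += SSIZE)    /* walk the stripes */
            perdisk[(o / SSIZE) % NDISK] += SSIZE;  /* this drive's share */

        /* each share is contiguous on its drive here, so it can go
           out as a single transaction */
        for (i = 0; i < NDISK; i++)
            printf("drive %d: one %ld byte transfer\n", i, perdisk[i]);
        return 0;
    }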

Write access like newfs should behave differently because of write-behind caches.

-- 
  B.Walter




