Date:      Wed, 11 Nov 1998 13:45:46 +1030
From:      Greg Lehey <grog@lemis.com>
To:        Bernd Walter <ticso@cicely.de>, Mike Smith <mike@smith.net.au>, hackers@FreeBSD.ORG
Subject:   Re: [Vinum] Stupid benchmark: newfsstone
Message-ID:  <19981111134546.D20374@freebie.lemis.com>
In-Reply-To: <19981111040654.07145@cicely.de>; from Bernd Walter on Wed, Nov 11, 1998 at 04:06:54AM +0100
References:  <199811100638.WAA00637@dingo.cdrom.com> <19981111103028.L18183@freebie.lemis.com> <19981111040654.07145@cicely.de>

On Wednesday, 11 November 1998 at  4:06:54 +0100, Bernd Walter wrote:
> On Wed, Nov 11, 1998 at 10:30:28AM +1030, Greg Lehey wrote:
>> On Monday,  9 November 1998 at 22:38:04 -0800, Mike Smith wrote:
>>>
>>> Just started playing with Vinum.  Gawd Greg, this thing seriously needs
>>> a "smart" frontend to do the "simple" things.
>>
>> Any suggestions?  After seeing people just banging out RAID
>> configurations with GUIs, I thought that this is probably a Bad
>> Thing.  If you don't understand what you're doing, you shouldn't be
>> doing it.
>>
>> The four-layer concepts used by Veritas and Vinum have always been
>> difficult to understand.  I'm trying to work out how to explain them
>> better, but taking the Microsoft-style "don't worry, little boy, I'll
>> do it all for you" approach is IMO not the right way.
>>
> :)
>
> [...]
>>
>> You shouldn't really be doing performance testing in this version.
>> It's full to the gunwales with debugging code.  But I'd be interested
>> to know how long it took to do a newfs on one of the disks without
>> Vinum.
>>
> I haven't measured the time, but in my opinion it's similar to ccd.
> Can't say about the disks themselves.
> It looks faster than ccd with CAM on a striped volume under parallel
> access, as long as the stripes aren't too small.
>
> One point is that it doesn't aggregate transactions to the lower drivers.
> When using stripes of one sector, it issues nothing larger than
> single-sector transactions to the HDDs, so at least with the old SCSI
> driver there's no linear performance increase. That's the same with ccd.

Correct, at least as far as Vinum goes.  The rationale is this: with
significant extra code, Vinum could aggregate the transfers *from a
single user request* in this manner.  But any request that gets this
far (in other words, spans more than a complete stripe) is going to
turn one user request into n disk requests anyway, so the extra code
would only chop off the tip of the iceberg.  The solution is in the
hands of the user: don't use small stripe sizes.  I recommend a
stripe size of between 256 and 512 kB.
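
To illustrate the arithmetic (a back-of-the-envelope sketch of my
own, not Vinum code; it assumes adjacent stripes sit on different
disks and that nothing coalesces the transfers):

    /*
     * Sketch: count how many disk transfers a single user request
     * becomes on a striped volume, as a function of stripe size.
     */
    #include <stdio.h>

    static long
    disk_requests(long offset, long length, long stripe)
    {
        long count = 0;
        long in_stripe, chunk;

        while (length > 0) {
            in_stripe = stripe - offset % stripe; /* bytes left in this stripe */
            chunk = in_stripe < length ? in_stripe : length;
            count++;                              /* one transfer to one disk */
            offset += chunk;
            length -= chunk;
        }
        return count;
    }

    int
    main(void)
    {
        long request = 64 * 1024;                 /* a 64 kB user request */
        long stripes[] = { 512, 4096, 65536, 262144 };
        int i;

        for (i = 0; i < 4; i++)
            printf("stripe %6ld bytes: %ld disk transfers\n",
                   stripes[i], disk_requests(0, request, stripes[i]));
        return 0;
    }

With one-sector (512 byte) stripes, a 64 kB request becomes 128
single-sector transfers; at 256 kB it stays a single transfer
(assuming it doesn't straddle a stripe boundary).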

>>> There was an interesting symptom observed in striped mode, where the
>>> disks seemed to have a binarily-weighted access pattern.
>>
>> Can you describe that in more detail?  Maybe I should consider
>> relating stripe size to cylinder group size.
>
> I always saw the same, and I'm sure that the cylinder groups are
> mostly placed on one disk each.

I think you mean the superblocks.  It depends on the cylinder group
size.  I haven't thought about this yet, but I may well do so.
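
For what it's worth, here's the mapping I'd expect (my assumption,
not checked against the Vinum source):

    /*
     * With round-robin striping, the byte at volume offset 'offset'
     * lands on disk (offset / stripesize) % ndisks.  If the cylinder
     * group size is an exact multiple of stripesize * ndisks, every
     * cylinder group (and its superblock copy) starts on the same
     * disk, which would give exactly this kind of skewed pattern.
     */
    long
    disk_for_offset(long offset, long stripesize, long ndisks)
    {
        return (offset / stripesize) % ndisks;
    }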

>>> It will get more interesting when I add two more 9GB drives and four
>>> more 4GB units to the volume; especially as I haven't worked out if I
>>> can stripe the 9GB units separately and then concatenate their plex
>>> with the plex containing the 4GB units; my understanding is that all
>>> plexes in a volume contain copies of the same data.
>>
>> Correct.  I need to think about how to do this, and whether it's worth
>> the trouble.  It's straightforward with concatenated plexes, of
>> course.
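
From memory (the drive and device names here are made up for the
example), a volume built on a concatenated plex looks something like
this in the configuration file:

    drive d1 device /dev/da2s1h
    drive d2 device /dev/da3s1h
    volume data
      plex org concat
        sd length 0 drive d1    # length 0: use the rest of the drive
        sd length 0 drive d2

Growing it is then a matter of adding another subdisk to the end of
the plex.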
>
> In my opinion it's worth it.
> Concatenation is the only way to increase a partition, and it's really
> useful to be able to do that on a stripe.

I'll think about it.

> I never checked if it's possible to do a stripe on different sized
> disks as ccd can.

Do you mean 'different sized disks' or 'different sized subdisks'?
Different sized disks are no problem, of course, but 'different sized
subdisks' are.  I don't think that ccd could do this either; it would
leave a hole in the volume.

> And ccd is more integrated into the rest of the system, but otherwise
> things work at least as well as with ccd.

In which way is it better integrated?  It's available in exactly the
same way as ccd (unless, like you, you want RAID-5 :-)

Greg
--
See complete headers for address, home page and phone numbers
finger grog@lemis.com for PGP public key
