Date:      Thu, 23 Mar 2000 15:21:04 -0800
From:      Greg Lehey <grog@lemis.com>
To:        Matthew Dillon <dillon@apollo.backplane.com>
Cc:        Poul-Henning Kamp <phk@critter.freebsd.dk>, current@FreeBSD.ORG
Subject:   Re: patches for test / review
Message-ID:  <20000323152104.B9318@mojave.worldwide.lemis.com>
In-Reply-To: <200003202204.OAA72087@apollo.backplane.com>; from dillon@apollo.backplane.com on Mon, Mar 20, 2000 at 02:04:48PM -0800
References:  <20074.953579833@critter.freebsd.dk> <200003202204.OAA72087@apollo.backplane.com>

On Monday, 20 March 2000 at 14:04:48 -0800, Matthew Dillon wrote:
>
>     If a particular subsystem needs b_data, then that subsystem is obviously
>     willing to take the virtual mapping / unmapping hit.  If you look at
>     Greg's current code this is, in fact, what is occurring.... the critical
>     path through the buffer cache in a heavily loaded system tends to require
>     a KVA mapping *AND* a KVA unmapping on every buffer access (just that the
>     unmappings tend to be for unrelated buffers).  The reason this occurs
>     is because even with the larger amount of KVA we made available to the
>     buffer cache in 4.x, there still isn't enough to leave mappings intact
>     for long periods of time.  A 'systat -vm 1' will show you precisely
>     what I mean (also sysctl -a | fgrep bufspace).
>
>     So we will at least not be any worse off than we are now, and probably
>     better off since many of the buffers in the new system will not have
>     to be mapped.  For example, when vinum's RAID5 breaks up a request
>     and issues a driveio() it passes a buffer which is assigned to b_data
>     which must be translated (through page table lookups) to physical
>     addresses anyway, so the fact that vinum does not populate
>     b_pages[] does *NOT* help it in the least.  It actually makes the job
>     harder.

I think you may be confusing two things, though it doesn't seem to
make much difference.  driveio() is used only for accesses to the
configuration information; normal Vinum I/O goes via launch_requests()
(in vinumrequest.c).  And it's not just RAID-5 that breaks up a
request: any access that spans more than one subdisk is broken up
(even on concatenated plexes in exceptional cases).

Greg
--
Finger grog@lemis.com for PGP public key
See complete headers for address and phone numbers

