Date:      Thu, 15 May 1997 11:21:25 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        davem@jenolan.rutgers.edu (David S. Miller)
Cc:        terry@lambert.org, ejc@bazzle.com, james@westongold.com, freebsd-hackers@FreeBSD.ORG
Subject:   Re: mmap()
Message-ID:  <199705151821.LAA15353@phaeton.artisoft.com>
In-Reply-To: <199705151810.OAA01276@jenolan.caipgeneral> from "David S. Miller" at May 15, 97 02:10:28 pm

>    This is not to say the situation is hopeless; you *could* crank up
>    the sequential I/O performance of mmap(), at a cost of a save and
>    compare in the general page fault case.
> 
> Or you could add intelligent page prefetching/prefaulting, see the
> JACM article on prefetching about 3 or 4 issues ago for an extremely
> clever strategy to pull this off in an online fashion.
> 
> Be careful, though: their scheme is patented, but you could implement
> something similar using something other than LZ compression code
> selection (i.e., use another compression scheme's code selection).
> There is proof even in the computational learning field that this is
> an extremely effective limited-history prefetching strategy.

Yes; the store was for "last page" and the compare was for "this page
equals last page plus one", to add a prefetch trigger to the VM to
enable predictive faulting.
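
Something like the following (a compilable sketch only, not the real
fault path; prefetch_page() and the per-map state are stand-ins for
whatever the VM actually provides):

/*
 * Detect sequential faults and trigger a predictive fault on the
 * next page.  One word of saved state, one compare per fault.
 */
#include <stddef.h>

struct map_state {
	size_t	last_fault;	/* page index of the previous fault */
	int	valid;		/* has last_fault been set yet? */
};

static void
prefetch_page(size_t pgindex)
{
	(void)pgindex;		/* an async page-in would start here */
}

void
fault_hint(struct map_state *ms, size_t pgindex)
{
	/* the compare: is this fault at "last page plus one"? */
	if (ms->valid && pgindex == ms->last_fault + 1)
		prefetch_page(pgindex + 1);	/* predictive fault */

	/* the save: remember this page for the next fault */
	ms->last_fault = pgindex;
	ms->valid = 1;
}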

The intelligent mechanisms generally require a history kept with the
file; the ICON systems (which used a separate processor for the disk
controller subsystem) kept a bitmap of this information to decide
which pages were probably going to be asked for, and faulted them in.
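
I don't remember the ICON implementation exactly, so this is only an
illustrative sketch (the names are made up): keep one bit per page,
set it when the page is referenced, and on a later run prefault the
pages whose bits were set.

/*
 * Per-file reference history as a bitmap.
 */
#include <limits.h>
#include <stddef.h>

#define HIST_PAGES	1024	/* pages tracked per file (arbitrary) */
#define BITS_PER_WORD	(sizeof(unsigned long) * CHAR_BIT)

struct file_hist {
	unsigned long	bits[HIST_PAGES / BITS_PER_WORD];
};

/* record that page i was referenced on this run */
void
hist_mark(struct file_hist *h, size_t i)
{
	h->bits[i / BITS_PER_WORD] |= 1UL << (i % BITS_PER_WORD);
}

/* was page i referenced on a previous run?  if so, fault it in early */
int
hist_wanted(const struct file_hist *h, size_t i)
{
	return (h->bits[i / BITS_PER_WORD] &
	    (1UL << (i % BITS_PER_WORD))) != 0;
}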

I think that predictive faulting in the mmap() case should be as
valuable as predictive read-ahead (faulting, basically) in the read()
case, though I admit that it would be interesting to investigate
more intelligent algorithms.

The University of Utah, in particular, had a very interesting project
that involved keeping around pieces of prelinked PIC objects, which
was effectively "fault-in" object linking for shared libraries.
Unfortunately, it required you to do bizarre things with your address
space (I considered it as a potential candidate for BSD shared
libraries at one time).

It seems to me that the cache-selection criteria from that work would
be a good method for intelligent page selection... it's basically the
same thing, applied to code usage rather than data usage of the page
contents, and the bonus is that it's published instead of patented.


					Regards,
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


