Date:      Thu, 15 May 1997 10:09:21 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        ejc@bazzle.com (Eric J. Chet)
Cc:        james@westongold.com, freebsd-hackers@FreeBSD.ORG
Subject:   Re: mmap()
Message-ID:  <199705151709.KAA15089@phaeton.artisoft.com>
In-Reply-To: <Pine.BSF.3.96.970515091201.182A-100000@kayman.bazzle.com> from "Eric J. Chet" at May 15, 97 09:13:45 am

[ ... mmap() won't trigger predictive read-ahead ... ]

> > Is this 'because of the way it happens to be implemented' or something
> > fundamental?
> > 
> > Is there any way that one could add a hint to say that the access will
> > be sequential, or that it should fault in multiple pages?
> > 
> > Some extra flags as an extension?
> 
> 	madvise(addr, len, MADV_SEQUENTIAL)

This, of course, only depresses the priority of pages preceding the
page being accessed, so the VM can feel free to recycle them faster.
Maybe if your cache were already thrashing, this would be a win.  But
probably not.
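
For reference, issuing that hint from userland looks roughly like the
sketch below (the file name, sizes, and error handling are only
illustrative):

#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    int fd;
    struct stat sb;
    void *base;

    /* "data.bin" is just an example path. */
    fd = open("data.bin", O_RDONLY);
    if (fd == -1)
        err(1, "open");
    if (fstat(fd, &sb) == -1)
        err(1, "fstat");

    base = mmap(NULL, (size_t)sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED)
        err(1, "mmap");

    /*
     * Hint that the mapping will be walked sequentially.  As noted
     * above, this mainly lets the VM recycle already-touched pages
     * sooner; it does not trigger read-ahead.
     */
    if (madvise(base, (size_t)sb.st_size, MADV_SEQUENTIAL) == -1)
        warn("madvise");

    /* ... touch the pages in order here ... */

    munmap(base, (size_t)sb.st_size);
    close(fd);
    return (0);
}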

Predictive read-ahead is a function of the block I/O subsystem, the
read operands of which are implemented with the VM system.

The mapping of file pages into your address space is also implemented
with the VM system.
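
To make that concrete, contrast the two ways a program can walk a file
sequentially: the read() path goes through the block I/O subsystem,
which notices the sequential pattern and reads ahead, while the mapped
path takes a page fault per untouched page.  The sketch below is purely
illustrative (the file name and buffer size are arbitrary):

#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int fd;
    struct stat sb;
    const char *p;
    unsigned long sum = 0;
    static char buf[65536];    /* arbitrary buffer size */
    ssize_t n, i;
    off_t off;

    /* "data.bin" is just an example path. */
    fd = open("data.bin", O_RDONLY);
    if (fd == -1)
        err(1, "open");
    if (fstat(fd, &sb) == -1)
        err(1, "fstat");

    /*
     * Path 1: sequential read().  These requests go through the
     * block I/O subsystem, which notices the sequential pattern
     * and schedules predictive read-ahead.
     */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        for (i = 0; i < n; i++)
            sum += (unsigned char)buf[i];

    /*
     * Path 2: sequential access through a mapping.  Each untouched
     * page is brought in by a page fault as it is first referenced;
     * no predictive read-ahead happens on this path.
     */
    p = mmap(NULL, (size_t)sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        err(1, "mmap");
    for (off = 0; off < sb.st_size; off++)
        sum += (unsigned char)p[off];

    printf("checksum: %lu\n", sum);
    munmap((void *)p, (size_t)sb.st_size);
    close(fd);
    return (0);
}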


On systems where mmap() is implemented on top of a buffer cache (i.e.,
systems without a unified VM/buffer cache, unlike FreeBSD), mmap()
accesses will trigger predictive read-ahead.  But because a unified
cache is so much faster than a non-unified one, in practice you will
not get better performance from that mmap() than you already get from
the FreeBSD mmap(): an mmap() on a buffer cache is proportionally
slower, due to the buffer/VM synchronization that a unified system
doesn't need.  So it's six of one, half a dozen of the other.


This is not to say the situation is hopeless; you *could* crank up the
sequential I/O performance of mmap(), at the cost of a save and a
compare in the general page fault case.  What would have to happen is
that the vnode would have to notice, on one fault, that the previous
fault for the same vnode was on the immediately preceding page, and
then it would predictively "fault ahead", instead of the block I/O
subsystem noting that the reads are sequential and predictively
reading ahead.
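
Roughly, the shape of that "save and compare" would be something like
the sketch below.  Everything in it (the stand-in types, the
v_lastfault field, FAULTAHEAD_PAGES, and vnode_pager_readahead()) is
invented for illustration; the real vnode pager code would look quite
different:

/*
 * Userland-compilable sketch of the proposed fault-ahead idea.  All
 * names here are made up for illustration only.
 */
#include <stdio.h>

typedef unsigned long vm_pindex_t;

struct vnode {
    vm_pindex_t v_lastfault;    /* invented: last page index faulted */
};

#define FAULTAHEAD_PAGES 8      /* arbitrary fault-ahead window */

static void
vnode_pager_readahead(struct vnode *vp, vm_pindex_t start, int npages)
{
    /* Stub: a real version would issue the page-in requests. */
    printf("fault-ahead: pages %lu..%lu\n", start, start + npages - 1);
    (void)vp;
}

static void
vnode_pager_fault(struct vnode *vp, vm_pindex_t pindex)
{
    /*
     * The "compare" cost paid on every fault: did the previous fault
     * on this vnode hit the immediately preceding page?  If so,
     * assume sequential access and page in the next few pages before
     * they are asked for.
     */
    if (vp->v_lastfault + 1 == pindex)
        vnode_pager_readahead(vp, pindex + 1, FAULTAHEAD_PAGES);
    vp->v_lastfault = pindex;   /* the "save" half */

    /* ... normal single-page fault handling would continue here ... */
}

int
main(void)
{
    struct vnode vn = { 0 };
    vm_pindex_t i;

    /* Simulate a sequential walk over pages 0..4 of one vnode. */
    for (i = 0; i < 5; i++)
        vnode_pager_fault(&vn, i);
    return (0);
}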

Maybe you can convince John Dyson that coding this would be fun (it
might even actually *be* fun 8-)), and then check the degradation this
causes in the general case to see if it's unacceptably high for your
special case.


					Regards,
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


