Date:      Mon, 11 Nov 1996 10:14:57 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        cskim@cslsun10.sogang.ac.kr (Kim Chang Seob)
Cc:        freebsd-hackers@freebsd.org, cskim@cslsun10.sogang.ac.kr
Subject:   Re: working set model
Message-ID:  <199611111714.KAA18275@phaeton.artisoft.com>
In-Reply-To: <9611110805.AA23163@cslsun10.sogang.ac.kr> from "Kim Chang Seob" at Nov 11, 96 05:05:00 pm

> I have some questions about FreeBSD memory management.
> I would like to know how memory is provided to each process
> to minimize that process's page fault rate.
> As I understand it, FreeBSD memory management does not use the
> working set model, because it lacks accurate information about
> the reference pattern of a process.

This really depends on whether you believe LRU works for caching.  This,
in turn, depends on whether you believe in locality of reference.

The theory is that if the buffer and vm cache are the same thing, vm
references will change the LRU position, and the locality will be
optimized for future hits.

That is, a working set model will not really let you make the caching
any more efficient than LRU already does.
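
To put that in concrete terms, the whole mechanism is just "touch a
page, move it to the tail of one shared LRU queue".  A minimal sketch,
with made-up names (this is not the actual FreeBSD vm_page code):

    #include <sys/queue.h>

    /*
     * A unified-cache page sitting on a single global LRU queue.
     * (Illustrative names only.)
     */
    struct cpage {
        TAILQ_ENTRY(cpage) c_lru;       /* LRU queue linkage */
        /* ... page identity, mapping, flags ... */
    };

    static TAILQ_HEAD(, cpage) lru_queue = TAILQ_HEAD_INITIALIZER(lru_queue);

    /*
     * Every reference -- a read(), a write(), or an mmap() fault --
     * pulls the page to the tail of the queue; reclamation takes
     * pages from the head, so recently referenced pages survive.
     */
    void
    cpage_reference(struct cpage *p)
    {
        TAILQ_REMOVE(&lru_queue, p, c_lru);
        TAILQ_INSERT_TAIL(&lru_queue, p, c_lru);
    }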


A working set model is only useful in the case of badly behaved
processes.  The canonical worst offender of all time is the SVR4
"ld", which mmap()s .o files into memory and traverses the symbol
space during linking, instead of building a link graph in memory
from the object data.

The result is a disproportionately high amount of locality in the pages
mmap()ed and referenced this way... and other processes' data will be
forced out of the cache as a result.
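
To make the access pattern concrete, here is a rough userland sketch of
what such a linker does to the cache (not the actual SVR4 ld source;
the record stride and pass count are made up): every .o file gets
mapped, and every pass re-touches all of its pages, so the shared LRU
fills with these pages at everyone else's expense.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    /*
     * Walk records scattered through the mapping; every touch is a
     * reference that faults the page in and bumps it in the shared LRU.
     */
    static unsigned
    link_pass(const unsigned char *base, size_t len)
    {
        unsigned sum = 0;
        size_t off;

        for (off = 0; off < len; off += 512)
            sum += base[off];
        return (sum);
    }

    int
    main(int argc, char **argv)
    {
        const unsigned char *base;
        struct stat st;
        int fd, i, pass;

        for (i = 1; i < argc; i++) {    /* one mapping per .o file */
            if ((fd = open(argv[i], O_RDONLY)) < 0 || fstat(fd, &st) < 0)
                exit(1);
            base = mmap(NULL, (size_t)st.st_size, PROT_READ,
                MAP_PRIVATE, fd, 0);
            if (base == MAP_FAILED)
                exit(1);
            /* several passes re-touch every page of every .o */
            for (pass = 0; pass < 3; pass++)
                (void)link_pass(base, (size_t)st.st_size);
        }
        return (0);
    }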



The working set model that makes sense in this case is *not* a
per-process working set -- it's a per-vnode working set.


It is relatively trivial to implement and test this change: all you
have to do is maintain a count of the buffers hung off each vnode,
modify your LRU insertion order on freed buffers for vnodes over
quota, and modify reclamation so that page allocations for vnodes
over quota steal from the local vnode's LRU instead of the system
LRU.
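
Here is a rough sketch of those three pieces (the structure names, the
quota constant, and the functions are all made up for illustration;
this is not the real buffer cache code):

    #include <stddef.h>
    #include <sys/queue.h>

    #define VNODE_BUF_QUOTA 256         /* assumed per-vnode cap, in buffers */

    struct nvnode {
        int v_bufcnt;                   /* (1) buffers hung off this vnode */
    };

    struct nbuf {
        TAILQ_ENTRY(nbuf) b_free;       /* global LRU free-list linkage */
        struct nvnode *b_vp;            /* owning vnode, if any */
    };

    static TAILQ_HEAD(, nbuf) lru = TAILQ_HEAD_INITIALIZER(lru);

    /* (1) account for every buffer hung off a vnode */
    void
    buf_assign(struct nvnode *vp, struct nbuf *bp)
    {
        bp->b_vp = vp;
        vp->v_bufcnt++;
    }

    /*
     * (2) freed buffers of an over-quota vnode go to the head of the
     * LRU, so they are the first candidates for reuse.
     */
    void
    buf_release(struct nbuf *bp)
    {
        if (bp->b_vp != NULL && bp->b_vp->v_bufcnt > VNODE_BUF_QUOTA)
            TAILQ_INSERT_HEAD(&lru, bp, b_free);
        else
            TAILQ_INSERT_TAIL(&lru, bp, b_free);
    }

    /*
     * (3) an over-quota vnode reclaims one of its own free buffers
     * instead of stealing from the head of the system LRU.
     */
    struct nbuf *
    buf_reclaim(struct nvnode *vp)
    {
        struct nbuf *bp;

        if (vp->v_bufcnt > VNODE_BUF_QUOTA) {
            TAILQ_FOREACH(bp, &lru, b_free)
                if (bp->b_vp == vp)
                    goto found;
        }
        bp = TAILQ_FIRST(&lru);         /* normal case: system LRU head */
        if (bp == NULL)
            return (NULL);
    found:
        TAILQ_REMOVE(&lru, bp, b_free);
        if (bp->b_vp != NULL)
            bp->b_vp->v_bufcnt--;       /* old owner loses a buffer */
        return (bp);
    }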

Together, these will prevent the working set of a single vnode from
growing "too large" and causing LRU locality to break down across
context switches.


The final (optional) piece would allow privileged processes to relax
their quotas; there are some cases where it's important that a process
be efficient at the expense of other processes on the system.  I would
suggest "madvise" as the best bet, but it would mean taking the memory
range specified as a hint to identify the vnode that you want to affect.
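
From the process's side, the usage would look something like the sketch
below.  The MADV_NOQUOTA advice value is purely hypothetical -- it does
not exist; it only illustrates the proposed interface, where the kernel
maps the range back to its backing vnode and (after a privilege check)
relaxes that vnode's quota.

    #include <fcntl.h>
    #include <stddef.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    #define MADV_NOQUOTA    100         /* hypothetical advice value */

    /*
     * Map a file and ask the kernel (via the proposed hint) to exempt
     * the backing vnode from the per-vnode buffer quota.
     */
    void *
    map_unthrottled(const char *path, size_t *lenp)
    {
        struct stat st;
        void *base;
        int fd;

        if ((fd = open(path, O_RDONLY)) < 0 || fstat(fd, &st) < 0)
            return (NULL);
        base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);                      /* the mapping keeps the vnode alive */
        if (base == MAP_FAILED)
            return (NULL);
        /* the range identifies the vnode whose quota should be relaxed */
        (void)madvise(base, (size_t)st.st_size, MADV_NOQUOTA);
        *lenp = (size_t)st.st_size;
        return (base);
    }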


					Regards,
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


