Date:      Wed, 2 Oct 1996 00:11:16 -0500 (EST)
From:      "John S. Dyson" <toor@dyson.iquest.net>
To:        bde@zeta.org.au (Bruce Evans)
Cc:        heo@cslsun10.sogang.ac.kr, freebsd-fs@FreeBSD.org
Subject:   Re: nbuf in buffer cache
Message-ID:  <199610020511.AAA00199@dyson.iquest.net>
In-Reply-To: <199610020450.OAA23573@godzilla.zeta.org.au> from "Bruce Evans" at Oct 2, 96 02:50:39 pm

> 
> >> Why is the number of buffers calculated in this fashion?
> >> Do 30 buffers, 1024 pages, and division by 12 have special meaning?
> >> There is no comment in the source code.
> >>
> >Experience shows that this is a good number.  30 buffers is a good minimum
> >on a very small system.  There have been problems in earlier code (and
> >perhaps even -current) when running with fewer than 10 buffers.
> >> 
> >The performance on a small system is poor (IMO) anyway.  Adding more buffers
> >will take more memory from runnable processes.  Generally, common wisdom
> >and practice shows that it is best to minimize paging.  30 buffers represents
> >approx 240K (on a normally configured filesystem.)  If there is more free
> 
> Experience showed that 240K is about right for a 2MB system running
> FreeBSD.1.x, but 30 buffers is far too small.  For file systems with
> a block size of 512 (e.g. msdos floppies), it can cache a whole 15K.
> For normal ufs file systems with a fragment size of 1K, 1K fragments
> are common for directories.
> 
> >memory, the system will store cached data in memory not associated with
> >buffers.  On a 4MB system, this is uncommon though.  Unlike other *BSD's
> >the buffer cache isn't the only place that I/O cached data is stored.  On
> >FreeBSD the buffer cache is best thought of as a mapping cache, and also the
> >upper limit of dirty buffer space.  Free memory is used for caching both
> >file data and unused memory segments (.text,...).
> 
> Now 240K is probably too much for metadata alone, but 30 buffers is still
> too small.  Metadata blocks are usually small, so 30 buffers usually
> limits the amount of metadata cached to much less than 240K.
> 
So, you would trade paging for file buffering?  I don't think so.  Firstly, the
MSDOS filesystem is a degenerate case.  Many programs have a very steep
memory-usage curve: if you are running low on memory, they will cause
thrashing.  DG and I found that it is very important to make sure that GCC
can have as much memory as possible.  If (and it is a very big if) there is
free (spare) memory, the system will provide it in the form of the merged VM
object cache.  Note also that the system prefers to keep metadata in the
buffer cache and to push file data to the VM objects.  It is then the best
of both worlds.

So, to be precise, limiting the number of buffers maximizes the system's
freedom.  The larger the number of buffers, the greater the chance that too
much memory will be wired down and unavailable to applications.  I have
found that the knee for gcc appears to be about 2M (plus or minus), and it
is very sharp.  If you restrict the amount of memory by even 100K-200K,
compile times go through the roof.

Additionally, the concern that the MSDOS filesystem gets a very small cache
isn't valid; its cache is limited only by the total amount of available
memory.

John


