Date:      Wed, 2 Oct 1996 08:33:12 -0500 (EST)
From:      "John S. Dyson" <toor@dyson.iquest.net>
To:        bde@zeta.org.au (Bruce Evans)
Cc:        bde@zeta.org.au, dyson@FreeBSD.org, freebsd-fs@FreeBSD.org, heo@cslsun10.sogang.ac.kr
Subject:   Re: nbuf in buffer cache
Message-ID:  <199610021333.IAA00852@dyson.iquest.net>
In-Reply-To: <199610020618.QAA26221@godzilla.zeta.org.au> from "Bruce Evans" at Oct 2, 96 04:18:46 pm

> 
> No, but allocate enough buffers to hold the memory that you're willing
> to allocate for (non VM-object) buffering.  nbuf = memory_allowed /
> DEV_BSIZE is too many for static allocation, so dynamic allocation
> is required.  sizeof(struct buf) is now 212, so the worst case should
> only have nbuf = memory_allowed / (512 + 212).  (struct buf is bloated.
> In my first implementation of buffering, for an 8-bit system, DEV_BSIZE
> was 256 and sizeof(struct buf) was 13 and I thought that the 5% overhead
> was high.  Sigh.)
> 
If you can figure out a way to shrink our current buffers to 13 instead
of just under 256, please do so.  They are NOT easy to shrink.  I think
what the smaller buffer headers did must have been quite different from
what we have now.  Remember also that the amount of buffering space is
not limited by the number of buffers!!!  The buffers are now mostly
for temporary mappings and pending writes.  The only other required
purpose for buffers is caching directories, and there is a
bias to keep the directories in the buffers.

>
> >So, to be precise, limiting the number of buffers keeps the freedom
> >maximized.  The larger the number of buffers, the greater the chance that
> >there will be too much wired memory for an application.  I have found that
> 
> Limiting the number of buffers instead of limiting the memory allocated
> for the buffers sometimes gives more freedom because less memory is
> allocated, but it is better to limit the amount allocated explicitly.
> 
The mechanism to support that already exists in our current vfs_bio.  In
fact, if you notice, the amount of memory used by vfs_bio is limited to
nbuf * 8K.  If you use 16K buffers, it is still limited to nbuf * 8K,
so with larger buffers the number of buffers (again, not the buffering
space) is halved.  You can re-tune those parameters for the
small-block filesystems.  Of course, such filesystems encounter many
other inefficiencies in normal operation as well.  (In other words, IMO,
msdosfs as it is currently written is not very fast anyway.)

Remember, my argument against excessive numbers of buffers is mostly
for small systems (i.e. 4MB.)  Those systems are just not very effective
at caching.  The case that I am most worried about is 4K/8K
UFS systems (the ones most used.)  I do not think that wiring down
large amounts of memory is a wise idea.  If you are complaining about
an excessive buffer header size, then there is an opportunity to
work on it.  (Actually, shouldn't msdosfs use the cluster size
instead of 512 anyway?  We have no problem handling 32K buffers
if you need them (a minor tunable).)  There is another opportunity
to work on solving the msdosfs problem and getting the best of both
worlds (bigger-buffer support for more wired-down caching, without
taking excessive memory.)  Directories are still small, though,
but we have different-sized buffers on UFS also.

I would also suggest that when/if you decide to change the way
buffer sizes and buffer memory are calculated, please consider the
case of 8K UFS (the default.)  Also, I think that many of the
small systems belong to "non-wealthy students" who would like to compile
programs with gcc.  It is already slow, and making it slower is not good.

John
