Date:      Thu, 23 Sep 1999 10:06:48 -0700
From:      "Scott Hess" <scott@avantgo.com>
To:        "Matthew Dillon" <dillon@apollo.backplane.com>, "Kevin Day" <toasty@dragondata.com>
Cc:        "Kevin Day" <toasty@dragondata.com>, "Daniel C. Sobral" <dcs@newsguy.com>, <hackers@FreeBSD.ORG>
Subject:   Re: Idea: disposable memory
Message-ID:  <1ea001bf05e6$05d47590$1e80000a@avantgo.com>
References:   <199909231433.JAA61714@celery.dragondata.com> <199909231654.JAA28326@apollo.backplane.com>

It sounds like what he wants is some sort of userland swapper.  In this
case, the implementation would be to decompress when pages are swapped in,
and simply drop the page when it's swapped out.

Given the current constraints, and the fact that decompression will touch
the entire dataset _anyhow_, it would make sense for the decompression pass
to prime a data structure with pointers to non-zero data within each page
(probably int-aligned for performance reasons), and mark it disposable as
suggested elsewhere.  Skip any page which is all zeros.  Then when the data
is to be used, mlock() it, check to see if any of the non-zero pointers now
point to zeros, decompress those pages as needed, blit them, munlock(), and
mark them disposable again.

Actually, that might be better than a userland swapper, because in that
case there's nothing to prevent you from blitting half the dataset, and
then hitting a swap.

Later,
scott

----- Original Message -----
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Kevin Day <toasty@dragondata.com>
Cc: Kevin Day <toasty@dragondata.com>; Daniel C. Sobral <dcs@newsguy.com>;
<hackers@FreeBSD.ORG>
Sent: Thursday, September 23, 1999 9:54 AM
Subject: Re: Idea: disposable memory


> :I'm now playing with compressed data streams. The decompression is slow, so
> :I'd like to cache the *decompressed* version of these files. I end up
> :allocating large amounts of ram in one process to cache the decompressed
> :data. This is a disadvantage over the above scenario, since now the system
> :swaps out my decompressed data when more ram is needed elsewhere. Swapping
> :out then swapping back in my decompressed data is about 4x slower than just
> :re-reading my compressed stream and decompressing it again.
> :
> :Why don't I just allocate a predefined amount of memory and use that for a
> :cache all the time? Most of the time we have about 20MB free on our system.
> :Sometimes we end up with about 2MB free though, and what's happening now is
> :that I start paging out data that I could recreate in less time than the
> :page-in/page-out takes.
>
>     Hmm.  Well, you can check whether the memory has been swapped out with
>     mincore(), and then MADV_FREE it to get rid of it (MADV_FREE'ing
>     something that has been swapped out frees the swap and turns it back
>     into zero-fill).  That doesn't get rid of the swapout bandwidth, though.
>
>     I think, ultimately, you need to manage the memory used for your cache
>     manually.  That means using mlock() and munlock() to lock your cache
>     into memory.  For example, choose a cache size that you believe the
>     system can support without going bonkers, like 5MB.  mmap() 5MB of ram
>     and mlock() it into memory.  From that point on until you munlock() it
>     or exit, the memory will not be swapped out.
>
>     If the purpose of the box is to maintain the flow of video, then the
>     cache is a critical resource and should be treated as such.
>
> -Matt
> Matthew Dillon
> <dillon@backplane.com>

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message