Date:      Sat, 10 Feb 1996 10:36:46 +0100 (MET)
From:      Luigi Rizzo <luigi@labinfo.iet.unipi.it>
To:        hackers@freebsd.org
Subject:   Compressed RAM/SWAP
Message-ID:  <199602100936.KAA04409@labinfo.iet.unipi.it>

Weekend brainstorm: compressed memory/swap.
I'd like ideas on the subject.

I am asking because there are a bunch of "RAM doubler" PC/Mac
utilities. In principle, the same idea could be adopted for FreeBSD.
But I am a bit worried about cost/performance: compression takes
time and keeps the CPU busy, while transferring to disk usually
leaves the CPU free (except in the case of ISA-IDE controllers).

A quick test I did (on 1.1.5) was the following (swapinfo reported
~6MB of 32MB of swap in use; disregard the absolute throughput,
that's not the issue here, as dd from the raw disk yields slightly
above 1MB/s):

        dd if=/dev/wd0b bs=4k count=1000 | gzip -1 | wc
        4096000 bytes transferred in 13 secs (315076 bytes/sec)
            2350   15360  747841 

        dd if=/dev/wd0b bs=4k count=4000 | gzip -1 | wc
        16384000 bytes transferred in 56 secs (292571 bytes/sec)
           13254   83574 3838686 

        dd if=/dev/wd0b bs=4k count=8000 | gzip -1 | wc
        32768000 bytes transferred in 101 secs (324435 bytes/sec)
           18816  118495 5675208 

Trying to compress random, single pages yields highly variable
results, but usually does better than 2:1. Many pages even compress
to <100 bytes; they are probably unused or bzeroed.
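
For the record, the per-page test can be done with a user-level hack
along these lines (just a sketch, assuming zlib is available; the
device name and the level-1 setting, roughly "gzip -1", are my choice):

    /*
     * Read 4K pages from the swap device and compress each one
     * individually, printing the compressed sizes.  Purely a
     * measurement tool, nothing to do with the kernel.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <zlib.h>

    #define PAGE 4096

    int
    main(int argc, char **argv)
    {
            unsigned char in[PAGE], out[2 * PAGE];
            long tin = 0, tout = 0;
            int fd, n;

            fd = open(argc > 1 ? argv[1] : "/dev/wd0b", O_RDONLY);
            if (fd < 0) {
                    perror("open");
                    return (1);
            }
            for (n = 0; n < 1000 && read(fd, in, PAGE) == PAGE; n++) {
                    uLongf olen = sizeof(out);

                    if (compress2(out, &olen, in, PAGE, 1) != Z_OK)
                            olen = PAGE;    /* count as incompressible */
                    printf("page %4d: %5lu bytes\n", n, (unsigned long)olen);
                    tin += PAGE;
                    tout += olen;
            }
            printf("total: %ld -> %ld bytes\n", tin, tout);
            close(fd);
            return (0);
    }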

Although the above numbers might just mean that swap blocks are
not allocated contiguously [is this true?], it sounds reasonable
that the swap in many cases holds bzeroed pages with sparse non-zero
elements. If this is true, then even simpler, ad hoc (and faster)
compression algorithms than gzip could work.
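
To make that concrete, a trivial word-oriented zero-run encoder could
look like the sketch below (nothing more than an illustration of the
idea; since literal words are always non-zero, a zero word in the
output can double as the run marker):

    #include <stddef.h>
    #include <sys/types.h>

    #define PAGE_WORDS (4096 / sizeof(u_int32_t))

    /*
     * Encode one page: runs of zero words become a (0, count) pair,
     * non-zero words are copied literally.  Returns the number of
     * words written to dst, or -1 if the encoded form would not fit
     * (the caller then stores the page uncompressed).
     */
    int
    zrun_encode(const u_int32_t *src, u_int32_t *dst, size_t dst_words)
    {
            size_t i = 0, o = 0;

            while (i < PAGE_WORDS) {
                    if (src[i] == 0) {
                            u_int32_t run = 0;

                            while (i < PAGE_WORDS && src[i] == 0) {
                                    run++;
                                    i++;
                            }
                            if (o + 2 > dst_words)
                                    return (-1);
                            dst[o++] = 0;           /* run marker */
                            dst[o++] = run;         /* run length */
                    } else {
                            if (o + 1 > dst_words)
                                    return (-1);
                            dst[o++] = src[i++];    /* literal word */
                    }
            }
            return ((int)o);
    }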

This said, where could it be reasonable to do this? Of course, we
are talking about systems with limited resources, either RAM or
disk space.  It will definitely help systems which swap via NFS.
Also, this approach is probably useful only on systems with a load
average slightly above 1, where a page fault leaves the system
essentially idle.  It will definitely not help high-performance or
busy systems.

One possibility is to dedicate some amount of physical memory as
an intermediate compressed, in-core swap area. Pages get compressed
into this area, and can subsequently be swapped to disk if needed.
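
Roughly, the bookkeeping for such an area might look like this (all
names and fields are invented, just to fix ideas; the real thing
would have to be tied into the swap pager and the page queues):

    #include <sys/types.h>

    struct czslot {
            u_long  cz_page;        /* identifies the page held here */
            char    *cz_data;       /* compressed copy of the page */
            u_short cz_len;         /* compressed length, in bytes */
            u_short cz_flags;       /* e.g. "also written to disk" */
    };

    struct czarea {
            char    *cz_base;       /* dedicated chunk of physical memory */
            size_t  cz_size;        /* e.g. min(2MB, 25% of RAM) */
            size_t  cz_used;        /* bytes currently in use */
            struct czslot *cz_slots;
            int     cz_nslots;
    };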

Note that if this area is small, then it is just a transient buffer
for pages being swapped out, and it is probably pointless, as pages
could just be compressed in place (or with one additional bounce
buffer per CPU; let's plan for the future!).

If the area has a significant size, say min(2MB, 25% RAM) or more,
then chances are that such a compression scheme might be effective.

There are of course problems to be solved: the compressed swap should
use fragments smaller than 4K, maybe 512 bytes or so. Addressing these
blocks might require big changes in the vm system, etc. [I don't know
enough about this.]
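
For instance, each compressed page could be mapped to a run of
512-byte fragments rather than to a whole 4K swap block, something
like the following (again, names are made up; the real change would
be in the swap pager's block maps):

    #include <sys/types.h>

    #define CFRAG_SIZE  512
    #define CFRAG_SHIFT 9

    struct cswblk {
            daddr_t cb_frag;        /* first 512-byte fragment on the device */
            u_short cb_nfrags;      /* fragments occupied by this page */
    };

    /* Fragments needed for a compressed page of 'len' bytes. */
    static int
    cswap_frags(int len)
    {
            return ((len + CFRAG_SIZE - 1) >> CFRAG_SHIFT);
    }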

Comments ?

	Have a nice weekend
	Luigi
====================================================================
Luigi Rizzo                     Dip. di Ingegneria dell'Informazione
email: luigi@iet.unipi.it       Universita' di Pisa
tel: +39-50-568533              via Diotisalvi 2, 56126 PISA (Italy)
fax: +39-50-568522              http://www.iet.unipi.it/~luigi/
====================================================================


