Date:      Fri, 4 Nov 2005 14:50:16 +0200
From:      Giorgos Keramidas <keramida@ceid.upatras.gr>
To:        kamal kc <kamal_ckk@yahoo.com>
Cc:        freebsd <freebsd-hackers@FreeBSD.org>, freebsd <freebsd-net@FreeBSD.org>
Subject:   Re: allocating 14KB memory per packet compression/decompression results in vm_fault
Message-ID:  <20051104125016.GA1235@flame.pc>
In-Reply-To: <20051104065630.9592.qmail@web35704.mail.mud.yahoo.com>
References:  <20051103145729.GA2088@flame.pc> <20051104065630.9592.qmail@web35704.mail.mud.yahoo.com>

On 2005-11-03 22:56, kamal kc <kamal_ckk@yahoo.com> wrote:
>>> For my compression/decompression I use string tables and
>>> temporary buffers, which take about 14KB of memory per
>>> packet.
>>
>> If you're allocating 14 KB of data just to send
>> (approximately) 1.4 KB
>> and then you throw away the 14 KB immediately, it
>> sounds terrible.
>
> Yes, that's true.
>
> Since I am using the adaptive LZW compression scheme, it
> requires the construction of a string table for
> compression/decompression.  So an IP packet of 1500 bytes
> requires tables of size 4KB + 4KB + 2KB = 12KB.

I may be stating the obvious, or something totally wrong, but
couldn't the string table be allocated once and reused, instead
of being rebuilt each time a packet goes out?  My intuition is
that this would perform much better than redoing the string
table work for every packet.
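
Roughly what I have in mind is the following untested sketch;
lzw_state, lzw_attach and the buffer sizes are made-up names
and numbers, not your actual code:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>
    #include <sys/mbuf.h>

    /*
     * Scratch space for one compression pass: the string tables
     * (4KB + 4KB + 2KB) plus a temporary buffer, ~14KB in all.
     */
    struct lzw_state {
            uint8_t table[12 * 1024];
            uint8_t tmpbuf[2 * 1024];
    };

    static struct lzw_state *lzw_scratch;

    /* Allocate the scratch space once, e.g. at attach time. */
    static int
    lzw_attach(void)
    {
            lzw_scratch = malloc(sizeof(*lzw_scratch), M_TEMP,
                M_NOWAIT | M_ZERO);
            return (lzw_scratch == NULL ? ENOMEM : 0);
    }

    static void
    lzw_compress_packet(struct mbuf *m)
    {
            /*
             * Only reset the table contents here; nothing is
             * allocated or freed on the per-packet path.
             */
            bzero(lzw_scratch->table, sizeof(lzw_scratch->table));
            /* ... run the LZW pass over the packet data ... */
    }

If compression can run concurrently (say, for more than one
interface), you would of course need one such buffer per
context, or a lock around the shared one.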

> What would be the best possible way to allocate/deallocate
> 14KB of memory per packet without causing vm_faults?

Bearing in mind that packets may be as small as 34 bytes, there
is no good way, IMHO; for a packet that small you would be
allocating roughly 400 times its size in scratch space, only to
free it again immediately.
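
If you really cannot avoid a per-packet allocation, a fixed-size
uma(9) zone is about as cheap as it gets, because it recycles
the same items instead of going back to the VM system each time.
Another untested sketch with made-up names:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>
    #include <vm/uma.h>

    static uma_zone_t lzw_zone;

    /* Create the zone once; each item is a 14KB scratch area. */
    static void
    lzw_zone_init(void)
    {
            lzw_zone = uma_zcreate("lzwscratch", 14 * 1024,
                NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
    }

    /* Per packet: take an item from the zone, give it back. */
    static void
    lzw_compress_packet(void)
    {
            void *scratch;

            scratch = uma_zalloc(lzw_zone, M_NOWAIT | M_ZERO);
            if (scratch == NULL)
                    return;         /* drop, or send uncompressed */
            /* ... compress using the scratch area ... */
            uma_zfree(lzw_zone, scratch);
    }

The zone caches freed items, so under steady traffic you stop
touching the VM system entirely, but I would still try the
allocate-once approach first.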

- Giorgos