Date:      Thu, 3 Nov 2005 22:56:30 -0800 (PST)
From:      kamal kc <kamal_ckk@yahoo.com>
To:        Giorgos Keramidas <keramida@linux.gr>
Cc:        freebsd <freebsd-hackers@freebsd.org>, freebsd <freebsd-net@freebsd.org>
Subject:   Re: allocating 14KB memory per packet compression/decompression results in vm_fault
Message-ID:  <20051104065630.9592.qmail@web35704.mail.mud.yahoo.com>
In-Reply-To: <20051103145729.GA2088@flame.pc>

> > for my compression/decompression i use string tables and temporary
> > buffers which take about 14KB of memory per packet.
>
> If you're allocating 14 KB of data just to send (approximately) 1.4 KB
> and then you throw away the 14 KB immediately, it sounds terrible.

Yes, that's true.

Since I am using the adaptive LZW compression scheme, I have to build
string tables for compression/decompression. So an IP packet of size
1500 bytes requires tables of size (4KB + 4KB + 2KB = 12KB).

On top of that, I copy the IP packet data into another buffer (about
1.4KB) and then compress it.

So all this adds up to about 14KB. 
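
For concreteness, the per-packet state looks roughly like this (the
struct and field names below are illustrative placeholders, not my
actual code):

/*
 * Illustrative layout of the per-packet scratch state, sized per the
 * figures above; everything sits in one allocation.
 */
struct lzw_scratch {
	uint8_t	string_tab[4 * 1024];	/* LZW string table */
	uint8_t	code_tab[4 * 1024];	/* LZW code table */
	uint8_t	decode_stack[2 * 1024];	/* temporary decode stack */
	uint8_t	pktbuf[1500];		/* copy of the IP packet data */
};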

Right now I can't get by with less than 14KB.

As I said before, the compression/decompression works fine, but soon
the kernel panics with one of the vm_fault: error messages.

What would be the best way to allocate/deallocate 14KB of memory per
packet without causing vm_faults?
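
One thing I have been considering (an untested sketch -- the zone name
and function names are placeholders) is preallocating the scratch areas
with a uma(9) zone, so nothing sleeps in the packet path:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>
#include <vm/uma.h>

static uma_zone_t lzw_zone;

static void
lzw_zone_setup(void *arg __unused)
{
	/* Each zone item is one whole per-packet scratch area. */
	lzw_zone = uma_zcreate("lzwscratch", sizeof(struct lzw_scratch),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
}
SYSINIT(lzw_zone_setup, SI_SUB_KMEM, SI_ORDER_ANY, lzw_zone_setup, NULL);

static int
lzw_compress_pkt(struct mbuf *m)
{
	struct lzw_scratch *sc;

	/*
	 * M_NOWAIT: the packet path can run in interrupt context,
	 * where sleeping on an allocation is not allowed.
	 */
	sc = uma_zalloc(lzw_zone, M_NOWAIT);
	if (sc == NULL)
		return (ENOBUFS);	/* send uncompressed rather than fault */

	/* ... build tables in sc, copy the packet into sc->pktbuf,
	 * compress, then release the scratch area ... */

	uma_zfree(lzw_zone, sc);
	return (0);
}

With M_NOWAIT the allocation can fail under memory pressure, but then
the packet can go out uncompressed instead of the kernel faulting.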

Is there anything I am missing?

Thanks,

kamal