Date:      Sat, 21 Feb 2004 04:17:50 -0600
From:      "Scott A. Moberly" <smoberly@karamazov.org>
To:        freebsd-stable@freebsd.org
Subject:   ipf/ipfw/ipnat mess I've got...
Message-ID:  <200402210417.51386.smoberly@karamazov.org>

Actually it isn't really a mess, just seems like it...

For starters: I had ipfw running as a firewall.  Then I needed to implement 
some sort of QoS.  Great!  ipfw has queues and pipes, just what I need.  
Except that dynamic states and pipes don't work together.  Fine: ipf to 
handle the filtering, ipnat to handle the NAT, and ipfw for the QoS.  Now, 
on to the question...
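For reference, wiring all three up at boot takes something like this in 
rc.conf (the rule-file paths below are just the defaults, and IPFILTER, 
IPFIREWALL and DUMMYNET are compiled into the kernel):

ipfilter_enable="YES"             # ipf
ipfilter_rules="/etc/ipf.rules"
ipnat_enable="YES"                # ipnat
ipnat_rules="/etc/ipnat.rules"
firewall_enable="YES"             # ipfw
firewall_type="/etc/ipfw.rules"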

Everything works fine: I can control the various protocols' queue levels, 
everything that should be blocked gets blocked, etc.  BUT, too much gets 
blocked.  The general path of a particular transaction:

client -> internal interface -> ipnat (nothing to nat yet) -> ipf (keep state 
on S/SA flags) -> ipfw (queues only) -> kernel -> ipf (keep state on S/SA 
flags) -> ipnat (to external interface) -> ipfw (nothing) -> external 
interface -> internet
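The rules behind that outbound path boil down to something like this (fxp0 = 
external, fxp1 = internal, and 192.168.0.0/24 are placeholders here, not my 
exact config):

# /etc/ipf.rules -- state created on the initial SYN
pass out quick on fxp0 proto tcp from any to any flags S/SA keep state
pass in quick on fxp1 proto tcp from 192.168.0.0/24 to any flags S/SA keep state

# /etc/ipnat.rules -- internal net mapped to the external address
map fxp0 192.168.0.0/24 -> 0/32 portmap tcp/udp 20000:40000
map fxp0 192.168.0.0/24 -> 0/32

# ipfw -- queues only, no filtering
ipfw pipe 1 config bw 256Kbit/s
ipfw queue 1 config pipe 1 weight 90
ipfw add 100 queue 1 tcp from any to any out via fxp0
ipfw add 65000 allow ip from any to any

(With net.inet.ip.fw.one_pass=1, a packet coming out of a queue is accepted 
immediately instead of being run through the rest of the ipfw rules.)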

That outbound path seems to work just fine, and on return:

server -> external interface -> ipnat (to internal address) -> ipf (using 
state established above) -> ipfw (nothing) -> kernel -> ipf (using state 
established above) -> ipnat (nothing) -> ipfw (queues here again)
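The return direction only needs a matching queue on the inside interface, 
since ipf just matches the state it kept on the way out; roughly (same 
placeholder interface names as above):

ipfw pipe 2 config bw 2Mbit/s
ipfw queue 2 config pipe 2 weight 90
ipfw add 200 queue 2 tcp from any to any out via fxp1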

The return path is where there is a 'minor' problem.  I'm seeing packets 
dropped on the return route in the first instance of ipf.  Traffic continues 
(resend), and it only happens occasionally (once or twice a minute).  So, I 
guess I'm asking...  is there something I'm missing here?  I never saw a 
lost state when it was just ipfw.  Yes, I have verified that the states 
still exist in ipnat and ipf during the drop.  Am I just pushing this old 
box a bit too far?  100MHz, 32M of memory for about 6 clients?  Or are 
there some sysctl options I should tweak?  The only thing I found that 
looks relevant is the size of ipf's own state table (a #define; more on 
that below the top output).  The box doesn't seem to be stressed, though:

vmstat
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr ad0 ad2   in   sy  cs us sy id
 1 0 0   32908  11292   58   0   0   0  31   1   0   0 1318  176  37 81 19  0

top
last pid: 26339;  load averages:  1.42,  1.16,  1.11    up 0+23:52:47 04:16:13
46 processes:  3 running, 39 sleeping, 4 zombie
CPU states:     % user,     % nice,     % system,     % interrupt,     % idle
Mem: 28M Active, 19M Inact, 15M Wired, 4952K Cache, 17M Buf, 416K Free
Swap: 222M Total, 60K Used, 222M Free
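On the state-table question: the only visibility I know of is ipfstat's 
state statistics, and the limits themselves are compile-time defines 
(this is from my reading of the source, so correct me if I'm wrong):

ipfstat -s          # state statistics: active entries, hits/misses,
                    # and whether the table has bumped its maximum

# The limits live in sys/netinet/ip_state.h as #defines --
# IPSTATE_SIZE (hash buckets) and IPSTATE_MAX (max entries) --
# so raising them means recompiling.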

My guess is I'm missing something in the ipf addition, but any help would be 
appreciated.

--
Scott A. Moberly
smoberly (at) karamazov.org


