Date: Mon, 14 Jan 2002 14:46:33 -0800 (PST)
From: chk no <chuck@chk.phattydomain.com>
To: chkno@dork.com, rizzo@icir.org
Cc: freebsd-ipfw@FreeBSD.ORG
Subject: Re: ip_dummynet.c:"*** OUCH! pipe should have been idle!"
Message-ID: <200201142246.g0EMkXC05128@chk.phattydomain.com>
In-Reply-To: <20020114141539.A70340@iguana.icir.org>
> > $ grep OUCH /var/log/messages
> > Jan 12 23:45:46 chk /kernel: *** OUCH! pipe should have been idle!
> >
> > Running the error message through 5 popular search engines hit only
> > on mirrors of the source. I gather that it is not common for this
> > to occur.
> >
> > This machine is running 4.4-STABLE, cvsuped Wed Dec 19 15:02:20 PST
> > 2001. There have been some other issues with ipfw+natd packet loops
> > (more packets pass through the system than enter it) if the pipe
> > queue is set too low, packet loops if the network activity gets too
> > hectic (mutella). I was about to cvsup & reinstall, but I'll hold
> > off if anyone wants to look at it. It's still up & crunching
> > packets...
>
> what kind of dummynet configuration are you using ?
> (ipfw show ; ipfw pipe show)

I'm using a bit of a crazy ruleset atm. I'm on a limited bandwidth
connection, & I've been toying with the idea of writing a wrapper for
ftpd with the same idea as natd's -punch_fw for dynamic bandwidth
management of ftp users while not slowing down the connection for other
types of traffic. I'm sure there's a much cleaner way to do all this,
but atm I'm just playing around. Also, I can't seem to get large port
ranges (49152-65535, specifically) to work as expected, which was
further motivation.
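For context, the wrapper idea is roughly the following (a hypothetical sketch only, not code I'm running; the rule slot, queue number, and client address are made up for illustration — the real ruleset is below):

```shell
#!/bin/sh
# Sketch of the ftpd-wrapper idea: like natd's -punch_fw, punch a
# temporary per-client dummynet queue rule when an ftp data connection
# starts, and remove it when the session ends.  All numbers here are
# illustrative, not from the real ruleset.

CLIENT=$1        # client IP of the ftp session (passed by the wrapper)
RULE=10010       # temporary slot, before the catch-all queue rule at 19999

# steer this client's outbound traffic into its own dummynet queue
ipfw add $RULE queue 120 ip from any to $CLIENT out xmit ed1

# ... exec ftpd / wait for the session to finish ...

# tear the temporary rule back out
ipfw delete $RULE
```

The point of putting the temporary rule at a number below 19999 is that ipfw matches rules in ascending order, so the per-client queue rule wins over the catch-all.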
Anyway, please excuse my ruleset:

00049 83862 53010432 count ip from any to any
00050 83517 52949119 divert 8668 ip from any to any via ed1
00051 83862 53010432 count ip from any to any
00100 144 18292 allow ip from any to any via lo0
00200 0 0 deny ip from any to 127.0.0.0/8
00300 0 0 deny ip from 127.0.0.0/8 to any
01000 0 0 deny ip from 212.169.172.130 to any
01001 0 0 deny ip from 80.134.79.183 to any
01002 0 0 deny ip from 80.134.93.244 to any
01003 0 0 deny ip from 24.101.41.14 to any
01004 0 0 deny ip from 24.42.1.213 to any
01005 0 0 deny ip from 62.31.149.70 to any
01006 0 0 deny ip from 24.222.68.178 to any
01007 0 0 deny ip from 213.118.43.124 to any
01008 0 0 deny ip from 63.202.174.248 to any
01009 0 0 deny ip from 63.202.174.25 to any
01010 0 0 deny ip from 213.224.83.27 to any
01011 0 0 deny ip from 213.73.130.120 to any
01012 0 0 deny ip from 213.96.243.151 to any
01013 0 0 deny ip from 217.85.160.29 to any
01014 0 0 deny ip from 212.120.156.4 to any
01015 0 0 deny ip from 213.118.62.216 to any
01016 4 192 deny ip from 80.63.18.111 to any
10000 0 0 queue 100 ip from any to 141.56.135.29 out xmit ed1
10002 0 0 queue 100 ip from any to 80.133.105.193 out xmit ed1
10004 0 0 queue 110 ip from any to 24.156.167.145 out xmit ed1
10006 0 0 queue 100 ip from any to 217.82.234.229 out xmit ed1
10008 34461 50035960 queue 100 ip from any to 80.133.107.251 out xmit ed1
19999 22122 1323692 queue 190 ip from any to any out xmit ed1
60000 26930 1589275 count ip from any to any in recv ed1
60001 0 0 count ip from any to any out xmit ed1
60010 117 23184 count ip from any to any in recv rl0
60011 84 19837 count ip from any to any out xmit rl0
65000 27131 1632296 allow ip from any to any
65535 0 0 deny ip from any to any

00010: 125.000 Kbit/s 0 ms 100 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 tcp 12.225.230.182/20 65.8.88.230/2822 84447092 94084152055 0 0 80774191
q00100: weight 1 pipe 10 50 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 tcp 12.225.230.182/53299 141.56.135.29/50474 1615024 2237963167 8 11616 700551
q00110: weight 2 pipe 10 50 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 tcp 12.225.230.182/60186 24.156.167.145/1696 92806 31691943 0 0 0
q00190: weight 99 pipe 10 50 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 tcp 12.225.230.182/2198 209.73.225.9/80 930515 697347225 0 0 437372

The topology is internet--ed1--|box|--rl0--internal_net. I'm running
natd for clients on the internal net.

Rules 49 & 51 are in place because of my other issue. By setting the
pipe's queue value to less than 20 slots I can make rule 51 count
orders of magnitude more packets than rule 49. I was wondering if this
was standard behavior. The problem seems to disappear as long as I keep
the pipe queue large. See -questions for that thread.

> Also, if the problem is easily reproducible, please
> let me know if you are interested in testing some patches

The OUCH event is not reproducible. It happened in the middle of the
night, & has only happened once so far. The packet loop is easily
reproducible. The large port range issue seems more related to ipfw
than dummynet, & I haven't tried to isolate it much. I could just be
doing something stupid. That said, if you've got patches, I'm more than
willing to try them out.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-ipfw" in the body of the message
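If it helps, the core of the setup that triggers the loop can be cut down to something like this (a sketch under my assumptions, not a tested repro script; rule, pipe, and queue numbers match the ruleset above, and the small `queue` value is the trigger):

```shell
#!/bin/sh
# Minimal sketch of the divert+dummynet setup behind the packet-loop
# observation: count rules on both sides of the natd divert, plus a
# 125 Kbit/s pipe whose queue is deliberately small.

ipfw add 49 count ip from any to any
ipfw add 50 divert 8668 ip from any to any via ed1   # natd on port 8668
ipfw add 51 count ip from any to any

# pipe 10 at my link speed; "queue 10" is < 20 slots, which is where
# rule 51's counter starts growing orders of magnitude faster than 49's
ipfw pipe 10 config bw 125Kbit/s queue 10
ipfw queue 190 config weight 99 pipe 10
ipfw add 19999 queue 190 ip from any to any out xmit ed1
```

With `queue 100` on the pipe (as in the output above) the two counters stay in step.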
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?200201142246.g0EMkXC05128>