Date:      Fri, 3 Oct 2003 6:30:15 +0200
From:      "Franky" <franky@is.net.pl>
To:        "Harti Brandt" <brandt@fokus.fraunhofer.de>, "Franky" <franky@is.net.pl>
Cc:        freebsd-atm <freebsd-atm@freebsd.org>
Subject:   Re: patm, idt, ipfw - next adventures
Message-ID:  <20031003043015.6129423160@arrakis.solutions.net.pl>

> Like every unpaid open-source project, FreeBSD is developed by volunteers.
> If someone finds a problem and helps the developer get the problem
> solved, things will get better with time. I have asked you for the panic
> message and stack trace. These are really simple to get. How do you expect
> me to fix your problem if you're not going to help me fix it?
OK, but my problem right now is not the panic (I am not using the patm
driver and the PROATM-155 card - I will come back to that later); my
problem is with the idt driver and ipfw on FreeBSD 5.1.
Last night I ran new tests on FreeBSD 5.1 with a ForeRunner LE155 and the
idt driver:
- part of the kernel config:
options         DDB                     #Enable the kernel debugger
options         INVARIANTS              #Enable calls of extra sanity checking
options         INVARIANT_SUPPORT       #Extra sanity checks of internal structures, required by INVARIANTS
options         WITNESS                 #Enable checks to detect deadlocks and cycles
options         WITNESS_SKIPSPIN        #Don't run witness on spinlocks for speed
device          isa
device          pci
#device         patm
#device         utopia
device          atm
device          harp
options         ATM_CORE
options         ATM_IP
options         ATM_SIGPVC
options         LIBMBPOOL
options         NATM
- idt is now loaded as a module:
# kldstat
Id Refs Address    Size     Name
1    2 0xc0400000 31d0c4   kernel
2    1 0xcb4ac000 9000     idt.ko
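For reference, this is roughly how the module gets loaded - a sketch,
where the idt_load knob name is my assumption from the standard
<module>_load loader.conf convention:

# kldload idt
# echo 'idt_load="YES"' >> /boot/loader.conf     (to load it at every boot)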
- the kernel is a debug version (size 15827K)
- ipfw has these lines:
ipfw pipe 1 config bw 5000Kbit/s queue 4Kbytes
ipfw queue 10 config weight 65 pipe 1 buckets 4096 mask dst-ip 0x0000ffff
ipfw queue 11 config weight 35 pipe 1 buckets 4096 mask dst-ip 0x0000ffff

ipfw add 510 queue 10 all from 192.168.192.0/26 to any out via x0
ipfw add 511 queue 11 all from not 192.168.192.0/26 to any out via x0
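To check whether dummynet itself is holding or dropping packets, the
per-pipe and per-queue counters can be inspected - a minimal sketch, the
exact output format differs between versions:

# ipfw pipe show
# ipfw queue show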

About 5 minutes after boot all PVCs stop transmitting. I ran tcpdump on
each network interface - all of them show in/out packets - but "atm show
stats vcc" shows that only the IN counters are changing; the OUT counters
are frozen:
                   Input    Input  Input  Output   Output Output
Interface  VPI   VCI     PDUs    Bytes   Errs    PDUs    Bytes   Errs
idt0         0   140    54126  6010496      1  105624 134713668      0
idt0         0   141     1137    81719      0   30811  8441340      0
idt0         0   142    25280 11764794      0   17640  3217976      0
idt0         0   143       30     2520      0       8      800      0
idt0         0   144    12658 13571079      0   10451  5502752      0
idt0         0   145        0        0      0       3      168      0
idt0         0   146     2648   198906      0    6257  8558032      0
idt0         0   147    39718 16771801      0   23808 16353952      0
idt0         0   148       19     4344      0      54     4704      0
idt0         0   149       10      896      0     108    10916      0
idt0         0   150    16403 10339586      0   13689  4619084      0
idt0         0   151     9363  4467235      0    6046  1040544      0
idt0         0   152        0        0      0       0        0      0
idt0         0   153     5336   361945      0    7903 10678104      0
idt0         0   154     5518  1276588      0    9960 12457632      0
idt0         0   155        0        0      0       0        0      0

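A trivial loop is enough to confirm that the OUT counters really stay
frozen while traffic keeps arriving - a sketch:

# while true; do atm show stats vcc | grep idt0; sleep 60; done
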

At this moment netstat -m shows this:
mbuf usage:
GEN cache:      0/0 (in use/in pool)
CPU #0 cache:   52155/52160 (in use/in pool)
Total:          52155/52160 (in use/in pool)
Mbuf cache high watermark: 512
Maximum possible: 131072
Allocated mbuf types:
52155 mbufs allocated to data
39% of mbuf map consumed
mbuf cluster usage:
GEN cache:      0/0 (in use/in pool)
CPU #0 cache:   51537/51544 (in use/in pool)
Total:          51537/51544 (in use/in pool)
Cluster cache high watermark: 128
Maximum possible: 65536
14% of cluster map consumed
116128 KBytes of wired memory reserved (27% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
After the next 5 minutes:
mbuf usage:
GEN cache:      0/0 (in use/in pool)
CPU #0 cache:   66153/66176 (in use/in pool)
Total:          66153/66176 (in use/in pool)
Mbuf cache high watermark: 512
Maximum possible: 131072
Allocated mbuf types:
66153 mbufs allocated to data
50% of mbuf map consumed
mbuf cluster usage:
GEN cache:      0/0 (in use/in pool)
CPU #0 cache:   65535/65536 (in use/in pool)
Total:          65535/65536 (in use/in pool)
Cluster cache high watermark: 128
Maximum possible: 65536
4% of cluster map consumed
147616 KBytes of wired memory reserved (14% in use)
716877 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
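
Comparing the two snapshots: in-use mbufs grew from 52155 to 66153, about
14000 in 5 minutes (roughly 47 per second), and in-use clusters grew from
51537 to 65535 - which already equals the 65536 maximum, so the 716877
denied requests are no surprise. Watching the growth is a one-liner:

# netstat -m | grep 'in use'     (repeat every minute or so)
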
After this, all interfaces halt (even the ethernet ones, fxp0 and fxp1),
and for the first time this message shows up in /var/log/messages:
Oct  3 07:24:29 ordos kernel: Out of mbuf address space!
Oct  3 07:24:30 ordos kernel: Consider increasing NMBCLUSTERS
Oct  3 07:24:30 ordos kernel: All mbufs or mbuf clusters exhausted, please see tuning(7).
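For completeness: NMBCLUSTERS does not need a kernel rebuild, it can be
set as a loader tunable - a sketch, though with mbufs apparently leaking
this would only postpone the exhaustion:

in /boot/loader.conf (takes effect after reboot):
kern.ipc.nmbclusters="131072"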

This is the end.
Maybe the bug is in ipfw, but I use queue/pipe very often on an Intel
Gbit interface with vlans and there it works.
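
One way to isolate it - a sketch of what I would try next: remove only
the queue rules and watch whether the mbuf count still climbs:

# ipfw delete 510 511
# netstat -m | grep 'in use'     (repeat over a few minutes)

If the growth stops, the leak is somewhere in the dummynet/idt
combination; if it keeps climbing, the idt driver alone is enough to
trigger it.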

________________________________________________
http://www.is.net.pl