Date:      Sat, 12 Mar 2011 10:03:12 +0100
From:      Vlad Galu <dudu@dudu.ro>
To:        freebsd-net@freebsd.org
Subject:   bge(4) on RELENG_8 mbuf cluster starvation
Message-ID:  <AANLkTimSs48ftRv8oh1wTwMEpgN1Ny3B1ahzfS=AbML_@mail.gmail.com>

Hi folks,

On a fairly busy machine running a recent RELENG_8 (r219010), I keep seeing
a large number of denied mbuf cluster requests in the netstat -m output:
-- cut here --
1096/1454/2550 mbufs in use (current/cache/total)
1035/731/1766/262144 mbuf clusters in use (current/cache/total/max)
1035/202 mbuf+clusters out of packet secondary zone in use (current/cache)
0/117/117/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
2344K/2293K/4637K bytes allocated to network (current/cache/total)
0/70128196/37726935 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
^^^^^^^^^^^^^^^^^^^^^
-- and here --
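For monitoring, the denied counters can be pulled out of that output with a
one-liner along these lines (a sketch; the sample line from above is pasted
inline here, but on the live system you would pipe netstat -m in instead of
the echo):

```shell
# Sample "denied" line from the netstat -m output above; on a live
# system, replace the echo with:  netstat -m
line='0/70128196/37726935 requests for mbufs denied (mbufs/clusters/mbuf+clusters)'
echo "$line" | awk '/requests for mbufs denied/ {
    split($1, n, "/")
    printf "mbufs=%s clusters=%s mbuf+clusters=%s\n", n[1], n[2], n[3]
}'
# prints: mbufs=0 clusters=70128196 mbuf+clusters=37726935
```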

The interfaces are detected as BCM5750:
-- cut here --
bge0: <Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x004101> mem 0xc0200000-0xc020ffff irq 16 at device 0.0 on pci4
bge0: CHIP ID 0x00004101; ASIC REV 0x04; CHIP REV 0x41; PCI-E
miibus0: <MII bus> on bge0
brgphy0: <BCM5750 10/100/1000baseTX PHY> PHY 1 on miibus0
brgphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, auto, auto-flow
bge0: Ethernet address: 00:11:25:22:0d:ec
bge1: <Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x004101> mem 0xc0300000-0xc030ffff irq 17 at device 0.0 on pci5
bge1: CHIP ID 0x00004101; ASIC REV 0x04; CHIP REV 0x41; PCI-E
miibus1: <MII bus> on bge1
brgphy1: <BCM5750 10/100/1000baseTX PHY> PHY 1 on miibus1
brgphy1:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, auto, auto-flow
bge1: Ethernet address: 00:11:25:22:0d:ed
-- and here --

but pciconf says otherwise:
-- cut here --
bge0@pci0:4:0:0:        class=0x020000 card=0x02c61014 chip=0x165914e4 rev=0x11 hdr=0x00
    vendor     = 'Broadcom Corporation'
    device     = 'NetXtreme Gigabit Ethernet PCI Express (BCM5721)'
    class      = network
    subclass   = ethernet
bge1@pci0:5:0:0:        class=0x020000 card=0x02c61014 chip=0x165914e4 rev=0x11 hdr=0x00
    vendor     = 'Broadcom Corporation'
    device     = 'NetXtreme Gigabit Ethernet PCI Express (BCM5721)'
    class      = network
    subclass   = ethernet
-- and here --
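For reference, the two tools are reading different things, so the apparent
mismatch may be harmless: the pciconf "chip" field packs the PCI device ID in
the high 16 bits and the vendor ID in the low 16 bits, and pciconf's database
maps device 0x1659 to BCM5721, while the "BCM5750" string in the boot messages
comes from brgphy(4) identifying the integrated PHY, not from bge(4) itself.
A quick decode of the field with plain sh arithmetic:

```shell
# Decode the pciconf "chip" field:
# low 16 bits = PCI vendor ID, high 16 bits = PCI device ID.
chip=0x165914e4
vendor=$(( chip & 0xffff ))   # 0x14e4 = Broadcom
device=$(( chip >> 16 ))      # 0x1659 = BCM5721 per pciconf's database
printf 'vendor=0x%04x device=0x%04x\n' "$vendor" "$device"
# prints: vendor=0x14e4 device=0x1659
```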

kern.ipc.nmbclusters is set to 131072. Other settings:
-- cut here --
net.inet.tcp.rfc1323: 1
net.inet.tcp.mssdflt: 512
net.inet.tcp.keepidle: 7200000
net.inet.tcp.keepintvl: 75000
net.inet.tcp.sendspace: 262144
net.inet.tcp.recvspace: 262144
net.inet.tcp.keepinit: 75000
net.inet.tcp.delacktime: 100
net.inet.tcp.hostcache.purge: 0
net.inet.tcp.hostcache.prune: 300
net.inet.tcp.hostcache.expire: 3600
net.inet.tcp.hostcache.count: 389
net.inet.tcp.hostcache.bucketlimit: 30
net.inet.tcp.hostcache.hashsize: 512
net.inet.tcp.hostcache.cachelimit: 15360
net.inet.tcp.read_locking: 1
net.inet.tcp.recvbuf_max: 262144
net.inet.tcp.recvbuf_inc: 16384
net.inet.tcp.recvbuf_auto: 1
net.inet.tcp.insecure_rst: 0
net.inet.tcp.ecn.maxretries: 1
net.inet.tcp.ecn.enable: 0
net.inet.tcp.abc_l_var: 2
net.inet.tcp.rfc3465: 1
net.inet.tcp.rfc3390: 0
net.inet.tcp.rfc3042: 0
net.inet.tcp.drop_synfin: 1
net.inet.tcp.delayed_ack: 1
net.inet.tcp.blackhole: 2
net.inet.tcp.log_in_vain: 0
net.inet.tcp.sendbuf_max: 262144
net.inet.tcp.sendbuf_inc: 8192
net.inet.tcp.sendbuf_auto: 1
net.inet.tcp.tso: 1
net.inet.tcp.newreno: 1
net.inet.tcp.local_slowstart_flightsize: 4
net.inet.tcp.slowstart_flightsize: 4
net.inet.tcp.path_mtu_discovery: 1
net.inet.tcp.reass.overflows: 958
net.inet.tcp.reass.cursegments: 0
net.inet.tcp.reass.maxsegments: 16464
net.inet.tcp.sack.globalholes: 0
net.inet.tcp.sack.globalmaxholes: 65536
net.inet.tcp.sack.maxholes: 128
net.inet.tcp.sack.enable: 1
net.inet.tcp.inflight.stab: 20
net.inet.tcp.inflight.max: 1073725440
net.inet.tcp.inflight.min: 6144
net.inet.tcp.inflight.rttthresh: 10
net.inet.tcp.inflight.debug: 0
net.inet.tcp.inflight.enable: 0
net.inet.tcp.isn_reseed_interval: 0
net.inet.tcp.icmp_may_rst: 1
net.inet.tcp.pcbcount: 924
net.inet.tcp.do_tcpdrain: 1
net.inet.tcp.tcbhashsize: 512
net.inet.tcp.log_debug: 0
net.inet.tcp.minmss: 216
net.inet.tcp.syncache.rst_on_sock_fail: 0
net.inet.tcp.syncache.rexmtlimit: 3
net.inet.tcp.syncache.hashsize: 512
net.inet.tcp.syncache.count: 0
net.inet.tcp.syncache.cachelimit: 15360
net.inet.tcp.syncache.bucketlimit: 30
net.inet.tcp.syncookies_only: 0
net.inet.tcp.syncookies: 1
net.inet.tcp.timer_race: 0
net.inet.tcp.finwait2_timeout: 60000
net.inet.tcp.fast_finwait2_recycle: 1
net.inet.tcp.always_keepalive: 0
net.inet.tcp.rexmit_slop: 200
net.inet.tcp.rexmit_min: 30
net.inet.tcp.msl: 30000
net.inet.tcp.nolocaltimewait: 0
net.inet.tcp.maxtcptw: 13107
net.inet.udp.checksum: 1
net.inet.udp.maxdgram: 9216
net.inet.udp.recvspace: 41600
net.inet.udp.blackhole: 1
net.inet.udp.log_in_vain: 0
net.isr.numthreads: 1
net.isr.maxprot: 16
net.isr.defaultqlimit: 256
net.isr.maxqlimit: 10240
net.isr.bindthreads: 0
net.isr.maxthreads: 1
net.isr.direct: 1
net.isr.direct_force: 0
-- and here --
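For completeness, nmbclusters here is raised from the default at boot. The
relevant tuning looks like this (a sketch with the value from this machine;
on RELENG_8 the limit can also be adjusted at runtime with sysctl):

```
# /boot/loader.conf -- value from this machine
kern.ipc.nmbclusters="131072"
```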

The machine normally runs PF, so I disabled it for a while, but that did not
remove the symptom. The problem did not occur before: this machine initially
ran RELENG_6, was then upgraded to RELENG_7, and finally to RELENG_8. Any
insight would be greatly appreciated.

Thanks,
Vlad

-- 
Good, fast & cheap. Pick any two.
