Date:      Sat, 26 Dec 2009 14:37:32 -0800
From:      Julian Elischer <julian@elischer.org>
To:        tvtube blog <tvtubeblog@gmail.com>
Cc:        freebsd-performance@freebsd.org
Subject:   Re: Strange Freebsd Router Problem
Message-ID:  <4B36902C.6010302@elischer.org>
In-Reply-To: <8c7234990912260717r2da3f231ne0b07f573fa889b5@mail.gmail.com>
References:  <8c7234990912260717r2da3f231ne0b07f573fa889b5@mail.gmail.com>

tvtube blog wrote:
> Dear All,
> 
> These days I am having a problem with my FreeBSD 4.11 router, which has
> three Gigabit (bge) Broadcom Ethernet cards. It used to pass more than
> 200 Mbit/s of Internet traffic, but now it will not pass more than
> 90 Mbit/s, even though all the cards have Gigabit connectivity.
> 
> I am using ipfw + dummynet for bandwidth shaping.
> 
> What I see in top(1) is a very high interrupt load: *81.3% interrupt.*
> 
> 17 processes:  1 running, 16 sleeping
> *CPU states:  0.0% user,  0.0% nice,  0.0% system, 81.3% interrupt, 18.8%
> idle*
> Mem: 6016K Active, 6944K Inact, 48M Wired, 32K Cache, 5888K Buf, 1939M Free
> Swap: 240M Total, 240M Free
> 
>   PID USERNAME PRI NICE  SIZE    RES STATE    TIME   WCPU    CPU COMMAND
>   932 root      28   0  1916K  1212K RUN      0:00  1.61%  0.29% top
>   847 root       2   0  5740K  4432K select   0:02  0.00%  0.00% snmpd
>   877 root       2   0  5300K  2292K select   0:01  0.00%  0.00% sshd
>   880 root      18   0  1328K   980K pause    0:00  0.00%  0.00% csh
>   824 root       2   0  2600K  1948K select   0:00  0.00%  0.00% sshd
>   813 root       2   0   992K   728K select   0:00  0.00%  0.00% syslogd
>   870 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   822 root      10   0  1036K   800K nanslp   0:00  0.00%  0.00% cron
>   872 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   871 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   869 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   873 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   866 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   867 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   868 root       3   0   960K   608K ttyin    0:00  0.00%  0.00% getty
>   820 root       2   0  1052K   644K select   0:00  0.00%  0.00% inetd
>    29 root      18   0   212K    96K pause    0:00  0.00%  0.00% adjkerntz
> 
> ######### netstat -m shows everything is fine.
> 
> alpha4# netstat -m
> 2736/3872/262000 mbufs in use (current/peak/max):
>         2736 mbufs allocated to data
> 2734/3862/65500 mbuf clusters in use (current/peak/max)
> 8692 Kbytes allocated to network (4% of mb_map in use)
> 0 requests for memory denied
> 0 requests for memory delayed
> 0 calls to protocol drain routines
> 
> ### vmstat -z shows everything is fine
> 
> ITEM            SIZE     LIMIT    USED    FREE  REQUESTS
> 
> PIPE:            160,        0,     10,     92,      130
> SWAPMETA:        160,   233016,      0,      0,        0
> unpcb:           160,        0,      1,     49,       21
> ripcb:           192,    65500,      0,     42,      685
> divcb:           192,    65500,      0,      0,        0
> syncache:        160,    15359,      0,     25,        1
> tcpcb:           576,    65500,      4,     10,        4
> udpcb:           192,    65500,      3,     39,       30
> socket:          224,    65500,      8,     28,      773
> KNOTE:            64,        0,      0,    128,        3
> DIRHASH:        1024,        0,     30,      6,       30
> NFSNODE:         352,        0,      0,      0,        0
> NFSMOUNT:        544,        0,      0,      0,        0
> VNODE:           192,        0,   1128,     38,     1128
> NAMEI:          1024,        0,      0,     16,    14184
> VMSPACE:         192,        0,     18,     46,      926
> PROC:            416,        0,     29,     20,      942
> DP fakepg:        64,        0,      0,      0,        0
> PV ENTRY:         28,  2276416,  11662, 510146,   200115
> MAP ENTRY:        48,        0,    381,    172,    29674
> KMAP ENTRY:       48,    57551,     87,    126,     3393
> MAP:             108,        0,      7,      3,        7
> VM OBJECT:        92,        0,    551,    101,    13873
> 
> 
> Please advise me what to do, or how to decrease this *81.3% interrupt* load.
> 
> Thanks in advance.


It sounds as if the nature of the traffic has changed;
maybe the average packet size has shrunk.

I presume that the software has not changed for a long time, because
4.11, though a wonderful release, is ANCIENT. :-)

Have you tried profiling the traffic?
You could use ipfw to count packets by size (see the iplen parameter),
or you could use trafshow or wireshark.
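As a sketch of that suggestion, a few ipfw count rules bucketed by IP
packet length would show whether small packets dominate. This assumes the
ranged iplen syntax of ipfw2 (available as a build option on 4.x); the
rule numbers and 100/500/1000-byte buckets are arbitrary illustrative
choices, not from the original message:

```
# Count packets by total IP length; pick rule numbers that fit your ruleset.
ipfw add 100 count ip from any to any iplen 0-100
ipfw add 110 count ip from any to any iplen 101-500
ipfw add 120 count ip from any to any iplen 501-1000
ipfw add 130 count ip from any to any iplen 1001-1500

# Later, inspect the packet/byte counters:
ipfw show 100 110 120 130
```

If the small-packet buckets dominate, a high packets-per-second rate (and
hence interrupt rate) at modest bandwidth would explain the top(1) numbers.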




> 
> TheOne KHan
> 
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"
