Date:      Fri, 01 Feb 2008 13:49:30 +0200
From:      Stefan Lambrev <stefan.lambrev@moneybookers.com>
To:        gnn@freebsd.org
Cc:        freebsd-performance@freebsd.org
Subject:   Re: network performance
Message-ID:  <47A3074A.3040409@moneybookers.com>
In-Reply-To: <m21w7x5ilg.wl%gnn@neville-neil.com>
References:  <4794E6CC.1050107@moneybookers.com>	<47A0B023.5020401@moneybookers.com> <m21w7x5ilg.wl%gnn@neville-neil.com>

Greetings,

gnn@freebsd.org wrote:
> At Wed, 30 Jan 2008 19:13:07 +0200,
> Stefan Lambrev wrote:
>   
>> Greetings,
>>
>> After playing with many settings and testing various configurations, I'm
>> now able to receive more than 800,000 packets/s on the bridge
>> without errors, which is amazing!
>> Unfortunately the server behind the bridge can't handle more than
>> 250,000 packets/s.
>> Please advise how I can increase those limits.
>> Is it possible?
>>
>> The servers have 82573E Gigabit Ethernet Controllers (quad port).
>> So far I have tried lagg and ng_fec, but with them I see more problems
>> than benefits :)
>> I also tried polling with kern.polling.user_frac from 5 to 95 and
>> different HZ values, but nothing helped.
>>     
>
> Increase the size of your socket buffers.
>
> Increase the amount of mbufs in the system.
>
> Best,
> George
>   
Here is what I put in my sysctl.conf:

kern.random.sys.harvest.ethernet=0
kern.ipc.nmbclusters=262144
kern.ipc.maxsockbuf=2097152
kern.ipc.maxsockets=98624
kern.ipc.somaxconn=1024

and in /boot/loader.conf:
vm.kmem_size="1024M"
kern.hz="500"

This is from netstat -m:
516/774/1290 mbufs in use (current/cache/total)
513/411/924/262144 mbuf clusters in use (current/cache/total/max)
513/383 mbuf+clusters out of packet secondary zone in use (current/cache)
0/2/2/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
1155K/1023K/2178K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

But netstat -w1 -I em0 still shows:

            input          (em0)           output
   packets  errs      bytes    packets  errs      bytes colls
    273877 113313   16432620     254270     0   14746500     0
    273397 109905   16403820     253946     0   14728810     0
    273945 113337   16436700     254285     0   14750560     0
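
Those ~110,000 input errors per second look like receive ring overflows to me,
so I'm also considering larger descriptor rings in /boot/loader.conf (assuming
the em(4) driver honours these tunables; 4096 seems to be the maximum the
hardware supports):

hw.em.rxd="4096"
hw.em.txd="4096"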

What bothers me is the output of top -S:

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
   22 root        1 -68    -     0K    16K CPU1   1  12:11 100.00% em0 taskq
   11 root        1 171 ki31     0K    16K RUN    0  21:56 99.17% idle: cpu0
   10 root        1 171 ki31     0K    16K RUN    1   9:16  0.00% idle: cpu1
   14 root        1 -44    -     0K    16K WAIT   0   0:07  0.00% swi1: net

and vmstat:

 procs      memory      page                   disk   faults      cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr ad4   in   sy   cs us sy id
 1 0 0   67088 1939700     0   0   0   0     0   0   0 2759  119 1325  0 50 50
 0 0 0   67088 1939700     0   0   0   0     0   0   0 2760  127 1178  0 50 50
 0 0 0   67088 1939700     0   0   0   0     0   0   0 2761  120 1269  0 50 50
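
Since the em0 taskq thread is saturating one core while the other sits idle,
I guess I should also double-check where the interrupts land and what the
driver itself reports, e.g.:

vmstat -i          # per-device interrupt counts and rates
sysctl dev.em.0    # the em(4) sysctl subtree (tunables and counters)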

What am I missing?



