Date:      Sat, 25 Feb 2012 23:29:43 +0200
From:      Коньков Евгений <kes-kes@yandex.ru>
To:        Коньков Евгений <kes-kes@yandex.ru>
Cc:        Volodymyr Kostyrko <c.kworr@gmail.com>, freebsd-questions@freebsd.org
Subject:   Re[3]: VERY slow performance on igb+FreeBSD8.2+mpd5.6
Message-ID:  <1114911301.20120225232943@yandex.ru>
In-Reply-To: <1088424644.20120225220036@yandex.ru>
References:  <1454516861.20120223091945@yandex.ru> <4F4745E4.8080002@gmail.com> <1088424644.20120225220036@yandex.ru>

Hello, Коньков.

You wrote on 25 February 2012 at 22:00:36:

КЕ> Hello, Volodymyr.

КЕ> You wrote on 24 February 2012 at 10:10:12:

VK>> Коньков Евгений wrote:
>>>
>>> #uname FreeBSD 8.3-PRERELEASE #2 r231881: Thu Feb 23 00:53:28 UTC 2012
>>> and Version 5.6 (root@ 10:03 20-Feb-2012)
>>> http://www.speedtest.net/result/1790445113.png
>>> reconnect to mpd 10-20 times and you get the following:
>>> http://www.speedtest.net/result/1790454801.png

VK>> The server used differs between your images. Could you please track down
VK>> the assigned IPs?

КЕ> I have uploaded a video to YouTube.

КЕ> When the low performance occurs, top -SHP shows the following:
КЕ>   PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
КЕ>    11 root       171 ki31     0K    64K CPU2    2 222:21 100.00% idle{idle: cpu
КЕ>    11 root       171 ki31     0K    64K CPU1    1 222:11 100.00% idle{idle: cpu
КЕ>    11 root       171 ki31     0K    64K CPU3    3 222:07 100.00% idle{idle: cpu
КЕ>    11 root       171 ki31     0K    64K RUN     0 221:39 98.73% idle{idle: cpu0
КЕ>    12 root       -32    -     0K   624K WAIT    1   1:30  0.00% intr{swi4: cloc
КЕ>     0 root       -68    0     0K   384K -       3   1:03  0.00% kernel{dummynet
КЕ>     0 root       -16    0     0K   384K sched   0   0:44  0.00% kernel{swapper}
КЕ>    12 root       -44    -     0K   624K WAIT    3   0:31  0.00% intr{swi1: neti
КЕ>    12 root       -44    -     0K   624K WAIT    3   0:11  0.00% intr{swi1: neti
КЕ>    12 root       -68    -     0K   624K WAIT    2   0:07  0.00% intr{irq263: ig
КЕ>    12 root       -44    -     0K   624K WAIT    1   0:06  0.00% intr{swi1: neti
КЕ>    13 root       -16    -     0K    64K sleep   2   0:06  0.00% ng_queue{ng_que
КЕ>    13 root       -16    -     0K    64K sleep   0   0:06  0.00% ng_queue{ng_que
КЕ>    13 root       -16    -     0K    64K sleep   2   0:06  0.00% ng_queue{ng_que
КЕ>    13 root       -16    -     0K    64K sleep   1   0:06  0.00% ng_queue{ng_que
КЕ>    12 root       -68    -     0K   624K WAIT    3   0:05  0.00% intr{irq276: re

КЕ> it seems that no CPU ticks are given to the network subsystem.
КЕ> You can see what is going on in the video (netstat, vmstat, etc. are also shown there).

КЕ> see video #3; it is the best one, followed by #2 and #1:
КЕ> http://youtu.be/f90nMtNdKB8

When setting the following values:
net.isr.bindthreads: 1
net.isr.direct: 1
net.isr.direct_force: 1

there seems to be another problem: dummynet takes about 1/4 of the total
system CPU even when there is almost no traffic at all (<10 Kbit/s)
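For reference, a sketch of how these netisr values could be applied on a FreeBSD 8.x box (an assumption based on the uname above; net.isr.bindthreads is a boot-time tunable, while the direct-dispatch knobs can be changed at runtime):

```shell
# Runtime sysctls (FreeBSD 8.x netisr direct dispatch, as quoted above)
sysctl net.isr.direct=1
sysctl net.isr.direct_force=1

# net.isr.bindthreads is read-only after boot; persist it in /boot/loader.conf
echo 'net.isr.bindthreads=1' >> /boot/loader.conf
```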


 bwm-ng v0.6 (probing every 0.500s), press 'h' for help
  input: getifaddrs type: rate
  |         iface                   Rx                   Tx                Total
  ==============================================================================
             igb0:           2.96 Kb/s            1.93 Kb/s            4.89 Kb/s
             igb1:           2.09 Kb/s            1.93 Kb/s            4.02 Kb/s
             igb2:           0.00  b/s            0.00  b/s            0.00  b/s
             igb3:           2.80 Kb/s            0.00  b/s            2.80 Kb/s
              re0:         956.18  b/s            9.28 Kb/s           10.21 Kb/s
  ------------------------------------------------------------------------------
            total:           8.78 Kb/s           13.13 Kb/s           21.91 Kb/s


last pid: 37916;  load averages:  0.01,  0.04,  0.06               up 0+00:11:43  23:20:40
148 processes: 5 running, 104 sleeping, 39 waiting
CPU 0:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 1:  0.0% user,  0.0% nice, 97.3% system,  0.0% interrupt,  2.7% idle
CPU 2:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 92M Active, 14M Inact, 260M Wired, 308K Cache, 23M Buf, 3474M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11 root       171 ki31     0K    64K CPU3    3  11:37 100.00% idle{idle: cpu3}
   11 root       171 ki31     0K    64K CPU2    2  11:31 100.00% idle{idle: cpu2}
   11 root       171 ki31     0K    64K RUN     0  11:29 100.00% idle{idle: cpu0}
    0 root       -68    0     0K   384K -       1   0:05 82.23% kernel{dummynet}
   11 root       171 ki31     0K    64K CPU1    1  11:34 20.12% idle{idle: cpu1}
   12 root       -32    -     0K   624K WAIT    0   0:05  0.10% intr{swi4: clock}
    0 root       -16    0     0K   384K sched   0   0:44  0.00% kernel{swapper}
   12 root       -68    -     0K   624K WAIT    2   0:02  0.00% intr{irq263: igb1:que}
   12 root       -68    -     0K   624K WAIT    1   0:01  0.00% intr{irq262: igb1:que}
   13 root       -16    -     0K    64K sleep   1   0:01  0.00% ng_queue{ng_queue3}
   13 root       -16    -     0K    64K sleep   1   0:01  0.00% ng_queue{ng_queue1}
   13 root       -16    -     0K    64K sleep   0   0:01  0.00% ng_queue{ng_queue2}
   13 root       -16    -     0K    64K sleep   1   0:01  0.00% ng_queue{ng_queue0}
   12 root       -68    -     0K   624K WAIT    3   0:00  0.00% intr{irq276: re0}
   12 root       -68    -     0K   624K WAIT    0   0:00  0.00% intr{irq261: igb1:que}
   12 root       -68    -     0K   624K WAIT    3   0:00  0.00% intr{irq264: igb1:que}
   14 root       -16    -     0K    16K -       1   0:00  0.00% yarrow
 5292 root        44  -10 43876K 13760K select  1   0:00  0.00% mpd5{mpd5}
   12 root       -68    -     0K   624K WAIT    1   0:00  0.00% intr{irq257: igb0:que}
 4070 root        44    0  5248K  3212K select  2   0:00  0.00% devd
   12 root       -68    -     0K   624K WAIT    2   0:00  0.00% intr{irq258: igb0:que}
 2301 root        44    0 14548K  6968K select  2   0:00  0.00% bgpd
 5226 bind        44    0 59228K 30092K ucond   0   0:00  0.00% named{named}


# ipfw show | grep queue
26275   1193    227000 queue 54 ip from any not 80,110 to any in recv vlan492
# ipfw show | grep pipe
# ipfw pipe show
00051: 170.000 Mbit/s    0 ms burst 0
q131123  50 sl. 0 flows (1 buckets) sched 65587 weight 0 lmax 0 pri 0
         GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65587 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
00054:  60.000 Mbit/s    0 ms burst 0
q131126  50 sl. 0 flows (1 buckets) sched 65590 weight 0 lmax 0 pri 0
         GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65590 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
00053:  60.000 Mbit/s    0 ms burst 0
q131125  50 sl. 0 flows (1 buckets) sched 65589 weight 0 lmax 0 pri 0
         GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65589 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
00052: 170.000 Mbit/s    0 ms burst 0
q131124  50 sl. 0 flows (1 buckets) sched 65588 weight 0 lmax 0 pri 0
         GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65588 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
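The original dummynet configuration commands are not shown in the thread, but as a hypothetical reconstruction, output like the above (pipe 54: 60 Mbit/s, per-destination-IP mask, 50-slot queue, GRED with w_q=0.002, min_th=10, max_th=30, max_p=0.1) would come from something along these lines:

```shell
# Hypothetical sketch only: recreate pipe 54 as listed in "ipfw pipe show"
ipfw pipe 54 config bw 60Mbit/s queue 50 mask dst-ip 0xffffffff \
    gred 0.002/10/30/0.1

# Queue 54 (the one matched by rule 26275 above) feeding that pipe
ipfw queue 54 config pipe 54 queue 50 gred 0.002/10/30/0.1
```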

# netstat -m
65778/1557/67335 mbufs in use (current/cache/total)
65776/790/66566/262144 mbuf clusters in use (current/cache/total/max)
65776/784 mbuf+clusters out of packet secondary zone in use (current/cache)
0/44/44/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
147996K/2145K/150141K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
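As a side note, the netstat -m figures above show roughly 66k of the 262144 mbuf clusters in use with zero denied requests, so mbuf exhaustion does not look like the problem here. A quick way to keep an eye on it:

```shell
# Watch cluster usage and denied requests (denials would indicate exhaustion)
netstat -m | grep -E 'mbuf clusters|denied'

# The cluster cap is the kern.ipc.nmbclusters tunable
sysctl kern.ipc.nmbclusters
```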

-- 
Best regards,
 Коньков                          mailto:kes-kes@yandex.ru



