From owner-freebsd-questions@FreeBSD.ORG Sat Feb 25 21:29:49 2012
Date: Sat, 25 Feb 2012 23:29:43 +0200
From: Коньков Евгений <kes-kes@yandex.ru>
Organization: ЧП Коньков, FreeLine
To: Коньков Евгений
Cc: Volodymyr Kostyrko, freebsd-questions@freebsd.org
Subject: Re[3]: VERY slow performance on igb+FreeBSD8.2+mpd5.6
Message-ID: <1114911301.20120225232943@yandex.ru>
In-Reply-To: <1088424644.20120225220036@yandex.ru>
References: <1454516861.20120223091945@yandex.ru> <4F4745E4.8080002@gmail.com> <1088424644.20120225220036@yandex.ru>
X-Mailer: The Bat! (v4.0.24) Professional
List-Id: User questions

Hello, Коньков.

You wrote on 25 February 2012 at 22:00:36:

КЕ> Hello, Volodymyr.
КЕ> You wrote on 24 February 2012 at 10:10:12:

VK>> Коньков Евгений wrote:
>>> # uname
>>> FreeBSD 8.3-PRERELEASE #2 r231881: Thu Feb 23 00:53:28 UTC 2012
>>> and mpd Version 5.6 (root@ 10:03 20-Feb-2012)
>>> http://www.speedtest.net/result/1790445113.png
>>> try to reconnect to mpd 10-20 times and you get this:
>>> http://www.speedtest.net/result/1790454801.png

VK>> The server used differs between your images. Would you please track
VK>> down the assigned IPs?

КЕ> I have uploaded videos to YouTube.
КЕ> When low performance occurs, top -SHP shows this:

КЕ>  PID USERNAME PRI NICE   SIZE    RES STATE  C   TIME    WCPU COMMAND
КЕ>   11 root     171 ki31     0K    64K CPU2   2 222:21 100.00% idle{idle: cpu
КЕ>   11 root     171 ki31     0K    64K CPU1   1 222:11 100.00% idle{idle: cpu
КЕ>   11 root     171 ki31     0K    64K CPU3   3 222:07 100.00% idle{idle: cpu
КЕ>   11 root     171 ki31     0K    64K RUN    0 221:39  98.73% idle{idle: cpu0
КЕ>   12 root     -32    -     0K   624K WAIT   1   1:30   0.00% intr{swi4: cloc
КЕ>    0 root     -68    0     0K   384K -      3   1:03   0.00% kernel{dummynet
КЕ>    0 root     -16    0     0K   384K sched  0   0:44   0.00% kernel{swapper}
КЕ>   12 root     -44    -     0K   624K WAIT   3   0:31   0.00% intr{swi1: neti
КЕ>   12 root     -44    -     0K   624K WAIT   3   0:11   0.00% intr{swi1: neti
КЕ>   12 root     -68    -     0K   624K WAIT   2   0:07   0.00% intr{irq263: ig
КЕ>   12 root     -44    -     0K   624K WAIT   1   0:06   0.00% intr{swi1: neti
КЕ>   13 root     -16    -     0K    64K sleep  2   0:06   0.00% ng_queue{ng_que
КЕ>   13 root     -16    -     0K    64K sleep  0   0:06   0.00% ng_queue{ng_que
КЕ>   13 root     -16    -     0K    64K sleep  2   0:06   0.00% ng_queue{ng_que
КЕ>   13 root     -16    -     0K    64K sleep  1   0:06   0.00% ng_queue{ng_que
КЕ>   12 root     -68    -     0K   624K WAIT   3   0:05   0.00% intr{irq276: re

КЕ> It seems as if no ticks are given to the new subsystem.
КЕ> You can see what is going on in the videos (netstat, vmstat etc. are also shown).
КЕ> Watch video #3 first (it is the best), then #2, then #1:
КЕ> http://youtu.be/f90nMtNdKB8

When setting these values:

net.isr.bindthreads: 1
net.isr.direct: 1
net.isr.direct_force: 1

there seems to be another problem: dummynet takes 1/4 of the system CPU even
when there is almost no traffic at all (<10 Kbit/s):

bwm-ng v0.6 (probing every 0.500s), press 'h' for help
input: getifaddrs type: rate

  iface          Rx           Tx        Total
==============================================================================
   igb0:     2.96 Kb/s     1.93 Kb/s     4.89 Kb/s
   igb1:     2.09 Kb/s     1.93 Kb/s     4.02 Kb/s
   igb2:     0.00  b/s     0.00  b/s     0.00  b/s
   igb3:     2.80 Kb/s     0.00  b/s     2.80 Kb/s
    re0:   956.18  b/s     9.28 Kb/s    10.21 Kb/s
------------------------------------------------------------------------------
  total:     8.78 Kb/s    13.13 Kb/s    21.91 Kb/s

last pid: 37916;
load averages:  0.01,  0.04,  0.06   up 0+00:11:43  23:20:40
148 processes: 5 running, 104 sleeping, 39 waiting
CPU 0:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 1:  0.0% user,  0.0% nice, 97.3% system,  0.0% interrupt,  2.7% idle
CPU 2:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 92M Active, 14M Inact, 260M Wired, 308K Cache, 23M Buf, 3474M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME PRI NICE   SIZE    RES STATE  C   TIME    WCPU COMMAND
   11 root     171 ki31     0K    64K CPU3   3  11:37 100.00% idle{idle: cpu3}
   11 root     171 ki31     0K    64K CPU2   2  11:31 100.00% idle{idle: cpu2}
   11 root     171 ki31     0K    64K RUN    0  11:29 100.00% idle{idle: cpu0}
    0 root     -68    0     0K   384K -      1   0:05  82.23% kernel{dummynet}
   11 root     171 ki31     0K    64K CPU1   1  11:34  20.12% idle{idle: cpu1}
   12 root     -32    -     0K   624K WAIT   0   0:05   0.10% intr{swi4: clock}
    0 root     -16    0     0K   384K sched  0   0:44   0.00% kernel{swapper}
   12 root     -68    -     0K   624K WAIT   2   0:02   0.00% intr{irq263: igb1:que}
   12 root     -68    -     0K   624K WAIT   1   0:01   0.00% intr{irq262: igb1:que}
   13 root     -16    -     0K    64K sleep  1   0:01   0.00% ng_queue{ng_queue3}
   13 root     -16    -     0K    64K sleep  1   0:01   0.00% ng_queue{ng_queue1}
   13 root     -16    -     0K    64K sleep  0   0:01   0.00% ng_queue{ng_queue2}
   13 root     -16    -     0K    64K sleep  1   0:01   0.00% ng_queue{ng_queue0}
   12 root     -68    -     0K   624K WAIT   3   0:00   0.00% intr{irq276: re0}
   12 root     -68    -     0K   624K WAIT   0   0:00   0.00% intr{irq261: igb1:que}
   12 root     -68    -     0K   624K WAIT   3   0:00   0.00% intr{irq264: igb1:que}
   14 root     -16    -     0K    16K -      1   0:00   0.00% yarrow
 5292 root      44  -10 43876K 13760K select 1   0:00   0.00% mpd5{mpd5}
   12 root     -68    -     0K   624K WAIT   1   0:00   0.00% intr{irq257: igb0:que}
 4070 root      44    0  5248K  3212K select 2   0:00   0.00% devd
   12 root     -68    -     0K   624K WAIT   2   0:00   0.00% intr{irq258: igb0:que}
 2301 root      44    0 14548K  6968K select 2   0:00   0.00% bgpd
 5226 bind      44    0 59228K 30092K ucond  0   0:00   0.00% named{named}

# ipfw show | grep queue
26275 1193 227000 queue 54 ip from any not 80,110 to any in recv vlan492
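For reference (not part of the original message), a minimal sketch of how the three net.isr values quoted above are applied on FreeBSD 8.x: net.isr.bindthreads is a boot-time loader tunable, while net.isr.direct and net.isr.direct_force can be changed at runtime.

```shell
# /boot/loader.conf -- read at boot; bindthreads cannot be changed at runtime
net.isr.bindthreads=1

# runtime sysctls (put them in /etc/sysctl.conf to have them applied at boot)
sysctl net.isr.direct=1
sysctl net.isr.direct_force=1
```

Note that net.isr.direct_force only has an effect together with net.isr.direct; this fragment assumes a stock GENERIC-style 8.x kernel.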
# ipfw show | grep pipe
# ipfw pipe show
00051: 170.000 Mbit/s    0 ms burst 0
q131123  50 sl. 0 flows (1 buckets) sched 65587 weight 0 lmax 0 pri 0
    GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65587 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
00054:  60.000 Mbit/s    0 ms burst 0
q131126  50 sl. 0 flows (1 buckets) sched 65590 weight 0 lmax 0 pri 0
    GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65590 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
00053:  60.000 Mbit/s    0 ms burst 0
q131125  50 sl. 0 flows (1 buckets) sched 65589 weight 0 lmax 0 pri 0
    GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65589 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
00052: 170.000 Mbit/s    0 ms burst 0
q131124  50 sl. 0 flows (1 buckets) sched 65588 weight 0 lmax 0 pri 0
    GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
 sched 65588 type FIFO flags 0x1 64 buckets 0 active
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp

# netstat -m
65778/1557/67335 mbufs in use (current/cache/total)
65776/790/66566/262144 mbuf clusters in use (current/cache/total/max)
65776/784 mbuf+clusters out of packet secondary zone in use (current/cache)
0/44/44/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
147996K/2145K/150141K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

-- 
Best regards,
Коньков                          mailto:kes-kes@yandex.ru
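An editorial aside on the netstat -m output above: the "requests ... denied" counters are the quickest indicator of mbuf exhaustion, and they are all zero here. A small portable sketch for spotting non-zero denial counters in output of this 8.x format; the helper name check_mbuf_denials is made up for this example.

```shell
# Read `netstat -m`-style output on stdin and print any line whose
# leading counter group (e.g. "0/0/0") in a "requests ... denied"
# line sums to more than zero, i.e. allocations were actually refused.
check_mbuf_denials() {
    awk '/requests for .* denied/ {
        split($1, n, "/")                 # counters look like "0/0/0"
        if (n[1] + n[2] + n[3] > 0)
            print "denied: " $0
    }'
}

# usage on a live FreeBSD system:  netstat -m | check_mbuf_denials
```

With the counters shown in this message the function prints nothing, which suggests the slowdown is not simple mbuf starvation.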