Date:      Sat, 30 Oct 2010 20:07:11 +0300
From:      Коньков Евгений <kes-kes@yandex.ru>
To:        Pyun YongHyeon <pyunyh@gmail.com>
Cc:        freebsd-net@freebsd.org
Subject:   Re[2]: How to obtain place of low performance?
Message-ID:  <1698885470.20101030200711@yandex.ru>
In-Reply-To: <20101029181745.GC19479@michelle.cdnetworks.com>
References:  <364322520.20101029102010@yandex.ru> <20101029181745.GC19479@michelle.cdnetworks.com>

Hello, Pyun.

You wrote on 29 October 2010 at 21:17:45:

PY> On Fri, Oct 29, 2010 at 10:20:10AM +0300, Коньков Евгений wrote:
>> Hi, Freebsd-net.
>> 
>> serv1# ifconfig nfe0
>> nfe0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>         options=10b<RXCSUM,TXCSUM,VLAN_MTU,TSO4>
>>         ether 00:13:d4:ce:82:16
>>         inet 10.11.8.17 netmask 0xfffffc00 broadcast 10.11.11.255
>>         inet 10.11.8.15 netmask 0xfffffc00 broadcast 10.11.11.255
>>         media: Ethernet autoselect (1000baseTX <full-duplex>)
>>         status: active
>> serv1# ifconfig igb0
>> igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>         options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
>>         ether 00:1b:21:45:da:b8
>>         media: Ethernet autoselect (1000baseTX <full-duplex>)
>>         status: active
>> serv1# ifconfig vlan7
>> vlan7: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>         options=3<RXCSUM,TXCSUM>
>>         ether 00:1b:21:45:da:b8
>>         inet 10.11.15.15 netmask 0xffffff00 broadcast 10.11.15.255
>>         inet 10.11.7.1 netmask 0xffffff00 broadcast 10.11.7.255
>>         media: Ethernet autoselect (1000baseTX <full-duplex>)
>>         status: active
>>         vlan: 7 parent interface: igb0
>> 
>> A bandwidth test with iperf shows low performance on nfe0.
>> 
>> # iperf -c 10.11.8.17
>> ------------------------------------------------------------
>> Client connecting to 10.11.8.17, TCP port 5001
>> TCP window size: 32.5 KByte (default)
>> ------------------------------------------------------------
>> [  3] local 10.11.8.16 port 63911 connected with 10.11.8.17 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.5 sec    124 MBytes  98.8 Mbits/sec
>> # iperf -c 10.11.7.1
>> ------------------------------------------------------------
>> Client connecting to 10.11.7.1, TCP port 5001
>> TCP window size: 32.5 KByte (default)
>> ------------------------------------------------------------
>> [  3] local 10.11.7.2 port 61422 connected with 10.11.7.1 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.3 sec    800 MBytes    653 Mbits/sec
>> 
>> Even though it is an integrated NIC, I would expect about 300-400 Mbit/s of throughput.
>> Is nfe0 really such a poor NIC?
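
One quick thing worth checking here, just as a guess: the nfe0 options above show TSO4 and checksum offload enabled, and if offloading misbehaves on this controller it can cost TCP throughput. Disabling it temporarily and re-running the same test would show whether it is involved:

   # ifconfig nfe0 -tso -txcsum -rxcsum
   # iperf -c 10.11.8.17

If the numbers improve, the offload settings rather than the NIC itself are the place to look.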

PY> nfe(4) controllers are not among the best controllers for server
PY> environments, but they are generally not poor for desktop use. I mean
PY> you should be able to saturate the link with bulk TCP/UDP transfers.
PY> The last time I tried iperf it was not reliable. Did you disable
PY> threading in iperf? Also note that both the sender and the receiver
PY> should be built with the same iperf configuration options.
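
For what it's worth, one way to take iperf's own behaviour out of the picture is to force a single stream with an explicit window size on both ends and a longer run, e.g. (the exact -w/-t values here are only an illustration):

   on 10.11.8.17:  # iperf -s -w 256k
   on the client:  # iperf -c 10.11.8.17 -w 256k -t 30 -P 1 -i 5

Both sides then use the same window, and a single 30-second stream is easier to interpret than the default 10-second run.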

igb and nfe are on the same machine:
       --------------\igb 10.11.7.1
CLIENT/               SERVER
      \--------------/nfe 10.11.8.17
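
To narrow down where the ~100 Mbit/s ceiling on the nfe0 path comes from, a few counters on SERVER during a test run usually help, for example:

   # netstat -i          (input/output errors and drops on nfe0 vs igb0)
   # vmstat -i           (interrupt rate per device)
   # top -SH             (whether an interrupt or kernel thread is CPU-bound)
   # sysctl dev.nfe.0    (per-device settings, if the driver exposes any)

It is also worth confirming that every hop on the nfe0 path (client NIC, switch port) really negotiated 1000baseTX, since 98.8 Mbit/s is suspiciously close to a saturated 100 Mbit link.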

-- 
Best regards,
 Коньков                          mailto:kes-kes@yandex.ru



