Date:      Fri, 5 Oct 2012 17:11:12 +0300
From:      Nikolay Denev <ndenev@gmail.com>
To:        Gleb Smirnoff <glebius@freebsd.org>
Cc:        "svn-src-all@freebsd.org" <svn-src-all@freebsd.org>
Subject:   Re: svn commit: r240742 - head/sys/net
Message-ID:  <E932BC95-8286-495D-9709-3F74379CB90B@gmail.com>
In-Reply-To: <2109548116005159772@unknownmsgid>
References:  <201209201005.q8KA5BqZ094414@svn.freebsd.org> <2966A49C-DE3F-4559-A799-D1E9C0A74A9C@gmail.com> <20121005070914.GI34622@glebius.int.ru> <F01FAEFE-8148-412D-9772-A48A1ADA64A7@gmail.com> <20121005080453.GL34622@glebius.int.ru> <2109548116005159772@unknownmsgid>


On Oct 5, 2012, at 11:16 AM, Nikolay Denev <ndenev@gmail.com> wrote:

> On 05.10.2012, at 11:04, Gleb Smirnoff <glebius@freebsd.org> wrote:
>
>> On Fri, Oct 05, 2012 at 11:02:14AM +0300, Nikolay Denev wrote:
>> N> > On Fri, Oct 05, 2012 at 09:34:07AM +0300, Nikolay Denev wrote:
>> N> > N> > Date: Thu Sep 20 10:05:10 2012
>> N> > N> > New Revision: 240742
>> N> > N> > URL: http://svn.freebsd.org/changeset/base/240742
>> N> > N> >
>> N> > N> > Log:
>> N> > N> >  Convert lagg(4) to use if_transmit instead of if_start.
>> N> > N> >
>> N> > N> >  In collaboration with:    thompsa, sbruno, fabient
>> N> > N> >
>> N> > N> > Modified:
>> N> > N> >  head/sys/net/if_lagg.c
>> N> > ...
>> N> > N> Are there any plans to MFC this change and the one for if_bridge?
>> N> > N> This one applies cleanly on RELENG_9 and I will have the opportunity to test it later today.
>> N> >
>> N> > Sure we can, if you test it. Thanks!
>> N> >
>> N> > --
>> N> > Totus tuus, Glebius.
>> N>
>> N> Patch applied and module reloaded.
>> N>
>> N> I'm testing with 16 iperf instances from a RELENG_8 machine connected to a 10G port on
>> N> an Extreme Networks switch via an ix(4) interface; on the other side is the machine with if_lagg
>> N> on an Intel quad-port igb(4) adapter.
>> N>
>> N>                     /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
>> N>      Load Average   |||||||||||||||||||||||||||||||||||||||
>> N>
>> N>       Interface           Traffic               Peak                Total
>> N>           lagg0  in    464.759 MB/s        465.483 MB/s           25.686 GB
>> N>                  out    14.900 MB/s         22.543 MB/s            3.845 GB
>> N>
>> N>             lo0  in      0.000 KB/s          0.000 KB/s            2.118 MB
>> N>                  out     0.000 KB/s          0.000 KB/s            2.118 MB
>> N>
>> N>            igb3  in    116.703 MB/s        117.322 MB/s            7.235 GB
>> N>                  out     3.427 MB/s          5.225 MB/s            2.303 GB
>> N>
>> N>            igb2  in    116.626 MB/s        117.301 MB/s            8.248 GB
>> N>                  out     4.789 MB/s         12.069 MB/s            3.331 GB
>> N>
>> N>            igb1  in    116.845 MB/s        117.138 MB/s            6.406 GB
>> N>                  out     4.222 MB/s          6.439 MB/s          267.546 MB
>> N>
>> N>            igb0  in    116.595 MB/s        117.298 MB/s            6.045 GB
>> N>                  out     2.984 MB/s          7.678 MB/s          221.413 MB
>> N>
>> N>
>> N> (The high load average is due to a disk I/O test running simultaneously on the machine.)
>> N>
>> N> And the same in the other direction :
>> N>
>> N>                     /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
>> N>      Load Average   |||||||||||||||||||||||||||||||||||
>> N>
>> N>       Interface           Traffic               Peak                Total
>> N>           lagg0  in     14.427 MB/s         14.939 MB/s          155.813 GB
>> N>                  out   458.935 MB/s        459.789 MB/s           28.429 GB
>> N>
>> N>             lo0  in      0.000 KB/s          0.000 KB/s            2.118 MB
>> N>                  out     0.000 KB/s          0.000 KB/s            2.118 MB
>> N>
>> N>            igb3  in      2.797 MB/s          3.540 MB/s           39.869 GB
>> N>                  out   117.452 MB/s        121.691 MB/s            8.612 GB
>> N>
>> N>            igb2  in      3.641 MB/s          5.412 MB/s           40.939 GB
>> N>                  out   116.963 MB/s        127.053 MB/s           11.185 GB
>> N>
>> N>            igb1  in      4.202 MB/s          5.301 MB/s           39.097 GB
>> N>                  out   116.286 MB/s        117.230 MB/s            5.356 GB
>> N>
>> N>            igb0  in      3.818 MB/s          4.713 MB/s           38.755 GB
>> N>                  out   116.315 MB/s        117.053 MB/s            6.142 GB
>>
>> A cool test environment you do have :) Have you got any numbers from before
>> applying the patch?
>>
>> --
>> Totus tuus, Glebius.
>
> It's not entirely a test environment, more like semi-production :)
>
> I will try to reload the old module and do a comparison.

With both modules I was able to saturate the four GigE interfaces, getting
about 3.72 Gbit/s total according to iperf; systat -ifstat showed about
116 MB/s on each interface.
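Incidentally, the per-port and aggregate figures line up: four igb(4) ports at
roughly 116 MB/s each works out to about 3.7 Gbit/s. A quick back-of-the-envelope
check (treating MB/s as 10^6 bytes/s, which is an approximation):

```shell
# 116 MB/s per port * 8 bits/byte * 4 ports, expressed in Gbit/s
awk 'BEGIN { printf "%.2f Gbit/s\n", 116 * 8 * 4 / 1000 }'
```

which prints 3.71 Gbit/s, in line with the ~3.72 Gbit/s that iperf reports.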

However, I'm seeing slightly different CPU statistics graphs [1]. The difference
is not big, but with the new if_lagg(4) driver, when the machine is acting as a
client I see slightly higher system CPU time and about the same interrupt time,
while when it is acting as a server both system and interrupt time are slightly
lower. Please note that these tests were not very scientifically rigorous.
When the server is available again I might be able to perform several runs and
do a proper comparison.
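For what it's worth, once several runs are collected, averaging the per-run
iperf totals is trivial; a rough sketch (the figures and the runs.txt file name
below are made up for illustration, not measurements from this thread):

```shell
# One aggregate Gbit/s figure per run, one per line:
printf '%s\n' 3.72 3.68 3.75 > runs.txt

# Average them with awk:
awk '{ sum += $1; n++ } END { printf "%.2f Gbit/s over %d runs\n", sum / n, n }' runs.txt
```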

[1] http://93.152.184.10/lagg.jpg