Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance
From: Daniel Braniss <danny@cs.huji.ac.il>
Date: Mon, 24 Aug 2015 11:13:39 +0300
To: Rick Macklem
Cc: pyunyh@gmail.com, Hans Petter Selasky, FreeBSD stable, FreeBSD Net,
    Slawa Olhovchenkov, Gleb Smirnoff, Christopher Forgeron
Message-Id: <0495A92D-0A4C-4DDB-901A-8ACC3D49C866@cs.huji.ac.il>
In-Reply-To: <1815942485.29539597.1440370972998.JavaMail.zimbra@uoguelph.ca>

> On 24 Aug 2015, at 02:02, Rick Macklem wrote:
>
> Daniel Braniss wrote:
>>
>>> On 22 Aug 2015, at 14:59, Rick Macklem wrote:
>>>
>>> Daniel Braniss wrote:
>>>>
>>>>> On Aug 22, 2015, at 12:46 AM, Rick Macklem wrote:
>>>>>
>>>>> Yonghyeon PYUN wrote:
>>>>>> On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote:
>>>>>>> Hans Petter Selasky wrote:
>>>>>>>> On 08/19/15 09:42, Yonghyeon PYUN wrote:
>>>>>>>>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote:
>>>>>>>>>> On 08/18/15 23:54, Rick Macklem wrote:
>>>>>>>>>>> Ouch! Yes, I now see that the code that counts the # of mbufs
>>>>>>>>>>> is before the code that adds the tcp/ip header mbuf.
>>>>>>>>>>>
>>>>>>>>>>> In my opinion, this should be fixed by setting
>>>>>>>>>>> if_hw_tsomaxsegcount to whatever the driver provides - 1. It is
>>>>>>>>>>> not the driver's responsibility to know if a tcp/ip header mbuf
>>>>>>>>>>> will be added, and it is a lot less confusing than expecting the
>>>>>>>>>>> driver author to know to subtract one. (I had mistakenly thought
>>>>>>>>>>> that tcp_output() had added the tcp/ip header mbuf before the
>>>>>>>>>>> loop that counts mbufs in the list. Btw, this tcp/ip header mbuf
>>>>>>>>>>> also has leading space for the MAC layer header.)
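
To make the off-by-one concrete: a small illustrative helper, not the actual
tcp_output() code, that walks an mbuf chain the way the TSO limit check does.
The TCP/IP header mbuf is prepended only after such a count, which is why a
hardware limit of N entries effectively leaves room for only N - 1 data mbufs.
The function name is made up; only struct mbuf and m_next are real.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /* Illustrative only: count the mbufs already in the chain.  The tcp/ip
     * (plus MAC) header mbuf is prepended later, so it is not in this count. */
    static int
    example_count_mbufs(const struct mbuf *m)
    {
            int n;

            for (n = 0; m != NULL; m = m->m_next)
                    n++;
            return (n);
    }
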
>>>>>>>>>>
>>>>>>>>>> Hi Rick,
>>>>>>>>>>
>>>>>>>>>> Your question is good. With the Mellanox hardware we have separate
>>>>>>>>>> so-called inline data space for the TCP/IP headers, so if the TCP
>>>>>>>>>> stack subtracts something, then we would need to add something to
>>>>>>>>>> the limit, because then the scatter/gather list is only used for
>>>>>>>>>> the data part.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I think none of the drivers in the tree subtract 1 for
>>>>>>>>> if_hw_tsomaxsegcount. Probably touching the Mellanox driver would
>>>>>>>>> be simpler than fixing all the other drivers in the tree.
>>>>>>>>>
>>>>>>>>>> Maybe it can be controlled by some kind of flag, if all three TSO
>>>>>>>>>> limits should include the TCP/IP/ethernet headers too. I'm pretty
>>>>>>>>>> sure we want both versions.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hmm, I'm afraid it's already complex. Drivers have to tell almost
>>>>>>>>> the same information to both bus_dma(9) and the network stack.
>>>>>>>>
>>>>>>>> Don't forget that not all drivers in the tree set the TSO limits
>>>>>>>> before if_attach(), so possibly the subtraction of one TSO fragment
>>>>>>>> needs to go into ip_output() ....
>>>>>>>>
>>>>>>> Ok, I realized that some drivers may not know the answers before
>>>>>>> ether_ifattach(), due to the way they are configured/written (I saw
>>>>>>> the use of if_hw_tsomax_update() in the patch).
>>>>>>
>>>>>> I was not able to find an interface that configures TSO parameters
>>>>>> after the if_t conversion. I'm under the impression
>>>>>> if_hw_tsomax_update() is not designed to be used this way. Probably
>>>>>> we need a better one? (CCed to Gleb.)
>>>>>>
>>>>>>>
>>>>>>> If it is subtracted as a part of the assignment to
>>>>>>> if_hw_tsomaxsegcount in tcp_output() at line #791 like the
>>>>>>> following, I don't think it should matter if the values are set
>>>>>>> before ether_ifattach()?
>>>>>>>         /*
>>>>>>>          * Subtract 1 for the tcp/ip header mbuf that
>>>>>>>          * will be prepended to the mbuf chain in this
>>>>>>>          * function in the code below this block.
>>>>>>>          */
>>>>>>>         if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1;
>>>>>>>
>>>>>>> I don't have a good solution for the case where a driver doesn't
>>>>>>> plan on using the tcp/ip header provided by tcp_output(), except to
>>>>>>> say the driver can add one to the setting to compensate for that
>>>>>>> (and if they fail to do so, it still works, although somewhat
>>>>>>> suboptimally). When I now read the comment in sys/net/if_var.h it is
>>>>>>> clear what it means, but for some reason I didn't read it that way
>>>>>>> before? (I think it was the part that said the driver didn't have to
>>>>>>> subtract for the headers that confused me?)
>>>>>>> In any case, we need to try and come up with a clear definition of
>>>>>>> what they need to be set to.
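
For concreteness, a minimal sketch of the driver-attach side being discussed,
assuming the tcp_output() change quoted above goes in (so the stack, not the
driver, does the "- 1"). The function name, the link-level address argument
and the numeric limits are made up for illustration; they are not what ix(4)
or mlxen(4) actually program:

    #include <sys/param.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_var.h>
    #include <net/ethernet.h>

    /* Hypothetical attach path: report the raw hardware limits and let
     * tcp_output() subtract one for the prepended tcp/ip header mbuf. */
    static void
    example_tso_attach(struct ifnet *ifp, const uint8_t *lladdr)
    {
            ifp->if_hw_tsomax = 65535;          /* max TSO payload, bytes */
            ifp->if_hw_tsomaxsegcount = 35;     /* max scatter/gather entries */
            ifp->if_hw_tsomaxsegsize = 2048;    /* max bytes per s/g entry */
            ether_ifattach(ifp, lladdr);        /* limits set before attach */
    }

That is the appeal of option 2 below: the driver reports what the hardware can
do and never needs to know whether a header mbuf will be prepended.
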
>>>>>>> I can now think of two ways to deal with this:
>>>>>>> 1 - Leave tcp_output() as is, but provide a macro for the device
>>>>>>>     driver authors to use that sets if_hw_tsomaxsegcount with a
>>>>>>>     flag for "driver uses tcp/ip header mbuf", documenting that
>>>>>>>     this flag should normally be true.
>>>>>>> OR
>>>>>>> 2 - Change tcp_output() as above, noting that this is a workaround
>>>>>>>     for confusion w.r.t. whether or not if_hw_tsomaxsegcount should
>>>>>>>     include the tcp/ip header mbuf, and update the comment in
>>>>>>>     if_var.h to reflect this. Then drivers that don't use the
>>>>>>>     tcp/ip header mbuf can increase their value for
>>>>>>>     if_hw_tsomaxsegcount by 1.
>>>>>>> (The comment should also mention that a value of 35 or greater is
>>>>>>> much preferred to 32 if the hardware will support that.)
>>>>>>>
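
And the flip side, again only a sketch under the same assumption: a driver
such as mlxen, which keeps the TCP/IP headers in its separate inline data
space and so never spends a scatter/gather entry on the header mbuf, would
compensate in its attach path by reporting one more than its hardware limit.
EXAMPLE_HW_SG_ENTRIES is a made-up placeholder for whatever the hardware
really supports:

    /* Hypothetical: headers go into inline space, not the S/G list, so add
     * one back to cancel the "- 1" done in tcp_output(). */
    #define EXAMPLE_HW_SG_ENTRIES   32      /* example hardware limit */

    ifp->if_hw_tsomaxsegcount = EXAMPLE_HW_SG_ENTRIES + 1;
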
>>>>>>
>>>>>> Both work for me. My preference is 2, just because it's very common
>>>>>> for most drivers that use the tcp/ip header mbuf.
>>>>> Thanks for this comment. I tend to agree, both for the reason you
>>>>> state and also because the patch is simple enough that it might
>>>>> qualify as an errata for 10.2.
>>>>>
>>>>> I am hoping Daniel Braniss will be able to test the patch and let us
>>>>> know if it improves performance with TSO enabled?
>>>>
>>>> send me the patch and I'll test it ASAP.
>>>>     danny
>>>>
>>> Patch is attached. The one for head will also include an update to the
>>> comment in sys/net/if_var.h, but that isn't needed for testing.
>>
>>
>> well, the plot thickens.
>>
>> Yesterday, before running the new kernel, I decided to re-run my test,
>> and to my surprise I was getting good numbers, about 300MB/s with and
>> without TSO.
>>
>> this morning, the numbers were again bad, around 70MB/s. what the ^%$#@!
>>
>> so, after some coffee, I ran some more tests, and some conclusions:
>> using a netapp(*) as the nfs client:
>>  - doing
>>        ifconfig ix0 tso or -tso
>>    does some magic and the numbers are back to normal - for a while
>>
>> using another FreeBSD/ZFS box as the client all is nifty, actually a bit
>> faster than the netapp (not a fair comparison, since the ZFS client is
>> not heavily used), and I can't see any degradation.
>>
> I assume you meant "server" and not "client" above.
you are correct.
>
>> btw, this is with the patch applied, but I was seeing similar numbers
>> before the patch.
>>
>> running with tso, initially I get around 300MB/s, but after a while
>> (sorry, I can't be more scientific) it drops down to about half, and
>> finally to a pathetic 70MB/s.
>>
> Ok, so it sounds like tso isn't the issue. (At least it seems the patch,
> which I believe is needed, doesn't cause a regression.)
>
> All I can suggest is:
> - looking at the ix stats (I know nothing about them), but if you post
>   them maybe someone conversant with the chip can help? (Before and
>   after degradation.)
> - if you captured packets for a short period of time when degraded and
>   then after doing "ifconfig", looking at the packet capture in wireshark
>   might give some indication of what changes?
> - For this I'd be focused on the TCP layer (window sizes, etc) and timing
>   of packets.
> --> I don't know if there is a packet capture tool like tcpdump on a
>     Netapp, but that might be better than capturing them on the client,
>     in case tcpdump affects the outcome. However, tcpdump run on the
>     client would be a fallback, I think.
>
> The other thing is the degradation seems to cut the rate by about half
> each time: 300 --> 150 --> 70. I have no idea if this helps to explain it.
>
the halving is an optical illusion, it starts degrading slowly.
actually it's bad after reboot; fiddling with the two flags shows the
above 'feature'.

one conclusion so far: ix0 behaves much better without TSO when the server
is a NetApp.

BTW, this thread started because next week our main NetApp will be
upgraded, and I wanted to see if there will be any improvement.

> Have fun with it, rick

love your generosity ;-)

cheers, and thanks,
	danny

>
>> *: while running the tests I monitored the Netapp, and nothing out of
>>    the ordinary there.
>>
>> cheers,
>>	danny