Date:      Thu, 22 Apr 2010 14:06:09 -0400
From:      Stephen Sanders <ssanders@softhammer.net>
To:        Jack Vogel <jfvogel@gmail.com>
Cc:        Brandon Gooch <jamesbrandongooch@gmail.com>, freebsd-performance@freebsd.org
Subject:   Re: FreeBSD 8.0 ixgbe Poor Performance
Message-ID:  <4BD09011.6000104@softhammer.net>
In-Reply-To: <h2x2a41acea1004220939k23e34403hcd57480d92f4a0f1@mail.gmail.com>
References:  <4BCF0C9A.10005@softhammer.net>	 <y2y179b97fb1004210804s6ca12944qf194f3a6d8c33cfe@mail.gmail.com>	 <x2j2a41acea1004211113kf8e4de95s9ff5c1669156b82c@mail.gmail.com>	 <4BCF5783.9050007@softhammer.net>	 <v2t2a41acea1004211353nbfc4e68cy6dfaae6f47f86034@mail.gmail.com>	 <4BD06E28.3060609@softhammer.net> <h2x2a41acea1004220939k23e34403hcd57480d92f4a0f1@mail.gmail.com>

Adding "-P 2 " to the iperf client got the rate up to what it should
be.  Also, running multiple tcpreplay's pushed the rate up as well.
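
In other words, the client invocation became something like:

    iperf -t 10 -w 2.5m -l 2.5m -c 169.1.0.2 -P 2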

Thanks again for the pointers.

On 4/22/2010 12:39 PM, Jack Vogel wrote:
> Couple more things that come to mind:
>
> make sure you increase the mbuf pools: nmbclusters up to at least
> 262144, and the driver uses 4K clusters if you go to jumbo frames
> (nmbjumbop). Some workloads will benefit from increasing the various
> sendspace and recvspace parameters; maxsockets and maxfiles are other
> candidates.
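>
> For example (values illustrative, not tuned recommendations; the
> nmb* pools are boot-time tunables, the rest can be set at runtime):
>
>     # /boot/loader.conf
>     kern.ipc.nmbclusters=262144
>     kern.ipc.nmbjumbop=262144
>
>     # at runtime
>     sysctl net.inet.tcp.sendspace=262144
>     sysctl net.inet.tcp.recvspace=262144
>     sysctl kern.ipc.maxsockets=262144
>     sysctl kern.maxfiles=262144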
>
> Another item: look in /var/log/messages to see if you are getting any
> interrupt storm messages. If you are, that can throttle the irq and
> reduce performance; there is an intr_storm_threshold that you can
> increase to keep that from happening.
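>
> Something like this (the threshold value is only an example):
>
>     grep -i "interrupt storm" /var/log/messages
>     sysctl hw.intr_storm_threshold=10000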
>
> Finally, it is sometimes not possible to fully utilize the hardware
> from a single process; you can get limited by the socket layer, stack,
> scheduler, whatever, so you might want to use multiple test processes.
> I believe iperf has a built-in way to do this. Run more threads and
> look at your cumulative rate.
>
> Good luck,
>
> Jack
>
>
> On Thu, Apr 22, 2010 at 8:41 AM, Stephen Sanders
> <ssanders@softhammer.net> wrote:
>
>     I believe that "pciconf -lvc" showed that the cards were in the
>     correct slots.  I'm not sure what all of the output means, but
>     I'm guessing that "cap 10[a0] = PCI-Express 2 endpoint max data
>     128(256) link x8(x8)" means that the card is an 8-lane card and is
>     using all 8 lanes.
>
>     Setting kern.ipc.maxsockbuf to 16777216 got a better result with
>     iperf TCP testing.  The rate went from ~2.5Gbps to ~5.5Gbps.
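>
>     That was a plain runtime sysctl, i.e.:
>
>         sysctl kern.ipc.maxsockbuf=16777216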
>
>     Running iperf in UDP test mode is still yielding ~2.5Gbps, and
>     running tcpreplay tests is yielding ~2.5Gbps as well.
>
>     Command lines for iperf testing are:
>
>     iperf -t 10 -w 2.5m -l 2.5m -c 169.1.0.2
>     iperf -s -w 2.5m -B 169.1.0.2
>
>     iperf -t 10 -w 2.5m  -c 169.1.0.2 -u
>     iperf -s -w 2.5m -B 169.1.0.2 -u
>
>     For the tcpdump/tcpreplay test, I'm sending the tcpdump output to
>     /dev/null and using the cache flag on tcpreplay in order to avoid
>     limiting my network interface throughput to the disk speed.
>     Command lines for this test are:
>
>     tcpdump -i ix1 -w /dev/null
>     tcpreplay -i ix1 -t -l 0 -K ./rate.pcap
>
>     Please forgive my lack of kernel-building prowess, but I'm guessing
>     that the latest driver needs to be built in a FreeBSD -STABLE
>     source tree.  I ran into an undefined symbol, "drbr_needs_enqueue",
>     in the ixgbe code I downloaded.
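>
>     For reference, the module build I tried was roughly this (tree
>     location assumed):
>
>         cd /usr/src/sys/modules/ixgbe
>         make clean && make && make install
>         kldload ixgbe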
>
>     Thanks for all the help.
>
>     On 4/21/2010 4:53 PM, Jack Vogel wrote:
>>     Use my new driver and it will tell you, when it comes up, what the
>>     slot speed is, and if it's substandard it will SQUAWK loudly at
>>     you :)
>>
>>     I think the S5000PAL only has Gen1 PCIE slots, which is going to
>>     limit you somewhat. I would recommend a current-generation (X58 or
>>     5520 chipset) system if you want the full benefit of 10G.
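>>
>>     (Back of the envelope: Gen1 PCIE is 2.5 GT/s per lane with 8b/10b
>>     encoding, i.e. about 2 Gb/s of payload per lane, so an x8 slot
>>     tops out around 16 Gb/s per direction before protocol overhead;
>>     enough for one 10G port, but with little headroom.)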
>>
>>     BTW, you don't say which adapter, 82598 or 82599, you are using?
>>
>>     Jack
>>
>>
>>     On Wed, Apr 21, 2010 at 12:52 PM, Stephen Sanders
>>     <ssanders@softhammer.net> wrote:
>>
>>         I'd be most pleased to get near 9k.
>>
>>         I'm running FreeBSD 8.0 amd64 on both of the test hosts.
>>         I've reset the configurations to the system defaults, as I was
>>         getting nowhere with sysctl and loader.conf settings.
>>
>>         The motherboards have been configured to use MSI interrupts.
>>         The S5000PAL has an MSI-to-legacy-interrupt BIOS setting that
>>         confuses the driver's interrupt setup.
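>>
>>         One quick way to see which vectors the driver actually got
>>         (with MSI-X working you would expect per-queue entries along
>>         the lines of "ix0:que 0" rather than one shared legacy irq):
>>
>>         vmstat -i | grep ix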
>>
>>         The 10Gbps cards should be plugged into the x8 PCI-E slots on
>>         both hosts.  I'm double-checking that claim right now and will
>>         get back later.
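>>
>>         The check, roughly (grep pattern is just illustrative):
>>
>>         pciconf -lc | grep -A 4 '^ix'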
>>
>>         Thanks
>>
>>
>>         On 4/21/2010 2:13 PM, Jack Vogel wrote:
>>         > When you get into the 10G world, your performance will only
>>         > be as good as your weakest link.  What I mean is, if you
>>         > connect to something that has less than stellar bus and/or
>>         > memory performance, it is going to throttle everything.
>>         >
>>         > Running back to back with two good systems you should be
>>         > able to get near line rate (the 9K range).  Things that can
>>         > affect that: 64-bit kernel, TSO, LRO, and how many queues
>>         > come to mind.  The default driver config should get you
>>         > there, so tell me more about your hardware/os config?
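>>         >
>>         > (TSO and LRO can be toggled per interface, e.g.
>>         > "ifconfig ix0 tso lro", and plain "ifconfig ix0" shows
>>         > which options are currently set.)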
>>         >
>>         > Jack
>>         >
>>         >
>>         >
>>         > On Wed, Apr 21, 2010 at 8:04 AM, Brandon Gooch
>>         > <jamesbrandongooch@gmail.com> wrote:
>>         >
>>         >
>>         >> On Wed, Apr 21, 2010 at 9:32 AM, Stephen Sanders
>>         >> <ssanders@softhammer.net> wrote:
>>         >>
>>         >>> I am running speed tests on a pair of systems equipped
>>         with Intel 10Gbps
>>         >>> cards and am getting poor performance.
>>         >>>
>>         >>> iperf and tcpdump testing indicates that the card is
>>         running at roughly
>>         >>> 2.5Gbps max transmit/receive.
>>         >>>
>>         >>> My attempts at fiddling with netisr, polling, and varying
>>         >>> the buffer sizes have been fruitless.  I'm sure there is
>>         >>> something that I'm missing, so I'm hoping for suggestions.
>>         >>>
>>         >>> There are two systems connected head to head via a
>>         >>> crossover cable.  The two systems have the same hardware
>>         >>> configuration.  The hardware is as follows:
>>         >>>
>>         >>> 2 Intel E5430 (Quad core) @ 2.66 Ghz
>>         >>> Intel S5000PAL Motherboard
>>         >>> 16GB Memory
>>         >>>
>>         >>> My iperf command line for the client is:
>>         >>>
>>         >>> iperf -t 10 -c 169.0.0.1 -w 2.5M -l 2.5M
>>         >>>
>>         >>> My tcpdump test command lines are:
>>         >>>
>>         >>> tcpdump -i ix0 -w /dev/null
>>         >>> tcpreplay -i ix0 -t -l 0 -K ./test.pcap
>>         >>>
>>         >> If you're running 8.0-RELEASE, you might try updating to
>>         8-STABLE.
>>         >> Jack Vogel recently committed updated Intel NIC driver code:
>>         >>
>>         >> http://svn.freebsd.org/viewvc/base/stable/8/sys/dev/ixgbe/
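>>         >>
>>         >> A checkout along these lines should fetch it (exact URL
>>         >> assumed from the viewvc link above):
>>         >>
>>         >> svn co http://svn.freebsd.org/base/stable/8/sys/dev/ixgbe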
>>         >>
>>         >> -Brandon