Date:      Wed, 03 Jul 2013 23:39:02 +1000
From:      Lawrence Stewart <lstewart@freebsd.org>
To:        Outback Dingo <outbackdingo@gmail.com>
Cc:        net@freebsd.org
Subject:   Re: Terrible ix performance
Message-ID:  <51D42976.9020206@freebsd.org>
In-Reply-To: <CAKYr3zyWzQsFOrQ-MrGTdTzJzhP1kXNac+Hu8NXfC_J6YJcOsg@mail.gmail.com>
References:  <CAKYr3zyV74DPLsJRuDoRiYsYdAXs=EoqJ6+_k4hJiSnwq5zhUQ@mail.gmail.com> <51D3E5BC.1000604@freebsd.org> <CAKYr3zyWzQsFOrQ-MrGTdTzJzhP1kXNac+Hu8NXfC_J6YJcOsg@mail.gmail.com>

On 07/03/13 22:58, Outback Dingo wrote:
> On Wed, Jul 3, 2013 at 4:50 AM, Lawrence Stewart <lstewart@freebsd.org
> <mailto:lstewart@freebsd.org>> wrote:
> 
>     On 07/03/13 14:28, Outback Dingo wrote:
>     > I've got a high-end storage server here; iperf shows decent network I/O:
>     >
>     > iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
>     > ------------------------------------------------------------
>     > Client connecting to 10.0.96.1, TCP port 5001
>     > TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
>     > ------------------------------------------------------------
>     > [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
>     > [ ID] Interval       Transfer     Bandwidth
>     > [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
>     > [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
>     > [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
> 
>     Given that iperf exercises the ixgbe driver (ix), network path and TCP,
>     I would suggest that your subject is rather misleading ;)
> 
>     > the card has a 3 meter twinax cable from Cisco connected to it,
>     > going through a Fujitsu switch. We have tweaked various
>     > networking and kernel sysctls, but from an sftp or NFS session I
>     > can't get better than 100 MB/s from a zpool with 8 mirrored
>     > vdevs. We also have an identical box with a 1 meter Cisco twinax
>     > cable that writes at 2.4 Gb/s but reads at only 1.4 Gb/s...
> 
>     I take it the RTT between both hosts is very low, i.e. sub-1ms?

An answer to the above question would be useful.
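
A plain ping between the two boxes is enough to get a rough number,
e.g.:

    ping -c 10 10.0.96.1

and read the avg field from the "round-trip min/avg/max/stddev"
summary line at the end.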

>     > does anyone have an idea of what the bottleneck could be? This
>     > is a shared storage array with dual LSI controllers connected to
>     > 32 drives via an enclosure; local dd and other tests show the
>     > zpool performs quite well. However, as soon as we introduce any
>     > type of protocol (sftp, Samba, NFS), performance plummets. I'm
>     > quite puzzled and have run out of ideas, so now curiosity has
>     > me... it's loading the ix driver and working, but not up to
>     > speed.
> 
>     ssh (and sftp by extension) aren't often tuned for high-speed operation.
>     Are you running with the HPN patch applied or a new enough FreeBSD that
>     has the patch included? Samba and NFS are both likely to need tuning for
>     multi-Gbps operation.
> 
> 
> Running 9-STABLE as of 3 days ago. What are you referring to, so I
> can validate that I don't need to apply it?

OK, so your SSH should have the HPN patch.
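
You can confirm by checking the version banner; on a 9-STABLE base
system I'd expect the HPN tag to show up in something like:

    ssh -V
    OpenSSH_5.8p2_hpn13v11 FreeBSD-20110503

(the exact version and date strings depend on when you last rebuilt
world; the important bit is the "hpn" suffix).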

> As for tuning NFS/Samba: Samba is configured with AIO and sendfile,
> and there's so much information on tuning these things that it's a
> bit hard to decipher what's right and what's not.
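
For what it's worth, the AIO/sendfile settings you mention boil down
to a handful of smb.conf lines; a minimal sketch, with illustrative
values rather than recommendations:

    [global]
    use sendfile = yes
    aio read size = 16384
    aio write size = 16384
    min receivefile size = 16384

Whether any of those actually help is very workload dependent, which
is part of why I'd put Samba tuning aside for the moment.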

Before looking at tuning, I'd suggest testing with a protocol that
involves the disk but isn't as heavyweight as SSH/NFS/CIFS. FTP is the
obvious choice. Set up an inetd-based FTP instance, serve a file large
enough that it will take ~60s to transfer to the client, and report
back what data rates you get from 5 back-to-back transfer trials.
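
On a stock FreeBSD box that's roughly: uncomment the ftpd line in
/etc/inetd.conf, enable and start inetd, create a test file under the
FTP login's home directory, then fetch it repeatedly from the client.
A sketch, assuming the server is 10.0.96.2 and a hypothetical "user"
account (adjust addresses, credentials and paths to suit):

    # server: uncomment in /etc/inetd.conf:
    #   ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l
    echo 'inetd_enable="YES"' >> /etc/rc.conf
    service inetd start
    # ~6 GB of test data, ~60s at your current ~100 MB/s
    dd if=/dev/zero of=/home/user/ftptest bs=1m count=6000

    # client: fetch prints the average rate for each transfer
    for i in 1 2 3 4 5; do
        fetch -o /dev/null ftp://user:pass@10.0.96.2/ftptest
    done

Two caveats: if the pool has compression enabled, use non-zero data so
ZFS actually has to read the file back from disk, and if the file fits
in ARC the later trials will measure the network path more than the
disks.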

Cheers,
Lawrence


