Date:      Wed, 3 Jul 2013 19:06:47 -0400
From:      Outback Dingo <outbackdingo@gmail.com>
To:        Lawrence Stewart <lstewart@freebsd.org>
Cc:        net@freebsd.org
Subject:   Re: Terrible ix performance
Message-ID:  <CAKYr3zyFF+A-OHsEL7t6rdv6Jc4c2ByvvRhV-Fv+PXt9Y-sXwg@mail.gmail.com>
In-Reply-To: <51D42976.9020206@freebsd.org>
References:  <CAKYr3zyV74DPLsJRuDoRiYsYdAXs=EoqJ6+_k4hJiSnwq5zhUQ@mail.gmail.com> <51D3E5BC.1000604@freebsd.org> <CAKYr3zyWzQsFOrQ-MrGTdTzJzhP1kXNac+Hu8NXfC_J6YJcOsg@mail.gmail.com> <51D42976.9020206@freebsd.org>

On Wed, Jul 3, 2013 at 9:39 AM, Lawrence Stewart <lstewart@freebsd.org> wrote:

> On 07/03/13 22:58, Outback Dingo wrote:
> > On Wed, Jul 3, 2013 at 4:50 AM, Lawrence Stewart <lstewart@freebsd.org> wrote:
> >
> >     On 07/03/13 14:28, Outback Dingo wrote:
> >     > I've got a high-end storage server here; iperf shows decent network IO:
> >     >
> >     > iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
> >     > ------------------------------------------------------------
> >     > Client connecting to 10.0.96.1, TCP port 5001
> >     > TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
> >     > ------------------------------------------------------------
> >     > [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
> >     > [ ID] Interval       Transfer     Bandwidth
> >     > [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
> >     > [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
> >     > [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
> >
> >     Given that iperf exercises the ixgbe driver (ix), network path and TCP,
> >     I would suggest that your subject is rather misleading ;)
> >
> >     > The card has a 3 meter Cisco twinax cable connected to it, going
> >     > through a Fujitsu switch. We have tweaked various networking and
> >     > kernel sysctls, but from an sftp or NFS session I can't get better
> >     > than 100MB/s from a zpool with 8 mirrored vdevs. We also have an
> >     > identical box, with a 1 meter Cisco twinax cable, that writes at
> >     > 2.4Gb/s but reads at only 1.4Gb/s...
> >
> >     I take it the RTT between the two hosts is very low, i.e. sub-1ms?
>
> An answer to the above question would be useful.
>
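(For reference, the RTT between the two hosts can be checked with a plain
ping from one box to the other, e.g.

    ping -c 10 10.0.96.1

and reading the avg field of the round-trip min/avg/max summary line. The
10.0.96.1 address is just the iperf server from the test above; substitute
whichever peer you're testing against.)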
> >     > Does anyone have an idea of what the bottleneck could be? This is a
> >     > shared storage array with dual LSI controllers connected to 32
> >     > drives via an enclosure; local dd and other tests show the zpool
> >     > performs quite well. However, as soon as we introduce any type of
> >     > protocol (sftp, Samba, NFS), performance plummets. I'm quite puzzled
> >     > and have run out of ideas, so now curiosity has me... it's loading
> >     > the ix driver and working, but not up to speed.
> >
> >     ssh (and sftp by extension) aren't often tuned for high-speed
> >     operation. Are you running with the HPN patch applied, or a new
> >     enough FreeBSD that has the patch included? Samba and NFS are both
> >     likely to need tuning for multi-Gbps operation.
> >
> >
> > Running 9-STABLE as of 3 days ago. What are you referring to, so I can
> > validate that I don't need to apply it?
>
> Ok so your SSH should have the HPN patch.
>
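(A quick way to check is the version banner: the upstream HPN patch tags the
version string, so, assuming the in-tree OpenSSH does the same, running

    ssh -V

should print a version string carrying an hpn marker, along the lines of
"OpenSSH_6.2p2 ... hpn13v11". If no hpn tag shows up, the patch may not be
compiled in.)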
> > As for tuning for NFS/Samba: Samba is configured with AIO and sendfile,
> > and there is so much information on tuning these things that it's a bit
> > hard to decipher what's right and what's not.
>
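(For reference, the AIO/sendfile bits in smb.conf look roughly like the
sketch below; the buffer and threshold sizes are starting points to
experiment with, not known-good values for this box:

    [global]
       use sendfile = yes
       aio read size = 65536
       aio write size = 65536
       socket options = TCP_NODELAY SO_SNDBUF=1048576 SO_RCVBUF=1048576
)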
> Before looking at tuning, I'd suggest testing with a protocol that
> involves the disk but isn't as heavyweight as SSH/NFS/CIFS. FTP is the
> obvious choice. Set up an inetd-based FTP instance, serve a file large
> enough that it will take ~60s to transfer to the client, and report back
> what data rates you get from 5 back-to-back transfer trials.
>
>
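(For anyone reproducing the test: a minimal inetd-based ftpd on stock
FreeBSD is roughly to uncomment the ftp line in /etc/inetd.conf,

    ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l

then enable and start inetd:

    echo 'inetd_enable="YES"' >> /etc/rc.conf
    service inetd start

Paths and flags above are the FreeBSD defaults; adjust as needed.)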
Via NFS: on the 1Gb interface I get 100MB/s, on the 10Gb interface I get
250MB/s.
Via FTP: on the 1Gb interface I get 112MB/s, and on the 10Gb interface:

 ftp> put TEST3
53829697536 bytes sent in 01:56 (439.28 MiB/s)
ftp> get TEST3
53829697536 bytes received in 01:21 (632.18 MiB/s)
ftp> get TEST3
53829697536 bytes received in 01:37 (525.37 MiB/s)
ftp> put TEST3
43474223104 bytes sent in 01:50 (376.35 MiB/s)
ftp> put TEST3
local: TEST3 remote: TEST3
229 Entering Extended Passive Mode (|||10613|)
226 Transfer complete
43474223104 bytes sent in 01:41 (410.09 MiB/s)
ftp>

So FTP is still only at about 50% of the iperf numbers on the 10Gb interface.

> Cheers,
> Lawrence


