Date:      Wed, 3 Jul 2013 08:58:41 -0400
From:      Outback Dingo <outbackdingo@gmail.com>
To:        Lawrence Stewart <lstewart@freebsd.org>
Cc:        net@freebsd.org
Subject:   Re: Terrible ix performance
Message-ID:  <CAKYr3zyWzQsFOrQ-MrGTdTzJzhP1kXNac+Hu8NXfC_J6YJcOsg@mail.gmail.com>
In-Reply-To: <51D3E5BC.1000604@freebsd.org>
References:  <CAKYr3zyV74DPLsJRuDoRiYsYdAXs=EoqJ6+_k4hJiSnwq5zhUQ@mail.gmail.com> <51D3E5BC.1000604@freebsd.org>

On Wed, Jul 3, 2013 at 4:50 AM, Lawrence Stewart <lstewart@freebsd.org> wrote:

> On 07/03/13 14:28, Outback Dingo wrote:
> > I've got a high-end storage server here; iperf shows decent network I/O
> >
> > iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
> > ------------------------------------------------------------
> > Client connecting to 10.0.96.1, TCP port 5001
> > TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
> > ------------------------------------------------------------
> > [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
> > [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
> > [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
>
> Given that iperf exercises the ixgbe driver (ix), network path and TCP,
> I would suggest that your subject is rather misleading ;)
>
> > the card has a 3 meter twinax cable from cisco connected to it, going
> > through a fujitsu switch. We have tweaked various networking and kernel
> > sysctls, however from an sftp and nfs session I can't get better than
> > 100MB/s from a zpool with 8 mirrored vdevs. We also have an identical
> > box with a 1 meter cisco twinax cable that writes at 2.4Gb/s but reads
> > at only 1.4Gb/s...
>
> I take it the RTT between both hosts is very low i.e. sub 1ms?
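> (A quick way to confirm, e.g.: ping -c 10 10.0.96.1 from the other
> box. Back-of-the-envelope: at ~8 Gbit/s a 1 ms RTT means a
> bandwidth-delay product of roughly 1 MB, so your 2.5 MB iperf window
> should be ample; a much larger RTT would change that picture.)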
>
> > does anyone have an idea of what the bottleneck could be?? This is a
> > shared storage array with dual LSI controllers connected to 32 drives
> > via an enclosure; local dd and other tests show the zpool performs
> > quite well. However, as soon as we introduce any type of protocol
> > (sftp, samba, nfs) performance plummets. I'm quite puzzled and have
> > run out of ideas, so now curiosity has me... it's loading the ix
> > driver and working, but not up to speed.
>
> ssh (and sftp by extension) aren't often tuned for high-speed operation.
> Are you running with the HPN patch applied or a new enough FreeBSD that
> has the patch included? Samba and NFS are both likely to need tuning for
> multi-Gbps operation.
>

Running 9-STABLE as of 3 days ago; what are you referring to, so I can
validate that I don't need to apply it?
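If it helps anyone checking the same thing: I believe recent FreeBSD
base OpenSSH carries the HPN patch and tags its version string (though
I haven't confirmed exactly which versions), so

ssh -V

should print something with an _hpn suffix if the patch is compiled in.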

As for tuning NFS/Samba: samba is configured with AIO and sendfile,
and there is so much information out there on tuning these things that
it's a bit hard to decipher what's right and what's not.
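For the record, the kind of settings I keep seeing suggested (values
illustrative only, not something I've validated on this box) are socket
buffer sysctls like:

kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1

and on the samba side something like:

use sendfile = yes
aio read size = 16384
aio write size = 16384

but as I said, it's hard to know which of these actually matter here.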


>
> Cheers,
> Lawrence
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>


