From: Outback Dingo <outbackdingo@gmail.com>
To: Lawrence Stewart
Cc: net@freebsd.org
Date: Wed, 3 Jul 2013 19:06:47 -0400
Subject: Re: Terrible ix performance

On Wed, Jul 3, 2013 at 9:39 AM, Lawrence Stewart wrote:

> On 07/03/13 22:58, Outback Dingo wrote:
> > On Wed, Jul 3, 2013 at 4:50 AM, Lawrence Stewart wrote:
> > > On 07/03/13 14:28, Outback Dingo wrote:
> > > > I've got a high-end storage server here; iperf shows decent
> > > > network I/O:
> > > >
> > > > iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
> > > > ------------------------------------------------------------
> > > > Client connecting to 10.0.96.1, TCP port 5001
> > > > TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
> > > > ------------------------------------------------------------
> > > > [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
> > > > [ ID] Interval       Transfer     Bandwidth
> > > > [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
> > > > [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
> > > > [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
> > >
> > > Given that iperf exercises the ixgbe driver (ix), the network path
> > > and TCP, I would suggest that your subject is rather misleading ;)
> > >
> > > > The card has a 3 meter Cisco twinax cable connected to it, going
> > > > through a Fujitsu switch. We have tweaked various networking and
> > > > kernel sysctls, but from an sftp or NFS session I can't get
> > > > better than 100 MB/s out of a zpool with 8 mirrored vdevs. We
> > > > also have an identical box with a 1 meter Cisco twinax cable
> > > > that gets 1.4 Gb/s; it writes at 2.4 Gb/s compared to reads of
> > > > only 1.4 Gb/s...
> > >
> > > I take it the RTT between both hosts is very low, i.e. sub 1ms?
>
> An answer to the above question would be useful.
>
> > > > Does anyone have an idea of what the bottleneck could be? This
> > > > is a shared storage array with dual LSI controllers connected
> > > > to 32 drives via an enclosure; local dd and other tests show
> > > > the zpool performs quite well. However, as soon as we introduce
> > > > any type of protocol (sftp, Samba, NFS), performance plummets.
> > > > I'm quite puzzled and have run out of ideas, so now curiosity
> > > > has me... It's loading the ix driver and working, just not up
> > > > to speed.
> > >
> > > ssh (and sftp by extension) aren't often tuned for high speed
> > > operation. Are you running with the HPN patch applied, or a new
> > > enough FreeBSD that has the patch included? Samba and NFS are
> > > both likely to need tuning for multi-Gbps operation.
> >
> > Running 9-STABLE as of 3 days ago. What are you referring to, so I
> > can validate that I don't need to apply it?
>
> Ok, so your SSH should have the HPN patch.
>
> > As for tuning for NFS/Samba: Samba is configured with AIO and
> > sendfile, and there is so much information on tuning these things
> > that it's a bit hard to decipher what's right and what's not.
>
> Before looking at tuning, I'd suggest testing with a protocol that
> involves the disk but isn't as heavyweight as SSH/NFS/CIFS. FTP is
> the obvious choice. Set up an inetd-based FTP instance, serve a file
> large enough that it will take ~60s to transfer to the client, and
> report back what data rates you get from 5 back-to-back transfer
> trials.
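For reference, the inetd-based FTP setup on this end was roughly as
follows (a from-memory sketch: the ftpd line is the stock default from
/etc/inetd.conf, and /tank/TEST is a stand-in path, so adjust to
taste). One caveat: a file of zeroes compresses away to nothing if the
pool has compression enabled, which would inflate read numbers.

# uncomment/add the stock ftpd line in /etc/inetd.conf:
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l

# enable and start inetd:
echo 'inetd_enable="YES"' >> /etc/rc.conf
service inetd start

# create a ~50 GB test file on the pool (zeroes; see caveat above):
dd if=/dev/zero of=/tank/TEST bs=1m count=51200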
Via NFS, on the 1 GbE interface I get 100 MB/s and on the 10 GbE
interface I get 250 MB/s. Via FTP, on the 1 GbE interface I get
112 MB/s, and on the 10 GbE interface:

ftp> put TEST3
53829697536 bytes sent in 01:56 (439.28 MiB/s)
ftp> get TEST3
53829697536 bytes received in 01:21 (632.18 MiB/s)
ftp> get TEST3
53829697536 bytes received in 01:37 (525.37 MiB/s)
ftp> put TEST3
43474223104 bytes sent in 01:50 (376.35 MiB/s)
ftp> put TEST3
local: TEST3 remote: TEST3
229 Entering Extended Passive Mode (|||10613|)
226 Transfer complete
43474223104 bytes sent in 01:41 (410.09 MiB/s)
ftp>

So FTP is still only getting about 50% of the expected performance on
the 10 GbE interface.

> Cheers,
> Lawrence
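P.S. A back-of-the-envelope check on that "about 50%", using
1 MiB/s = 8.389 Mbit/s: the FTP writes above (376 to 439 MiB/s) work
out to roughly 3.2 to 3.7 Gbit/s, and the reads (525 to 632 MiB/s) to
roughly 4.4 to 5.3 Gbit/s, against the ~8 Gbit/s iperf showed on the
same link. So FTP is reaching about 40 to 65% of the raw TCP rate.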