Date:      Tue, 2 Jul 2013 23:00:53 -0700
From:      Jack Vogel <jfvogel@gmail.com>
To:        Outback Dingo <outbackdingo@gmail.com>
Cc:        net@freebsd.org
Subject:   Re: Terrible ix performance
Message-ID:  <CAFOYbc=Q+Boix0xwc+Nu4mpoO2G3QaOkZLCYGgYhcgyFpsOqTw@mail.gmail.com>
In-Reply-To: <CAKYr3zyV74DPLsJRuDoRiYsYdAXs=EoqJ6+_k4hJiSnwq5zhUQ@mail.gmail.com>
References:  <CAKYr3zyV74DPLsJRuDoRiYsYdAXs=EoqJ6+_k4hJiSnwq5zhUQ@mail.gmail.com>

ix is just the device name; it is using the ixgbe driver. The driver should
print some kind of banner when it loads. What version of the OS and driver
are you using? I have little experience testing NFS or Samba, so I am
not sure right off what the problem might be.
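For reference, one way to gather that information on FreeBSD is sketched
below (dev.ix.0.%desc is the usual per-device sysctl node for the
description string, but adjust the unit number for your setup):

  # OS and kernel version
  uname -a
  # the banner the ixgbe driver prints at load time
  dmesg | grep -i ix0
  # device description, which normally includes the driver version
  sysctl dev.ix.0.%desc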

Jack



On Tue, Jul 2, 2013 at 9:28 PM, Outback Dingo <outbackdingo@gmail.com> wrote:

> I've got a high-end storage server here; iperf shows decent network I/O
>
> iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
> ------------------------------------------------------------
> Client connecting to 10.0.96.1, TCP port 5001
> TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
> ------------------------------------------------------------
> [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
> [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
> [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
>
>
> The card has a 3 meter Cisco twinax cable connected to it, going
> through a Fujitsu switch. We have tweaked various networking and kernel
> sysctls, but from an sftp or NFS session I can't get better than 100 MB/s
> from a zpool with 8 mirrored vdevs. We also have an identical box, on a
> 1 meter Cisco twinax cable, that gets 1.4 Gb/s; it writes at 2.4 Gb/s but
> reads at only 1.4 Gb/s...
>
> Does anyone have an idea of what the bottleneck could be? This is a
> shared storage array with dual LSI controllers connected to 32 drives via
> an enclosure; local dd and other tests show the zpool performs quite well.
> However, as soon as we introduce any type of protocol (sftp, Samba, NFS),
> performance plummets. I'm quite puzzled and have run out of ideas, so now
> curiosity has me... It is loading the ix driver and working, but not up
> to speed. Is it possible it should be using the ixgbe driver instead?
>
> ix0@pci0:2:0:0: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01
> hdr=0x00
>     vendor     = 'Intel Corporation'
>     device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
>     class      = network
>     subclass   = ethernet
> ix1@pci0:2:0:1: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01
> hdr=0x00
>     vendor     = 'Intel Corporation'
>     device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
>     class      = network
>     subclass   = ethernet
>
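When a 10GbE link benchmarks near wire speed with iperf but NFS/sftp stays
slow, the items below are the FreeBSD tunables most commonly inspected
first. This is only a checklist sketch of standard sysctls and interface
settings, not a confirmed fix for the setup described above:

  # socket buffer ceilings; the defaults are often too small for 10GbE
  sysctl kern.ipc.maxsockbuf
  sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max
  # TCP buffer auto-sizing
  sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto
  # per-interface MTU and offload (TSO/LRO) flags
  ifconfig ix0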


