Date:      Wed, 3 Jul 2013 09:05:30 -0400
From:      Outback Dingo <outbackdingo@gmail.com>
To:        Jack Vogel <jfvogel@gmail.com>
Cc:        net@freebsd.org
Subject:   Re: Terrible ix performance
Message-ID:  <CAKYr3zx8qdS-1MAcuPF0RAgF53nohZCxv-dm1m-NqYakSAJtxw@mail.gmail.com>
In-Reply-To: <CAFOYbc=Q+Boix0xwc+Nu4mpoO2G3QaOkZLCYGgYhcgyFpsOqTw@mail.gmail.com>
References:  <CAKYr3zyV74DPLsJRuDoRiYsYdAXs=EoqJ6+_k4hJiSnwq5zhUQ@mail.gmail.com> <CAFOYbc=Q+Boix0xwc+Nu4mpoO2G3QaOkZLCYGgYhcgyFpsOqTw@mail.gmail.com>

On Wed, Jul 3, 2013 at 2:00 AM, Jack Vogel <jfvogel@gmail.com> wrote:

> ix is just the device name; it is using the ixgbe driver. The driver should
> print some kind of banner when it loads. What version of the OS and driver
> are you using? I have little experience testing NFS or Samba, so I am
> not sure right off what might be the problem.
>
> Jack
>
>
uname -a
FreeBSD XXXX.XXX.net 9.1-STABLE FreeBSD 9.1-STABLE #0 r249621M: Thu Apr 18
08:46:50 UTC 2013     root@builder-9:/usr/obj/san/usr/src/sys/SAN-amd64
 amd64
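
The driver version banner Jack asked about should still be in the kernel
message buffer; one way to read it back (assuming the stock in-tree ixgbe
driver, which prints its version when it attaches) would be:

dmesg | grep '^ix0'
sysctl dev.ix.0.%desc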

loader.conf
kernel="kernel"
bootfile="kernel"
kernel_options=""
kern.hz="20000"
hw.est.msr_info="0"
hw.hptrr.attach_generic="0"
kern.maxfiles="65536"
kern.maxfilesperproc="50000"
kern.cam.boot_delay="8000"
autoboot_delay="5"
isboot_load="YES"
zfs_load="YES"
kern.geom.label.gptid.enable="0"
kern.geom.label.gpt.enable="1"
geom_multipath_load="YES"
aio_load="yes"
hw.ixgbe.enable_aim=0
# ZFS kernel tune
vm.kmem_size="128000M"
vfs.zfs.arc_min="124928M"
vfs.zfs.arc_max="124928M"
vfs.zfs.prefetch_disable="0"
vfs.zfs.txg.timeout="5"
vfs.zfs.vdev.max_pending="10"
vfs.zfs.vdev.min_pending="4"
vfs.zfs.write_limit_override="0"
vfs.zfs.no_write_throttle="0"
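
A quick sanity check that the loader tunables above were actually picked up
at boot is to read a few of them back at runtime, e.g.:

sysctl kern.hz vm.kmem_size vfs.zfs.arc_max vfs.zfs.txg.timeout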

cat /etc/sysctl.conf
# System tuning
hw.intr_storm_threshold=9000
# Disable core dump
kern.coredump=0
# System tuning
kern.ipc.maxsockbuf=16777216
# System tuning
kern.ipc.nmbclusters=262144
# System tuning
kern.ipc.nmbjumbo9=131072
# System tuning
kern.ipc.nmbjumbo16=65536
# System tuning
kern.ipc.nmbjumbop=262144
# System tuning
kern.ipc.somaxconn=8192
# System tuning
kern.maxfiles=65536
# System tuning
kern.maxfilesperproc=50000
# System tuning
net.inet.icmp.icmplim=300
# System tuning
net.inet.icmp.icmplim_output=1
# System tuning
net.inet.tcp.delayed_ack=0
# System tuning
net.inet.tcp.path_mtu_discovery=0
# System tuning
net.inet.tcp.recvbuf_auto=1
# System tuning
net.inet.tcp.recvbuf_inc=262144
# System tuning
net.inet.tcp.recvbuf_max=4194304
# System tuning
net.inet.tcp.recvspace=262144
# System tuning
net.inet.tcp.rfc1323=1
# System tuning
net.inet.tcp.sendbuf_auto=1
# System tuning
net.inet.tcp.sendbuf_inc=262144
# System tuning
net.inet.tcp.sendbuf_max=4194304
# System tuning
net.inet.tcp.sendspace=262144
# System tuning
net.inet.udp.maxdgram=57344
# System tuning
net.inet.udp.recvspace=65536
# System tuning
net.local.stream.recvspace=65536
# System tuning
net.local.stream.sendspace=65536
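
One way to narrow down where the throughput goes missing is to time the same
sequential read locally on the box and then again over NFS from the client.
A rough sketch follows; the host name, path and mount point are placeholders,
and the file should be larger than RAM or freshly written so the ARC does not
hide the disk path:

# locally on the storage box, straight off the zpool
dd if=/pool/testfile of=/dev/null bs=1m count=10000

# on the client, the same file over an NFS mount
mount_nfs storagebox:/pool /mnt
dd if=/mnt/testfile of=/dev/null bs=1m count=10000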



On Tue, Jul 2, 2013 at 9:28 PM, Outback Dingo <outbackdingo@gmail.com> wrote:

> I've got a high-end storage server here; iperf shows decent network I/O:
>>
>> iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
>> ------------------------------------------------------------
>> Client connecting to 10.0.96.1, TCP port 5001
>> TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
>> ------------------------------------------------------------
>> [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
>> [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
>> [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
>>
>>
>> The card has a 3-meter Cisco twinax cable connected to it, going through
>> a Fujitsu switch. We have tweaked various networking and kernel sysctls;
>> however, from an sftp or NFS session I can't get better than 100 MB/s
>> from a zpool with 8 mirrored vdevs. We also have an identical box, on a
>> 1-meter Cisco twinax cable, that writes at 2.4 Gb/s but reads at only
>> 1.4 Gb/s...
>>
>> Does anyone have an idea of what the bottleneck could be? This is a
>> shared storage array with dual LSI controllers connected to 32 drives via
>> an enclosure; local dd and other tests show the zpool performs quite
>> well. However, as soon as we introduce any type of protocol (sftp, samba,
>> NFS), performance plummets. I'm quite puzzled and have run out of ideas,
>> so now curiosity has me wondering: it is loading the ix driver and
>> working, but not up to speed; is it feasible that it should be using the
>> ixgbe driver instead?
>>
>> ix0@pci0:2:0:0: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01
>> hdr=0x00
>>     vendor     = 'Intel Corporation'
>>     device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
>>     class      = network
>>     subclass   = ethernet
>> ix1@pci0:2:0:1: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01
>> hdr=0x00
>>     vendor     = 'Intel Corporation'
>>     device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
>>     class      = network
>>     subclass   = ethernet
>> _______________________________________________
>> freebsd-net@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-net
>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>>
>
>


