Date:      Sat, 20 Mar 2010 03:28:02 +0200
From:      Dan Naumov <dan.naumov@gmail.com>
To:        freebsd-net@freebsd.org, freebsd-questions@freebsd.org,  FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>, freebsd-performance@freebsd.org
Subject:   Re: Samba read speed performance tuning
Message-ID:  <cf9b1ee01003191828g5bea26e7i2ecc1d7135ea5102@mail.gmail.com>
In-Reply-To: <cf9b1ee01003191414q35d884f1oaa72e700305abd51@mail.gmail.com>
References:  <cf9b1ee01003191414q35d884f1oaa72e700305abd51@mail.gmail.com>

On Fri, Mar 19, 2010 at 11:14 PM, Dan Naumov <dan.naumov@gmail.com> wrote:
> On a FreeBSD 8.0-RELEASE/amd64 system with a Supermicro X7SPA-H board
> using an Intel gigabit NIC with the em driver, running on top of a ZFS
> mirror, I was seeing a strange issue. Local reads and writes to the
> pool easily saturate the disks at roughly 75MB/s, which is about the
> best these drives can do. Over Samba, however, writes to a share could
> also hit 75MB/s and saturate the disks, but reads off a share managed
> only a rather pathetic 18MB/s.
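>
> (A plain sequential dd run is enough to measure local throughput like
> this; the pool path and file size below are only placeholders:)
>
> # example only: sequential write, then read, of an 8GB scratch file
> dd if=/dev/zero of=/tank/testfile bs=1m count=8192
> dd if=/tank/testfile of=/dev/null bs=1m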
>
> I found a thread on the FreeBSD forums
> (http://forums.freebsd.org/showthread.php?t=9187) and followed the
> suggested advice. I rebuilt Samba with AIO support, loaded the aio
> kernel module with kldload, and made the following changes to my smb.conf:
>
> From:
> socket options=TCP_NODELAY
>
> To:
> socket options=SO_RCVBUF=131072 SO_SNDBUF=131072 TCP_NODELAY
> min receivefile size=16384
> use sendfile=true
> aio read size = 16384
> aio write size = 16384
> aio write behind = true
> dns proxy = no
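>
> (For reference, the aio module can be loaded right away with kldload
> and made persistent with the standard /boot/loader.conf mechanism;
> nothing here is specific to this particular setup:)
>
> # load aio now, and again automatically on every boot
> kldload aio
> echo 'aio_load="YES"' >> /boot/loader.conf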
>
> This brought a very welcome improvement in read speed: I went from
> 18MB/s to 48MB/s. The write speed remained unchanged and was still
> saturating the disks. Next I tried the suggested sysctl tunables:
>
> atombsd# sysctl net.inet.tcp.delayed_ack=0
> net.inet.tcp.delayed_ack: 1 -> 0
>
> atombsd# sysctl net.inet.tcp.path_mtu_discovery=0
> net.inet.tcp.path_mtu_discovery: 1 -> 0
>
> atombsd# sysctl net.inet.tcp.recvbuf_inc=524288
> net.inet.tcp.recvbuf_inc: 16384 -> 524288
>
> atombsd# sysctl net.inet.tcp.recvbuf_max=16777216
> net.inet.tcp.recvbuf_max: 262144 -> 16777216
>
> atombsd# sysctl net.inet.tcp.sendbuf_inc=524288
> net.inet.tcp.sendbuf_inc: 8192 -> 524288
>
> atombsd# sysctl net.inet.tcp.sendbuf_max=16777216
> net.inet.tcp.sendbuf_max: 262144 -> 16777216
>
> atombsd# sysctl net.inet.tcp.sendspace=65536
> net.inet.tcp.sendspace: 32768 -> 65536
>
> atombsd# sysctl net.inet.udp.maxdgram=57344
> net.inet.udp.maxdgram: 9216 -> 57344
>
> atombsd# sysctl net.inet.udp.recvspace=65536
> net.inet.udp.recvspace: 42080 -> 65536
>
> atombsd# sysctl net.local.stream.recvspace=65536
> net.local.stream.recvspace: 8192 -> 65536
>
> atombsd# sysctl net.local.stream.sendspace=65536
> net.local.stream.sendspace: 8192 -> 65536
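>
> (These are runtime changes; to keep them across reboots the same
> key=value pairs would go into /etc/sysctl.conf, for example:)
>
> # /etc/sysctl.conf - a few of the same values, applied at boot
> net.inet.tcp.delayed_ack=0
> net.inet.tcp.sendbuf_max=16777216
> net.inet.tcp.recvbuf_max=16777216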
>
> This improved the read speed a further tiny bit: from 48MB/s to
> 54MB/s. That's as far as I've gotten, though; I can't figure out how
> to increase the Samba read speed any further. Any ideas?


Oh my god... Why did no one tell me what an enormous performance boost
vfs.zfs.prefetch_disable=0 (i.e. actually enabling prefetch) is? My
local reads off the mirror pool jumped from 75MB/s to 96MB/s (they are
now nearly 25% faster than reading off an individual disk), and reads
off a Samba share skyrocketed from 50MB/s to 90MB/s.

By default, FreeBSD sets vfs.zfs.prefetch_disable to 1 on i386 systems
and on amd64 systems with less than 4GB of available memory. My system
is amd64 with 4GB of RAM, but the integrated video eats some of that,
so the autotuning disabled prefetch. From what I had read, a fair
number of people see performance problems with prefetch enabled and
get better results with it turned off; in my case, however, enabling
it gave a really solid boost to performance.
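
(To pin this past the autotuning across reboots, the standard place for
the tunable is /boot/loader.conf; the entry below simply forces the
value I set by hand:)

# /boot/loader.conf - override the <4GB autotuning and enable ZFS prefetch
vfs.zfs.prefetch_disable="0"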


- Sincerely
Dan Naumov


