Date:      Wed, 23 Feb 2011 23:55:17 -0800
From:      Jeremy Chadwick <freebsd@jdc.parodius.com>
To:        Damien Fleuriot <ml@my.gd>
Cc:        "freebsd-stable@freebsd.org" <freebsd-stable@freebsd.org>
Subject:   Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE
Message-ID:  <20110224075517.GA18146@icarus.home.lan>
In-Reply-To: <4D660909.6090202@my.gd>
References:  <4D660909.6090202@my.gd>

On Thu, Feb 24, 2011 at 08:30:17AM +0100, Damien Fleuriot wrote:
> Hello list,
> 
> I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and since
> then I've been experiencing *abysmal* performance with samba.
> 
> We're talking transfer rates of say 50kbytes/s here, and I'm the only
> client on the box.

I have a similar system with significantly fewer disks (two pools, one
disk each; yes, no redundancy).  The system can push about
65-70MBytes/sec across the network via SMB/CIFS, and 80-90MBytes/sec
via FTP.  I'll share my tunings for Samba, ZFS, and the system with
you.  I spent quite some time experimenting with different values in
Samba and FreeBSD to find out what got me the "best" performance
without horribly bogging down the system.
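
(If you want to sanity-check raw network throughput outside of Samba
first, fetching a large file over FTP and watching the reported rate
is a quick test; the host and path below are placeholders, obviously:

  # fetch -o /dev/null ftp://yourserver.example.net/path/to/largefile

fetch(1) prints the average transfer rate when it finishes.)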

Please note the amount of memory matters greatly here, so don't go
blindly setting these if your system has some absurdly small amount of
physical RAM installed.

Before getting into what my system has, I also want to make clear that
there have been cases in the past where people were seeing abysmal
performance from ZFS, only to find out that a *single disk* in their
pool was causing all of the problems (one horribly-performing disk
drags down the entire pool).  I can try to find the mailing list post,
but I believe the user offlined the disk (and later replaced it) and
everything was fast again.  Just an FYI.
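
If you want to check for that situation yourself, per-disk statistics
make it obvious; both of these commands ship with the base system:

  # zpool iostat -v 1      (per-vdev bandwidth/IOPS, 1-second samples)
  # gstat                  (per-disk %busy and ms/transaction, live)

A single disk pegged at or near 100% busy with huge ms/w or ms/r
values, while the rest of the pool sits idle, is the classic symptom.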


System specifications
=======================
* Case - Supermicro SC733T-645B
*   MB - Supermicro X7SBA
*  CPU - Intel Core 2 Duo E8400
*  RAM - CT2KIT25672AA800, 4GB ECC
*  RAM - CT2KIT25672AA80E, 4GB ECC
* Disk - Intel X25-V SSD (ada0, boot)
* Disk - WD1002FAEX (ada1, ZFS "data" pool)
* Disk - WD2001FASS (ada2, ZFS "backups" pool)



Samba
=======================
Rebuild the port (ports/net/samba35) with AIO_SUPPORT enabled.  To use
AIO you will need to load the aio.ko kernel module (kldload aio) first.
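
For example, something along these lines (assuming the stock ports
tree location):

  # cd /usr/ports/net/samba35
  # make config          (tick AIO_SUPPORT in the options dialog)
  # make install clean
  # kldload aio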

Relevant smb.conf tunings:

  [global]
  socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
  use sendfile = no
  min receivefile size = 16384
  aio read size = 16384
  aio write size = 16384
  aio write behind = yes
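
After restarting Samba, you can confirm the module actually got
loaded:

  # kldstat | grep aio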



ZFS pools
=======================
  pool: backups
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        backups     ONLINE       0     0     0
          ada2      ONLINE       0     0     0

errors: No known data errors

  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          ada1      ONLINE       0     0     0

errors: No known data errors



ZFS tunings
=======================
Your tunings here are "wild" (meaning all over the place).  Your use
of vfs.zfs.txg.synctime="1" is probably hurting you quite badly, in
addition to your choice to enable prefetching (every FreeBSD ZFS
system I've used has benefited tremendously from having prefetching
disabled, even on systems with 8GB of RAM or more).  You do not need
to specify vm.kmem_size_max, so please remove that.  Keeping
vm.kmem_size is fine.  Also get rid of your vdev tunings; I'm not sure
why you have those.
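
If you want to review everything you currently have set before ripping
things out, you can dump the entire ZFS sysctl subtree on the running
system:

  # sysctl vfs.zfs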

My relevant /boot/loader.conf tunings for 8.2-RELEASE (note to readers:
the version of FreeBSD you're running, and build date, matters greatly
here so do not just blindly apply these without thinking first):

  # We use Samba built with AIO support; we need this module!
  aio_load="yes"

  # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory.
  vm.kmem_size="8192M"
  vfs.zfs.arc_max="6144M"

  # Disable ZFS prefetching
  # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
  # Increases overall speed of ZFS, but when disk flushing/writes occur,
  # system is less responsive (due to extreme disk I/O).
  # NOTE: Systems with 8GB of RAM or more have prefetch enabled by
  # default.
  vfs.zfs.prefetch_disable="1"

  # Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
  # should increase throughput and decrease the "bursty" stalls that
  # happen during immense I/O with ZFS.
  # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
  # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
  vfs.zfs.txg.timeout="5"
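
These are boot-time tunables, so a reboot is needed for them to take
effect; afterwards you can confirm what the kernel actually picked up:

  # sysctl vm.kmem_size vfs.zfs.arc_max vfs.zfs.prefetch_disable
  # sysctl vfs.zfs.txg.timeout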



sysctl tunings
=======================
Please note that the kern.maxvnodes tuning below is based on my
system's usage; yours may vary, so you can remove or comment out that
option if you wish.  The same goes for vfs.ufs.dirhash_maxmem.  As for
vfs.zfs.txg.write_limit_override, I strongly suggest you keep it
commented out for starters; it effectively "rate limits" ZFS I/O,
which smooths out overall performance.  Without it I was seeing what
appeared to be incredible network transfer speed, then the system
would churn hard on physical I/O for quite some time, then fast
network speed again, then more physical I/O, etc... very "bursty",
which I didn't want.

  # Increase send/receive buffer maximums from 256KB to 16MB.
  # FreeBSD 7.x and later will auto-tune the size, but only up to the max.
  net.inet.tcp.sendbuf_max=16777216
  net.inet.tcp.recvbuf_max=16777216

  # Double the send/receive TCP buffer space.  This defines the
  # amount of memory taken up by default *per socket*.
  net.inet.tcp.sendspace=65536
  net.inet.tcp.recvspace=131072

  # dirhash_maxmem defaults to 2097152 (2048KB).  dirhash_mem has reached
  # this limit a few times, so we should increase dirhash_maxmem to
  # something like 16MB (16384*1024).
  vfs.ufs.dirhash_maxmem=16777216

  #
  # ZFS tuning parameters
  # NOTE: Be sure to see /boot/loader.conf for additional tunings
  #

  # Increase number of vnodes; we've seen vfs.numvnodes reach 115,000
  # at times.  Default max is a little over 200,000.  Playing it safe...
  kern.maxvnodes=250000

  # Set TXG write limit to a lower threshold.  This helps "level out"
  # the throughput rate (see "zpool iostat").  A value of 256MB works well
  # for systems with 4GB of RAM, while 1GB works well for us w/ 8GB on
  # disks which have 64MB cache.
  vfs.zfs.txg.write_limit_override=1073741824
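
These live in /etc/sysctl.conf, but every one of them can also be
applied on the fly with sysctl(8) if you want to experiment before
committing to a value, e.g.:

  # sysctl vfs.zfs.txg.write_limit_override=1073741824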



Good luck.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.               PGP 4BD6C0CB |



