Date:      Thu, 20 Dec 2018 23:02:58 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Peter Eriksson <peter@ifm.liu.se>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: Suggestion for hardware for ZFS fileserver
Message-ID:  <YQBPR01MB038881C93E6214DE11FE5E9CDDBF0@YQBPR01MB0388.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <2CB9CF77-DBC4-4452-8FC1-0A302884E71B@ifm.liu.se>
References:  <CAEW%2BogZnWC07OCSuzO7E4TeYGr1E9BARKSKEh9ELCL9Zc4YY3w@mail.gmail.com>, <2CB9CF77-DBC4-4452-8FC1-0A302884E71B@ifm.liu.se>

Peter Eriksson wrote:
>I can give you the specs for the servers we use here for our FreeBSD-based
>fileservers - which have been working really well for us serving Home directories
[good stuff snipped]

>NFS (NFSv4 only, Kerberos/GSS authentication)
>  More or less the only thing we've tuned for NFS so far is:
>     nfsuserd_flags="-manage-gids -domain OURDOMAIN -usertimeout 10 -usermax 100000 16"
>  As more clients start using NFS I assume we will have to adjust other stuff too..
>Suggestions are welcome :-)
I am not the best person to suggest values for these tunables because I never
run an NFS server under heavy load, but at least I can mention possible values.
(I'll assume a 64bit arch with more than a few Gbytes of RAM that can be dedicated
to serving NFS.)
For NFSv3 and NFSv4.0 clients:
- The DRC (which improves correctness and not performance) is enabled for TCP.
  (Some NFS server vendors only use the DRC for UDP.) This can result in significant
  CPU overheads and RPC RTT delays. You have two alternatives:
  1 - Set vfs.nfsd.cachetcp = 0 to disable use of the DRC for TCP.
  2 - Increase vfs.nfsd.tcphighwater to something like 100000.
       You can also decrease vfs.nfsd.tcpcachetimeo, but that reduces the
       effectiveness of the DRC for TCP, since the timeout needs to be larger
       than the longest time a client is likely to take to do a TCP reconnect and
       retry RPCs after a server crash or network partitioning.
  For NFSv4.1, you don't need to do the above, because it uses something called
  sessions instead of the DRC. For NFSv4.1 clients you will, however, want to
  increase vfs.nfsd.sessionhashsize to something like 1000.
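Put together as boot-time tunables, alternative 2 plus the NFSv4.1 session sizing
might look something like this (a sketch only - the values are the rough
suggestions above and should be adjusted to your actual load):

```
# /boot/loader.conf -- sketch based on the suggestions above
# Alternative 2: keep the DRC for TCP, but raise its high-water mark
vfs.nfsd.tcphighwater=100000
# For NFSv4.1 clients, enlarge the session hash table
vfs.nfsd.sessionhashsize=1000
```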

For NFSv4.0 and NFSv4.1 clients, you will want to increase the state-related stuff
to something like:
vfs.nfsd.fhhashsize=10000
vfs.nfsd.statehashsize=100
vfs.nfsd.clienthashsize=1000 (or 1/10th of the number of client mounts, up to
   something like 10000)
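The clienthashsize rule of thumb (one tenth of the number of client mounts,
floored at the suggested 1000 and capped around 10000) can be sketched as a bit
of shell; the mount count here is a made-up example value, not from this thread:

```shell
#!/bin/sh
# Sketch of the sizing rule above:
#   clienthashsize = mounts / 10, floored at 1000, capped at about 10000
# 'mounts' is a hypothetical example value; use your own client count.
mounts=25000
size=$((mounts / 10))
[ "$size" -lt 1000 ] && size=1000
[ "$size" -gt 10000 ] && size=10000
echo "vfs.nfsd.clienthashsize=$size"
```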

As you can see, it depends upon which NFS version your clients are using.
("nfsstat -m" should tell you that on both FreeBSD and Linux clients.)

If your exported file systems are UFS, you might consider increasing your buffer
cache size, but not for ZFS exports.

Most/all of these need to be set in your /boot/loader.conf, since they need
to be statically configured. vfs.nfsd.cachetcp can be cleared at any time, I think?
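As a concrete illustration of that split (hedged - exact behaviour may vary by
FreeBSD version), the DRC switch can be flipped with sysctl on a running system,
while the hash-size tunables belong in loader.conf:

```
# Runtime: disable the TCP DRC on a live server (alternative 1 above)
sysctl vfs.nfsd.cachetcp=0

# Boot time only: the hash-size tunables go in /boot/loader.conf
vfs.nfsd.fhhashsize=10000
vfs.nfsd.statehashsize=100
vfs.nfsd.clienthashsize=1000
```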

For your case of mostly non-NFS usage, it is hard to say if/when you want to do
the above, but these changes probably won't hurt when you have 256Gbytes
of RAM.

Good luck with it, rick
[more good stuff snipped]


