From owner-freebsd-fs@FreeBSD.ORG Wed Apr 1 20:49:59 2015
Date: Wed, 01 Apr 2015 15:43:14 -0500
Message-ID: <20150401154314.Horde.e_w-9XEJOaa4SwYyNLlttA3@www.vfemail.net>
From: Rick Romero
To: freebsd-fs@freebsd.org
Subject: Re: NFSD high CPU usage
In-Reply-To: <551C4F1D.1000206@tezzaron.com>
User-Agent: Internet Messaging Program (IMP) H5 (6.2.2)

Quoting Adam Guimont:

> I have an issue where NFSD will max out the CPU (1200% in this case)
> when a client workstation runs out of memory while trying to write via
> NFS. What also happens is that the TCP Recv-Q fills up and causes
> connection timeouts for any other client trying to use the NFS server.
>
> I can reproduce the issue by running stress on a low-end client
> workstation: change into the NFS-mounted directory, then use stress to
> write via NFS and exhaust the memory, for example:
>
> stress --cpu 2 --io 4 --vm 20 --hdd 4
>
> The client workstation will eventually run out of memory trying to
> write into the NFS directory, fill the TCP Recv-Q on the NFS server,
> and then NFSD will max out the CPU.
>
> The actual client workstations (~50) are not running stress when this
> happens; they run a mixture of EDA tools (simulation and verification).
>
> For what it's worth, this is how I've been monitoring the TCP buffer
> queues, where "xx.xxx.xx.xxx" is the IP address of the NFS server:
>
> cmdwatch -n1 'netstat -an | grep -e "Proto" -e "tcp4" | grep -e "Proto" -e "xx.xxx.xx.xxx.2049"'
>
> I have tried several tuning recommendations, but none of them has
> solved the problem.
>
> Has anyone else experienced this, and is anyone else able to reproduce
> it?
>
> ---
> NFS server specs:
>
> OS = FreeBSD 10.0-RELEASE
> CPU = E5-1650 v3
> Memory = 96GB
> Disks = 24x ST6000NM0034 in 4x raidz2
> HBA = LSI SAS 9300-8i
> NIC = Intel 10Gb X540-T2
> ---
> /boot/loader.conf
>
> autoboot_delay="3"
> geom_mirror_load="YES"
> mpslsi3_load="YES"
> cc_htcp_load="YES"
> ---
> /etc/rc.conf
>
> hostname="***"
> ifconfig_ix0="inet *** netmask 255.255.248.0 -tso -vlanhwtso"
> defaultrouter="***"
> sshd_enable="YES"
> ntpd_enable="YES"
> zfs_enable="YES"
> sendmail_enable="NO"
> nfs_server_enable="YES"
> nfs_server_flags="-h *** -t -n 128"
> nfs_client_enable="YES"
> rpcbind_enable="YES"
> rpc_lockd_enable="YES"
> rpc_statd_enable="YES"
> samba_enable="YES"
> atop_enable="YES"
> atop_interval="5"
> zabbix_agentd_enable="YES"
> ---
> /etc/sysctl.conf
>
> vfs.nfsd.server_min_nfsvers=3
> vfs.nfsd.cachetcp=0
> kern.ipc.maxsockbuf=16777216
> net.inet.tcp.sendbuf_max=16777216
> net.inet.tcp.recvbuf_max=16777216
> net.inet.tcp.sendspace=1048576
> net.inet.tcp.recvspace=1048576
> net.inet.tcp.sendbuf_inc=32768
> net.inet.tcp.recvbuf_inc=65536
> net.inet.tcp.keepidle=10000
> net.inet.tcp.keepintvl=2500
> net.inet.tcp.always_keepalive=1
> net.inet.tcp.cc.algorithm=htcp
> net.inet.tcp.cc.htcp.adaptive_backoff=1
> net.inet.tcp.cc.htcp.rtt_scaling=1
> net.inet.tcp.sack.enable=0
> kern.ipc.soacceptqueue=1024
> net.inet.tcp.mssdflt=1460
> net.inet.tcp.minmss=1300
> net.inet.tcp.tso=0

Does your ZFS pool have log devices? How does gstat -d look?

If the drives are busy, try adding:

vfs.nfsd.async: 0

Rick
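
A minimal sketch of those checks on the server side, assuming a ZFS-backed
pool (the pool name "tank" is a placeholder, not something given in the
thread, and whether vfs.nfsd.async is the right knob here is only a guess):

  # Does the pool have a separate log (SLOG) device? Look for a "logs"
  # section in the output. "tank" is a placeholder pool name.
  zpool status tank

  # Per-disk I/O load, refreshed every second; -d also shows delete
  # (BIO_DELETE) statistics. Sustained %busy near 100 on the raidz2
  # members would point at the disks rather than at nfsd itself.
  gstat -d -I 1s

  # Current value of the NFS server's async-write knob (0 = honor the
  # client's sync/stable write semantics). Inspect it before changing
  # anything; forcing async trades write safety for latency.
  sysctl vfs.nfsd.async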