Date: Tue, 2 Feb 2016 17:41:31 -0500 (EST)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Don Lewis
Cc: spork@bway.net, freebsd-fs@freebsd.org, vivek@khera.org, freebsd-questions@freebsd.org
Message-ID: <1270648257.999240.1454452891099.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <201602021848.u12ImDES067799@gw.catspoiler.org>
Subject: Re: NFS unstable with high load on server

Don Lewis wrote:
> On 2 Feb, Charles Sprickman wrote:
> > On Feb 2, 2016, at 1:10 AM, Ben Woods wrote:
> >>
> >> On Monday, 1 February 2016, Vick Khera wrote:
> >>
> >>> I have a handful of servers at my data center all running FreeBSD 10.2.
> >>> On one of them I have a copy of the FreeBSD sources shared via NFS.
> >>> When this server is running a large poudriere run re-building all the
> >>> ports I need, the clients' NFS mounts become unstable. That is, the
> >>> clients keep getting read failures. The interactive performance of the
> >>> NFS server is just fine, however. The local file system is a ZFS mirror.
> >>>
> >>> What could be causing NFS to be unstable in this situation?
> >>>
> >>> Specifics:
> >>>
> >>> Server "lorax": FreeBSD 10.2-RELEASE-p7, kernel locally compiled, with
> >>> the NFS server and ZFS as dynamic kernel modules. 16GB RAM, Xeon 3.1GHz
> >>> quad processor.
> >>>
> >>> The directory /u/lorax1 is a ZFS dataset on a mirrored pool, and is NFS
> >>> exported via the ZFS exports file. I put the FreeBSD sources on this
> >>> dataset and symlink to /usr/src.
> >>>
> >>> Client "bluefish": FreeBSD 10.2-RELEASE-p5, kernel locally compiled,
> >>> NFS client built into the kernel. 32GB RAM, Xeon 3.1GHz quad processor
> >>> (basically the same hardware but more RAM).
> >>>
> >>> The directory /n/lorax1 is NFS mounted from lorax via autofs. The NFS
> >>> options are "intr,nolockd". /usr/src is symlinked to the sources in
> >>> that NFS mount.
> >>>
> >>> What I observe:
> >>>
> >>> [lorax]~% cd /usr/src
> >>> [lorax]src% svn status
> >>> [lorax]src% w
> >>>  9:12AM  up 12 days, 19:19, 4 users, load averages: 4.43, 4.45, 3.61
> >>> USER   TTY    FROM                  LOGIN@  IDLE WHAT
> >>> vivek  pts/0  vick.int.kcilink.com  8:44AM     - tmux: client (/tmp/
> >>> vivek  pts/1  tmux(19747).%0        8:44AM    19 sed y%*+%pp%;s%[^_a
> >>> vivek  pts/2  tmux(19747).%1        8:56AM     - w
> >>> vivek  pts/3  tmux(19747).%2        8:56AM     - slogin bluefish-prv
> >>> [lorax]src% pwd
> >>> /u/lorax1/usr10/src
> >>>
> >>> So right now the load average is more than 1 per processor on lorax.
> >>> I can quite easily run "svn status" on the source directory, and the
> >>> interactive performance is pretty snappy for editing local files and
> >>> navigating around the file system.
> >>>
> >>> On the client:
> >>>
> >>> [bluefish]~% cd /usr/src
> >>> [bluefish]src% pwd
> >>> /n/lorax1/usr10/src
> >>> [bluefish]src% svn status
> >>> svn: E070008: Can't read directory '/n/lorax1/usr10/src/contrib/sqlite3':
> >>> Partial results are valid but processing is incomplete
> >>> [bluefish]src% svn status
> >>> svn: E070008: Can't read directory '/n/lorax1/usr10/src/lib/libfetch':
> >>> Partial results are valid but processing is incomplete
> >>> [bluefish]src% svn status
> >>> svn: E070008: Can't read directory
> >>> '/n/lorax1/usr10/src/release/picobsd/tinyware/msg': Partial results are
> >>> valid but processing is incomplete
> >>> [bluefish]src% w
> >>>  9:14AM  up 93 days, 23:55, 1 user, load averages: 0.10, 0.15, 0.15
> >>> USER   TTY    FROM                   LOGIN@  IDLE WHAT
> >>> vivek  pts/0  lorax-prv.kcilink.com  8:56AM     - w
> >>> [bluefish]src% df .
> >>> Filesystem          1K-blocks    Used     Avail Capacity  Mounted on
> >>> lorax-prv:/u/lorax1 932845181 6090910 926754271     1%    /n/lorax1
> >>>
> >>> What I see is more or less random failures to read the NFS volume.
> >>> When the server is not so busy running poudriere builds, the client
> >>> never has any failures.
> >>>
> >>> I also observe this kind of failure doing buildworld or installworld
> >>> on the client when the server is busy -- I get strange random failures
> >>> reading the files, causing the build or install to fail.
> >>>
> >>> My workaround is to not do builds/installs on client machines when the
> >>> NFS server is busy doing large jobs like building all packages, but
> >>> there is definitely something wrong here I'd like to fix. I observe
> >>> this on all the local NFS clients. I rebooted the server before to try
> >>> to clear this up, but it did not fix it.
> >>>
> >>> Any help would be appreciated.
> >>
> >> I just wanted to point out that I am experiencing this exact same issue
> >> in my home setup.
> >>
> >> Performing an installworld from an NFS mount works perfectly, until I
> >> start running poudriere on the NFS server. Then I start getting NFS
> >> timeouts and the installworld fails.
> >>
> >> The NFS server is also using ZFS, but the NFS export in my case is being
> >> done via the ZFS property "sharenfs" (I am not using the /etc/exports
> >> file).
> >
> > Me three. I'm actually updating a small group of servers now and started
> > blowing up my installworlds by trying to do some poudriere builds at the
> > same time. Very repeatable. Of note, I'm on 9.3, and saw this on 8.4 as
> > well. If I track down the client-side failures, it's always "permission
> > denied".
>
> That sort of sounds like the problem that was fixed in HEAD with r241561
> and r241568. It was merged to 9-STABLE before 9.3-RELEASE. Try adding
> the -S option to mountd_flags. I have no idea why that isn't the
> default.
>
It isn't the default because...
- The first time I proposed it, the consensus was that it wasn't the
  correct fix and it shouldn't go into FreeBSD.
- About two years later, folks agreed that it was OK as an interim solution,
  so I made it a non-default option.
  --> This avoids it being considered a POLA violation.
Maybe in a couple more years it can become the default?

> When poudriere is running, it frequently mounts and unmounts
> filesystems. When this happens, mount(8) and umount(8) notify mountd to
> update the exports list. This is not done atomically, so NFS
> transactions can fail while mountd updates the export list. The fix
> mentioned above pauses the nfsd threads while the export list update is
> in progress to prevent the problem.
>
> I don't know how this works with ZFS sharenfs, though.
>
I think it should be fine either way. (ZFS sharenfs is an alternate way to
set up ZFS exports, but I believe the result is just adding the entries to
/etc/exports.) If it doesn't work for some reason, just put lines in
/etc/exports for the ZFS volumes instead of using ZFS sharenfs. (A minimal
sketch of both setups is included below.)

I recently had a report that "-S" would get stuck for a long time before
performing an update of the exports when the server is under heavy load.
I don't think this affects many people, but the attached two-line patch
(not yet in head) fixes the problem for the guy that reported it.
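For reference, here is a minimal sketch of the pieces involved. The dataset
name (tank/lorax1), export path, and client network below are made-up
placeholders for illustration; adjust them for your own setup.

  # /etc/rc.conf on the NFS server: run mountd with -S so the nfsd threads
  # are suspended while the export list is being reloaded
  nfs_server_enable="YES"
  mountd_enable="YES"
  mountd_flags="-r -S"

  # Option 1: export the dataset via /etc/exports
  /u/lorax1 -network 192.168.10.0/24

  # Option 2: let ZFS manage the export instead (don't also list it in
  # /etc/exports)
  zfs set sharenfs="-network 192.168.10.0/24" tank/lorax1

After changing either one, reload mountd (service mountd reload, or send it
a SIGHUP) so the new exports take effect.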
rick

[Attachment: nfssuspend.patch, decoded from base64]

--- fs/nfsserver/nfs_nfsdkrpc.c.sav2	2016-01-15 18:42:15.479783000 -0500
+++ fs/nfsserver/nfs_nfsdkrpc.c	2016-01-15 18:45:59.418245000 -0500
@@ -231,10 +231,16 @@ nfssvc_program(struct svc_req *rqst, SVC
 		 * Get a refcnt (shared lock) on nfsd_suspend_lock.
 		 * NFSSVC_SUSPENDNFSD will take an exclusive lock on
 		 * nfsd_suspend_lock to suspend these threads.
+		 * The call to nfsv4_lock() that preceeds nfsv4_getref()
+		 * ensures that the acquisition of the exclusive lock
+		 * takes priority over acquisition of the shared lock by
+		 * waiting for any exclusive lock request to complete.
 		 * This must be done here, before the check of
 		 * nfsv4root exports by nfsvno_v4rootexport().
 		 */
 		NFSLOCKV4ROOTMUTEX();
+		nfsv4_lock(&nfsd_suspend_lock, 0, NULL, NFSV4ROOTLOCKMUTEXPTR,
+		    NULL);
 		nfsv4_getref(&nfsd_suspend_lock, NULL, NFSV4ROOTLOCKMUTEXPTR,
 		    NULL);
 		NFSUNLOCKV4ROOTMUTEX();
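A note on applying the attachment: assuming the system sources live in
/usr/src, something along these lines should work (the save path for the
patch is just an example):

  # apply the patch from the top of the kernel source tree
  cd /usr/src/sys
  patch -p0 < /path/to/nfssuspend.patch

  # rebuild and reinstall the kernel (or just the nfsd module, if the NFS
  # server is loaded as a module), then reboot
  cd /usr/src
  make buildkernel && make installkernel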