From owner-freebsd-fs@FreeBSD.ORG Sun Dec 21 00:33:37 2014
Date: Sat, 20 Dec 2014 19:33:27 -0500 (EST)
From: Rick Macklem
To: Loïc BLOT
Cc: freebsd-fs@freebsd.org
Message-ID: <2087358136.248078.1419122007097.JavaMail.root@uoguelph.ca>
In-Reply-To: <1419070626.4549.5.camel@unix-experience.fr>
Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4

Loic Blot wrote:
> Hi Rick,
> OK, I don't need locallocks; I hadn't understood what the option was
> for, so I removed it.
> I will do more tests on Monday.
> Thanks for the deadlock fix, for other people :)
>
Good. Please let us know whether running with vfs.nfsd.enable_locallocks=0
gets rid of the deadlocks. (I think it fixes the one you saw.)

On the performance side, you might also want to try different values of
readahead, if the Linux client has such a mount option. (With the
NFSv4-ZFS sequential vs random I/O heuristic, I have no idea what the
optimal readahead value would be.)

Good luck with it and please let us know how it goes, rick
ps: I now have a patch to fix the deadlock when vfs.nfsd.enable_locallocks=1
is set. I'll post it for anyone who is interested after I put it through
some testing.

> --
> Best regards,
> Loïc BLOT,
> UNIX systems, security and network engineer
> http://www.unix-experience.fr
>
>
> On Thursday, 18 December 2014 at 19:46 -0500, Rick Macklem wrote:
> > Loic Blot wrote:
> > > Hi Rick,
> > > I tried to start an LXC container on Debian Squeeze from my FreeBSD
> > > ZFS+NFSv4 server and I also get a deadlock on nfsd
> > > (vfs.lookup_shared=0). nfsd deadlocks each time I launch a Squeeze
> > > container, it seems (3 tries, 3 failures).
> > >
> > Well, I'll take a look at this `procstat -kk`, but the only thing
> > I've seen posted w.r.t. avoiding deadlocks in ZFS is to not use
> > nullfs.
> > (I have no idea if you are using any nullfs mounts, but if so, try
> > getting rid of them.)
> >
> > Here's a high level post about the ZFS and vnode locking problem,
> > but there is no patch available, as far as I know.
> >
> > http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
> >
> > rick
> >
> > > 921 - D 0:00.02 nfsd: server (nfsd)
> > >
> > > Here is the procstat -kk
> > >
> > > PID TID COMM TDNAME KSTACK
> > > 921 100538 nfsd nfsd: master mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> > > 921 100572 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100573 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100574 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100575 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100576 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100577 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100578 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100579 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100580 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100581 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100582 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100583 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100584 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab
sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100585 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100586 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100587 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100588 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100589 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100590 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100591 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100592 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100593 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100594 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100595 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100596 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100597 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100598 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100599 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100600 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > 
svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100601 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100602 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100603 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100604 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100605 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100606 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100607 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100608 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100609 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100610 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100611 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100612 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100613 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100614 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100615 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100616 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f 
nfsrvd_lock+0x5b1 > > > nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > > > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > 921 100617 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100618 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > 921 100619 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100620 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100621 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100622 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100623 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100624 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100625 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100626 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100627 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100628 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100629 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100630 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100631 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100632 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab 
sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100633 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100634 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100635 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100636 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100637 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100638 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100639 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100640 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100641 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100642 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100643 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100644 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100645 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100646 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100647 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100648 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > 
svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100649 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100650 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100651 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100652 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100653 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100654 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100655 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100656 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100657 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100658 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100659 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100660 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100661 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100662 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100663 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > fork_trampoline+0xe > > > 921 100664 nfsd nfsd: service mi_switch+0xe1 > > > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > _cv_wait_sig+0x16a > > > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a fork_trampoline+0xe
> > > 921 100665 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > 921 100666 nfsd nfsd: service mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > >
> > >
> > > Regards,
> > >
> > > Loïc Blot,
> > > UNIX Systems, Network and Security Engineer
> > > http://www.unix-experience.fr
> > >
> > > 15 December 2014 15:18, "Rick Macklem" wrote:
> > > > Loic Blot wrote:
> > > >
> > > >> For more information, here is procstat -kk on nfsd; if you need
> > > >> more live data, tell me.
> > > >>
> > > >> Regards,
> > > >> PID TID COMM TDNAME KSTACK
> > > >> 918 100529 nfsd nfsd: master mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c amd64_syscall+0x351
> > > >
> > > > Well, most of the threads are stuck like this one, waiting for a
> > > > vnode lock in ZFS. All of them appear to be in zfs_fhtovp().
> > > > I'm not a ZFS guy, so I can't help much. I'll try changing the
> > > > subject line to include ZFS vnode lock, so maybe the ZFS guys
> > > > will take a look.
> > > >
> > > > The only thing I've seen suggested is trying:
> > > > sysctl vfs.lookup_shared=0
> > > > to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't
> > > > obey the vnode locking rules for lookup and rename, according to
> > > > the posting I saw.
> > > >
> > > > I've added a couple of comments about the other threads below,
> > > > but they are all either waiting for an RPC request or waiting
> > > > for the threads stuck on the ZFS vnode lock to complete.
> > > >
> > > > rick
> > > >
> > > >> 918 100564 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > > >
> > > > FYI, this thread is just waiting for an RPC to arrive.
(Normal) > > > >=20 > > > >> 918 100565 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > >> _cv_wait_sig+0x16a > > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > >> fork_trampoline+0xe > > > >> 918 100566 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > >> _cv_wait_sig+0x16a > > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > >> fork_trampoline+0xe > > > >> 918 100567 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > >> _cv_wait_sig+0x16a > > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > >> fork_trampoline+0xe > > > >> 918 100568 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > >> _cv_wait_sig+0x16a > > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > >> fork_trampoline+0xe > > > >> 918 100569 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > >> _cv_wait_sig+0x16a > > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > >> fork_trampoline+0xe > > > >> 918 100570 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf > > > >> _cv_wait_sig+0x16a > > > >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > > > >> fork_trampoline+0xe > > > >> 918 100571 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100572 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 > > > >> nfsrvd_dorpc+0xc76 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >=20 > > > > This one (and a few others) are waiting for the nfsv4_lock. > > > > This > > > > happens > > > > because other threads are stuck with RPCs in progress. (ie. The > > > > ones > > > > waiting on the vnode lock in zfs_fhtovp().) > > > > For these, the RPC needs to lock out other threads to do the > > > > operation, > > > > so it waits for the nfsv4_lock() which can exclusively lock the > > > > NFSv4 > > > > data structures once all other nfsd threads complete their RPCs > > > > in > > > > progress. > > > >=20 > > > >> 918 100573 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >=20 > > > > Same as above. 
> > > >=20 > > > >> 918 100574 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100575 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100576 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100577 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100578 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100579 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100580 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100581 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100582 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100583 nfsd 
nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100584 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100585 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100586 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100587 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100588 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100589 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100590 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100591 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100592 nfsd nfsd: service mi_switch+0xe1 > > > 
>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100593 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100594 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100595 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100596 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100597 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100598 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100599 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100600 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100601 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d 
__lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100602 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100603 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100604 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100605 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100606 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100607 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >=20 > > > > Lots more waiting for the ZFS vnode lock in zfs_fhtovp(). 
> > > >=20 > > > >> 918 100608 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > > > >> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100609 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100610 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > > > >> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > > > >> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a > > > >> fork_trampoline+0xe > > > >> 918 100611 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100612 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100613 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100614 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100615 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100616 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100617 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100618 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100619 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > 
> >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100620 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100621 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100622 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100623 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > > >> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100624 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100625 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100626 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100627 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100628 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100629 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> 
nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100630 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100631 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100632 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100633 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100634 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100635 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100636 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100637 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100638 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 
nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100639 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100640 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100641 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100642 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100643 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100644 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100645 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100646 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100647 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> 
nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100648 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100649 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100650 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100651 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100652 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100653 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100654 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100655 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100656 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 
svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100657 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >> 918 100658 nfsd nfsd: service mi_switch+0xe1 > > > >> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > > >> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > > >> zfs_fhtovp+0x38d > > > >> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > > >> nfssvc_program+0x554 svc_run_internal+0xc77 > > > >> svc_thread_start+0xb > > > >> fork_exit+0x9a fork_trampoline+0xe > > > >>=20 > > > >> Lo=C3=AFc Blot, > > > >> UNIX Systems, Network and Security Engineer > > > >> http://www.unix-experience.fr > > > >>=20 > > > >> 15 d=C3=A9cembre 2014 13:29 "Lo=C3=AFc Blot" > > > >> > > > >> a > > > >> =C3=A9crit: > > > >>> Hmmm... > > > >>> now i'm experiencing a deadlock. > > > >>>=20 > > > >>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server > > > >>> (nfsd) > > > >>>=20 > > > >>> the only issue was to reboot the server, but after rebooting > > > >>> deadlock arrives a second time when i > > > >>> start my jails over NFS. > > > >>>=20 > > > >>> Regards, > > > >>>=20 > > > >>> Lo=C3=AFc Blot, > > > >>> UNIX Systems, Network and Security Engineer > > > >>> http://www.unix-experience.fr > > > >>>=20 > > > >>> 15 d=C3=A9cembre 2014 10:07 "Lo=C3=AFc Blot" > > > >>> > > > >>> a > > > >>> =C3=A9crit: > > > >>>=20 > > > >>> Hi Rick, > > > >>> after talking with my N+1, NFSv4 is required on our > > > >>> infrastructure. > > > >>> I tried to upgrade NFSv4+ZFS > > > >>> server from 9.3 to 10.1, i hope this will resolve some > > > >>> issues... > > > >>>=20 > > > >>> Regards, > > > >>>=20 > > > >>> Lo=C3=AFc Blot, > > > >>> UNIX Systems, Network and Security Engineer > > > >>> http://www.unix-experience.fr > > > >>>=20 > > > >>> 10 d=C3=A9cembre 2014 15:36 "Lo=C3=AFc Blot" > > > >>> > > > >>> a > > > >>> =C3=A9crit: > > > >>>=20 > > > >>> Hi Rick, > > > >>> thanks for your suggestion. > > > >>> For my locking bug, rpc.lockd is stucked in rpcrecv state on > > > >>> the > > > >>> server. kill -9 doesn't affect the > > > >>> process, it's blocked.... (State: Ds) > > > >>>=20 > > > >>> for the performances > > > >>>=20 > > > >>> NFSv3: 60Mbps > > > >>> NFSv4: 45Mbps > > > >>> Regards, > > > >>>=20 > > > >>> Lo=C3=AFc Blot, > > > >>> UNIX Systems, Network and Security Engineer > > > >>> http://www.unix-experience.fr > > > >>>=20 > > > >>> 10 d=C3=A9cembre 2014 13:56 "Rick Macklem" > > > >>> a > > > >>> =C3=A9crit: > > > >>>=20 > > > >>>> Loic Blot wrote: > > > >>>>=20 > > > >>>>> Hi Rick, > > > >>>>> I'm trying NFSv3. > > > >>>>> Some jails are starting very well but now i have an issue > > > >>>>> with > > > >>>>> lockd > > > >>>>> after some minutes: > > > >>>>>=20 > > > >>>>> nfs server 10.10.X.8:/jails: lockd not responding > > > >>>>> nfs server 10.10.X.8:/jails lockd is alive again > > > >>>>>=20 > > > >>>>> I look at mbuf, but i seems there is no problem. > > > >>>>=20 > > > >>>> Well, if you need locks to be visible across multiple > > > >>>> clients, > > > >>>> then > > > >>>> I'm afraid you are stuck with using NFSv4 and the > > > >>>> performance > > > >>>> you > > > >>>> get > > > >>>> from it. 
(There is no way to do file handle affinity for > > > >>>> NFSv4 > > > >>>> because > > > >>>> the read and write ops are buried in the compound RPC and > > > >>>> not > > > >>>> easily > > > >>>> recognized.) > > > >>>>=20 > > > >>>> If the locks don't need to be visible across multiple > > > >>>> clients, > > > >>>> I'd > > > >>>> suggest trying the "nolockd" option with nfsv3. > > > >>>>=20 > > > >>>>> Here is my rc.conf on server: > > > >>>>>=20 > > > >>>>> nfs_server_enable=3D"YES" > > > >>>>> nfsv4_server_enable=3D"YES" > > > >>>>> nfsuserd_enable=3D"YES" > > > >>>>> nfsd_server_flags=3D"-u -t -n 256" > > > >>>>> mountd_enable=3D"YES" > > > >>>>> mountd_flags=3D"-r" > > > >>>>> nfsuserd_flags=3D"-usertimeout 0 -force 20" > > > >>>>> rpcbind_enable=3D"YES" > > > >>>>> rpc_lockd_enable=3D"YES" > > > >>>>> rpc_statd_enable=3D"YES" > > > >>>>>=20 > > > >>>>> Here is the client: > > > >>>>>=20 > > > >>>>> nfsuserd_enable=3D"YES" > > > >>>>> nfsuserd_flags=3D"-usertimeout 0 -force 20" > > > >>>>> nfscbd_enable=3D"YES" > > > >>>>> rpc_lockd_enable=3D"YES" > > > >>>>> rpc_statd_enable=3D"YES" > > > >>>>>=20 > > > >>>>> Have you got an idea ? > > > >>>>>=20 > > > >>>>> Regards, > > > >>>>>=20 > > > >>>>> Lo=C3=AFc Blot, > > > >>>>> UNIX Systems, Network and Security Engineer > > > >>>>> http://www.unix-experience.fr > > > >>>>>=20 > > > >>>>> 9 d=C3=A9cembre 2014 04:31 "Rick Macklem" > > > >>>>> a > > > >>>>> =C3=A9crit: > > > >>>>>> Loic Blot wrote: > > > >>>>>>=20 > > > >>>>>>> Hi rick, > > > >>>>>>>=20 > > > >>>>>>> I waited 3 hours (no lag at jail launch) and now I do: > > > >>>>>>> sysrc > > > >>>>>>> memcached_flags=3D"-v -m 512" > > > >>>>>>> Command was very very slow... > > > >>>>>>>=20 > > > >>>>>>> Here is a dd over NFS: > > > >>>>>>>=20 > > > >>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 > > > >>>>>>> bytes/sec) > > > >>>>>>=20 > > > >>>>>> Can you try the same read using an NFSv3 mount? > > > >>>>>> (If it runs much faster, you have probably been bitten by > > > >>>>>> the > > > >>>>>> ZFS > > > >>>>>> "sequential vs random" read heuristic which I've been told > > > >>>>>> things > > > >>>>>> NFS is doing "random" reads without file handle affinity. > > > >>>>>> File > > > >>>>>> handle affinity is very hard to do for NFSv4, so it isn't > > > >>>>>> done.) > > > >>>>=20 > > > >>>> I was actually suggesting that you try the "dd" over nfsv3 > > > >>>> to > > > >>>> see > > > >>>> how > > > >>>> the performance compared with nfsv4. If you do that, please > > > >>>> post > > > >>>> the > > > >>>> comparable results. > > > >>>>=20 > > > >>>> Someday I would like to try and get ZFS's sequential vs > > > >>>> random > > > >>>> read > > > >>>> heuristic modified and any info on what difference in > > > >>>> performance > > > >>>> that > > > >>>> might make for NFS would be useful. > > > >>>>=20 > > > >>>> rick > > > >>>>=20 > > > >>>>>> rick > > > >>>>>>=20 > > > >>>>>>> This is quite slow... 
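As a rough, illustrative sketch of the NFSv3 vs NFSv4 comparison being suggested above (the server address follows the one used elsewhere in this thread; the mount points and the test file name are placeholders only), the test could look like:

# mount -t nfs -o nfsv3,nolockd,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v3
# mount -t nfs -o nfsv4,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v4
# dd if=/mnt/v3/test.dd of=/dev/null bs=64k
# dd if=/mnt/v4/test.dd of=/dev/null bs=64k

If the NFSv3 read of the same file runs much faster, that points at the sequential vs random read heuristic discussed above rather than at the network.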
> > > >>>>>>>=20 > > > >>>>>>> You can found some nfsstat below (command isn't finished > > > >>>>>>> yet) > > > >>>>>>>=20 > > > >>>>>>> nfsstat -c -w 1 > > > >>>>>>>=20 > > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 4 0 0 0 0 0 16 0 > > > >>>>>>> 2 0 0 0 0 0 17 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 4 0 0 0 0 4 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 4 0 0 0 0 0 3 0 > > > >>>>>>> 0 0 0 0 0 0 3 0 > > > >>>>>>> 37 10 0 8 0 0 14 1 > > > >>>>>>> 18 16 0 4 1 2 4 0 > > > >>>>>>> 78 91 0 82 6 12 30 0 > > > >>>>>>> 19 18 0 2 2 4 2 0 > > > >>>>>>> 0 0 0 0 2 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 1 0 0 0 0 1 0 > > > >>>>>>> 4 6 0 0 6 0 3 0 > > > >>>>>>> 2 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 1 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 1 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 6 108 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 98 54 0 86 11 0 25 0 > > > >>>>>>> 36 24 0 39 25 0 10 1 > > > >>>>>>> 67 8 0 63 63 0 41 0 > > > >>>>>>> 34 0 0 35 34 0 0 0 > > > >>>>>>> 75 0 0 75 77 0 0 0 > > > >>>>>>> 34 0 0 35 35 0 0 0 > > > >>>>>>> 75 0 0 74 76 0 0 0 > > > >>>>>>> 33 0 0 34 33 0 0 0 > > > >>>>>>> 0 0 0 0 5 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 6 0 > > > >>>>>>> 11 0 0 0 0 0 11 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 17 0 0 0 0 1 0 > > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > > >>>>>>> 4 5 0 0 0 0 12 0 > > > >>>>>>> 2 0 0 0 0 0 26 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 4 0 0 0 0 4 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 4 0 0 0 0 0 2 0 > > > >>>>>>> 2 0 0 0 0 0 24 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 4 0 0 0 0 0 7 0 > > > >>>>>>> 2 1 0 0 0 0 1 0 > > > >>>>>>> 0 0 0 0 2 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 6 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 4 6 0 0 0 0 3 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 2 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > 
> > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 4 71 0 0 0 0 0 0 > > > >>>>>>> 0 1 0 0 0 0 0 0 > > > >>>>>>> 2 36 0 0 0 0 1 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 1 0 0 0 0 0 1 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 79 6 0 79 79 0 2 0 > > > >>>>>>> 25 0 0 25 26 0 6 0 > > > >>>>>>> 43 18 0 39 46 0 23 0 > > > >>>>>>> 36 0 0 36 36 0 31 0 > > > >>>>>>> 68 1 0 66 68 0 0 0 > > > >>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir > > > >>>>>>> 36 0 0 36 36 0 0 0 > > > >>>>>>> 48 0 0 48 49 0 0 0 > > > >>>>>>> 20 0 0 20 20 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 3 14 0 1 0 0 11 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 0 4 0 0 0 0 4 0 > > > >>>>>>> 0 0 0 0 0 0 0 0 > > > >>>>>>> 4 22 0 0 0 0 16 0 > > > >>>>>>> 2 0 0 0 0 0 23 0 > > > >>>>>>>=20 > > > >>>>>>> Regards, > > > >>>>>>>=20 > > > >>>>>>> Lo=C3=AFc Blot, > > > >>>>>>> UNIX Systems, Network and Security Engineer > > > >>>>>>> http://www.unix-experience.fr > > > >>>>>>>=20 > > > >>>>>>> 8 d=C3=A9cembre 2014 09:36 "Lo=C3=AFc Blot" > > > >>>>>>> a > > > >>>>>>> =C3=A9crit: > > > >>>>>>>> Hi Rick, > > > >>>>>>>> I stopped the jails this week-end and started it this > > > >>>>>>>> morning, > > > >>>>>>>> i'll > > > >>>>>>>> give you some stats this week. > > > >>>>>>>>=20 > > > >>>>>>>> Here is my nfsstat -m output (with your rsize/wsize > > > >>>>>>>> tweaks) > > > >>>=20 > > > >>>=20 > > > >>=20 > > > > nfsv4,tcp,resvport,hard,cto,sec=3Dsys,acdirmin=3D3,acdirmax=3D60,ac= regmin=3D5,acregmax=3D60,nametimeo=3D60,negna > > > >>>=20 > > > >>>>>>>>=20 > > > >>>=20 > > > >>>=20 > > > >>=20 > > > > etimeo=3D60,rsize=3D32768,wsize=3D32768,readdirsize=3D32768,readahe= ad=3D1,wcommitsize=3D773136,timeout=3D120,retra > > > >>>=20 > > > >>> s=3D2147483647 > > > >>>=20 > > > >>> On server side my disks are on a raid controller which show a > > > >>> 512b > > > >>> volume and write performances > > > >>> are very honest (dd if=3D/dev/zero of=3D/jails/test.dd bs=3D4096 > > > >>> count=3D100000000 =3D> 450MBps) > > > >>>=20 > > > >>> Regards, > > > >>>=20 > > > >>> Lo=C3=AFc Blot, > > > >>> UNIX Systems, Network and Security Engineer > > > >>> http://www.unix-experience.fr > > > >>>=20 > > > >>> 5 d=C3=A9cembre 2014 15:14 "Rick Macklem" = a > > > >>> =C3=A9crit: > > > >>>=20 > > > >>>> Loic Blot wrote: > > > >>>>=20 > > > >>>>> Hi, > > > >>>>> i'm trying to create a virtualisation environment based on > > > >>>>> jails. > > > >>>>> Those jails are stored under a big ZFS pool on a FreeBSD > > > >>>>> 9.3 > > > >>>>> which > > > >>>>> export a NFSv4 volume. This NFSv4 volume was mounted on a > > > >>>>> big > > > >>>>> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but > > > >>>>> only 1 > > > >>>>> was > > > >>>>> used at this time). > > > >>>>>=20 > > > >>>>> The problem is simple, my hypervisors runs 6 jails (used 1% > > > >>>>> cpu > > > >>>>> and > > > >>>>> 10GB RAM approximatively and less than 1MB bandwidth) and > > > >>>>> works > > > >>>>> fine at start but the system slows down and after 2-3 days > > > >>>>> become > > > >>>>> unusable. When i look at top command i see 80-100% on > > > >>>>> system > > > >>>>> and > > > >>>>> commands are very very slow. 
Many process are tagged with > > > >>>>> nfs_cl*. > > > >>>>=20 > > > >>>> To be honest, I would expect the slowness to be because of > > > >>>> slow > > > >>>> response > > > >>>> from the NFSv4 server, but if you do: > > > >>>> # ps axHl > > > >>>> on a client when it is slow and post that, it would give us > > > >>>> some > > > >>>> more > > > >>>> information on where the client side processes are sitting. > > > >>>> If you also do something like: > > > >>>> # nfsstat -c -w 1 > > > >>>> and let it run for a while, that should show you how many > > > >>>> RPCs > > > >>>> are > > > >>>> being done and which ones. > > > >>>>=20 > > > >>>> # nfsstat -m > > > >>>> will show you what your mount is actually using. > > > >>>> The only mount option I can suggest trying is > > > >>>> "rsize=3D32768,wsize=3D32768", > > > >>>> since some network environments have difficulties with 64K. > > > >>>>=20 > > > >>>> There are a few things you can try on the NFSv4 server side, > > > >>>> if > > > >>>> it > > > >>>> appears > > > >>>> that the clients are generating a large RPC load. > > > >>>> - disabling the DRC cache for TCP by setting > > > >>>> vfs.nfsd.cachetcp=3D0 > > > >>>> - If the server is seeing a large write RPC load, then > > > >>>> "sync=3Ddisabled" > > > >>>> might help, although it does run a risk of data loss when > > > >>>> the > > > >>>> server > > > >>>> crashes. > > > >>>> Then there are a couple of other ZFS related things (I'm not > > > >>>> a > > > >>>> ZFS > > > >>>> guy, > > > >>>> but these have shown up on the mailing lists). > > > >>>> - make sure your volumes are 4K aligned and ashift=3D12 (in > > > >>>> case a > > > >>>> drive > > > >>>> that uses 4K sectors is pretending to be 512byte sectored) > > > >>>> - never run over 70-80% full if write performance is an > > > >>>> issue > > > >>>> - use a zil on an SSD with good write performance > > > >>>>=20 > > > >>>> The only NFSv4 thing I can tell you is that it is known that > > > >>>> ZFS's > > > >>>> algorithm for determining sequential vs random I/O fails for > > > >>>> NFSv4 > > > >>>> during writing and this can be a performance hit. The only > > > >>>> workaround > > > >>>> is to use NFSv3 mounts, since file handle affinity > > > >>>> apparently > > > >>>> fixes > > > >>>> the problem and this is only done for NFSv3. > > > >>>>=20 > > > >>>> rick > > > >>>>=20 > > > >>>>> I saw that there are TSO issues with igb then i'm trying to > > > >>>>> disable > > > >>>>> it with sysctl but the situation wasn't solved. > > > >>>>>=20 > > > >>>>> Someone has got ideas ? I can give you more informations if > > > >>>>> you > > > >>>>> need. > > > >>>>>=20 > > > >>>>> Thanks in advance. 
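Purely for illustration, a minimal sketch of how the server-side knobs suggested above could be applied; "tank/jails" and "igb0" are placeholder dataset and interface names, not values taken from this thread:

# sysctl vfs.nfsd.cachetcp=0
# zfs set sync=disabled tank/jails
# ifconfig igb0 -tso

The sync=disabled setting trades crash safety for lower write latency, as noted above, and the ifconfig line is one common way to turn off TSO on an igb interface while testing.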
> > > >>>>> Regards, > > > >>>>>=20 > > > >>>>> Lo=C3=AFc Blot, > > > >>>>> UNIX Systems, Network and Security Engineer > > > >>>>> http://www.unix-experience.fr > > > >>>>> _______________________________________________ > > > >>>>> freebsd-fs@freebsd.org mailing list > > > >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > > >>>>> To unsubscribe, send any mail to > > > >>>>> "freebsd-fs-unsubscribe@freebsd.org" > > > >>>=20 > > > >>> _______________________________________________ > > > >>> freebsd-fs@freebsd.org mailing list > > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > > >>> To unsubscribe, send any mail to > > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > > >>>=20 > > > >>> _______________________________________________ > > > >>> freebsd-fs@freebsd.org mailing list > > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > > >>> To unsubscribe, send any mail to > > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > > >>>=20 > > > >>> _______________________________________________ > > > >>> freebsd-fs@freebsd.org mailing list > > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > > >>> To unsubscribe, send any mail to > > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > > >>> _______________________________________________ > > > >>> freebsd-fs@freebsd.org mailing list > > > >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > > >>> To unsubscribe, send any mail to > > > >>> "freebsd-fs-unsubscribe@freebsd.org" > > >=20 >=20 >=20 From owner-freebsd-fs@FreeBSD.ORG Sun Dec 21 11:41:40 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DE4D7820 for ; Sun, 21 Dec 2014 11:41:40 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C5C162ED6 for ; Sun, 21 Dec 2014 11:41:40 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBLBfe4I032129 for ; Sun, 21 Dec 2014 11:41:40 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 184013] [fusefs] truecrypt broken (probably fusefs issue) Date: Sun, 21 Dec 2014 11:41:40 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: mybox@at-hacker.in X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 21 Dec 2014 11:41:41 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=184013 Alexey Pereklad changed: What |Removed |Added 
---------------------------------------------------------------------------- CC| |mybox@at-hacker.in --- Comment #2 from Alexey Pereklad --- Got the same problem on FreeBSD 10.1. While trying to mount truecrypt file container (/root/cc) truecrypt just hangs: # ps -xa | grep truecrypt 1159 - I 0:00.00 truecrypt --filesystem=none -k --protect-hidden=no /root/cc 1161 - Is 0:00.00 truecrypt --filesystem=none -k --protect-hidden=no /root/cc 1165 1 D 0:00.00 umount -- /tmp/.truecrypt_aux_mnt1 Can't kill any pf truecrypt processes. Message in /var/log/messages: Dec 21 13:54:19 desktop kernel: FUSE: strategy: filehandles are closed -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@FreeBSD.ORG Sun Dec 21 16:03:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 81EE943C for ; Sun, 21 Dec 2014 16:03:17 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id 44AC22C8C for ; Sun, 21 Dec 2014 16:03:16 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id 065D157DD; Sun, 21 Dec 2014 17:03:14 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id E11C257DC; Sun, 21 Dec 2014 17:03:14 +0100 (CET) Date: Sun, 21 Dec 2014 17:03:14 +0100 (CET) From: krichy@tvnetwork.hu To: Nikolay Denev Subject: Re: 16 exabytes of L2ARC? In-Reply-To: Message-ID: References: <542560C1.9070207@fsn.hu> <54267FD3.2080603@fsn.hu> User-Agent: Alpine 2.11 (DEB 23 2013-08-11) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 21 Dec 2014 16:03:17 -0000 See https://bugs.freenas.org/issues/6239 Kojedzinszky Richard Euronet Magyarorszag Informatika Zrt. On Sat, 20 Dec 2014, Nikolay Denev wrote: > Date: Sat, 20 Dec 2014 15:32:38 +0100 > From: Nikolay Denev > To: "Nagy, Attila" > Cc: freebsd-fs > Subject: Re: 16 exabytes of L2ARC? 
> > On Sat, Sep 27, 2014 at 11:13 AM, Nagy, Attila wrote: > >> On 09/26/14 14:49, Nagy, Attila wrote: >> >>> Hi, >>> >>> Running stable/10@r271944: >>> # zpool iostat -v >>> capacity operations bandwidth >>> pool alloc free read write read write >>> ---------- ----- ----- ----- ----- ----- ----- >>> data 17.3T 40.7T 165 1.24K 1.63M 90.8M >>> da0 4.31T 10.2T 41 318 418K 22.7M >>> da1 4.32T 10.2T 41 317 416K 22.7M >>> da2 4.32T 10.2T 41 317 416K 22.7M >>> da3 4.31T 10.2T 41 317 418K 22.7M >>> cache - - - - - - >>> ada0 513G 16.0E 222 179 1.05M 2.79M >>> ada1 511G 16.0E 222 180 1.05M 2.80M >>> ---------- ----- ----- ----- ----- ----- ----- >>> >>> # egrep 'ada.*MB' /var/run/dmesg.boot >>> ada0: 600.000MB/s transfers (SATA 3.x, UDMA5, PIO 512bytes) >>> ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C) >>> ada1: 600.000MB/s transfers (SATA 3.x, UDMA5, PIO 512bytes) >>> ada1: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C) >>> >> I've removed the cache devices and re-added them, now it's fine: >> cache - - - - - - >> ada0 355M 372G 24 0 151K 0 >> ada1 345M 372G 12 505 71.7K 2.79M >> >> >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > > Hi, > > Did you figure out the root cause of this? > I seem to be having the same issue: > > > [14:31][root@nas:~]#zpool iostat -v | grep -A1 cache > cache - - - - - - > ada4p1 215G 16.0E 9 3 107K 322K > > uname -a : FreeBSD nas.home.lan 10.1-STABLE FreeBSD 10.1-STABLE #14 > r274549: Sat Nov 15 14:43:56 UTC 2014 > root@nas.home.lan:/usr/obj/usr/src/sys/NAS > amd64 > > > --Nikolay > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Sun Dec 21 21:00:25 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3F75C68B for ; Sun, 21 Dec 2014 21:00:25 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 143852DED for ; Sun, 21 Dec 2014 21:00:25 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBLL0O6G071098 for ; Sun, 21 Dec 2014 21:00:24 GMT (envelope-from bugzilla-noreply@FreeBSD.org) Message-Id: <201412212100.sBLL0O6G071098@kenobi.freebsd.org> From: bugzilla-noreply@FreeBSD.org To: freebsd-fs@FreeBSD.org Subject: Problem reports for freebsd-fs@FreeBSD.org that need special attention X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 Date: Sun, 21 Dec 2014 21:00:24 +0000 Content-Type: text/plain; charset="UTF-8" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 21 Dec 2014 21:00:25 -0000 To view an individual PR, use: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id). 
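For example, the first report in the listing below corresponds to https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470.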
The following is a listing of current problems submitted by FreeBSD users, which need special attention. These represent problem reports covering all versions including experimental development code and obsolete releases. Status | Bug Id | Description ------------+-----------+--------------------------------------------------- Open | 136470 | [nfs] Cannot mount / in read-only, over NFS Open | 139651 | [nfs] mount(8): read-only remount of NFS volume d Open | 144447 | [zfs] sharenfs fsunshare() & fsshare_main() non f 3 problems total for which you should take action. From owner-freebsd-fs@FreeBSD.ORG Mon Dec 22 08:41:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AD6707E2 for ; Mon, 22 Dec 2014 08:41:45 +0000 (UTC) Received: from smtp.unix-experience.fr (195-154-176-227.rev.poneytelecom.eu [195.154.176.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C918B300B for ; Mon, 22 Dec 2014 08:41:43 +0000 (UTC) Received: from smtp.unix-experience.fr (unknown [192.168.200.21]) by smtp.unix-experience.fr (Postfix) with ESMTP id 44B072678E; Mon, 22 Dec 2014 08:41:34 +0000 (UTC) X-Virus-Scanned: scanned by unix-experience.fr Received: from smtp.unix-experience.fr ([192.168.200.21]) by smtp.unix-experience.fr (smtp.unix-experience.fr [192.168.200.21]) (amavisd-new, port 10024) with ESMTP id to2dyExgxrJg; Mon, 22 Dec 2014 08:41:27 +0000 (UTC) Received: from mail.unix-experience.fr (repo.unix-experience.fr [192.168.200.30]) by smtp.unix-experience.fr (Postfix) with ESMTPSA id D4EA926780; Mon, 22 Dec 2014 08:41:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=unix-experience.fr; s=uxselect; t=1419237686; bh=IrN+f3AquMe1x2jalDWp7ltNQUoC/r1sNHY3BL/OUK0=; h=Date:From:Subject:To:Cc:In-Reply-To:References; b=o/nYmlRFzgj0GfT2vtX7lmuNxesB6mKELnkRdH7RsMN7rq1eEthwRvqVuAmr7eD8v tehmKOe31iixY8bijRcTGV6/t54U7Kcdt8yNLHXoG/fUPkjcSVlIzmZxeyRAU84VHg n2kIwqZ1mcCfYCQh8wr1mqpoYyArMVDdai7YtKNs= Mime-Version: 1.0 Date: Mon, 22 Dec 2014 08:41:26 +0000 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-ID: <9fcfcbfe720a9b56a995cd6e227b8f9f@mail.unix-experience.fr> X-Mailer: RainLoop/1.7.0.203 From: "=?utf-8?B?TG/Dr2MgQmxvdA==?=" Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4 To: "Rick Macklem" In-Reply-To: <2087358136.248078.1419122007097.JavaMail.root@uoguelph.ca> References: <2087358136.248078.1419122007097.JavaMail.root@uoguelph.ca> Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 22 Dec 2014 08:41:45 -0000 Hi Rick,=0Amy 5 jails runs this weekend and now i have some stats on this= monday.=0A=0AHopefully deadlock was fixed, yeah, but everything isn't go= od :(=0A=0AOn NFSv4 server (FreeBSD 10.1) system uses 35% CPU=0A=0AAs i c= an see this is because of nfsd:=0A=0A918 root 96 20 0 1235= 2K 3372K rpcsvc 6 51.4H 273.68% nfsd: server (nfsd)=0A=0AIf i look at = dmesg i see:=0Anfsd server cache flooded, try increasing vfs.nfsd.tcphigh= water=0A=0Avfs.nfsd.tcphighwater was set to 10000, i increase it to 15000= =0A=0AHere is 'nfsstat -s' 
output:=0A=0AServer Info:=0A Getattr Setatt= r Lookup Readlink Read Write Create Remove=0A 12600652= 1812 2501097 156 1386423 1983729 123 162067=0A= Rename Link Symlink Mkdir Rmdir Readdir RdirPlus = Access=0A 36762 9 0 0 0 3147 = 0 623524=0A Mknod Fsstat Fsinfo PathConf Commit=0A = 0 0 0 0 328117=0AServer Ret-Failed=0A = 0=0AServer Faults=0A 0=0AServer Cache Stats:=0A = Inprog Idem Non-idem Misses=0A 0 0 0 126= 35512=0AServer Write Gathering:=0A WriteOps WriteRPC Opsaved=0A 19837= 29 1983729 0=0A=0AAnd here is 'procstat -kk' for nfsd (server)= =0A=0A 918 100528 nfsd nfsd: master mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_timedwait_sig+0x10 _cv_timedwait_sig_sbt+0x18= b svc_run_internal+0x4a1 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x10= 7 sys_nfssvc+0x9c amd64_syscall+0x351 Xfast_syscall+0xfb =0A 918 100568 = nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xa= b sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_threa= d_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100569 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a fork_trampoline+0xe =0A 918 100570 nfsd nfs= d: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0x= f _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exi= t+0x9a fork_trampoline+0xe =0A 918 100571 nfsd nfsd: service= mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait= _sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a for= k_trampoline+0xe =0A 918 100572 nfsd nfsd: service mi_swi= tch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoli= ne+0xe =0A 918 100573 nfsd nfsd: service mi_switch+0xe1 s= leepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_i= nternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A= 918 100574 nfsd nfsd: service mi_switch+0xe1 sleepq_catc= h_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x= 87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 1005= 75 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+= 0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_th= read_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100576 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleep= q_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start= +0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100577 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig= +0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a fork_trampoline+0xe =0A 918 100578 nfsd nfsd: serv= ice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_w= ait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a = fork_trampoline+0xe =0A 918 100579 nfsd nfsd: service mi_= switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x= 16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tramp= oline+0xe =0A 918 100580 nfsd nfsd: service mi_switch+0xe= 1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_ru= n_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 918 100581 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e 
svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 1= 00582 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100583 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100584 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 918 100585 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 918 100586 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A 918 100587 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 918 100588 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 91= 8 100589 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100590 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100591 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe =0A 918 100592 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A 918 100593 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A 918 100594 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A 918 100595 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 918 100596 nfsd nfsd: service mi_switch+0xe1 sleepq_catch= _signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 10059= 7 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100598 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq= _wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100599 nfsd n= fsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf _cv_wait_sig+0x16a 
svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a fork_trampoline+0xe =0A 918 100600 nfsd nfsd: servi= ce mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wa= it_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a f= ork_trampoline+0xe =0A 918 100601 nfsd nfsd: service mi_s= witch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x1= 6a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampo= line+0xe =0A 918 100602 nfsd nfsd: service mi_switch+0xe1= sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 918 100603 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 1= 00604 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100605 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100606 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 918 100607 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 918 100608 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A 918 100609 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 918 100610 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 91= 8 100611 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100612 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100613 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe =0A 918 100614 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A 918 100615 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A 918 100616 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A 918 100617 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf 
_cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 918 100618 nfsd nfsd: service mi_switch+0xe1 sleepq_catch= _signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 10061= 9 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100620 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq= _wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100621 nfsd n= fsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a fork_trampoline+0xe =0A 918 100622 nfsd nfsd: servi= ce mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wa= it_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a f= ork_trampoline+0xe =0A 918 100623 nfsd nfsd: service mi_s= witch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x1= 6a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampo= line+0xe =0A 918 100624 nfsd nfsd: service mi_switch+0xe1= sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 918 100625 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 1= 00626 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100627 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100628 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 918 100629 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 918 100630 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A 918 100631 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 918 100632 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 91= 8 100633 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100634 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100635 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab 
sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe =0A 918 100636 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A 918 100637 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A 918 100638 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A 918 100639 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 918 100640 nfsd nfsd: service mi_switch+0xe1 sleepq_catch= _signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 10064= 1 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100642 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq= _wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100643 nfsd n= fsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a fork_trampoline+0xe =0A 918 100644 nfsd nfsd: servi= ce mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wa= it_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a f= ork_trampoline+0xe =0A 918 100645 nfsd nfsd: service mi_s= witch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x1= 6a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampo= line+0xe =0A 918 100646 nfsd nfsd: service mi_switch+0xe1= sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A 918 100647 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 1= 00648 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100649 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100650 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A 918 100651 nfsd nfsd: s= ervice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _c= v_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a fork_trampoline+0xe =0A 918 100652 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A 918 100653 nfsd nfsd: service mi_switch+= 0xe1 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A 918 100654 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 91= 8 100655 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100656 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A 918 100657 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb= fork_exit+0x9a fork_trampoline+0xe =0A 918 100658 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A 918 100659 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A 918 100660 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A 918 100661 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A = 918 100662 nfsd nfsd: service mi_switch+0xe1 sleepq_catch= _signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A---=0A=0ANow= if we look at client (FreeBSD 9.3)=0A=0AWe see system was very busy and = do many and many interrupts=0A=0ACPU: 0.0% user, 0.0% nice, 37.8% syste= m, 51.2% interrupt, 11.0% idle=0A=0AA look at process list shows that the= re are many sendmail process in state nfstry=0A=0Anfstry 18 32:27 0.88= % sendmail: Queue runner@00:30:00 for /var/spool/clientm=0A=0AHere is 'nf= sstat -c' output:=0A=0AClient Info:=0ARpc Counts:=0A Getattr Setattr = Lookup Readlink Read Write Create Remove=0A 1051347 = 1724 2494481 118 903902 1901285 162676 161899=0A = Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Acc= ess=0A 36744 2 0 114 40 3131 = 0 544136=0A Mknod Fsstat Fsinfo PathConf Commit=0A = 9 0 0 0 245821=0ARpc Info:=0A TimedOut Inv= alid X Replies Retries Requests=0A 0 0 0 = 0 8356557=0ACache Info:=0AAttr Hits Misses Lkup Hits Misses Bio= R Hits Misses BioW Hits Misses=0A108754455 491475 54229224 24= 37229 46814561 821723 5132123 1871871=0ABioRLHits Misses BioD = Hits Misses DirE Hits Misses Accs Hits Misses=0A 144035 = 118 53736 2753 27813 1 57238839 544205=0A=0A=0AI= f you need more things, tell me, i let the PoC in this state.=0A=0AThanks= =0A=0ARegards,=0A=0ALo=C3=AFc Blot,=0AUNIX Systems, Network and Security = Engineer=0Ahttp://www.unix-experience.fr=0A=0A21 d=C3=A9cembre 2014 01:33= "Rick Macklem" a =C3=A9crit: =0A> Loic Blot wrote= :=0A> =0A>> Hi Rick,=0A>> ok, i don't need locallocks, i haven't understa= nd option was for that=0A>> usage, i removed it.=0A>> I do more tests on = monday.=0A>> Thanks for the deadlock fix, for other people :)=0A> =0A> Go= od. 
Thanks

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

21 December 2014 01:33, "Rick Macklem" wrote:
> [...]
>>>> [...]
>>>>
>>>> Regards,
>>>>
>>>> Loïc Blot,
>>>> UNIX Systems, Network and Security Engineer
>>>> http://www.unix-experience.fr
>>>>
>>>> 15 December 2014 15:18, "Rick Macklem" wrote:
>>>>> Loic Blot wrote:
>>>>>
>>>>>> For more information, here is procstat -kk on nfsd; if you need
>>>>>> more hot data, tell me.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>>   PID    TID COMM             TDNAME            KSTACK
>>>>>>   918 100529 nfsd             nfsd: master      mi_switch+0xe1
>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de
>>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>>>> amd64_syscall+0x351
>>>>>
>>>>> Well, most of the threads are stuck like this one, waiting for a
>>>>> vnode lock in ZFS. All of them appear to be in zfs_fhtovp().
>>>>> I'm not a ZFS guy, so I can't help much. I'll try changing the
>>>>> subject line to include ZFS vnode lock, so maybe the ZFS guys
>>>>> will take a look.
>>>>>
>>>>> The only thing I've seen suggested is trying:
>>>>>     sysctl vfs.lookup_shared=0
>>>>> to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't
>>>>> obey the vnode locking rules for lookup and rename, according to
>>>>> the posting I saw.
>>>>>
>>>>> I've added a couple of comments about the other threads below,
>>>>> but they are all either waiting for an RPC request or waiting
>>>>> for the threads stuck on the ZFS vnode lock to complete.
>>>>>
>>>>> rick
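For anyone following along, applying that suggestion on the server should just be the sysctl below; persisting it in /etc/sysctl.conf is optional and shown only as an example of how to keep it across reboots:

    # disable shared vop_lookup()s, as suggested above
    sysctl vfs.lookup_shared=0

    # keep the setting after a reboot (optional)
    echo 'vfs.lookup_shared=0' >> /etc/sysctl.conf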
>>>>>
>>>>>>   918 100564 nfsd             nfsd: service     mi_switch+0xe1
>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>>> fork_trampoline+0xe
>>>>>
>>>>> Fyi, this thread is just waiting for an RPC to arrive. (Normal)
>>>>>
>>>>>> [threads 100565-100570: same idle svc_run_internal stack]
>>>>>>   918 100571 nfsd             nfsd: service     mi_switch+0xe1
>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>>   918 100572 nfsd             nfsd: service     mi_switch+0xe1
>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76
>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>>>>> fork_exit+0x9a fork_trampoline+0xe
>>>>>
>>>>> This one (and a few others) are waiting for the nfsv4_lock. This
>>>>> happens because other threads are stuck with RPCs in progress
>>>>> (i.e. the ones waiting on the vnode lock in zfs_fhtovp()).
>>>>> For these, the RPC needs to lock out other threads to do the
>>>>> operation, so it waits for the nfsv4_lock(), which can
>>>>> exclusively lock the NFSv4 data structures once all other nfsd
>>>>> threads complete their RPCs in progress.
>>>>>
>>>>>>   918 100573 nfsd             nfsd: service     mi_switch+0xe1
>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>
>>>>> Same as above.
>>>>>
>>>>>>   918 100574 nfsd             nfsd: service     mi_switch+0xe1
>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>>>>>> fork_exit+0x9a fork_trampoline+0xe
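To see at a glance how many nfsd threads are stuck on each of these waits, a one-liner over the same procstat output should be enough; the PID (918) and the patterns simply match the traces quoted above:

    procstat -kk 918 | awk '
        /zfs_fhtovp/   { vnode++ }   # blocked on the ZFS vnode lock
        /nfsv4_lock/   { v4++ }      # waiting for the exclusive NFSv4 state lock
        /_cv_wait_sig/ { idle++ }    # idle, waiting for an RPC to arrive
        END { printf "vnode lock: %d  nfsv4_lock: %d  idle: %d\n", vnode, v4, idle }'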
>>>>>> [threads 100575-100607: same zfs_fhtovp vnode-lock stack as 100574]
>>>>>
>>>>> Lots more waiting for the ZFS vnode lock in zfs_fhtovp().
>>>>>
>>>>>>   918 100608 nfsd             nfsd: service     mi_switch+0xe1
>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1
>>>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>> [thread 100609: same zfs_fhtovp vnode-lock stack as 100574]
>>>>>>   918 100610 nfsd             nfsd: service     mi_switch+0xe1
>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
>>>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>>>>>> svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a
>>>>>> fork_trampoline+0xe
>>>>>> [threads 100611-100618, 100621 and 100623: same nfsv4_lock wait
>>>>>> via nfsrvd_dorpc+0x316 as 100571]
>>>>>> [threads 100619-100620, 100622 and 100624-100658: same
>>>>>> zfs_fhtovp vnode-lock stack as 100574]
>>>>>>
>>>>>> Loïc Blot,
>>>>>> UNIX Systems, Network and Security Engineer
>>>>>> http://www.unix-experience.fr
>>>>>>
>>>>>> 15 December 2014 13:29, "Loïc Blot" wrote:
>>>>>>> Hmmm...
>>>>>>> now I'm experiencing a deadlock.
>>>>>>>
>>>>>>> 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)
>>>>>>>
>>>>>>> The only fix was to reboot the server, but after rebooting the
>>>>>>> deadlock arrives a second time when I start my jails over NFS.
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> 15 December 2014 10:07, "Loïc Blot" wrote:
>>>>>>>
>>>>>>> Hi Rick,
>>>>>>> after talking with my N+1, NFSv4 is required on our
>>>>>>> infrastructure. I tried to upgrade the NFSv4+ZFS server from
>>>>>>> 9.3 to 10.1; I hope this will resolve some issues...
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> 10 December 2014 15:36, "Loïc Blot" wrote:
>>>>>>>
>>>>>>> Hi Rick,
>>>>>>> thanks for your suggestion.
rpcrecv state on=0A>>>>>>> the=0A>>>>>>> server. kill -9 doesn't affec= t the=0A>>>>>>> process, it's blocked.... (State: Ds)=0A>>>>>>> =0A>>>>>>= > for the performances=0A>>>>>>> =0A>>>>>>> NFSv3: 60Mbps=0A>>>>>>> NFSv4= : 45Mbps=0A>>>>>>> Regards,=0A>>>>>>> =0A>>>>>>> Lo=C3=AFc Blot,=0A>>>>>>= > UNIX Systems, Network and Security Engineer=0A>>>>>>> http://www.unix-e= xperience.fr=0A>>>>>>> =0A>>>>>>> 10 d=C3=A9cembre 2014 13:56 "Rick Mackl= em" =0A>>>>>>> a=0A>>>>>>> =C3=A9crit:=0A>>>>>>> = =0A>>>>>>>> Loic Blot wrote:=0A>>>>>>>> =0A>>>>>>>>> Hi Rick,=0A>>>>>>>>>= I'm trying NFSv3.=0A>>>>>>>>> Some jails are starting very well but now = i have an issue=0A>>>>>>>>> with=0A>>>>>>>>> lockd=0A>>>>>>>>> after some= minutes:=0A>>>>>>>>> =0A>>>>>>>>> nfs server 10.10.X.8:/jails: lockd not= responding=0A>>>>>>>>> nfs server 10.10.X.8:/jails lockd is alive again= =0A>>>>>>>>> =0A>>>>>>>>> I look at mbuf, but i seems there is no problem= .=0A>>>>>>>> =0A>>>>>>>> Well, if you need locks to be visible across mul= tiple=0A>>>>>>>> clients,=0A>>>>>>>> then=0A>>>>>>>> I'm afraid you are s= tuck with using NFSv4 and the=0A>>>>>>>> performance=0A>>>>>>>> you=0A>>>= >>>>> get=0A>>>>>>>> from it. (There is no way to do file handle affinity= for=0A>>>>>>>> NFSv4=0A>>>>>>>> because=0A>>>>>>>> the read and write op= s are buried in the compound RPC and=0A>>>>>>>> not=0A>>>>>>>> easily=0A>= >>>>>>> recognized.)=0A>>>>>>>> =0A>>>>>>>> If the locks don't need to be= visible across multiple=0A>>>>>>>> clients,=0A>>>>>>>> I'd=0A>>>>>>>> su= ggest trying the "nolockd" option with nfsv3.=0A>>>>>>>> =0A>>>>>>>>> Her= e is my rc.conf on server:=0A>>>>>>>>> =0A>>>>>>>>> nfs_server_enable=3D"= YES"=0A>>>>>>>>> nfsv4_server_enable=3D"YES"=0A>>>>>>>>> nfsuserd_enable= =3D"YES"=0A>>>>>>>>> nfsd_server_flags=3D"-u -t -n 256"=0A>>>>>>>>> mount= d_enable=3D"YES"=0A>>>>>>>>> mountd_flags=3D"-r"=0A>>>>>>>>> nfsuserd_fla= gs=3D"-usertimeout 0 -force 20"=0A>>>>>>>>> rpcbind_enable=3D"YES"=0A>>>>= >>>>> rpc_lockd_enable=3D"YES"=0A>>>>>>>>> rpc_statd_enable=3D"YES"=0A>>>= >>>>>> =0A>>>>>>>>> Here is the client:=0A>>>>>>>>> =0A>>>>>>>>> nfsuserd= _enable=3D"YES"=0A>>>>>>>>> nfsuserd_flags=3D"-usertimeout 0 -force 20"= =0A>>>>>>>>> nfscbd_enable=3D"YES"=0A>>>>>>>>> rpc_lockd_enable=3D"YES"= =0A>>>>>>>>> rpc_statd_enable=3D"YES"=0A>>>>>>>>> =0A>>>>>>>>> Have you g= ot an idea ?=0A>>>>>>>>> =0A>>>>>>>>> Regards,=0A>>>>>>>>> =0A>>>>>>>>> L= o=C3=AFc Blot,=0A>>>>>>>>> UNIX Systems, Network and Security Engineer=0A= >>>>>>>>> http://www.unix-experience.fr=0A>>>>>>>>> =0A>>>>>>>>> 9 d=C3= =A9cembre 2014 04:31 "Rick Macklem" =0A>>>>>>>>> a= =0A>>>>>>>>> =C3=A9crit:=0A>>>>>>>>>> Loic Blot wrote:=0A>>>>>>>>>> =0A>>= >>>>>>>>> Hi rick,=0A>>>>>>>>>>> =0A>>>>>>>>>>> I waited 3 hours (no lag = at jail launch) and now I do:=0A>>>>>>>>>>> sysrc=0A>>>>>>>>>>> memcached= _flags=3D"-v -m 512"=0A>>>>>>>>>>> Command was very very slow...=0A>>>>>>= >>>>> =0A>>>>>>>>>>> Here is a dd over NFS:=0A>>>>>>>>>>> =0A>>>>>>>>>>> = 601062912 bytes transferred in 21.060679 secs (28539579=0A>>>>>>>>>>> byt= es/sec)=0A>>>>>>>>>> =0A>>>>>>>>>> Can you try the same read using an NFS= v3 mount?=0A>>>>>>>>>> (If it runs much faster, you have probably been bi= tten by=0A>>>>>>>>>> the=0A>>>>>>>>>> ZFS=0A>>>>>>>>>> "sequential vs ran= dom" read heuristic which I've been told=0A>>>>>>>>>> things=0A>>>>>>>>>>= NFS is doing "random" reads without file handle affinity.=0A>>>>>>>>>> F= ile=0A>>>>>>>>>> handle affinity is very hard to do for NFSv4, so it isn'= t=0A>>>>>>>>>> 
done.)=0A>>>>>>>> =0A>>>>>>>> I was actually suggesting th= at you try the "dd" over nfsv3=0A>>>>>>>> to=0A>>>>>>>> see=0A>>>>>>>> ho= w=0A>>>>>>>> the performance compared with nfsv4. If you do that, please= =0A>>>>>>>> post=0A>>>>>>>> the=0A>>>>>>>> comparable results.=0A>>>>>>>>= =0A>>>>>>>> Someday I would like to try and get ZFS's sequential vs=0A>>= >>>>>> random=0A>>>>>>>> read=0A>>>>>>>> heuristic modified and any info = on what difference in=0A>>>>>>>> performance=0A>>>>>>>> that=0A>>>>>>>> m= ight make for NFS would be useful.=0A>>>>>>>> =0A>>>>>>>> rick=0A>>>>>>>>= =0A>>>>>>>>>> rick=0A>>>>>>>>>> =0A>>>>>>>>>>> This is quite slow...=0A>= >>>>>>>>>> =0A>>>>>>>>>>> You can found some nfsstat below (command isn't= finished=0A>>>>>>>>>>> yet)=0A>>>>>>>>>>> =0A>>>>>>>>>>> nfsstat -c -w 1= =0A>>>>>>>>>>> =0A>>>>>>>>>>> GtAttr Lookup Rdlink Read Write Rename Acce= ss Rddir=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 4 0 0 0 0 0 16 0=0A>= >>>>>>>>>> 2 0 0 0 0 0 17 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0= 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 = 0=0A>>>>>>>>>>> 0 4 0 0 0 0 4 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>= >> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 = 0 0 0=0A>>>>>>>>>>> 4 0 0 0 0 0 3 0=0A>>>>>>>>>>> 0 0 0 0 0 0 3 0=0A>>>>>= >>>>>> 37 10 0 8 0 0 14 1=0A>>>>>>>>>>> 18 16 0 4 1 2 4 0=0A>>>>>>>>>>> 7= 8 91 0 82 6 12 30 0=0A>>>>>>>>>>> 19 18 0 2 2 4 2 0=0A>>>>>>>>>>> 0 0 0 0= 2 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> GtAttr Lookup Rdlink= Read Write Rename Access Rddir=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>= >> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 1 0 0 0 = 0 1 0=0A>>>>>>>>>>> 4 6 0 0 6 0 3 0=0A>>>>>>>>>>> 2 0 0 0 0 0 0 0=0A>>>>>= >>>>>> 0 0 0 0=200 0 0 0=0A>>>>>>>>>>> 1 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 = 0 0 1 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0= =0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>= > 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0= 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 6 108 0 0 0 0 0 0=0A>>>>= >>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> GtAtt= r Lookup Rdlink Read Write Rename Access Rddir=0A>>>>>>>>>>> 0 0 0 0 0 0 = 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>= >>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 = 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 98 54 0 86 11 0 25 0= =0A>>>>>>>>>>> 36 24 0 39 25 0 10 1=0A>>>>>>>>>>> 67 8 0 63 63 0 41 0=0A>= >>>>>>>>>> 34 0 0 35 34 0 0 0=0A>>>>>>>>>>> 75 0 0 75 77 0 0 0=0A>>>>>>>>= >>> 34 0 0 35 35 0 0 0=0A>>>>>>>>>>> 75 0 0 74 76 0 0 0=0A>>>>>>>>>>> 33 = 0 0 34 33 0 0 0=0A>>>>>>>>>>> 0 0 0 0 5 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 6= 0=0A>>>>>>>>>>> 11 0 0 0 0 0 11 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>= >>>>> 0 17 0 0 0 0 1 0=0A>>>>>>>>>>> GtAttr Lookup Rdlink Read Write Rena= me Access Rddir=0A>>>>>>>>>>> 4 5 0 0 0 0 12 0=0A>>>>>>>>>>> 2 0 0 0 0 0 = 26 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>= >>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0= 0 0 0 0=0A>>>>>>>>>>> 0 4 0 0 0 0 4 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>= >>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 4 0= 0 0 0 0 2 0=0A>>>>>>>>>>> 2 0 0 0 0 0 24 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0= =0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>= > 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 
0=0A>>>>>>>>>>> 0 0 0 0 0 0= 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> GtAttr Lookup Rdlink Rea= d Write Rename Access Rddir=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0= 0 0 0 0 0 0 0=0A>>>>>>>>>>> 4 0 0 0 0 0 7 0=0A>>>>>>>>>>> 2 1 0 0 0 0 1 = 0=0A>>>>>>>>>>> 0 0 0 0 2 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>= >> 0 0 0 0 6 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 = 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>= >>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 4 6 0 = 0 0 0 3 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 2 0 0 0 0 0 0 0=0A>= >>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 = 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> GtAttr Lookup R= dlink Read Write Rename Access Rddir=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>= >>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0= 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 4 71 0 0 0 0 0 0= =0A>>>>>>>>>>> 0 1 0 0 0 0 0 0=0A>>>>>>>>>>> 2 36 0 0 0 0 1 0=0A>>>>>>>>>= >> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 = 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 1 0 0 0 0 0 1 0=0A>>>>>= >>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 79 6 0= 79 79 0 2 0=0A>>>>>>>>>>> 25 0 0 25 26 0 6 0=0A>>>>>>>>>>> 43 18 0 39 46= 0 23 0=0A>>>>>>>>>>> 36 0 0 36 36 0 31 0=0A>>>>>>>>>>> 68 1 0 66 68 0 0 = 0=0A>>>>>>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir=0A>>>= >>>>>>>> 36 0 0 36 36 0 0 0=0A>>>>>>>>>>> 48 0 0 48 49 0 0 0=0A>>>>>>>>>>= > 20 0 0 20 20 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 3 14 0 1= 0 0 11 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>= >>>>>>>>>> 0 4 0 0 0 0 4 0=0A>>>>>>>>>>> 0 0 0 0 0 0 0 0=0A>>>>>>>>>>> 4 = 22 0 0 0 0 16 0=0A>>>>>>>>>>> 2 0 0 0 0 0 23 0=0A>>>>>>>>>>> =0A>>>>>>>>>= >> Regards,=0A>>>>>>>>>>> =0A>>>>>>>>>>> Lo=C3=AFc Blot,=0A>>>>>>>>>>> UN= IX Systems, Network and Security Engineer=0A>>>>>>>>>>> http://www.unix-e= xperience.fr=0A>>>>>>>>>>> =0A>>>>>>>>>>> 8 d=C3=A9cembre 2014 09:36 "Lo= =C3=AFc Blot"=0A>>>>>>>>>>> a=0A>>>>>>>>>>= > =C3=A9crit:=0A>>>>>>>>>>>> Hi Rick,=0A>>>>>>>>>>>> I stopped the jails = this week-end and started it this=0A>>>>>>>>>>>> morning,=0A>>>>>>>>>>>> = i'll=0A>>>>>>>>>>>> give you some stats this week.=0A>>>>>>>>>>>> =0A>>>>= >>>>>>>> Here is my nfsstat -m output (with your rsize/wsize=0A>>>>>>>>>>= >> tweaks)=0A>>>>>>> =0A>>>>>>> =0A>>>>>> =0A>>>>> =0A>> =0A> nfsv4,tcp,r= esvport,hard,cto,sec=3Dsys,acdirmin=3D3,acdirmax=3D60,acregmin=3D5,acregm= ax=3D60,nametimeo=3D60,negna=0A>>>>>>> =0A>>>>>>>>>>>> =0A>>>>>>> =0A>>>>= >>> =0A>>>>>> =0A>>>>> =0A>> =0A> etimeo=3D60,rsize=3D32768,wsize=3D32768= ,readdirsize=3D32768,readahead=3D1,wcommitsize=3D773136,timeout=3D120,ret= ra=0A>>>>>>> =0A>>>>>>> s=3D2147483647=0A>>>>>>> =0A>>>>>>> On server sid= e my disks are on a raid controller which show a=0A>>>>>>> 512b=0A>>>>>>>= volume and write performances=0A>>>>>>> are very honest (dd if=3D/dev/ze= ro of=3D/jails/test.dd bs=3D4096=0A>>>>>>> count=3D100000000 =3D> 450MBps= )=0A>>>>>>> =0A>>>>>>> Regards,=0A>>>>>>> =0A>>>>>>> Lo=C3=AFc Blot,=0A>>= >>>>> UNIX Systems, Network and Security Engineer=0A>>>>>>> http://www.un= ix-experience.fr=0A>>>>>>> =0A>>>>>>> 5 d=C3=A9cembre 2014 15:14 "Rick Ma= cklem" a=0A>>>>>>> =C3=A9crit:=0A>>>>>>> =0A>>>>>>= >> Loic Blot wrote:=0A>>>>>>>> =0A>>>>>>>>> Hi,=0A>>>>>>>>> i'm trying to= create a virtualisation environment based on=0A>>>>>>>>> 
jails.=0A>>>>>>= >>> Those jails are stored under a big ZFS pool on a FreeBSD=0A>>>>>>>>> = 9.3=0A>>>>>>>>> which=0A>>>>>>>>> export a NFSv4 volume. This NFSv4 volum= e was mounted on a=0A>>>>>>>>> big=0A>>>>>>>>> hypervisor (2 Xeon E5v3 + = 128GB memory and 8 ports (but=0A>>>>>>>>> only 1=0A>>>>>>>>> was=0A>>>>>>= >>> used at this time).=0A>>>>>>>>> =0A>>>>>>>>> The problem is simple, m= y hypervisors runs 6 jails (used 1%=0A>>>>>>>>> cpu=0A>>>>>>>>> and=0A>>>= >>>>>> 10GB RAM approximatively and less than 1MB bandwidth) and=0A>>>>>>= >>> works=0A>>>>>>>>> fine at start but the system slows down and after 2= -3 days=0A>>>>>>>>> become=0A>>>>>>>>> unusable. When i look at top comma= nd i see 80-100% on=0A>>>>>>>>> system=0A>>>>>>>>> and=0A>>>>>>>>> comman= ds are very very slow. Many process are tagged with=0A>>>>>>>>> nfs_cl*.= =0A>>>>>>>> =0A>>>>>>>> To be honest, I would expect the slowness to be b= ecause of=0A>>>>>>>> slow=0A>>>>>>>> response=0A>>>>>>>> from the NFSv4 s= erver, but if you do:=0A>>>>>>>> # ps axHl=0A>>>>>>>> on a client when it= is slow and post that, it would give us=0A>>>>>>>> some=0A>>>>>>>> more= =0A>>>>>>>> information on where the client side processes are sitting.= =0A>>>>>>>> If you also do something like:=0A>>>>>>>> # nfsstat -c -w 1= =0A>>>>>>>> and let it run for a while, that should show you how many=0A>= >>>>>>> RPCs=0A>>>>>>>> are=0A>>>>>>>> being done and which ones.=0A>>>>>= >>> =0A>>>>>>>> # nfsstat -m=0A>>>>>>>> will show you what your mount is = actually using.=0A>>>>>>>> The only mount option I can suggest trying is= =0A>>>>>>>> "rsize=3D32768,wsize=3D32768",=0A>>>>>>>> since some network = environments have difficulties with 64K.=0A>>>>>>>> =0A>>>>>>>> There are= a few things you can try on the NFSv4 server side,=0A>>>>>>>> if=0A>>>>>= >>> it=0A>>>>>>>> appears=0A>>>>>>>> that the clients are generating a la= rge RPC load.=0A>>>>>>>> - disabling the DRC cache for TCP by setting=0A>= >>>>>>> vfs.nfsd.cachetcp=3D0=0A>>>>>>>> - If the server is seeing a larg= e write RPC load, then=0A>>>>>>>> "sync=3Ddisabled"=0A>>>>>>>> might help= , although it does run a risk of data loss when=0A>>>>>>>> the=0A>>>>>>>>= server=0A>>>>>>>> crashes.=0A>>>>>>>> Then there are a couple of other Z= FS related things (I'm not=0A>>>>>>>> a=0A>>>>>>>> ZFS=0A>>>>>>>> guy,=0A= >>>>>>>> but these have shown up on the mailing lists).=0A>>>>>>>> - make= sure your volumes are 4K aligned and ashift=3D12 (in=0A>>>>>>>> case a= =0A>>>>>>>> drive=0A>>>>>>>> that uses 4K sectors is pretending to be 512= byte sectored)=0A>>>>>>>> - never run over 70-80% full if write performan= ce is an=0A>>>>>>>> issue=0A>>>>>>>> - use a zil on an SSD with good writ= e performance=0A>>>>>>>> =0A>>>>>>>> The only NFSv4 thing I can tell you = is that it is known that=0A>>>>>>>> ZFS's=0A>>>>>>>> algorithm for determ= ining sequential vs random I/O fails for=0A>>>>>>>> NFSv4=0A>>>>>>>> duri= ng writing and this can be a performance hit. The only=0A>>>>>>>> workaro= und=0A>>>>>>>> is to use NFSv3 mounts, since file handle affinity=0A>>>>>= >>> apparently=0A>>>>>>>> fixes=0A>>>>>>>> the problem and this is only d= one for NFSv3.=0A>>>>>>>> =0A>>>>>>>> rick=0A>>>>>>>> =0A>>>>>>>>> I saw = that there are TSO issues with igb then i'm trying to=0A>>>>>>>>> disable= =0A>>>>>>>>> it with sysctl but the situation wasn't solved.=0A>>>>>>>>> = =0A>>>>>>>>> Someone has got ideas ? 
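To make the rsize/wsize mount-option suggestion quoted above easy to try, here is a sketch of a one-off test mount from the FreeBSD client side; the export path 10.10.X.8:/jails is the one mentioned earlier in the thread, while /mnt/jailtest is only a placeholder mount point:

  # test mount with the smaller transfer sizes suggested above
  mount -t nfs -o nfsv4,tcp,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/jailtest
  # confirm which sizes were actually negotiated
  nfsstat -m

Once a size that behaves well on this network has been found, the same options belong in the corresponding fstab entry.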
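The server-side items listed above can be checked with a few standard commands. This is only a sketch, assuming a pool named tank, an exported dataset tank/jails and a spare SSD ada1 (all three names are placeholders), and remember that sync=disabled carries the data-loss risk already pointed out:

  # turn off the DRC for TCP mounts
  sysctl vfs.nfsd.cachetcp=0
  # verify 4K alignment of the vdevs (look for ashift: 12)
  zdb -C tank | grep ashift
  # watch pool occupancy, staying under roughly 70-80 percent
  zpool list tank
  # trade durability for write latency on the exported dataset
  zfs set sync=disabled tank/jails
  # or, preferably, give the pool a separate log device (ZIL) on a fast SSD
  zpool add tank log ada1

The sync setting and the log device are alternatives aimed at the same write-latency problem; the alignment and occupancy checks are read-only and safe to run at any time.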
I can give you more informations if= =0A>>>>>>>>> you=0A>>>>>>>>> need.=0A>>>>>>>>> =0A>>>>>>>>> Thanks in adv= ance.=0A>>>>>>>>> Regards,=0A>>>>>>>>> =0A>>>>>>>>> Lo=C3=AFc Blot,=0A>>>= >>>>>> UNIX Systems, Network and Security Engineer=0A>>>>>>>>> http://www= .unix-experience.fr=0A>>>>>>>>> _________________________________________= ______=0A>>>>>>>>> freebsd-fs@freebsd.org mailing list=0A>>>>>>>>> http:/= /lists.freebsd.org/mailman/listinfo/freebsd-fs=0A>>>>>>>>> To unsubscribe= , send any mail to=0A>>>>>>>>> "freebsd-fs-unsubscribe@freebsd.org"=0A>>>= >>>> =0A>>>>>>> _______________________________________________=0A>>>>>>>= freebsd-fs@freebsd.org mailing list=0A>>>>>>> http://lists.freebsd.org/m= ailman/listinfo/freebsd-fs=0A>>>>>>> To unsubscribe, send any mail to=0A>= >>>>>> "freebsd-fs-unsubscribe@freebsd.org"=0A>>>>>>> =0A>>>>>>> ________= _______________________________________=0A>>>>>>> freebsd-fs@freebsd.org = mailing list=0A>>>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-= fs=0A>>>>>>> To unsubscribe, send any mail to=0A>>>>>>> "freebsd-fs-unsub= scribe@freebsd.org"=0A>>>>>>> =0A>>>>>>> ________________________________= _______________=0A>>>>>>> freebsd-fs@freebsd.org mailing list=0A>>>>>>> h= ttp://lists.freebsd.org/mailman/listinfo/freebsd-fs=0A>>>>>>> To unsubscr= ibe, send any mail to=0A>>>>>>> "freebsd-fs-unsubscribe@freebsd.org"=0A>>= >>>>> _______________________________________________=0A>>>>>>> freebsd-f= s@freebsd.org mailing list=0A>>>>>>> http://lists.freebsd.org/mailman/lis= tinfo/freebsd-fs=0A>>>>>>> To unsubscribe, send any mail to=0A>>>>>>> "fr= eebsd-fs-unsubscribe@freebsd.org"=0A>>>> From owner-freebsd-fs@FreeBSD.ORG Mon Dec 22 08:57:51 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CC406AD7 for ; Mon, 22 Dec 2014 08:57:51 +0000 (UTC) Received: from smtp.unix-experience.fr (195-154-176-227.rev.poneytelecom.eu [195.154.176.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5260838D3 for ; Mon, 22 Dec 2014 08:57:50 +0000 (UTC) Received: from smtp.unix-experience.fr (unknown [192.168.200.21]) by smtp.unix-experience.fr (Postfix) with ESMTP id EE701267F6; Mon, 22 Dec 2014 08:57:47 +0000 (UTC) X-Virus-Scanned: scanned by unix-experience.fr Received: from smtp.unix-experience.fr ([192.168.200.21]) by smtp.unix-experience.fr (smtp.unix-experience.fr [192.168.200.21]) (amavisd-new, port 10024) with ESMTP id eA7kbqvI9l0p; Mon, 22 Dec 2014 08:57:41 +0000 (UTC) Received: from mail.unix-experience.fr (repo.unix-experience.fr [192.168.200.30]) by smtp.unix-experience.fr (Postfix) with ESMTPSA id 3DC8E267E5; Mon, 22 Dec 2014 08:57:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=unix-experience.fr; s=uxselect; t=1419238661; bh=tkI8y4kGJuVjfJhuS3RM5VSnZTpY3xxmdi2uTs+TmD0=; h=Date:From:Subject:To:Cc:In-Reply-To:References; b=N8wxW3w7RY6YP5xd1NGJatqnJVQf/AM8cniFDNI8QdiotgEPy71TyHhAXNR1kFQJM h6L8GTjlkfN9lMszgAlDnwz1q0NvNBgXXMH4R0tLWxRGKh/+SlHvLKNb5S0zQXRbSo WDya1gg37JU+MoSOiHT3jqBvFZ7Gv0dt4LQkmzq8= Mime-Version: 1.0 Date: Mon, 22 Dec 2014 08:57:40 +0000 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-ID: <811d455b0bcaeb43711e8108c96d4f2b@mail.unix-experience.fr> X-Mailer: RainLoop/1.7.0.203 
From: "=?utf-8?B?TG/Dr2MgQmxvdA==?=" Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4 To: "Rick Macklem" In-Reply-To: <9fcfcbfe720a9b56a995cd6e227b8f9f@mail.unix-experience.fr> References: <9fcfcbfe720a9b56a995cd6e227b8f9f@mail.unix-experience.fr> <2087358136.248078.1419122007097.JavaMail.root@uoguelph.ca> Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 22 Dec 2014 08:57:51 -0000 Hi,=0A=0ATo clarify because of our exchanges. Here are the current sysctl= options for server:=0A=0Avfs.nfsd.enable_nobodycheck=3D0=0Avfs.nfsd.enab= le_nogroupcheck=3D0=0A=0Avfs.nfsd.maxthreads=3D200=0Avfs.nfsd.tcphighwate= r=3D10000=0Avfs.nfsd.tcpcachetimeo=3D300=0Avfs.nfsd.server_min_nfsvers=3D= 4=0A=0Akern.maxvnodes=3D10000000=0Akern.ipc.maxsockbuf=3D4194304=0Anet.in= et.tcp.sendbuf_max=3D4194304=0Anet.inet.tcp.recvbuf_max=3D4194304=0A=0Avf= s.lookup_shared=3D0=0A=0ARegards,=0A=0ALo=C3=AFc Blot,=0AUNIX Systems, Ne= twork and Security Engineer=0Ahttp://www.unix-experience.fr=0A=0A22 d=C3= =A9cembre 2014 09:42 "Lo=C3=AFc Blot" a = =C3=A9crit: =0A=0AHi Rick,=0Amy 5 jails runs this weekend and now i have = some stats on this monday.=0A=0AHopefully deadlock was fixed, yeah, but e= verything isn't good :(=0A=0AOn NFSv4 server (FreeBSD 10.1) system uses 3= 5% CPU=0A=0AAs i can see this is because of nfsd:=0A=0A918 root = 96 20 0 12352K 3372K rpcsvc 6 51.4H 273.68% nfsd: server (nfsd)= =0A=0AIf i look at dmesg i see:=0Anfsd server cache flooded, try increasi= ng vfs.nfsd.tcphighwater=0A=0Avfs.nfsd.tcphighwater was set to 10000, i i= ncrease it to 15000=0A=0AHere is 'nfsstat -s' output:=0A=0AServer Info:= =0AGetattr Setattr Lookup Readlink Read Write Create = Remove=0A12600652 1812 2501097 156 1386423 1983729 = 123 162067=0ARename Link Symlink Mkdir Rmdir Readdi= r RdirPlus Access=0A36762 9 0 0 0 = 3147 0 623524=0AMknod Fsstat Fsinfo PathConf Commi= t=0A0 0 0 0 328117=0AServer Ret-Failed=0A0=0AS= erver Faults=0A0=0AServer Cache Stats:=0AInprog Idem Non-idem Mi= sses=0A0 0 0 12635512=0AServer Write Gathering:=0AWriteO= ps WriteRPC Opsaved=0A1983729 1983729 0=0A=0AAnd here is 'pr= ocstat -kk' for nfsd (server)=0A=0A918 100528 nfsd nfsd: mast= er mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10= _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de nfsrvd_= nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c amd64_syscall+0x351 Xfast_sy= scall+0xfb =0A918 100568 nfsd nfsd: service mi_switch+0xe1= sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A918 100569 nfsd nfsd: service mi_switch+0xe1 sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0= x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 10057= 0 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100571 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a fork_trampoline+0xe =0A918 100572 nfsd nfsd:= service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf = _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+= 0x9a fork_trampoline+0xe =0A918 100573 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A918 100574 nfsd nfsd: service mi_switch+0x= e1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe= =0A918 100575 nfsd nfsd: service mi_switch+0xe1 sleepq_ca= tch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+= 0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 1005= 76 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+= 0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_th= read_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100577 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_= wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0= xb fork_exit+0x9a fork_trampoline+0xe =0A918 100578 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A918 100579 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_si= g+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_t= rampoline+0xe =0A918 100580 nfsd nfsd: service mi_switch+0= xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_= run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0x= e =0A918 100581 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100= 582 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals= +0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_t= hread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100583 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq= _wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100584 nfsd nfs= d: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0x= f _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exi= t+0x9a fork_trampoline+0xe =0A918 100585 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_s= ig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_= trampoline+0xe =0A918 100586 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A918 100587 nfsd nfsd: service mi_switch+0xe1 sleepq_= catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_interna= l+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 10= 0588 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signal= s+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_= thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100589 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleep= q_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start= +0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100590 nfsd nf= sd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0= xf _cv_wait_sig+0x16a svc_run_internal+0x87e 
svc_thread_start+0xb fork_ex= it+0x9a fork_trampoline+0xe =0A918 100591 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A918 100592 nfsd nfsd: service mi_switch= +0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a sv= c_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+= 0xe =0A918 100593 nfsd nfsd: service mi_switch+0xe1 sleepq= _catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 1= 00594 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100595 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab slee= pq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_star= t+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100596 nfsd n= fsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a fork_trampoline+0xe =0A918 100597 nfsd nfsd: service= mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait= _sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a for= k_trampoline+0xe =0A918 100598 nfsd nfsd: service mi_switc= h+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a s= vc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline= +0xe =0A918 100599 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 = 100600 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_sign= als+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e sv= c_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100601 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sle= epq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_sta= rt+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100602 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig= +0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a fork_trampoline+0xe =0A918 100603 nfsd nfsd: servic= e mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wai= t_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fo= rk_trampoline+0xe =0A918 100604 nfsd nfsd: service mi_swit= ch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a = svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampolin= e+0xe =0A918 100605 nfsd nfsd: service mi_switch+0xe1 slee= pq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inte= rnal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918= 100606 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_sig= nals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100607 nfsd= nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sl= eepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100608 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_si= g+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e 
svc_thread_start+0xb fork= _exit+0x9a fork_trampoline+0xe =0A918 100609 nfsd nfsd: servi= ce mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wa= it_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a f= ork_trampoline+0xe =0A918 100610 nfsd nfsd: service mi_swi= tch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoli= ne+0xe =0A918 100611 nfsd nfsd: service mi_switch+0xe1 sle= epq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_int= ernal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A91= 8 100612 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_si= gnals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100613 nfs= d nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab s= leepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_s= tart+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100614 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_s= ig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb for= k_exit+0x9a fork_trampoline+0xe =0A918 100615 nfsd nfsd: serv= ice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_w= ait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a = fork_trampoline+0xe =0A918 100616 nfsd nfsd: service mi_sw= itch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16= a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampol= ine+0xe =0A918 100617 nfsd nfsd: service mi_switch+0xe1 sl= eepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A9= 18 100618 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e= svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100619 nf= sd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab = sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100620 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_= sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a fork_trampoline+0xe =0A918 100621 nfsd nfsd: ser= vice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_= wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= fork_trampoline+0xe =0A918 100622 nfsd nfsd: service mi_s= witch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x1= 6a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampo= line+0xe =0A918 100623 nfsd nfsd: service mi_switch+0xe1 s= leepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_i= nternal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A= 918 100624 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_= signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87= e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100625 n= fsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab= sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100626 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait= _sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e 
svc_thread_start+0xb f= ork_exit+0x9a fork_trampoline+0xe =0A918 100627 nfsd nfsd: se= rvice mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv= _wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9= a fork_trampoline+0xe =0A918 100628 nfsd nfsd: service mi_= switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x= 16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tramp= oline+0xe =0A918 100629 nfsd nfsd: service mi_switch+0xe1 = sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_= internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe = =0A918 100630 nfsd nfsd: service mi_switch+0xe1 sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0= x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 10063= 1 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+0= xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100632 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_w= ait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0x= b fork_exit+0x9a fork_trampoline+0xe =0A918 100633 nfsd nfsd:= service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf = _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a fork_trampoline+0xe =0A918 100634 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig= +0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_tr= ampoline+0xe =0A918 100635 nfsd nfsd: service mi_switch+0x= e1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe= =0A918 100636 nfsd nfsd: service mi_switch+0xe1 sleepq_ca= tch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+= 0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 1006= 37 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals+= 0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_th= read_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100638 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_= wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0= xb fork_exit+0x9a fork_trampoline+0xe =0A918 100639 nfsd nfsd= : service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf= _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a fork_trampoline+0xe =0A918 100640 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_si= g+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_t= rampoline+0xe =0A918 100641 nfsd nfsd: service mi_switch+0= xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_= run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0x= e =0A918 100642 nfsd nfsd: service mi_switch+0xe1 sleepq_c= atch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100= 643 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signals= +0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_t= hread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100644 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq= _wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e 
svc_thread_start+= 0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100645 nfsd nfs= d: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0x= f _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exi= t+0x9a fork_trampoline+0xe =0A918 100646 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_s= ig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_= trampoline+0xe =0A918 100647 nfsd nfsd: service mi_switch+= 0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe =0A918 100648 nfsd nfsd: service mi_switch+0xe1 sleepq_= catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_interna= l+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 10= 0649 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signal= s+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_= thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100650 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleep= q_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start= +0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100651 nfsd nf= sd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0= xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_ex= it+0x9a fork_trampoline+0xe =0A918 100652 nfsd nfsd: service = mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork= _trampoline+0xe =0A918 100653 nfsd nfsd: service mi_switch= +0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a sv= c_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+= 0xe =0A918 100654 nfsd nfsd: service mi_switch+0xe1 sleepq= _catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 1= 00655 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_signa= ls+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100656 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab slee= pq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_star= t+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100657 nfsd n= fsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+= 0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a fork_trampoline+0xe =0A918 100658 nfsd nfsd: service= mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait= _sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a for= k_trampoline+0xe =0A918 100659 nfsd nfsd: service mi_switc= h+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a s= vc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline= +0xe =0A918 100660 nfsd nfsd: service mi_switch+0xe1 sleep= q_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_inter= nal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 = 100661 nfsd nfsd: service mi_switch+0xe1 sleepq_catch_sign= als+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e sv= c_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe =0A918 100662 nfsd = nfsd: service mi_switch+0xe1 sleepq_catch_signals+0xab sle= epq_wait_sig+0xf _cv_wait_sig+0x16a 
svc_run_internal+0x87e svc_thread_sta= rt+0xb fork_exit+0x9a fork_trampoline+0xe=0A---=0A=0ANow if we look at cl= ient (FreeBSD 9.3)=0A=0AWe see system was very busy and do many and many = interrupts=0A=0ACPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrup= t, 11.0% idle=0A=0AA look at process list shows that there are many sendm= ail process in state nfstry=0A=0Anfstry 18 32:27 0.88% sendmail: Queue= runner@00:30:00 for /var/spool/clientm=0A=0AHere is 'nfsstat -c' output:= =0A=0AClient Info:=0ARpc Counts:=0AGetattr Setattr Lookup Readlink = Read Write Create Remove=0A1051347 1724 2494481 = 118 903902 1901285 162676 161899=0ARename Link Symli= nk Mkdir Rmdir Readdir RdirPlus Access=0A36744 2 = 0 114 40 3131 0 544136=0AMknod Fsst= at Fsinfo PathConf Commit=0A9 0 0 0 245= 821=0ARpc Info:=0ATimedOut Invalid X Replies Retries Requests=0A0 = 0 0 0 8356557=0ACache Info:=0AAttr Hits Misses= Lkup Hits Misses BioR Hits Misses BioW Hits Misses=0A108754455 = 491475 54229224 2437229 46814561 821723 5132123 1871871=0AB= ioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits M= isses=0A144035 118 53736 2753 27813 1 5723883= 9 544205=0A=0AIf you need more things, tell me, i let the PoC in this = state.=0A=0AThanks=0A=0ARegards,=0A=0ALo=C3=AFc Blot,=0AUNIX Systems, Net= work and Security Engineer=0Ahttp://www.unix-experience.fr=0A=0A21 d=C3= =A9cembre 2014 01:33 "Rick Macklem" a =C3=A9crit: = =0A=0A=0ALoic Blot wrote:=0A=0A> Hi Rick,=0A> ok, i don't need locallocks= , i haven't understand option was for that=0A> usage, i removed it.=0A> I= do more tests on monday.=0A> Thanks for the deadlock fix, for other peop= le :)=0A=0AGood. Please let us know if running with vfs.nfsd.enable_local= locks=3D0=0Agets rid of the deadlocks? (I think it fixes the one you saw.= )=0A=0AOn the performance side, you might also want to try different valu= es of=0Areadahead, if the Linux client has such a mount option. (With the= =0ANFSv4-ZFS sequential vs random I/O heuristic, I have no idea what the= =0Aoptimal readahead value would be.)=0A=0AGood luck with it and please l= et us know how it goes, rick=0Aps: I now have a patch to fix the deadlock= when vfs.nfsd.enable_locallocks=3D1=0Ais set. I'll post it for anyone wh= o is interested after I put it=0Athrough some testing.=0A=0A=0A--=0ABest = regards,=0ALo=C3=AFc BLOT,=0AUNIX systems, security and network engineer= =0Ahttp://www.unix-experience.fr=0A=0ALe jeudi 18 d=C3=A9cembre 2014 =C3= =A0 19:46 -0500, Rick Macklem a =C3=A9crit : =0A=0ALoic Blot wrote: =0A> = Hi rick,=0A> i tried to start a LXC container on Debian Squeeze from my= =0A> freebsd=0A> ZFS+NFSv4 server and i also have a deadlock on nfsd=0A> = (vfs.lookup_shared=3D0). Deadlock procs each time i launch a=0A> squeeze= =0A> container, it seems (3 tries, 3 fails).=0A=0AWell, I`ll take a look = at this `procstat -kk`, but the only thing=0AI`ve seen posted w.r.t. avoi= ding deadlocks in ZFS is to not use=0Anullfs. 
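One way to check whether nullfs is involved at all, as suggested above (the per-jail fstab paths are only an example of where such mounts are usually configured):

  # any nullfs mounts currently active?
  mount | grep nullfs
  # any configured to come back at boot or jail start?
  grep nullfs /etc/fstab /etc/fstab.* 2>/dev/null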
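For the next hang, the very long procstat listings in this thread can also be boiled down to a few counts before posting; a sketch using the nfsd PIDs already reported here (918 on the 10.1 server, 921 in the older traces):

  # threads stuck resolving file handles, i.e. waiting on the ZFS vnode lock
  procstat -kk 918 | grep -c zfs_fhtovp
  # threads idle in the RPC service loop (harmless)
  procstat -kk 918 | grep -c _cv_wait_sig
  # threads waiting on the NFSv4 state lock
  procstat -kk 918 | grep -c nfsv4_lock

The counts make it obvious at a glance whether the service threads are piling up behind zfs_fhtovp() or simply sitting idle.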
(I have no idea if you are = using any nullfs mounts, but=0Aif so, try getting rid of them.)=0A=0AHere= `s a high level post about the ZFS and vnode locking problem,=0Abut there= is no patch available, as far as I know.=0A=0Ahttp://docs.FreeBSD.org/cg= i/mid.cgi?54739F41.8030407=0A=0Arick=0A=0A=0A921 - D 0:00.02 nfsd:= server (nfsd)=0A=0AHere is the procstat -kk=0A=0APID TID COMM = TDNAME KSTACK=0A921 100538 nfsd nfsd: master = mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e= =0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Anfsvno_advlock+0x1= 19 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad=0Anfsrvd_locku+0x283 nfsrvd_d= orpc+0xec6 nfssvc_program+0x554=0Asvc_run_internal+0xc77 svc_run+0x1de nf= srvd_nfsd+0x1ca=0Anfssvc_nfsd+0x107 sys_nfssvc+0x9c=0A921 100572 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab slee= pq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_= start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100573 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start= +0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100574 nfsd n= fsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_si= g+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100575 nfsd nfsd: = service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0Afork_trampoline+0xe=0A921 100576 nfsd nfsd: servi= ce mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_c= v_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0Afork_trampoline+0xe=0A921 100577 nfsd nfsd: service = mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wai= t_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0Afork_trampoline+0xe=0A921 100578 nfsd nfsd: service mi_= switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_si= g+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afo= rk_trampoline+0xe=0A921 100579 nfsd nfsd: service mi_switc= h+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x1= 6a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_tr= ampoline+0xe=0A921 100580 nfsd nfsd: service mi_switch+0xe= 1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0A= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampol= ine+0xe=0A921 100581 nfsd nfsd: service mi_switch+0xe1=0As= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0= xe=0A921 100582 nfsd nfsd: service mi_switch+0xe1=0Asleepq= _catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A= 921 100583 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_interna= l+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 1= 00584 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_sig= nals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100585= nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+= 0xab 
sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e sv= c_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100586 nfsd= nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab = sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100587 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleep= q_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_s= tart+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100588 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wai= t_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100589 nfsd nf= sd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig= +0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb f= ork_exit+0x9a=0Afork_trampoline+0xe=0A921 100590 nfsd nfsd: s= ervice mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0Afork_trampoline+0xe=0A921 100591 nfsd nfsd: servi= ce mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_c= v_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0Afork_trampoline+0xe=0A921 100592 nfsd nfsd: service = mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wai= t_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0Afork_trampoline+0xe=0A921 100593 nfsd nfsd: service mi_= switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_si= g+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afo= rk_trampoline+0xe=0A921 100594 nfsd nfsd: service mi_switc= h+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x1= 6a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_tr= ampoline+0xe=0A921 100595 nfsd nfsd: service mi_switch+0xe= 1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0A= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampol= ine+0xe=0A921 100596 nfsd nfsd: service mi_switch+0xe1=0As= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0= xe=0A921 100597 nfsd nfsd: service mi_switch+0xe1=0Asleepq= _catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A= 921 100598 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_interna= l+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 1= 00599 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_sig= nals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100600= nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+= 0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e sv= c_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100601 nfsd= nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab = sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100602 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleep= q_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e 
svc_thread_s= tart+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100603 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wai= t_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100604 nfsd nf= sd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig= +0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb f= ork_exit+0x9a=0Afork_trampoline+0xe=0A921 100605 nfsd nfsd: s= ervice mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0Afork_trampoline+0xe=0A921 100606 nfsd nfsd: servi= ce mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_c= v_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0Afork_trampoline+0xe=0A921 100607 nfsd nfsd: service = mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wai= t_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0Afork_trampoline+0xe=0A921 100608 nfsd nfsd: service mi_= switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_si= g+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afo= rk_trampoline+0xe=0A921 100609 nfsd nfsd: service mi_switc= h+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x1= 6a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_tr= ampoline+0xe=0A921 100610 nfsd nfsd: service mi_switch+0xe= 1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0A= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampol= ine+0xe=0A921 100611 nfsd nfsd: service mi_switch+0xe1=0As= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0= xe=0A921 100612 nfsd nfsd: service mi_switch+0xe1=0Asleepq= _catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A= 921 100613 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_interna= l+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 1= 00614 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_sig= nals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100615= nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+= 0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e sv= c_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100616 nfsd= nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a _sleep+0x= 287 nfsmsleep+0x66 nfsv4_lock+0x9b=0Anfsrv_getlockfile+0x179 nfsrv_lockct= rl+0x21f nfsrvd_lock+0x5b1=0Anfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_= run_internal+0xc77=0Asvc_thread_start+0xb fork_exit+0x9a fork_trampoline+= 0xe=0A921 100617 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_i= nternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe= =0A921 100618 nfsd nfsd: service mi_switch+0xe1=0Asleepq_w= ait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0Anfsrvd_dorpc+0x316= nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb fork_= exit+0x9a fork_trampoline+0xe=0A921 100619 nfsd nfsd: service= mi_switch+0xe1=0Asleepq_catch_signals+0xab 
sleepq_wait_sig+0xf=0A_cv_= wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a=0Afork_trampoline+0xe=0A921 100620 nfsd nfsd: service m= i_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_= sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A= fork_trampoline+0xe=0A921 100621 nfsd nfsd: service mi_swi= tch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0= x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_= trampoline+0xe=0A921 100622 nfsd nfsd: service mi_switch+0= xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a= =0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_tram= poline+0xe=0A921 100623 nfsd nfsd: service mi_switch+0xe1= =0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0As= vc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoli= ne+0xe=0A921 100624 nfsd nfsd: service mi_switch+0xe1=0Asl= eepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_ru= n_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0x= e=0A921 100625 nfsd nfsd: service mi_switch+0xe1=0Asleepq_= catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_int= ernal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A9= 21 100626 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch= _signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal= +0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 10= 0627 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_sign= als+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87= e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100628 = nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc= _thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100629 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab s= leepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thre= ad_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100630 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq= _wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100631 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait= _sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0= xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100632 nfsd nfs= d: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+= 0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fo= rk_exit+0x9a=0Afork_trampoline+0xe=0A921 100633 nfsd nfsd: se= rvice mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0Afork_trampoline+0xe=0A921 100634 nfsd nfsd: servi= ce mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_c= v_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0Afork_trampoline+0xe=0A921 100635 nfsd nfsd: service = mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wai= t_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0Afork_trampoline+0xe=0A921 100636 nfsd nfsd: service mi_= switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_si= g+0x16a=0Asvc_run_internal+0x87e 
svc_thread_start+0xb fork_exit+0x9a=0Afo= rk_trampoline+0xe=0A921 100637 nfsd nfsd: service mi_switc= h+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x1= 6a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_tr= ampoline+0xe=0A921 100638 nfsd nfsd: service mi_switch+0xe= 1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0A= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampol= ine+0xe=0A921 100639 nfsd nfsd: service mi_switch+0xe1=0As= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0= xe=0A921 100640 nfsd nfsd: service mi_switch+0xe1=0Asleepq= _catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A= 921 100641 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_interna= l+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 1= 00642 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_sig= nals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100643= nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+= 0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e sv= c_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100644 nfsd= nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab = sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100645 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleep= q_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_s= tart+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100646 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wai= t_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100647 nfsd nf= sd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig= +0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb f= ork_exit+0x9a=0Afork_trampoline+0xe=0A921 100648 nfsd nfsd: s= ervice mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0Afork_trampoline+0xe=0A921 100649 nfsd nfsd: servi= ce mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_c= v_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0Afork_trampoline+0xe=0A921 100650 nfsd nfsd: service = mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wai= t_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0Afork_trampoline+0xe=0A921 100651 nfsd nfsd: service mi_= switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_si= g+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afo= rk_trampoline+0xe=0A921 100652 nfsd nfsd: service mi_switc= h+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x1= 6a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_tr= ampoline+0xe=0A921 100653 nfsd nfsd: service mi_switch+0xe= 1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0A= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampol= ine+0xe=0A921 100654 
nfsd nfsd: service mi_switch+0xe1=0As= leepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_r= un_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0= xe=0A921 100655 nfsd nfsd: service mi_switch+0xe1=0Asleepq= _catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A= 921 100656 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catc= h_signals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_interna= l+0x87e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 1= 00657 nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_sig= nals+0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100658= nfsd nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+= 0xab sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e sv= c_thread_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100659 nfsd= nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab = sleepq_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thr= ead_start+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100660 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleep= q_wait_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_s= tart+0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100661 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wai= t_sig+0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a=0Afork_trampoline+0xe=0A921 100662 nfsd nf= sd: service mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig= +0xf=0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb f= ork_exit+0x9a=0Afork_trampoline+0xe=0A921 100663 nfsd nfsd: s= ervice mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf= =0A_cv_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0Afork_trampoline+0xe=0A921 100664 nfsd nfsd: servi= ce mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_c= v_wait_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0Afork_trampoline+0xe=0A921 100665 nfsd nfsd: service = mi_switch+0xe1=0Asleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A_cv_wai= t_sig+0x16a=0Asvc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0Afork_trampoline+0xe=0A921 100666 nfsd nfsd: service mi_= switch+0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9= b=0Anfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8=0Anfsrvd_dorpc+0xc76=0A= nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork= _exit+0x9a fork_trampoline+0xe=0A=0ARegards,=0A=0ALo=C3=AFc Blot,=0AUNIX = Systems, Network and Security Engineer=0Ahttp://www.unix-experience.fr=0A= =0A15 d=C3=A9cembre 2014 15:18 "Rick Macklem" a=0A= =C3=A9crit: =0A=0ALoic Blot wrote:=0A=0A> For more informations, here is = procstat -kk on nfsd, if you=0A> need=0A> more=0A> hot datas, tell me.=0A= > =0A> Regards, PID TID COMM TDNAME KSTACK=0A> = 918 100529 nfsd nfsd: master mi_switch+0xe1=0A> sleepq_wa= it+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1= _APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_= fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+= 0xc77 svc_run+0x1de=0A> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x= 9c=0A> amd64_syscall+0x351=0A=0AWell, most of the threads are stuck like = this one, 
waiting for a vnode lock in ZFS. All of them appear to be in zfs_fhtovp().
I'm not a ZFS guy, so I can't help much. I'll try changing the subject line
to include ZFS vnode lock, so maybe the ZFS guys will take a look.

The only thing I've seen suggested is trying:
sysctl vfs.lookup_shared=0
to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't obey the
vnode locking rules for lookup and rename, according to the posting I saw.

I've added a couple of comments about the other threads below, but they
are all either waiting for an RPC request or waiting for the threads stuck
on the ZFS vnode lock to complete.

rick

> 918 100564 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe

Fyi, this thread is just waiting for an RPC to arrive. (Normal)

> 918 100565 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe
> 918 100566 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe
> 918 100567 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe
> 918 100568 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe
> 918 100569 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe
> 918 100570 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
> _cv_wait_sig+0x16a
> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
> fork_trampoline+0xe
> 918 100571 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77
> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100572 nfsd             nfsd: service    mi_switch+0xe1
> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
> nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8
> nfsrvd_dorpc+0xc76
> nfssvc_program+0x554 svc_run_internal+0xc77
> svc_thread_start+0xb
> fork_exit+0x9a fork_trampoline+0xe

This one (and a few others) are waiting for the nfsv4_lock. This happens
because other threads are stuck with RPCs in progress. (ie.
The=0Aones=0Awaiting = on the vnode lock in zfs_fhtovp().)=0AFor these, the RPC needs to lock ou= t other threads to do the=0Aoperation,=0Aso it waits for the nfsv4_lock()= which can exclusively lock the=0ANFSv4=0Adata structures once all other = nfsd threads complete their RPCs=0Ain=0Aprogress.=0A=0A> 918 100573 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a _sleep+0= x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0A> nfsrvd_dorpc+0x316 nfssvc_program= +0x554 svc_run_internal+0xc77=0A> svc_thread_start+0xb fork_exit+0x9a for= k_trampoline+0xe=0A=0ASame as above.=0A=0A> 918 100574 nfsd n= fsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockm= gr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> z= fs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x91= 7=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread_start+0x= b=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100575 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lock= mgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> = zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x9= 17=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread_start+0= xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100576 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __loc= kmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>= zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x= 917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread_start+= 0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100577 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A= > zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0= x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread_start= +0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100578 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __l= ockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43= =0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorp= c+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread_st= art+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100579 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d = __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x4= 3=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dor= pc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread_s= tart+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100580 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d= __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x= 43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_do= rpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread_= start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100581 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15= d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0= x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_d= orpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thread= _start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100582 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x1= 5d 
__lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+= 0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_= dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_threa= d_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100583 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x= 15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock= +0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd= _dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thre= ad_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100584 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+0= x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_loc= k+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_thr= ead_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100585 nfsd = nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk+= 0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lo= ck+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsr= vd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_th= read_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100586 nfsd= nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleeplk= +0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_l= ock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfs= rvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_t= hread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100587 nfs= d nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleepl= k+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_= lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nf= srvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc_= thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100588 nf= sd nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sleep= lk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 n= fsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> svc= _thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100589 n= fsd nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a slee= plk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _v= n_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 = nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> sv= c_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100590 = nfsd nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _= vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8= nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> s= vc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100591= nfsd nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a sl= eeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab = _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc= 8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A> = svc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 10059= 2 nfsd nfsd: service 
mi_switch+0xe1=0A> sleepq_wait+0x3a s= leeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xab= _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0x= c8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A>= svc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 1005= 93 nfsd nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a = sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0xa= b _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+0= xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77=0A= > svc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 100= 594 nfsd nfsd: service mi_switch+0xe1=0A> sleepq_wait+0x3a= sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV+0x= ab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhtovp+= 0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc77= =0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918 = 100595 nfsd nfsd: service mi_switch+0xe1=0A> sleepq_wait+0= x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_APV= +0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fhto= vp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc7= 7=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 918= 100596 nfsd nfsd: service mi_switch+0xe1=0A> sleepq_wait+= 0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_AP= V+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fht= ovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0xc= 77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 91= 8 100597 nfsd nfsd: service mi_switch+0xe1=0A> sleepq_wait= +0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+0x3c VOP_LOCK1_A= PV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc_run_internal+0x= c77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_trampoline+0xe=0A> 9= 18 100598 nfsd=20=20=20=20=20=20=20 nfsd: service mi_switch+0xe1= =0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock+= 0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fht= ovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 svc= _run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_tram= poline+0xe=0A> 918 100599 nfsd nfsd: service mi_switch+0xe= 1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlock= +0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_fh= tovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 sv= c_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_tra= mpoline+0xe=0A> 918 100600 nfsd nfsd: service mi_switch+0x= e1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdloc= k+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_f= htovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 s= vc_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_tr= ampoline+0xe=0A> 918 100601 nfsd nfsd: service mi_switch+0= xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno_= fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554 = svc_run_internal+0xc77=0A> 
svc_thread_start+0xb=0A> fork_exit+0x9a fork_t= rampoline+0xe=0A> 918 100602 nfsd nfsd: service mi_switch+= 0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_stdl= ock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvno= _fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x554= svc_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork_= trampoline+0xe=0A> 918 100603 nfsd nfsd: service mi_switch= +0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_std= lock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsvn= o_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x55= 4 svc_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fork= _trampoline+0xe=0A> 918 100604 nfsd nfsd: service mi_switc= h+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_st= dlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfsv= no_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x5= 54 svc_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a for= k_trampoline+0xe=0A> 918 100605 nfsd nfsd: service mi_swit= ch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_s= tdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nfs= vno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0x= 554 svc_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a fo= rk_trampoline+0xe=0A> 918 100606 nfsd nfsd: service mi_swi= tch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop_= stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> nf= svno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+0= x554 svc_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a f= ork_trampoline+0xe=0A> 918 100607 nfsd nfsd: service mi_sw= itch+0xe1=0A> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A> vop= _stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A> zfs_fhtovp+0x38d=0A> n= fsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A> nfssvc_program+= 0x554 svc_run_internal+0xc77=0A> svc_thread_start+0xb=0A> fork_exit+0x9a = fork_trampoline+0xe=0A=0ALots more waiting for the ZFS vnode lock in zfs_= fhtovp().=0A=0A=0A918 100608 nfsd nfsd: service mi_switch+= 0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0Anfs= rv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1=0Anfsrvd_dorp= c+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0x= b fork_exit+0x9a fork_trampoline+0xe=0A918 100609 nfsd nfsd: = service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args= +0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0= x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_pr= ogram+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9= a fork_trampoline+0xe=0A918 100610 nfsd nfsd: service mi_s= witch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e=0Avop_st= dlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Anfsvno_advlock+0x119 nfsrv_= dolocal+0x84 nfsrv_lockctrl+0x14ad=0Anfsrvd_locku+0x283 nfsrvd_dorpc+0xec= 6 nfssvc_program+0x554=0Asvc_run_internal+0xc77 svc_thread_start+0xb fork= _exit+0x9a=0Afork_trampoline+0xe=0A918 100611 nfsd nfsd: serv= ice mi_switch+0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv= 4_lock+0x9b=0Anfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0x= c77=0Asvc_thread_start+0xb fork_exit+0x9a 
fork_trampoline+0xe=0A918 10061= 2 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a _sl= eep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0Anfsrvd_dorpc+0x316 nfssvc_prog= ram+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb fork_exit+0x9a fo= rk_trampoline+0xe=0A918 100613 nfsd nfsd: service mi_switc= h+0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0An= fsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_threa= d_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A918 100614 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsms= leep+0x66 nfsv4_lock+0x9b=0Anfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_r= un_internal+0xc77=0Asvc_thread_start+0xb fork_exit+0x9a fork_trampoline+0= xe=0A918 100615 nfsd nfsd: service mi_switch+0xe1=0Asleepq= _wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0Anfsrvd_dorpc+0x3= 16 nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb for= k_exit+0x9a fork_trampoline+0xe=0A918 100616 nfsd nfsd: servi= ce mi_switch+0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4= _lock+0x9b=0Anfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc= 77=0Asvc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A918 100617= nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a _sle= ep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0Anfsrvd_dorpc+0x316 nfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb fork_exit+0x9a for= k_trampoline+0xe=0A918 100618 nfsd nfsd: service mi_switch= +0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0Anf= srvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread= _start+0xb fork_exit+0x9a fork_trampoline+0xe=0A918 100619 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __loc= kmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs= _fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0A= nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork= _exit+0x9a fork_trampoline+0xe=0A918 100620 nfsd nfsd: servic= e mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902= =0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d= =0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progra= m+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fo= rk_trampoline+0xe=0A918 100621 nfsd nfsd: service mi_switc= h+0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b=0An= fsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_threa= d_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A918 100622 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azf= s_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100623 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfs= v4_lock+0x9b=0Anfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0= xc77=0Asvc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe=0A918 1006= 24 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sl= eeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _v= n_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsr= vd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread= _start+0xb=0Afork_exit+0x9a 
fork_trampoline+0xe=0A918 100625 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __l= ockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Az= fs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100626 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x= 902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38= d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a f= ork_trampoline+0xe=0A918 100627 nfsd nfsd: service mi_swit= ch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_ru= n_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+= 0xe=0A918 100628 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc7= 7=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 10062= 9 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_= start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 100630 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azf= s_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100631 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x= 902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38= d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a f= ork_trampoline+0xe=0A918 100632 nfsd nfsd: service mi_swit= ch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_ru= n_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+= 0xe=0A918 100633 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc7= 7=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 10063= 4 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_= start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 100635 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lo= 
ckmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azf= s_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100636 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x= 902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38= d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a f= ork_trampoline+0xe=0A918 100637 nfsd nfsd: service mi_swit= ch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_ru= n_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+= 0xe=0A918 100638 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc7= 7=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 10063= 9 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_= start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 100640 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azf= s_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100641 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x= 902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38= d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a f= ork_trampoline+0xe=0A918 100642 nfsd nfsd: service mi_swit= ch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_ru= n_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+= 0xe=0A918 100643 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc7= 7=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 10064= 4 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_= start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 100645 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azf= 
s_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100646 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x= 902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38= d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a f= ork_trampoline+0xe=0A918 100647 nfsd nfsd: service mi_swit= ch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_ru= n_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+= 0xe=0A918 100648 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc7= 7=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 10064= 9 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_= start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 100650 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azf= s_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100651 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x= 902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38= d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a f= ork_trampoline+0xe=0A918 100652 nfsd nfsd: service mi_swit= ch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_ru= n_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+= 0xe=0A918 100653 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc7= 7=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 10065= 4 nfsd nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sle= eplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn= _lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrv= d_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc77=0Asvc_thread_= start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A918 100655 nfsd = nfsd: service mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lo= ckmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azf= s_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917= =0Anfssvc_program+0x554 
svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Af= ork_exit+0x9a fork_trampoline+0xe=0A918 100656 nfsd nfsd: ser= vice mi_switch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x= 902=0Avop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38= d=0Anfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_progr= am+0x554 svc_run_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a f= ork_trampoline+0xe=0A918 100657 nfsd nfsd: service mi_swit= ch+0xe1=0Asleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlo= ck+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhto= vp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_ru= n_internal+0xc77=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+= 0xe=0A918 100658 nfsd nfsd: service mi_switch+0xe1=0Asleep= q_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0Avop_stdlock+0x3c VOP_LOC= K1_APV+0xab _vn_lock+0x43=0Azfs_fhtovp+0x38d=0Anfsvno_fhtovp+0x7c nfsd_fh= tovp+0xc8 nfsrvd_dorpc+0x917=0Anfssvc_program+0x554 svc_run_internal+0xc7= 7=0Asvc_thread_start+0xb=0Afork_exit+0x9a fork_trampoline+0xe=0A=0ALo=C3= =AFc Blot,=0AUNIX Systems, Network and Security Engineer=0Ahttp://www.uni= x-experience.fr=0A=0A15 d=C3=A9cembre 2014 13:29 "Lo=C3=AFc Blot"=0A=0Aa=0A=C3=A9crit: =0A=0AHmmm...=0Anow i'm exper= iencing a deadlock.=0A=0A0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd= : server=0A(nfsd)=0A=0Athe only issue was to reboot the server, but after= rebooting=0Adeadlock arrives a second time when i=0Astart my jails over = NFS.=0A=0ARegards,=0A=0ALo=C3=AFc Blot,=0AUNIX Systems, Network and Secur= ity Engineer=0Ahttp://www.unix-experience.fr=0A=0A15 d=C3=A9cembre 2014 1= 0:07 "Lo=C3=AFc Blot"=0A=0Aa=0A=C3=A9crit:= =0A=0AHi Rick,=0Aafter talking with my N+1, NFSv4 is required on our=0Ain= frastructure.=0AI tried to upgrade NFSv4+ZFS=0Aserver from 9.3 to 10.1, i= hope this will resolve some=0Aissues...=0A=0ARegards,=0A=0ALo=C3=AFc Blo= t,=0AUNIX Systems, Network and Security Engineer=0Ahttp://www.unix-experi= ence.fr=0A=0A10 d=C3=A9cembre 2014 15:36 "Lo=C3=AFc Blot"=0A=0Aa=0A=C3=A9crit:=0A=0AHi Rick,=0Athanks for your sugge= stion.=0AFor my locking bug, rpc.lockd is stucked in rpcrecv state on=0At= he=0Aserver. kill -9 doesn't affect the=0Aprocess, it's blocked.... (Stat= e: Ds)=0A=0Afor the performances=0A=0ANFSv3: 60Mbps=0ANFSv4: 45Mbps=0AReg= ards,=0A=0ALo=C3=AFc Blot,=0AUNIX Systems, Network and Security Engineer= =0Ahttp://www.unix-experience.fr=0A=0A10 d=C3=A9cembre 2014 13:56 "Rick M= acklem" =0Aa=0A=C3=A9crit:=0A=0A=0ALoic Blot wrote:= =0A=0A> Hi Rick,=0A> I'm trying NFSv3.=0A> Some jails are starting very w= ell but now i have an issue=0A> with=0A> lockd=0A> after some minutes:=0A= > =0A> nfs server 10.10.X.8:/jails: lockd not responding=0A> nfs server 1= 0.10.X.8:/jails lockd is alive again=0A> =0A> I look at mbuf, but i seems= there is no problem.=0A=0AWell, if you need locks to be visible across m= ultiple=0Aclients,=0Athen=0AI'm afraid you are stuck with using NFSv4 and= the=0Aperformance=0Ayou=0Aget=0Afrom it. 
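(As a rough way to quantify the NFSv3 vs NFSv4 gap reported above, something
like the sketch below can be run on a FreeBSD client. It is only an
illustration, not from the original mail: the /mnt/v3 and /mnt/v4 mount
points are placeholders, it assumes a test file such as /jails/test.dd
already exists on the export, and the nolockd option is only appropriate
when locks do not need to be visible across clients, as suggested below.)

# Mount the same export once over NFSv3 and once over NFSv4.
mkdir -p /mnt/v3 /mnt/v4
mount -t nfs -o nfsv3,nolockd,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v3
mount -t nfs -o nfsv4,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/v4
# Sequential read of the same file through each mount; a second read of a
# file may be served from the client cache, so use different files or
# remount between runs for a fair comparison.
dd if=/mnt/v3/test.dd of=/dev/null bs=1m
dd if=/mnt/v4/test.dd of=/dev/null bs=1m
umount /mnt/v3 /mnt/v4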
(There is no way to do file han= dle affinity for=0ANFSv4=0Abecause=0Athe read and write ops are buried in= the compound RPC and=0Anot=0Aeasily=0Arecognized.)=0A=0AIf the locks don= 't need to be visible across multiple=0Aclients,=0AI'd=0Asuggest trying t= he "nolockd" option with nfsv3.=0A=0A> Here is my rc.conf on server:=0A> = =0A> nfs_server_enable=3D"YES"=0A> nfsv4_server_enable=3D"YES"=0A> nfsuse= rd_enable=3D"YES"=0A> nfsd_server_flags=3D"-u -t -n 256"=0A> mountd_enabl= e=3D"YES"=0A> mountd_flags=3D"-r"=0A> nfsuserd_flags=3D"-usertimeout 0 -f= orce 20"=0A> rpcbind_enable=3D"YES"=0A> rpc_lockd_enable=3D"YES"=0A> rpc_= statd_enable=3D"YES"=0A> =0A> Here is the client:=0A> =0A> nfsuserd_enabl= e=3D"YES"=0A> nfsuserd_flags=3D"-usertimeout 0 -force 20"=0A> nfscbd_enab= le=3D"YES"=0A> rpc_lockd_enable=3D"YES"=0A> rpc_statd_enable=3D"YES"=0A> = =0A> Have you got an idea ?=0A> =0A> Regards,=0A> =0A> Lo=C3=AFc Blot,=0A= > UNIX Systems, Network and Security Engineer=0A> http://www.unix-experie= nce.fr=0A> =0A> 9 d=C3=A9cembre 2014 04:31 "Rick Macklem" =0A> a=0A> =C3=A9crit: =0A>> Loic Blot wrote:=0A>> =0A>>> Hi rick,= =0A>>> =0A>>> I waited 3 hours (no lag at jail launch) and now I do:=0A>>= > sysrc=0A>>> memcached_flags=3D"-v -m 512"=0A>>> Command was very very s= low...=0A>>> =0A>>> Here is a dd over NFS:=0A>>> =0A>>> 601062912 bytes t= ransferred in 21.060679 secs (28539579=0A>>> bytes/sec)=0A>> =0A>> Can yo= u try the same read using an NFSv3 mount?=0A>> (If it runs much faster, y= ou have probably been bitten by=0A>> the=0A>> ZFS=0A>> "sequential vs ran= dom" read heuristic which I've been told=0A>> things=0A>> NFS is doing "r= andom" reads without file handle affinity.=0A>> File=0A>> handle affinity= is very hard to do for NFSv4, so it isn't=0A>> done.)=0A=0AI was actuall= y suggesting that you try the "dd" over nfsv3=0Ato=0Asee=0Ahow=0Athe perf= ormance compared with nfsv4. 
If you do that, please=0Apost=0Athe=0Acompar= able results.=0A=0ASomeday I would like to try and get ZFS's sequential v= s=0Arandom=0Aread=0Aheuristic modified and any info on what difference in= =0Aperformance=0Athat=0Amight make for NFS would be useful.=0A=0Arick=0A= =0A=0A=0A=0Arick=0A=0A=0AThis is quite slow...=0A=0AYou can found some nf= sstat below (command isn't finished=0Ayet)=0A=0Anfsstat -c -w 1=0A=0AGtAt= tr Lookup Rdlink Read Write Rename Access Rddir=0A0 0 0 0 0 0 0 0=0A4 0 0= 0 0 0 16 0=0A2 0 0 0 0 0 17 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 = 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 4 0 0 0 0 4 0=0A0 0 0 0 0 0 0 0=0A0 0 0= 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A4 0 0 0 0 0 3 0=0A0 0 0 = 0 0 0 3 0=0A37 10 0 8 0 0 14 1=0A18 16 0 4 1 2 4 0=0A78 91 0 82 6 12 30 0= =0A19 18 0 2 2 4 2 0=0A0 0 0 0 2 0 0 0=0A0 0 0 0 0 0 0 0=0AGtAttr Lookup = Rdlink Read Write Rename Access Rddir=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0= =0A0 0 0 0 0 0 0 0=0A0 1 0 0 0 0 1 0=0A4 6 0 0 6 0 3 0=0A2 0 0 0 0 0 0 0= =0A0 0 0 0 0 0 0 0=0A1 0 0 0 0 0 0 0=0A0 0 0 0 1 0 0 0=0A0 0 0 0 0 0 0 0= =0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0= =0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A6 108 0 0 0 0 0 = 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0AGtAttr Lookup Rdlink Read Write R= ename Access Rddir=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0= =0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0= =0A98 54 0 86 11 0 25 0=0A36 24 0 39 25 0 10 1=0A67 8 0 63 63 0 41 0=0A34= 0 0 35 34 0 0 0=0A75 0 0 75 77 0 0 0=0A34 0 0 35 35 0 0 0=0A75 0 0 74 76= 0 0 0=0A33 0 0 34 33 0 0 0=0A0 0 0 0 5 0 0 0=0A0 0 0 0 0 0 6 0=0A11 0 0 = 0 0 0 11 0=0A0 0 0 0 0 0 0 0=0A0 17 0 0 0 0 1 0=0AGtAttr Lookup Rdlink Re= ad Write Rename Access Rddir=0A4 5 0 0 0 0 12 0=0A2 0 0 0 0 0 26 0=0A0 0 = 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0= 0 0 0 0 0=0A0 4 0 0 0 0 4 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 = 0 0 0 0 0=0A4 0 0 0 0 0 2 0=0A2 0 0 0 0 0 24 0=0A0 0 0 0 0 0 0 0=0A0 0 0 = 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0= 0 0 0 0=0A0 0 0 0 0 0 0 0=0AGtAttr Lookup Rdlink Read Write Rename Acces= s Rddir=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A4 0 0 0 0 0 7 0=0A2 1 0 0 0= 0 1 0=0A0 0 0 0 2 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 6 0 0 0=0A0 0 0 0 0 = 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0= 0 0=0A0 0 0 0 0 0 0 0=0A4 6 0 0 0 0 3 0=0A0 0 0 0 0 0 0 0=0A2 0 0 0 0 0 = 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0= 0=0AGtAttr Lookup Rdlink Read Write Rename Access Rddir=0A0 0 0 0 0 0 0 = 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0= =0A4 71 0 0 0 0 0 0=0A0 1 0 0 0 0 0 0=0A2 36 0 0 0 0 1 0=0A0 0 0 0 0 0 0 = 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A1 0 0 0 0 0 1 0= =0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A79 6 0 79 79 0 2 0=0A25 0 0 25 26 = 0 6 0=0A43 18 0 39 46 0 23 0=0A36 0 0 36 36 0 31 0=0A68 1 0 66 68 0 0 0= =0AGtAttr Lookup Rdlink Read Write Rename Access Rddir=0A36 0 0 36 36 0 0= 0=0A48 0 0 48 49 0 0 0=0A20 0 0 20 20 0 0 0=0A0 0 0 0 0 0 0 0=0A3 14 0 1= 0 0 11 0=0A0 0 0 0 0 0 0 0=0A0 0 0 0 0 0 0 0=0A0 4 0 0 0 0 4 0=0A0 0 0 0= 0 0 0 0=0A4 22 0 0 0 0 16 0=0A2 0 0 0 0 0 23 0=0A=0ARegards,=0A=0ALo=C3= =AFc Blot,=0AUNIX Systems, Network and Security Engineer=0Ahttp://www.uni= x-experience.fr=0A=0A8 d=C3=A9cembre 2014 09:36 "Lo=C3=AFc Blot"=0A a=0A=C3=A9crit: =0A> Hi Rick,=0A> I stopped the = jails this week-end and started it 
this
> morning, I'll give you some stats this week.
>
> Here is my nfsstat -m output (with your rsize/wsize tweaks):

nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647

On the server side my disks are behind a RAID controller which exposes a
512b volume, and write performance is quite good
(dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps).

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

5 December 2014 15:14 "Rick Macklem" wrote:

Loic Blot wrote:

Hi,
I'm trying to create a virtualisation environment based on jails. The
jails are stored in a big ZFS pool on a FreeBSD 9.3 server which exports
an NFSv4 volume. This NFSv4 volume was mounted on a big hypervisor
(2 Xeon E5v3 + 128GB memory and 8 network ports, but only 1 was used at
this time).

The problem is simple: my hypervisor runs 6 jails (using roughly 1% CPU,
10GB RAM and less than 1MB/s of bandwidth) and works fine at first, but
the system slows down and after 2-3 days becomes unusable. When I look at
top I see 80-100% system CPU and commands are very, very slow. Many
processes are tagged with nfs_cl*.

To be honest, I would expect the slowness to be because of slow response
from the NFSv4 server, but if you do:
# ps axHl
on a client when it is slow and post that, it would give us some more
information on where the client side processes are sitting.
If you also do something like:
# nfsstat -c -w 1
and let it run for a while, that should show you how many RPCs are
being done and which ones.

# nfsstat -m
will show you what your mount is actually using.
The only mount option I can suggest trying is "rsize=32768,wsize=32768",
since some network environments have difficulties with 64K.

There are a few things you can try on the NFSv4 server side, if it
appears that the clients are generating a large RPC load.
- disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
- If the server is seeing a large write RPC load, then "sync=disabled"
  might help, although it does run a risk of data loss when the server
  crashes.
Then there are a couple of other ZFS related things (I'm not a ZFS guy,
but these have shown up on the mailing lists).
- make sure your volumes are 4K aligned and ashift=12 (in case a drive
  that uses 4K sectors is pretending to be 512byte sectored)
- never run over 70-80% full if write performance is an issue
- use a zil on an SSD with good write performance

The only NFSv4 thing I can tell you is that it is known that ZFS's
algorithm for determining sequential vs random I/O fails for NFSv4
during writing and this can be a performance hit. The only workaround
is to use NFSv3 mounts, since file handle affinity apparently fixes
the problem and this is only done for NFSv3.

rick

I saw
that there are TSO issues with igb then i'm trying = to=0Adisable=0Ait with sysctl but the situation wasn't solved.=0A=0ASomeo= ne has got ideas ? I can give you more informations if=0Ayou=0Aneed.=0A= =0AThanks in advance.=0ARegards,=0A=0ALo=C3=AFc Blot,=0AUNIX Systems, Net= work and Security Engineer=0Ahttp://www.unix-experience.fr=0A____________= ___________________________________=0Afreebsd-fs@freebsd.org mailing list= =0Ahttp://lists.freebsd.org/mailman/listinfo/freebsd-fs=0ATo unsubscribe,= send any mail to=0A"freebsd-fs-unsubscribe@freebsd.org"=0A=0A=0A=0A=0A__= _____________________________________________=0Afreebsd-fs@freebsd.org ma= iling list=0Ahttp://lists.freebsd.org/mailman/listinfo/freebsd-fs=0ATo un= subscribe, send any mail to=0A"freebsd-fs-unsubscribe@freebsd.org"=0A=0A_= ______________________________________________=0Afreebsd-fs@freebsd.org m= ailing list=0Ahttp://lists.freebsd.org/mailman/listinfo/freebsd-fs=0ATo u= nsubscribe, send any mail to=0A"freebsd-fs-unsubscribe@freebsd.org"=0A=0A= _______________________________________________=0Afreebsd-fs@freebsd.org = mailing list=0Ahttp://lists.freebsd.org/mailman/listinfo/freebsd-fs=0ATo = unsubscribe, send any mail to=0A"freebsd-fs-unsubscribe@freebsd.org"=0A__= _____________________________________________=0Afreebsd-fs@freebsd.org ma= iling list=0Ahttp://lists.freebsd.org/mailman/listinfo/freebsd-fs=0ATo un= subscribe, send any mail to=0A"freebsd-fs-unsubscribe@freebsd.org"=0A=0A= =0A=0A=0A=0A=0A=0A=0A=0A=0A=0A=0A=0A=0A__________________________________= _____________=0Afreebsd-fs@freebsd.org mailing list=0Ahttp://lists.freebs= d.org/mailman/listinfo/freebsd-fs=0ATo unsubscribe, send any mail to "fre= ebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Dec 22 23:20:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 73A5BAB4 for ; Mon, 22 Dec 2014 23:20:14 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id EE17964A7C for ; Mon, 22 Dec 2014 23:20:13 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ag0FAP2lmFSDaFve/2dsb2JhbABbg1hYBIMAw0MKhSZKAoEsAQEBAQF9hAwBAQEDAQEBARcBCAQnIAsFFhgCAg0ZAikBCSYGCAIFBAEaAgSIAwgNuhSVfAEBAQEBBQEBAQEBAQEBARmBIY4AAQEbATMHgi07EYEwBYlHiAiDHoMjMII0gjODQIQugzkigX8fgW4gMQEBBYEFOX4BAQE X-IronPort-AV: E=Sophos;i="5.07,627,1413259200"; d="scan'208";a="180375957" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 22 Dec 2014 18:20:03 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 3C95DE7956; Mon, 22 Dec 2014 18:20:03 -0500 (EST) Date: Mon, 22 Dec 2014 18:20:03 -0500 (EST) From: Rick Macklem To: =?utf-8?B?TG/Dr2M=?= Blot Message-ID: <1479765128.1118136.1419290403230.JavaMail.root@uoguelph.ca> In-Reply-To: <811d455b0bcaeb43711e8108c96d4f2b@mail.unix-experience.fr> Subject: Re: ZFS vnode lock deadlock in zfs_fhtovp was: High Kernel Load with nfsv4 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org 
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 22 Dec 2014 23:20:14 -0000

Loic Blot wrote:
> Hi,
>
> To clarify because of our exchanges, here are the current sysctl
> options for the server:
>
> vfs.nfsd.enable_nobodycheck=0
> vfs.nfsd.enable_nogroupcheck=0
>
> vfs.nfsd.maxthreads=200
> vfs.nfsd.tcphighwater=10000
> vfs.nfsd.tcpcachetimeo=300
> vfs.nfsd.server_min_nfsvers=4
>
> kern.maxvnodes=10000000
> kern.ipc.maxsockbuf=4194304
> net.inet.tcp.sendbuf_max=4194304
> net.inet.tcp.recvbuf_max=4194304
>
> vfs.lookup_shared=0
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> 22 December 2014 09:42 "Loïc Blot" wrote:
>
> Hi Rick,
> my 5 jails ran over the weekend and now I have some stats this Monday.
>
> Happily, the deadlock was fixed, but not everything is good :(
>
> On the NFSv4 server (FreeBSD 10.1) the system uses 35% CPU.
>
> As far as I can see this is because of nfsd:
>
> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H
> 273.68% nfsd: server (nfsd)
>
> If I look at dmesg I see:
> nfsd server cache flooded, try increasing vfs.nfsd.tcphighwater
>
Well, you have a couple of choices:
1 - Use NFSv4.1 (add "minorversion=1" to your mount options).
    (NFSv4.1 avoids use of the DRC and instead uses something called
    sessions. See below.)
OR
> vfs.nfsd.tcphighwater was set to 10000, I increased it to 15000
>
2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see
    "nfs server cache flooded" messages. (I think Garrett Wollman uses
    100000.) You may still see quite a bit of CPU overhead.
OR
3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets rid of the
    CPU overhead). However, there is a risk of data corruption if you
    have a client->server network partitioning of a moderate duration,
    because a non-idempotent RPC may get redone, because the client
    times out waiting for a reply. If a non-idempotent RPC gets done
    twice on the server, data corruption can happen.
    (The DRC provides improved correctness, but does add overhead.)

If #1 works for you, it is the preferred solution, since sessions in
NFSv4.1 solve the correctness problem in a good, space-bound way.
A session basically has N (usually 32 or 64) slots and only allows one
outstanding RPC per slot. As such, it can cache the previous reply for
each slot (32 or 64 of them) and guarantee "exactly once" RPC semantics.
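(To make the three choices above concrete, the commands below are a minimal
sketch for this setup, not part of the original mail: the mount point is a
placeholder, the 100000 figure is just the example value mentioned above,
and runtime sysctl settings should also go into /etc/sysctl.conf if they
are meant to survive a reboot.)

# Choice 1: NFSv4.1 sessions, selected on the client mount (preferred).
mount -t nfs -o nfsv4,minorversion=1 10.10.X.8:/jails /mnt
# Choice 2: keep the DRC but raise its high-water mark on the server.
sysctl vfs.nfsd.tcphighwater=100000
# Choice 3: disable the DRC for TCP on the server (lowest CPU overhead,
# weakest correctness guarantee if the network partitions).
sysctl vfs.nfsd.cachetcp=0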
rick > Here is 'nfsstat -s' output: >=20 > Server Info: > Getattr Setattr Lookup Readlink Read Write Create > Remove > 12600652 1812 2501097 156 1386423 1983729 123 > 162067 > Rename Link Symlink Mkdir Rmdir Readdir RdirPlus > Access > 36762 9 0 0 0 3147 0 > 623524 > Mknod Fsstat Fsinfo PathConf Commit > 0 0 0 0 328117 > Server Ret-Failed > 0 > Server Faults > 0 > Server Cache Stats: > Inprog Idem Non-idem Misses > 0 0 0 12635512 > Server Write Gathering: > WriteOps WriteRPC Opsaved > 1983729 1983729 0 >=20 > And here is 'procstat -kk' for nfsd (server) >=20 > 918 100528 nfsd nfsd: master mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10 > _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+0x1de > nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c > amd64_syscall+0x351 Xfast_syscall+0xfb > 918 100568 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100569 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100570 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100571 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100572 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100573 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100574 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100575 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100576 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100577 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100578 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100579 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100580 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100581 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 918 100582 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100583 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100584 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100585 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100586 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100587 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100588 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100589 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100590 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100591 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100592 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100593 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100594 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100595 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100596 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100597 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100598 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100599 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 918 100600 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100601 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100602 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100603 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100604 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100605 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100606 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100607 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100608 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100609 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100610 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100611 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100612 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100613 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100614 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100615 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100616 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100617 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 918 100618 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100619 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100620 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100621 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100622 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100623 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100624 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100625 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100626 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100627 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100628 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100629 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100630 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100631 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100632 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100633 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100634 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100635 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 918 100636 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100637 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100638 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100639 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100640 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100641 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100642 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100643 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100644 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100645 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100646 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100647 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100648 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100649 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100650 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100651 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100652 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100653 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a fork_trampoline+0xe
> 918 100654 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100655 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100656 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100657 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100658 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100659 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100660 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100661 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 918 100662 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> ---
>
> Now, if we look at the client (FreeBSD 9.3):
>
> We see the system was very busy and handling many, many interrupts:
>
> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt, 11.0% idle
>
> A look at the process list shows many sendmail processes stuck in the nfstry state:
>
> nfstry  18  32:27  0.88% sendmail: Queue runner@00:30:00 for /var/spool/clientm
>
> Here is 'nfsstat -c' output:
>
> Client Info:
> Rpc Counts:
>   Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
>   1051347      1724   2494481       118    903902   1901285    162676    161899
>    Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
>     36744         2         0       114        40      3131         0    544136
>     Mknod    Fsstat    Fsinfo  PathConf    Commit
>         9         0         0         0    245821
> Rpc Info:
>  TimedOut   Invalid X Replies   Retries  Requests
>         0         0         0         0   8356557
> Cache Info:
> Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
> 108754455    491475  54229224   2437229  46814561    821723   5132123   1871871
> BioRLHits    Misses BioD Hits    Misses DirE Hits    Misses Accs Hits    Misses
>    144035       118     53736      2753     27813         1  57238839    544205
>
> If you need anything more, tell me; I have left the PoC in this state.
>
> Thanks
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> On 21 December 2014 at 01:33, "Rick Macklem" wrote:
>
> Loic Blot wrote:
>
> > Hi Rick,
> > ok, I don't need locallocks; I hadn't understood what the option was for, so I removed it.
> > I will do more tests on Monday.
> > Thanks for the deadlock fix, for other people :)
>
> Good. Please let us know whether running with vfs.nfsd.enable_locallocks=0
> gets rid of the deadlocks. (I think it fixes the one you saw.)
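For reference, vfs.nfsd.enable_locallocks is a plain read/write sysctl on the FreeBSD server, so it can normally be checked and changed at runtime; making it permanent is a separate step. A minimal sketch (the /etc/sysctl.conf line is only needed if the setting should survive a reboot):

  sysctl vfs.nfsd.enable_locallocks          # show the current value
  sysctl vfs.nfsd.enable_locallocks=0        # stop the NFSv4 server from also taking local advisory locks
  echo 'vfs.nfsd.enable_locallocks=0' >> /etc/sysctl.conf   # persist across reboots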
>
> On the performance side, you might also want to try different values of
> readahead, if the Linux client has such a mount option. (With the
> NFSv4-ZFS sequential vs random I/O heuristic, I have no idea what the
> optimal readahead value would be.)
>
> Good luck with it and please let us know how it goes, rick
> ps: I now have a patch to fix the deadlock when vfs.nfsd.enable_locallocks=1
> is set. I'll post it for anyone who is interested after I put it
> through some testing.
>
> --
> Best regards,
> Loïc BLOT,
> UNIX systems, security and network engineer
> http://www.unix-experience.fr
>
> On Thursday, 18 December 2014 at 19:46 -0500, Rick Macklem wrote:
>
> Loic Blot wrote:
> > Hi Rick,
> > I tried to start an LXC container on Debian Squeeze from my FreeBSD
> > ZFS+NFSv4 server and I also get a deadlock on nfsd (vfs.lookup_shared=0).
> > nfsd deadlocks each time I launch a Squeeze container, it seems (3 tries, 3 fails).
>
> Well, I'll take a look at this `procstat -kk`, but the only thing
> I've seen posted w.r.t. avoiding deadlocks in ZFS is to not use
> nullfs. (I have no idea if you are using any nullfs mounts, but
> if so, try getting rid of them.)
>
> Here's a high-level post about the ZFS and vnode locking problem,
> but there is no patch available, as far as I know.
>
> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
>
> rick
>
> 921  -  D  0:00.02 nfsd: server (nfsd)
>
> Here is the procstat -kk
>
> PID    TID COMM  TDNAME         KSTACK
> 921 100538 nfsd  nfsd: master   mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
> 921 100572 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100573 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100574 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100575 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100576 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100577 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100578 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100579 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> 921 100580 nfsd  nfsd: service
mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100581 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100582 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100583 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100584 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100585 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100586 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100587 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100588 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100589 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100590 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100591 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100592 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100593 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100594 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100595 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100596 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100597 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 
921 100598 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100599 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100600 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100601 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100602 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100603 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100604 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100605 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100606 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100607 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100608 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100609 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100610 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100611 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100612 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100613 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100614 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100615 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100616 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 921 100617 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100618 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 921 100619 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100620 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100621 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100622 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100623 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100624 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100625 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100626 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100627 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100628 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100629 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100630 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100631 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100632 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 
921 100633 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100634 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100635 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100636 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100637 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100638 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100639 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100640 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100641 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100642 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100643 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100644 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100645 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100646 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100647 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100648 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100649 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100650 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a > fork_trampoline+0xe > 921 100651 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100652 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100653 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100654 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100655 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100656 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100657 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100658 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100659 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100660 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100661 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100662 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100663 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100664 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100665 nfsd nfsd: service mi_switch+0xe1 > sleepq_catch_signals+0xab sleepq_wait_sig+0xf > _cv_wait_sig+0x16a > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 921 100666 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 > nfsrvd_dorpc+0xc76 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe >=20 > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 15 d=C3=A9cembre 2014 15:18 "Rick Macklem" a > =C3=A9crit: >=20 > Loic Blot wrote: >=20 > > For more informations, here is 
procstat -kk on nfsd; if you need more hot data, tell me.
>
> > Regards,
>
> > PID    TID COMM  TDNAME         KSTACK
> > 918 100529 nfsd  nfsd: master   mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c amd64_syscall+0x351
>
> Well, most of the threads are stuck like this one, waiting for a vnode
> lock in ZFS. All of them appear to be in zfs_fhtovp().
> I'm not a ZFS guy, so I can't help much. I'll try changing the subject line
> to include ZFS vnode lock, so maybe the ZFS guys will take a look.
>
> The only thing I've seen suggested is trying:
>     sysctl vfs.lookup_shared=0
> to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't
> obey the vnode locking rules for lookup and rename, according to
> the posting I saw.
>
> I've added a couple of comments about the other threads below, but
> they are all either waiting for an RPC request or waiting for the
> threads stuck on the ZFS vnode lock to complete.
>
> rick
>
> > 918 100564 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>
> FYI, this thread is just waiting for an RPC to arrive. (Normal)
>
> > 918 100565 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > 918 100566 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > 918 100567 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > 918 100568 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > 918 100569 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > 918 100570 nfsd  nfsd: service  mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > 918 100571 nfsd  nfsd: service  mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
> > 918 100572 nfsd  nfsd: service  mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8 nfsrvd_dorpc+0xc76 nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>
> This one (and a few others) are waiting for the nfsv4_lock. This happens
> because other threads are stuck with RPCs in progress (i.e. the ones
> waiting on the vnode lock in zfs_fhtovp()).
> For these, the RPC needs to lock out other threads to do the > operation, > so it waits for the nfsv4_lock() which can exclusively lock the > NFSv4 > data structures once all other nfsd threads complete their RPCs > in > progress. >=20 > > 918 100573 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe >=20 > Same as above. >=20 > > 918 100574 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100575 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100576 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100577 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100578 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100579 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100580 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100581 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100582 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > 
> nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100583 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100584 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100585 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100586 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100587 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100588 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100589 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100590 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100591 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100592 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 
svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100593 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100594 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100595 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100596 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100597 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100598 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100599 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100600 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100601 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100602 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > 
svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100603 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100604 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100605 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100606 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe > > 918 100607 nfsd nfsd: service mi_switch+0xe1 > > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > > zfs_fhtovp+0x38d > > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > > nfssvc_program+0x554 svc_run_internal+0xc77 > > svc_thread_start+0xb > > fork_exit+0x9a fork_trampoline+0xe >=20 > Lots more waiting for the ZFS vnode lock in zfs_fhtovp(). 
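A quick way to tally a dump like this is to count the stack lines per wait site; a rough sketch, assuming 918 is the nfsd PID as in this report and using only function names that appear in the stacks above:

  procstat -kk 918 | grep -c zfs_fhtovp     # threads stuck on the ZFS vnode lock
  procstat -kk 918 | grep -c nfsv4_lock     # threads waiting for the NFSv4 state lock
  procstat -kk 918 | grep -c _cv_wait_sig   # idle service threads waiting for an RPC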
>=20 >=20 > 918 100608 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f nfsrvd_lock+0x5b1 > nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100609 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100610 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad > nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554 > svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a > fork_trampoline+0xe > 918 100611 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100612 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100613 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100614 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100615 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100616 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100617 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100618 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100619 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100620 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100621 
nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100622 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100623 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b > nfsrvd_dorpc+0x316 nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe > 918 100624 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100625 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100626 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100627 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100628 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100629 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100630 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100631 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100632 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d 
__lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100633 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100634 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100635 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100636 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100637 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100638 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100639 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100640 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100641 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100642 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > 
svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100643 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100644 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100645 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100646 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100647 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100648 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100649 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100650 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100651 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100652 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100653 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab 
_vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100654 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100655 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100656 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100657 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe > 918 100658 nfsd nfsd: service mi_switch+0xe1 > sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902 > vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 > zfs_fhtovp+0x38d > nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917 > nfssvc_program+0x554 svc_run_internal+0xc77 > svc_thread_start+0xb > fork_exit+0x9a fork_trampoline+0xe >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 15 d=C3=A9cembre 2014 13:29 "Lo=C3=AFc Blot" > > a > =C3=A9crit: >=20 > Hmmm... > now i'm experiencing a deadlock. >=20 > 0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server > (nfsd) >=20 > the only issue was to reboot the server, but after rebooting > deadlock arrives a second time when i > start my jails over NFS. >=20 > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 15 d=C3=A9cembre 2014 10:07 "Lo=C3=AFc Blot" > > a > =C3=A9crit: >=20 > Hi Rick, > after talking with my N+1, NFSv4 is required on our > infrastructure. > I tried to upgrade NFSv4+ZFS > server from 9.3 to 10.1, i hope this will resolve some > issues... >=20 > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 10 d=C3=A9cembre 2014 15:36 "Lo=C3=AFc Blot" > > a > =C3=A9crit: >=20 > Hi Rick, > thanks for your suggestion. > For my locking bug, rpc.lockd is stucked in rpcrecv state on > the > server. kill -9 doesn't affect the > process, it's blocked.... (State: Ds) >=20 > for the performances >=20 > NFSv3: 60Mbps > NFSv4: 45Mbps > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 10 d=C3=A9cembre 2014 13:56 "Rick Macklem" > a > =C3=A9crit: >=20 >=20 > Loic Blot wrote: >=20 > > Hi Rick, > > I'm trying NFSv3. 
> > Some jails are starting very well but now i have an issue with lockd
> > after some minutes:
> >
> > nfs server 10.10.X.8:/jails: lockd not responding
> > nfs server 10.10.X.8:/jails lockd is alive again
> >
> > I look at mbuf, but it seems there is no problem.
>
> Well, if you need locks to be visible across multiple clients, then
> I'm afraid you are stuck with using NFSv4 and the performance you get
> from it. (There is no way to do file handle affinity for NFSv4 because
> the read and write ops are buried in the compound RPC and not easily
> recognized.)
>
> If the locks don't need to be visible across multiple clients, I'd
> suggest trying the "nolockd" option with nfsv3.
>
> > Here is my rc.conf on server:
> >
> > nfs_server_enable="YES"
> > nfsv4_server_enable="YES"
> > nfsuserd_enable="YES"
> > nfsd_server_flags="-u -t -n 256"
> > mountd_enable="YES"
> > mountd_flags="-r"
> > nfsuserd_flags="-usertimeout 0 -force 20"
> > rpcbind_enable="YES"
> > rpc_lockd_enable="YES"
> > rpc_statd_enable="YES"
> >
> > Here is the client:
> >
> > nfsuserd_enable="YES"
> > nfsuserd_flags="-usertimeout 0 -force 20"
> > nfscbd_enable="YES"
> > rpc_lockd_enable="YES"
> > rpc_statd_enable="YES"
> >
> > Have you got an idea ?
> >
> > Regards,
> >
> > Loïc Blot,
> > UNIX Systems, Network and Security Engineer
> > http://www.unix-experience.fr
> >
> > 9 décembre 2014 04:31 "Rick Macklem" a écrit:
> >> Loic Blot wrote:
> >>
> >>> Hi rick,
> >>>
> >>> I waited 3 hours (no lag at jail launch) and now I do:
> >>> sysrc memcached_flags="-v -m 512"
> >>> Command was very very slow...
> >>>
> >>> Here is a dd over NFS:
> >>>
> >>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
> >>
> >> Can you try the same read using an NFSv3 mount?
> >> (If it runs much faster, you have probably been bitten by the ZFS
> >> "sequential vs random" read heuristic which, I've been told, thinks
> >> NFS is doing "random" reads without file handle affinity. File
> >> handle affinity is very hard to do for NFSv4, so it isn't done.)
>
> I was actually suggesting that you try the "dd" over nfsv3 to see how
> the performance compared with nfsv4. If you do that, please post the
> comparable results.
>
> Someday I would like to try and get ZFS's sequential vs random read
> heuristic modified and any info on what difference in performance that
> might make for NFS would be useful.
>
> rick
>
> rick
>
> This is quite slow...
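For readers who want to reproduce the NFSv3-versus-NFSv4 "dd" comparison
suggested above, a minimal sketch follows. The server name, export path,
mount points and test file are placeholders (the rsize/wsize values are the
ones discussed in this thread), so adjust them to the actual environment:

  #!/bin/sh
  # Sketch of the sequential-read comparison Rick asks for above.
  SERVER=nfs-server          # placeholder for the NFS server
  EXPORT=/jails              # placeholder for the exported dataset

  mkdir -p /mnt/nfs3 /mnt/nfs4

  # NFSv3 mount (file handle affinity is done on the server for NFSv3)
  mount -t nfs -o nfsv3,rsize=32768,wsize=32768 ${SERVER}:${EXPORT} /mnt/nfs3

  # NFSv4 mount (no file handle affinity; may trip ZFS's random-read heuristic)
  mount -t nfs -o nfsv4,rsize=32768,wsize=32768 ${SERVER}:${EXPORT} /mnt/nfs4

  # Read the same large file through each mount and compare the reported rates.
  dd if=/mnt/nfs3/test.dd of=/dev/null bs=1m
  dd if=/mnt/nfs4/test.dd of=/dev/null bs=1m

  umount /mnt/nfs3 /mnt/nfs4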
>=20 > You can found some nfsstat below (command isn't finished > yet) >=20 > nfsstat -c -w 1 >=20 > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 0 0 0 0 0 0 0 0 > 4 0 0 0 0 0 16 0 > 2 0 0 0 0 0 17 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 4 0 0 0 0 4 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 4 0 0 0 0 0 3 0 > 0 0 0 0 0 0 3 0 > 37 10 0 8 0 0 14 1 > 18 16 0 4 1 2 4 0 > 78 91 0 82 6 12 30 0 > 19 18 0 2 2 4 2 0 > 0 0 0 0 2 0 0 0 > 0 0 0 0 0 0 0 0 > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 1 0 0 0 0 1 0 > 4 6 0 0 6 0 3 0 > 2 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 1 0 0 0 0 0 0 0 > 0 0 0 0 1 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 6 108 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 98 54 0 86 11 0 25 0 > 36 24 0 39 25 0 10 1 > 67 8 0 63 63 0 41 0 > 34 0 0 35 34 0 0 0 > 75 0 0 75 77 0 0 0 > 34 0 0 35 35 0 0 0 > 75 0 0 74 76 0 0 0 > 33 0 0 34 33 0 0 0 > 0 0 0 0 5 0 0 0 > 0 0 0 0 0 0 6 0 > 11 0 0 0 0 0 11 0 > 0 0 0 0 0 0 0 0 > 0 17 0 0 0 0 1 0 > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 4 5 0 0 0 0 12 0 > 2 0 0 0 0 0 26 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 4 0 0 0 0 4 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 4 0 0 0 0 0 2 0 > 2 0 0 0 0 0 24 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 4 0 0 0 0 0 7 0 > 2 1 0 0 0 0 1 0 > 0 0 0 0 2 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 6 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 4 6 0 0 0 0 3 0 > 0 0 0 0 0 0 0 0 > 2 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 4 71 0 0 0 0 0 0 > 0 1 0 0 0 0 0 0 > 2 36 0 0 0 0 1 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 1 0 0 0 0 0 1 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 79 6 0 79 79 0 2 0 > 25 0 0 25 26 0 6 0 > 43 18 0 39 46 0 23 0 > 36 0 0 36 36 0 31 0 > 68 1 0 66 68 0 0 0 > GtAttr Lookup Rdlink Read Write Rename Access Rddir > 36 0 0 36 36 0 0 0 > 48 0 0 48 49 0 0 0 > 20 0 0 20 20 0 0 0 > 0 0 0 0 0 0 0 0 > 3 14 0 1 0 0 11 0 > 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 > 0 4 0 0 0 0 4 0 > 0 0 0 0 0 0 0 0 > 4 22 0 0 0 0 16 0 > 2 0 0 0 0 0 23 0 >=20 > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr >=20 > 8 d=C3=A9cembre 2014 09:36 "Lo=C3=AFc Blot" > a > =C3=A9crit: > > Hi Rick, > > I stopped the jails this week-end and started it this > > morning, > > i'll > > give you some stats this week. 
> >
> > Here is my nfsstat -m output (with your rsize/wsize tweaks)
> >
> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
>
> On server side my disks are on a raid controller which show a 512b
> volume and write performances are very honest
> (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps)
>
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
>
> 5 décembre 2014 15:14 "Rick Macklem" a écrit:
>
> Loic Blot wrote:
>
> Hi,
> i'm trying to create a virtualisation environment based on jails.
> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 which
> export a NFSv4 volume. This NFSv4 volume was mounted on a big
> hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but only 1 was
> used at this time).
>
> The problem is simple, my hypervisors runs 6 jails (used 1% cpu and
> 10GB RAM approximatively and less than 1MB bandwidth) and works
> fine at start but the system slows down and after 2-3 days become
> unusable. When i look at top command i see 80-100% on system and
> commands are very very slow. Many process are tagged with nfs_cl*.
>
> To be honest, I would expect the slowness to be because of slow response
> from the NFSv4 server, but if you do:
> # ps axHl
> on a client when it is slow and post that, it would give us some more
> information on where the client side processes are sitting.
> If you also do something like:
> # nfsstat -c -w 1
> and let it run for a while, that should show you how many RPCs are
> being done and which ones.
>
> # nfsstat -m
> will show you what your mount is actually using.
> The only mount option I can suggest trying is "rsize=32768,wsize=32768",
> since some network environments have difficulties with 64K.
>
> There are a few things you can try on the NFSv4 server side, if it
> appears that the clients are generating a large RPC load.
> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
> - If the server is seeing a large write RPC load, then "sync=disabled"
>   might help, although it does run a risk of data loss when the server
>   crashes.
> Then there are a couple of other ZFS related things (I'm not a ZFS guy,
> but these have shown up on the mailing lists).
> - make sure your volumes are 4K aligned and ashift=12 (in case a drive
>   that uses 4K sectors is pretending to be 512byte sectored)
> - never run over 70-80% full if write performance is an issue
> - use a zil on an SSD with good write performance
>
> The only NFSv4 thing I can tell you is that it is known that ZFS's
> algorithm for determining sequential vs random I/O fails for NFSv4
> during writing and this can be a performance hit.
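Collected into one hedged sketch, the server-side suggestions quoted above
would look roughly like this on the ZFS/NFS server; the dataset name
tank/jails is an assumption, and sync=disabled carries the data-loss risk
noted above:

  #!/bin/sh
  # Sketch only: server-side knobs mentioned above for a FreeBSD NFS server
  # backed by ZFS. The dataset name tank/jails is an assumption.

  # Disable the NFS duplicate request cache for TCP.
  sysctl vfs.nfsd.cachetcp=0

  # Optional and risky: disable synchronous writes on the exported dataset.
  # Recently written "stable" data can be lost if the server crashes.
  zfs set sync=disabled tank/jails

  # Check vdev alignment; ashift=12 is what 4K-sector drives need.
  zdb | grep ashift

  # Watch pool occupancy; staying under roughly 80% helps write performance.
  zpool list -o name,size,alloc,free,cap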
The only > workaround > is to use NFSv3 mounts, since file handle affinity > apparently > fixes > the problem and this is only done for NFSv3. >=20 > rick >=20 >=20 > I saw that there are TSO issues with igb then i'm trying to > disable > it with sysctl but the situation wasn't solved. >=20 > Someone has got ideas ? I can give you more informations if > you > need. >=20 > Thanks in advance. > Regards, >=20 > Lo=C3=AFc Blot, > UNIX Systems, Network and Security Engineer > http://www.unix-experience.fr > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org" >=20 >=20 >=20 >=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org" >=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org" >=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org" > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org" >=20 >=20 >=20 >=20 >=20 >=20 >=20 >=20 >=20 >=20 >=20 >=20 >=20 >=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >=20 >=20 >=20 >=20 From owner-freebsd-fs@FreeBSD.ORG Tue Dec 23 11:57:41 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 79BB1497 for ; Tue, 23 Dec 2014 11:57:41 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4733412C3 for ; Tue, 23 Dec 2014 11:57:41 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBNBvfi1073163 for ; Tue, 23 Dec 2014 11:57:41 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 193389] [panic] ufs_dirbad: /: bad dir Date: Tue, 23 Dec 2014 11:57:38 +0000 X-Bugzilla-Reason: CC X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: mikej@mikej.com X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-bugs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: 
auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 23 Dec 2014 11:57:41 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193389 --- Comment #5 from mikej@mikej.com --- (In reply to mikej from comment #3) > I am also getting this error under current. > > FreeBSD d620 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r275582: Mon Dec 8 > 02:36:47 UTC 2014 root@grind.freebsd.org:/usr/obj/usr/src/sys/GENERIC > i386 > > panic: ufs_dirbad: /: bad dir ino 8668611 at offset 12288: mangled entry > > http://mail.mikej.com/core.txt.0 > http://mail.mikej.com/info.0 > > http://mail.mikej.com/core.txt.1 > http://mail.mikej.com/info.1 > > http://mail.mikej.com/smartctl-a.ada0 > http://mail.mikej.com/dmesg.d620 > > I am getting DMA errors on the device though, not sure if this is a driver > or disk problem. This is a SSD device, the laptop had been running with a > Seagate Momentus under windows and linux without issue. > > I will swap drives tonight and see if the problem is isolated to the SSD or > not and report back and perform any other suggested tasks for trouble > shooting. > > Now this is so odd I will mention it but I can't fathom why it would matter, > but all my panics have always happened immediately after running "man". So > far no panics while running X, firefox, and a lot of other applications. > > Thanks. I have spent some time with this and isolated my problem to the SSD drive that I was using. I have not determined why the SSD drive had caused an issue, but after installing an WD3200BEVT-00AORTO and letting the laptop churn for days I have not been able to reproduce my original problem. Lastly, find / -xdev -inum 8668611 -print did always produce the fault, the issue was in man page directories (I don't remember the exact path). Consider this resolved for me. --mikej -- You are receiving this mail because: You are on the CC list for the bug. 
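For anyone chasing a similar ufs_dirbad panic, the isolation steps described
in this comment boil down to something like the sketch below; the device name
ada0 matches the attachments above, the inode number is the one from the
panic message, and running the final check from single-user mode is an
assumption about the recovery procedure:

  #!/bin/sh
  # Sketch of the checks described above: see whether the disk itself is
  # failing, then locate the directory that owns the damaged inode.

  # SMART health and error log for the suspect drive
  # (smartctl is from sysutils/smartmontools).
  smartctl -a /dev/ada0

  # Walk the root filesystem only (-xdev) looking for the inode named in the
  # panic message; on the reporter's system this reliably triggered the panic.
  find / -xdev -inum 8668611 -print

  # After swapping drives or restoring from backup, verify the filesystem
  # from single-user mode before trusting it again.
  fsck -y /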
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 23 22:00:28 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 26166AD0 for ; Tue, 23 Dec 2014 22:00:28 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0E17416BD for ; Tue, 23 Dec 2014 22:00:28 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBNM0R0J024019 for ; Tue, 23 Dec 2014 22:00:27 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 193389] [panic] ufs_dirbad: /: bad dir Date: Tue, 23 Dec 2014 22:00:26 +0000 X-Bugzilla-Reason: CC X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: mckusick@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-bugs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 23 Dec 2014 22:00:28 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193389 --- Comment #6 from Kirk McKusick --- Thanks for your followup. I will close this bug. -- You are receiving this mail because: You are on the CC list for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 23 22:03:07 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 43CC9CAD for ; Tue, 23 Dec 2014 22:03:07 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2BD9217B9 for ; Tue, 23 Dec 2014 22:03:07 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sBNM37JR053703 for ; Tue, 23 Dec 2014 22:03:07 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 193389] [panic] ufs_dirbad: /: bad dir Date: Tue, 23 Dec 2014 22:03:06 +0000 X-Bugzilla-Reason: CC X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: mckusick@FreeBSD.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: mckusick@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: resolution assigned_to bug_status Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 23 Dec 2014 22:03:07 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193389 Kirk McKusick changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|--- |Works As Intended Assignee|freebsd-bugs@FreeBSD.org |mckusick@FreeBSD.org Status|New |Closed --- Comment #7 from Kirk McKusick --- Submitter has determined that the panic was caused by hardware problems with disk on his system. -- You are receiving this mail because: You are on the CC list for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 24 15:28:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 74BA317C; Wed, 24 Dec 2014 15:28:52 +0000 (UTC) Received: from mail.ijs.si (mail.ijs.si [IPv6:2001:1470:ff80::25]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 28A2564401; Wed, 24 Dec 2014 15:28:52 +0000 (UTC) Received: from amavis-proxy-ori.ijs.si (localhost [IPv6:::1]) by mail.ijs.si (Postfix) with ESMTP id 3k6yxX6SCjzGp; Wed, 24 Dec 2014 16:28:48 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ijs.si; h= user-agent:message-id:organization:subject:subject:from:from :date:date:content-transfer-encoding:content-type:content-type :mime-version:received:received:received:received; s=jakla4; t= 1419434926; x=1422026927; bh=AdSxaW0Jx29GDq+moR3L+2VCK9OwudIHpaV jiDvsXdU=; b=GalfMDWgY3L+6gyjBQPc+vODdnQSBjKPo2SVJ/lYxAbCL6+8NTf BBL5wrRQDMVlyTAc1w8JQ6yNglrBT9pwPULm0Md/d48azMObv+YDAuRHqolx4vsz Mn3Tustvg3dWXof02VuDUcgwwmUYL/TFCbqpc6CtVYbYifXcuhU+0ZEY= X-Virus-Scanned: amavisd-new at ijs.si Received: from mail.ijs.si ([IPv6:::1]) by amavis-proxy-ori.ijs.si (mail.ijs.si [IPv6:::1]) (amavisd-new, port 10012) with ESMTP id kGI-yGiaAkcO; Wed, 24 Dec 2014 16:28:46 +0100 (CET) Received: from mildred.ijs.si (mailbox.ijs.si [IPv6:2001:1470:ff80::143:1]) by mail.ijs.si (Postfix) with ESMTP; Wed, 24 Dec 2014 16:28:46 +0100 (CET) Received: from neli.ijs.si (neli.ijs.si [IPv6:2001:1470:ff80:88:21c:c0ff:feb1:8c91]) by mildred.ijs.si (Postfix) with ESMTP id 3k6yxV1FWtznq; Wed, 24 Dec 2014 16:28:46 +0100 (CET) Received: from neli.ijs.si ([2001:1470:ff80:88:21c:c0ff:feb1:8c91]) by neli.ijs.si with HTTP (HTTP/1.1 POST); Wed, 24 Dec 2014 16:28:46 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII; format=flowed Content-Transfer-Encoding: 7bit Date: Wed, 24 Dec 2014 16:28:46 +0100 From: Mark Martinec To: stable@FreeBSD.org, freebsd-fs@freebsd.org Subject: zpool upgrade - Assertion failed: ... =?UTF-8?Q?libzfs/common/lib?= =?UTF-8?Q?zfs=5Fconfig=2Ec=2C=20line=20=32=35=30?= Organization: J. Stefan Institute Message-ID: X-Sender: Mark.Martinec+freebsd@ijs.si User-Agent: Roundcube Webmail/1.0.3 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 24 Dec 2014 15:28:52 -0000 Upgraded a fairly recent 10-STABLE to yesterday's version on two hosts (amd64). Upgrade went fine, things work, but checking for ZFS pool upgrades on one host fails, while the other is fine: # zpool upgrade This system supports ZFS pool feature flags. All pools are formatted using feature flags. Some supported features are not enabled on the following pools. Once a feature is enabled the pool may become incompatible with software that does not support the feature. See zpool-features(7) for details. POOL FEATURE --------------- big large_blocks Assertion failed: (nvlist_lookup_nvlist(config, "feature_stats", &features) == 0), file /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_config.c, line 250. 
Abort trap Mark From owner-freebsd-fs@FreeBSD.ORG Wed Dec 24 15:46:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AB5E6820; Wed, 24 Dec 2014 15:46:50 +0000 (UTC) Received: from mail.ijs.si (mail.ijs.si [IPv6:2001:1470:ff80::25]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5D1BB646ED; Wed, 24 Dec 2014 15:46:50 +0000 (UTC) Received: from amavis-proxy-ori.ijs.si (localhost [IPv6:::1]) by mail.ijs.si (Postfix) with ESMTP id 3k6zLJ2dtqzLR; Wed, 24 Dec 2014 16:46:48 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ijs.si; h= user-agent:message-id:references:in-reply-to:organization :subject:subject:from:from:date:date:content-transfer-encoding :content-type:content-type:mime-version:received:received :received:received; s=jakla4; t=1419436005; x=1422028006; bh=/md 6pY/paroNsWmIl7+hIeDJZXAzAH106XOfCsoZK64=; b=aKH6i5jTrrQz5EnRf5+ 9QzSySobrk8DflFKMjp6iZZ+jBSFdWlr+GMWa6xaogiujZ0By7z5ohZfIziuM4l/ MRNcrVK0f0cdizbMquIYNAupgFN5ImdcaEaIszAU0mC7GW/q0FZj8mIZ/rsi0h21 9yZ20rSfL2q1yt6croRaqf0o= X-Virus-Scanned: amavisd-new at ijs.si Received: from mail.ijs.si ([IPv6:::1]) by amavis-proxy-ori.ijs.si (mail.ijs.si [IPv6:::1]) (amavisd-new, port 10012) with ESMTP id l-T0IK2canW5; Wed, 24 Dec 2014 16:46:45 +0100 (CET) Received: from mildred.ijs.si (mailbox.ijs.si [IPv6:2001:1470:ff80::143:1]) by mail.ijs.si (Postfix) with ESMTP; Wed, 24 Dec 2014 16:46:45 +0100 (CET) Received: from neli.ijs.si (neli.ijs.si [IPv6:2001:1470:ff80:88:21c:c0ff:feb1:8c91]) by mildred.ijs.si (Postfix) with ESMTP id 3k6zLF2lPwzqv; Wed, 24 Dec 2014 16:46:45 +0100 (CET) Received: from neli.ijs.si ([2001:1470:ff80:88:21c:c0ff:feb1:8c91]) by neli.ijs.si with HTTP (HTTP/1.1 POST); Wed, 24 Dec 2014 16:46:45 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII; format=flowed Content-Transfer-Encoding: 7bit Date: Wed, 24 Dec 2014 16:46:45 +0100 From: Mark Martinec To: stable@freebsd.org, freebsd-fs@freebsd.org Subject: Re: zpool upgrade - Assertion failed: ... =?UTF-8?Q?libzfs/common?= =?UTF-8?Q?/libzfs=5Fconfig=2Ec=2C=20line=20=32=35=30?= Organization: J. Stefan Institute In-Reply-To: References: Message-ID: <9d15b37204202b4b48a65cc12357612b@mailbox.ijs.si> X-Sender: Mark.Martinec+freebsd@ijs.si User-Agent: Roundcube Webmail/1.0.3 Cc: owner-freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 24 Dec 2014 15:46:50 -0000 P.S. Seems this was due to one external (removable) disk (with its own pool) not being connected to the host. After making it available, the 'zpool upgrade' command completed normally, listing all pools. Mark > Upgraded a fairly recent 10-STABLE to yesterday's version on two hosts > (amd64). Upgrade went fine, things work, but checking for ZFS pool > upgrades on one host fails, while the other is fine: > > # zpool upgrade > This system supports ZFS pool feature flags. > > All pools are formatted using feature flags. > > > Some supported features are not enabled on the following pools. 
Once a > feature is enabled the pool may become incompatible with software > that does not support the feature. See zpool-features(7) for details. > > POOL FEATURE > --------------- > big > large_blocks > Assertion failed: (nvlist_lookup_nvlist(config, "feature_stats", > &features) == 0), file > /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_config.c, > line 250. > Abort trap From owner-freebsd-fs@FreeBSD.ORG Wed Dec 24 17:27:43 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A0EFA260; Wed, 24 Dec 2014 17:27:43 +0000 (UTC) Received: from smtprelay06.ispgateway.de (smtprelay06.ispgateway.de [80.67.31.103]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5E7833FD9; Wed, 24 Dec 2014 17:27:43 +0000 (UTC) Received: from [78.35.134.164] (helo=fabiankeil.de) by smtprelay06.ispgateway.de with esmtpsa (TLSv1.2:AES128-GCM-SHA256:128) (Exim 4.84) (envelope-from ) id 1Y3pig-0005R8-1D; Wed, 24 Dec 2014 18:27:34 +0100 Date: Wed, 24 Dec 2014 18:27:34 +0100 From: Fabian Keil Subject: Re: zpool upgrade - Assertion failed: ... libzfs/common/libzfs_config.c, line 250 Message-ID: <2879070c.21bdfb3c@fabiankeil.de> In-Reply-To: <9d15b37204202b4b48a65cc12357612b@mailbox.ijs.si> References: <9d15b37204202b4b48a65cc12357612b@mailbox.ijs.si> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/qkIFF4WeWe74ptf829rm/E_"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 Cc: freebsd-fs@freebsd.org, stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 24 Dec 2014 17:27:43 -0000 --Sig_/qkIFF4WeWe74ptf829rm/E_ Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Mark Martinec wrote: > P.S. Seems this was due to one external (removable) disk (with its > own pool) not being connected to the host. After making it available, > the 'zpool upgrade' command completed normally, listing all pools. 
That's a known issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=3D182248 Fabian --Sig_/qkIFF4WeWe74ptf829rm/E_ Content-Type: application/pgp-signature Content-Description: OpenPGP digital signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlSa94QACgkQBYqIVf93VJ3gNQCdExZWUwm5nZRXxsAvboJITqlF LSYAn0u1BFQVsrF7AuI5gldKj2Yz30bp =7Bgl -----END PGP SIGNATURE----- --Sig_/qkIFF4WeWe74ptf829rm/E_-- From owner-freebsd-fs@FreeBSD.ORG Wed Dec 24 23:15:06 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 61EA8D1E for ; Wed, 24 Dec 2014 23:15:06 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 265FE313B for ; Wed, 24 Dec 2014 23:15:05 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AtkEAExIm1SDaFve/2dsb2JhbABcg1hYBIMAw1SFcYEoAQEBAQF9hDaBCwINGQJfE4gsDaROj0SVSwEBAQEGAQEBAQEBGASBIY4dgyOBQQWJS4gJhkGNCoM5IoQMIDEBgUR+AQEB X-IronPort-AV: E=Sophos;i="5.07,639,1413259200"; d="scan'208";a="180869386" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 24 Dec 2014 18:14:59 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 89FC1E7956 for ; Wed, 24 Dec 2014 18:14:58 -0500 (EST) Date: Wed, 24 Dec 2014 18:14:58 -0500 (EST) From: Rick Macklem To: FreeBSD Filesystems Message-ID: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> Subject: RFC: new NFS mount option to work around Solaris server bug MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 24 Dec 2014 23:15:06 -0000 Hi, https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 This bug report discusses a bug in the Solaris NFS server that is tickled by the way FreeBSD currently does exclusive open in the NFS client. FreeBSD sets both the mtime to the time of the client and also the file's mode using a Setattr RPC done after an exclusive create of the file in an exclusive open. Jason (jnaughto@ee.ryerson.ca) was able to figure out that the server bug can be avoided if the mtime is set to the server's time (xxx_TOSERVER option in the Setattr RPC request). I'd like to propose a new mount option that forces the FreeBSD client to use xxx_TOSERVER for setting times, mostly to be used as a work around for this Solaris server bug. 1 - Does doing this make sense to others? 2 - If the answer to one is "Yes", then what do you think the option should be called? useservertime useservtm ust OR ?? 
Thanks in advance for your comments, rick From owner-freebsd-fs@FreeBSD.ORG Wed Dec 24 23:34:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 36FD4FEA for ; Wed, 24 Dec 2014 23:34:22 +0000 (UTC) Received: from mail-yk0-x22b.google.com (mail-yk0-x22b.google.com [IPv6:2607:f8b0:4002:c07::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E587F3447 for ; Wed, 24 Dec 2014 23:34:21 +0000 (UTC) Received: by mail-yk0-f171.google.com with SMTP id 142so4193530ykq.16 for ; Wed, 24 Dec 2014 15:34:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=p/izNLnULJMeCB/4Q9/Wej9zP3bvqnqImYuuYgY4RZA=; b=utvYoc5tenPyPpBWXlwm8hxTH76s7cWnbYVC4ajqD3eekCuh2TfK262FSJzhkkOPck s60Pzy242aIvhdRzXlwW070mtmjWLB4JsOGKrrgwVn8+hIloSpYl1QOEf9ai4qC5lCZi 5bi5obVM52mbEtem8l0lB/DVWemZS6u3xwW1Gp0rzanKiVNB35rxI5eAN0CTJPGoSmFb /RL1gtWh9YK6H4nNDMPp+f7kBQTavBdnOe6CqU/4OtIf/FCr72FmE2DcN9yjtaHSaOaH kesPaSI+T6H+0EDj54zCm2rum6JEq7MaQs7OS5AlumO3nkZWQlayQ9By8YLT7gkizcIv 8gmw== MIME-Version: 1.0 X-Received: by 10.236.43.239 with SMTP id l75mr11341076yhb.28.1419464061167; Wed, 24 Dec 2014 15:34:21 -0800 (PST) Received: by 10.170.90.131 with HTTP; Wed, 24 Dec 2014 15:34:21 -0800 (PST) In-Reply-To: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> References: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> Date: Wed, 24 Dec 2014 15:34:21 -0800 Message-ID: Subject: Re: RFC: new NFS mount option to work around Solaris server bug From: Mehmet Erol Sanliturk To: Rick Macklem Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 24 Dec 2014 23:34:22 -0000 On Wed, Dec 24, 2014 at 3:14 PM, Rick Macklem wrote: > Hi, > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 > > This bug report discusses a bug in the Solaris NFS server > that is tickled by the way FreeBSD currently does exclusive > open in the NFS client. > FreeBSD sets both the mtime to the time of the client and > also the file's mode using a Setattr RPC done after an > exclusive create of the file in an exclusive open. > > Jason (jnaughto@ee.ryerson.ca) was able to figure out that > the server bug can be avoided if the mtime is set to the > server's time (xxx_TOSERVER option in the Setattr RPC request). > > I'd like to propose a new mount option that forces the > FreeBSD client to use xxx_TOSERVER for setting times, > mostly to be used as a work around for this Solaris server bug. > 1 - Does doing this make sense to others? > 2 - If the answer to one is "Yes", then what do you think > the option should be called? > useservertime > useservtm > ust > OR ?? > > Thanks in advance for your comments, rick > Many years ago , I had worked for a while in Banyan Network Operating System and Novell Netware . When a client connected to those systems , time in client computer was set to server time . 
In FreeBSD , for NFS clients , using Server time will be very useful for files stored into server . Otherwise , during compilations with make , it is displaying message "There is a time drift between client and server ... " . When there is no NTP usage by clients or NTP may synchronize times from different NTP servers , their times may be different , and these differences may prevent correct work of time based processes on files stored into server . The "use_server_time" is more readable ( mostly clients are connected through fstab , two character more is not a significant burden ) . Thank you very much . Mehmet Erol Sanliturk From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 00:28:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9478D680 for ; Thu, 25 Dec 2014 00:28:15 +0000 (UTC) Received: from mail-yk0-x236.google.com (mail-yk0-x236.google.com [IPv6:2607:f8b0:4002:c07::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4EFB619B6 for ; Thu, 25 Dec 2014 00:28:15 +0000 (UTC) Received: by mail-yk0-f182.google.com with SMTP id 131so4233407ykp.13 for ; Wed, 24 Dec 2014 16:28:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=m/wqQp51y5xNaTWfCzih4oqcMR74GRVWJ85SOtCEnZU=; b=W0HGHODVQuQkYGkJQXXlYsv9YuMT3G4wRMd8i/o3fadctqLPdBVHFJ6mgDVfFw8SN0 WXT3+XicqsoXFlL0BqIdf6NVUKYzJRx6VMHU2gOt1aWgMuosVOX6LW7keZkwrBNX8DY9 DT0T7tL9l0ft6VhZAvTFxVBr2xIMTD2I3wOR+1C7YJuSNlRStcMBzX558c0msW1Htrsg b5vB1IOuu01L7BdbgLJfZLW60+1Wn7ZBdtM1FizGUOwE6WKWFilGig0C7KruI0Gm8MEJ dY1BxRK6eVTKnFyo+q2Swk4HshHcatcfMz424Ybv3PyT3xwFVFR5vcIcX7fDuYcpP//i H7OA== MIME-Version: 1.0 X-Received: by 10.170.98.138 with SMTP id p132mr18818052yka.46.1419467294458; Wed, 24 Dec 2014 16:28:14 -0800 (PST) Received: by 10.170.90.131 with HTTP; Wed, 24 Dec 2014 16:28:14 -0800 (PST) In-Reply-To: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> References: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> Date: Wed, 24 Dec 2014 16:28:14 -0800 Message-ID: Subject: Re: RFC: new NFS mount option to work around Solaris server bug From: Mehmet Erol Sanliturk To: Rick Macklem Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 00:28:15 -0000 On Wed, Dec 24, 2014 at 3:14 PM, Rick Macklem wrote: > Hi, > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 > > This bug report discusses a bug in the Solaris NFS server > that is tickled by the way FreeBSD currently does exclusive > open in the NFS client. > FreeBSD sets both the mtime to the time of the client and > also the file's mode using a Setattr RPC done after an > exclusive create of the file in an exclusive open. > > Jason (jnaughto@ee.ryerson.ca) was able to figure out that > the server bug can be avoided if the mtime is set to the > server's time (xxx_TOSERVER option in the Setattr RPC request). 
> > I'd like to propose a new mount option that forces the > FreeBSD client to use xxx_TOSERVER for setting times, > mostly to be used as a work around for this Solaris server bug. > 1 - Does doing this make sense to others? > 2 - If the answer to one is "Yes", then what do you think > the option should be called? > useservertime > useservtm > ust > OR ?? > > Thanks in advance for your comments, rick > Actually , this parameter should be defined ALSO in NFS server in rc.conf definition , because , to the NFS server , many different client operating systems may be connected . Setting a parameter in only FreeBSD clients will not solve COMMON time usage problem , because any change in FreeBSD client will not be available in other operating system client definitions . In that way , NFS server will set the file time irrespective of client time . If this causes an inconvenience in other operating system clients , they may use an NTP server defined in the NFS server but , unfortunately , it seems that a FreeBSD server is NOT able to be used as an NTP server with respect to FreeBSD Handbook 29.11. Clock Synchronization with NTP where it seems that ntpd is only a CLIENT to an external NTP server . At this point , it seems that there is a necessity to enable FreeBSD to define an NTP server by itself like an NFS server . Thank you very much . Mehmet Erol Sanliturk From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 01:04:04 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5DB96CDB for ; Thu, 25 Dec 2014 01:04:04 +0000 (UTC) Received: from mail-wi0-f182.google.com (mail-wi0-f182.google.com [209.85.212.182]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E81F61F7B for ; Thu, 25 Dec 2014 01:04:03 +0000 (UTC) Received: by mail-wi0-f182.google.com with SMTP id h11so14641030wiw.3 for ; Wed, 24 Dec 2014 17:03:55 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=xL35f6NIOfzJ+ml/+4ydG9mzw5MqbLhOoLAmyAnnpMw=; b=Fu5h937AKubJKfIw0kVxSDWvUKJAyJzsN2VvDbLYhlIcVbdAtIsvFMNm64haC1qtd6 uO7FwPMPD+2PV0bCoaCtrwZku77hA6OfLH3xUUbmVZL8nu3YeW27zecZt8dvZABHwfXN zQyiQLLoRXdzVYdSOMmKFlNqWmooybLFfdgGrJeh12smDEzAZHLpAQTM47HmIWDxxDBM oQMK4ShmCGMYef2LrLJCbvyIqoXZA8CMQRJH+L5lMrcZeviN2NQorG2v+PPJ8CUK2BGp 51JuNqchnY1bvDGZAGoyIsswhFCRGqxuuhRxPpVtDGNamQoESrWNj1COpwPI75V8isTF alGQ== X-Gm-Message-State: ALoCoQnVrELWLTqwhIr9TRHtDEZ3fCpouiE/oQ5br7MdT/3lwyvRlEYYV5GWkt1yCVOb5LbqNxzq X-Received: by 10.180.21.178 with SMTP id w18mr55087551wie.78.1419469435667; Wed, 24 Dec 2014 17:03:55 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. 
[82.69.141.170]) by mx.google.com with ESMTPSA id gu5sm15483633wib.24.2014.12.24.17.03.54 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 24 Dec 2014 17:03:54 -0800 (PST) Message-ID: <549B626D.4010009@multiplay.co.uk> Date: Thu, 25 Dec 2014 01:03:41 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: zpool upgrade - Assertion failed: ... libzfs/common/libzfs_config.c, line 250 References: In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 01:04:04 -0000 This is likely a missing, offline or otherwise unavailable pool. If so its a long standing bug I've been trying to find some time to look at. Regards Steve On 24/12/2014 15:28, Mark Martinec wrote: > Upgraded a fairly recent 10-STABLE to yesterday's version on two hosts > (amd64). Upgrade went fine, things work, but checking for ZFS pool > upgrades on one host fails, while the other is fine: > > > # zpool upgrade > This system supports ZFS pool feature flags. > > All pools are formatted using feature flags. > > > Some supported features are not enabled on the following pools. Once a > feature is enabled the pool may become incompatible with software > that does not support the feature. See zpool-features(7) for details. > > POOL FEATURE > --------------- > big > large_blocks > Assertion failed: (nvlist_lookup_nvlist(config, "feature_stats", > &features) == 0), file > /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_config.c, > line 250. 
> Abort trap > > > Mark > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 02:05:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AE8D07EB for ; Thu, 25 Dec 2014 02:05:38 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 7529F358C for ; Thu, 25 Dec 2014 02:05:38 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AsIEAJ5wm1SDaFve/2dsb2JhbABcg1hYBIMAw1eFcQKBJgEBAQEBfYQMAQEBAwEjVgUWGAICDRkCWQYTiCQIDbJHlUgBAQEBAQEEAQEBAQEBAQEBFQSBIY4dNAeCaIFBBYlLiAmGQY0KgzkihAwgMQGBRH4BAQE X-IronPort-AV: E=Sophos;i="5.07,640,1413259200"; d="scan'208";a="179104863" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 24 Dec 2014 21:05:36 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id CA886E7956; Wed, 24 Dec 2014 21:05:36 -0500 (EST) Date: Wed, 24 Dec 2014 21:05:36 -0500 (EST) From: Rick Macklem To: Mehmet Erol Sanliturk Message-ID: <1808783041.2132657.1419473136814.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: RFC: new NFS mount option to work around Solaris server bug MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 02:05:38 -0000 Mehmet Erol Sanliturk wrote: > > > > > > On Wed, Dec 24, 2014 at 3:14 PM, Rick Macklem < rmacklem@uoguelph.ca > > wrote: > > > Hi, > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 > > This bug report discusses a bug in the Solaris NFS server > that is tickled by the way FreeBSD currently does exclusive > open in the NFS client. > FreeBSD sets both the mtime to the time of the client and > also the file's mode using a Setattr RPC done after an > exclusive create of the file in an exclusive open. > > Jason ( jnaughto@ee.ryerson.ca ) was able to figure out that > the server bug can be avoided if the mtime is set to the > server's time (xxx_TOSERVER option in the Setattr RPC request). > > I'd like to propose a new mount option that forces the > FreeBSD client to use xxx_TOSERVER for setting times, > mostly to be used as a work around for this Solaris server bug. > 1 - Does doing this make sense to others? > 2 - If the answer to one is "Yes", then what do you think > the option should be called? > useservertime > useservtm > ust > OR ?? > > Thanks in advance for your comments, rick > > > > > Actually , this parameter should be defined ALSO in NFS server in > rc.conf definition , because , to the NFS server , many different > client operating systems may be connected . 
Setting a parameter in > only FreeBSD clients will not solve COMMON time usage problem , > because any change in FreeBSD client will not be available in other > operating system client definitions . > > > In that way , NFS server will set the file time irrespective of > client time . > Well, the NFS protocol Setattr (which sets times among other attributes) arguments includes a flag that: - Can be set to xxx_TOCLIENT and provides a time from the client. OR - Can be set to xxx_TOSERVER. The server is expected to do whichever the client specifies. So, I don't think having a flag to override this on the server side would be appropriate, since it would be contrary to the protocol specification. rick > If this causes an inconvenience in other operating system clients , > they may use an NTP server defined in the NFS server but , > unfortunately , it seems that a FreeBSD server is NOT able to be > used as an NTP server with respect to FreeBSD Handbook 29.11. Clock > Synchronization with NTP where it seems that ntpd is only a CLIENT > to an external NTP server . > > > At this point , it seems that there is a necessity to enable FreeBSD > to define an NTP server by itself like an NFS server . > > > > > Thank you very much . > > > Mehmet Erol Sanliturk > > > > > > From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 02:18:55 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0981DFD2 for ; Thu, 25 Dec 2014 02:18:55 +0000 (UTC) Received: from mail-wi0-f178.google.com (mail-wi0-f178.google.com [209.85.212.178]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9C400370E for ; Thu, 25 Dec 2014 02:18:54 +0000 (UTC) Received: by mail-wi0-f178.google.com with SMTP id em10so14656978wid.11 for ; Wed, 24 Dec 2014 18:18:47 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=5FvDLsSFfNrpKIOVHOZEbw/jaF0RH2vuJUsvZ9l0WL4=; b=fOsI7dqsppZ08IGU809Bcx3EQPiYo0z3Mm0eW+HnKYx7Qem9Hy7zaDf6Iyah7/wEbd XQ1gxU1GExBmuj2h0YdgUkiSSIcyouqGQjoTLVI61mvAW8MTtyE4RqbDaGp6hP0A4mY8 dC3oqxJVPDWwv28C6JBygIvyXI3KuPUeV7uqfFU8n1HQFmK9sE78zbctRkDMwL40uNDu zOJdZznuhe2E2qf0VnuVV7eCMy2jDok0pScJHxZvlxYKpXQmOn+xQH+o2v2DGJt8RELM aXO+BA6wWrhDVjm6Xq7YTyIFK3/lpZDF7uZ7CGERkhxPRcGCC7fG1M9Z4TRwE4Fl/1oW 7chQ== X-Gm-Message-State: ALoCoQniaL26NASscM8gOT6soywUwi3R6HvHzgnDR1JmfP/YmFUD2MakWWYhAJFCujF7bkg+8INp X-Received: by 10.194.175.202 with SMTP id cc10mr69512647wjc.27.1419473927592; Wed, 24 Dec 2014 18:18:47 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. [82.69.141.170]) by mx.google.com with ESMTPSA id o16sm139315wjw.7.2014.12.24.18.18.46 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 24 Dec 2014 18:18:46 -0800 (PST) Message-ID: <549B73F9.5080800@multiplay.co.uk> Date: Thu, 25 Dec 2014 02:18:33 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: zpool upgrade - Assertion failed: ... 
libzfs/common/libzfs_config.c, line 250 References: <9d15b37204202b4b48a65cc12357612b@mailbox.ijs.si> <2879070c.21bdfb3c@fabiankeil.de> In-Reply-To: <2879070c.21bdfb3c@fabiankeil.de> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 02:18:55 -0000 On 24/12/2014 17:27, Fabian Keil wrote: > Mark Martinec wrote: > >> P.S. Seems this was due to one external (removable) disk (with its >> own pool) not being connected to the host. After making it available, >> the 'zpool upgrade' command completed normally, listing all pools. > That's a known issue: > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=182248 Thanks for that link Fabian, didn't know there was an existing PR for it. I've just committed a fix for this: https://svnweb.freebsd.org/changeset/base/276194 From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 07:22:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A9B44F92 for ; Thu, 25 Dec 2014 07:22:22 +0000 (UTC) Received: from mail-ie0-x229.google.com (mail-ie0-x229.google.com [IPv6:2607:f8b0:4001:c03::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 6EFB33FD6 for ; Thu, 25 Dec 2014 07:22:22 +0000 (UTC) Received: by mail-ie0-f169.google.com with SMTP id y20so8504727ier.0 for ; Wed, 24 Dec 2014 23:22:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:reply-to:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=e7liYhF/0lRt6jw14zNCrDg5tqHQxnzApGrC8HYGwpo=; b=vK2FLIV6Mh8MzUNAmoNQ4uf7fTx1teg9vlrcmHwa2hmlLILKr258uqPqBz7gYsOLuR NUMmW+fa6YxU7VBn8AUCfsyHGhWghSXJbgp8qIyE/wuQX2tPfZqdtUbo4jCpPL+ZVRno wuIiSWmi9g40bJOiHNIjmik4wCmKKGMJHWQbkohXctPNDdYyXNroCBxQcfGaTWVVl0gK mA3fIlNVOJ4kwAJIcqTdS8FAirDMTp4AvACmd95G2RviADSEub6ZPXkHxPRm/8ccFMoB 8ysxK101EbsC+rCYYKDEYNQxCwB/Uqh44YUhP6YxDRlf//0uDu+Sw+RQCVrk+j1AqC8l HDiw== MIME-Version: 1.0 X-Received: by 10.50.43.169 with SMTP id x9mr1567114igl.28.1419492141785; Wed, 24 Dec 2014 23:22:21 -0800 (PST) Received: by 10.64.17.195 with HTTP; Wed, 24 Dec 2014 23:22:21 -0800 (PST) Reply-To: araujo@FreeBSD.org In-Reply-To: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> References: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> Date: Thu, 25 Dec 2014 15:22:21 +0800 Message-ID: Subject: Re: RFC: new NFS mount option to work around Solaris server bug From: Marcelo Araujo To: Rick Macklem Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 07:22:22 -0000 2014-12-25 7:14 GMT+08:00 Rick Macklem : > > useservertime > This one is more readable. Best Regards, -- -- Marcelo Araujo (__)araujo@FreeBSD.org \\\'',)http://www.FreeBSD.org \/ \ ^ Power To Server. .\. 
/_) From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 10:16:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B1F762FF for ; Thu, 25 Dec 2014 10:16:15 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 254393F11 for ; Thu, 25 Dec 2014 10:16:14 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id sBPAG5Om067525 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 25 Dec 2014 12:16:05 +0200 (EET) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.9.2 kib.kiev.ua sBPAG5Om067525 Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id sBPAG46k067524; Thu, 25 Dec 2014 12:16:04 +0200 (EET) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Thu, 25 Dec 2014 12:16:04 +0200 From: Konstantin Belousov To: Rick Macklem Subject: Re: RFC: new NFS mount option to work around Solaris server bug Message-ID: <20141225101604.GC1754@kib.kiev.ua> References: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <962761800.2101281.1419462898537.JavaMail.root@uoguelph.ca> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 10:16:15 -0000 On Wed, Dec 24, 2014 at 06:14:58PM -0500, Rick Macklem wrote: > Hi, > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 > > This bug report discusses a bug in the Solaris NFS server > that is tickled by the way FreeBSD currently does exclusive > open in the NFS client. > FreeBSD sets both the mtime to the time of the client and > also the file's mode using a Setattr RPC done after an > exclusive create of the file in an exclusive open. > > Jason (jnaughto@ee.ryerson.ca) was able to figure out that > the server bug can be avoided if the mtime is set to the > server's time (xxx_TOSERVER option in the Setattr RPC request). > > I'd like to propose a new mount option that forces the > FreeBSD client to use xxx_TOSERVER for setting times, > mostly to be used as a work around for this Solaris server bug. > 1 - Does doing this make sense to others? > 2 - If the answer to one is "Yes", then what do you think > the option should be called? > useservertime > useservtm > ust > OR ?? What are drawbacks of unconditionally using the workaround ? 
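
For readers following the thread, the flag being discussed is the NFSv3 "time_how" discriminant from RFC 1813. A minimal C-style rendering is below; the on-the-wire XDR union is flattened into a plain struct purely for illustration, and the identifiers used inside the FreeBSD sources differ:

#include <stdint.h>

enum time_how {
        DONT_CHANGE        = 0,
        SET_TO_SERVER_TIME = 1, /* "xxx_TOSERVER": server stamps its own clock */
        SET_TO_CLIENT_TIME = 2  /* "xxx_TOCLIENT": client supplies the timestamp */
};

struct nfstime3 {
        uint32_t seconds;
        uint32_t nseconds;
};

struct set_mtime {              /* set_atime has the same shape */
        enum time_how   set_it;
        struct nfstime3 mtime;  /* only present on the wire for SET_TO_CLIENT_TIME */
};

The proposed mount option would force set_it to SET_TO_SERVER_TIME for the Setattr done after an exclusive create, sidestepping the Solaris server bug that is triggered by SET_TO_CLIENT_TIME.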
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 14:34:12 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1DD6459A; Thu, 25 Dec 2014 14:34:12 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id B8A9664BF7; Thu, 25 Dec 2014 14:34:10 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AsQEAGIfnFSDaFve/2dsb2JhbABcg1hYBIMAw1uFcQKBHQEBAQEBfYQNAQUjBFIbDgoCAg0ZAlkGE4gsDbJYlTEBAQEBAQUBAQEBAQEBARYEgSGNcxoVNAeCaIFBBYlLiAmGQYUYiysihAwgMQGBA0F+AQEB X-IronPort-AV: E=Sophos;i="5.07,643,1413259200"; d="scan'208";a="180978788" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 25 Dec 2014 09:34:01 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id E5A25B3F02; Thu, 25 Dec 2014 09:34:01 -0500 (EST) Date: Thu, 25 Dec 2014 09:34:01 -0500 (EST) From: Rick Macklem To: Konstantin Belousov Message-ID: <364990673.2236008.1419518041925.JavaMail.root@uoguelph.ca> In-Reply-To: <20141225101604.GC1754@kib.kiev.ua> Subject: Re: RFC: new NFS mount option to work around Solaris server bug MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: FreeBSD Filesystems , Christian Corti , Jason Naughton X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 14:34:12 -0000 Kostik wrote: > On Wed, Dec 24, 2014 at 06:14:58PM -0500, Rick Macklem wrote: > > Hi, > > > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193128 > > > > This bug report discusses a bug in the Solaris NFS server > > that is tickled by the way FreeBSD currently does exclusive > > open in the NFS client. > > FreeBSD sets both the mtime to the time of the client and > > also the file's mode using a Setattr RPC done after an > > exclusive create of the file in an exclusive open. > > > > Jason (jnaughto@ee.ryerson.ca) was able to figure out that > > the server bug can be avoided if the mtime is set to the > > server's time (xxx_TOSERVER option in the Setattr RPC request). > > > > I'd like to propose a new mount option that forces the > > FreeBSD client to use xxx_TOSERVER for setting times, > > mostly to be used as a work around for this Solaris server bug. > > 1 - Does doing this make sense to others? > > 2 - If the answer to one is "Yes", then what do you think > > the option should be called? > > useservertime > > useservtm > > ust > > OR ?? > > What are drawbacks of unconditionally using the workaround ? > Interesting question. I took a look at the history of this and the story is kinda interesting (and a little long, I'm afraid). If you don't want to read the story, there is a summary statement at the end, which is the important part. First off, there is this rather weird comment in the NFS client (which has been around for ages and the new client inherited it from the old one): 1664 /* 1665 * We are normally called with only a partially initialized 1666 * VAP. 
Since the NFSv3 spec says that server may use the 1667 * file attributes to store the verifier, the spec requires 1668 * us to do a SETATTR RPC. FreeBSD servers store the verifier 1669 * in atime, but we can't really assume that all servers will 1670 * so we ensure that our SETATTR sets both atime and mtime. 1671 */ Now, I'm not sure why the client did this, although my guess is that it was a workaround for a broken server. If you are interested, here's a clip from RFC-1813 that describes that the creation verifier must be on stable storage. (I would expect servers to do this, so the client shouldn't need to set it, but...) If the file does not exist, the server creates the file and stores the verifier in stable storage. For file systems that do not provide a mechanism for the storage of arbitrary file attributes, the server may use one or more elements of the file metadata to store the verifier. The verifier must be stored in stable storage to prevent erroneous failure on retransmission of the request. The code after the above comment sets both va_atime and va_mtime to the current filesystem timestamp. Now, prior to r245508 in head (r247502 in stable/9 for the old client), the NFS client code did: if (vap->va_mtime.tv_sec != time_second) - use xx_TOCLIENT and specify the time else - use xx_TOSERVER --> So, for FreeBSD9.0 and earlier, the NFS client would normally specify xxx_TOSERVER. (There was probably a very slight chance the clock would tick to the next second and generate xx_TOCLIENT.) r245508 changed this to: if ((vap->va_aflags & VA_NULL_UTIMES) == 0) - use xx_TOCLIENT else - use xx_TOSERVER I think this was to fix setutimes() and deal with higher than 1sec vfs timestamp resolutions. --> As such, more recent NFS clients have been sending xx_TOCLIENT. Summary: Maybe the best fix for this is just to have the NFS client exclusive create set VA_NULL_UTIMES when setting the times, so it will again use xx_TOSERVER for this case? In other words, I think the code should use VA_UTIMES_NULL for setutimes() etc, but maybe it should just always use xx_TOSERVER for the exclusive create, since that is what happened up until around FreeBSD9.0. (This would avoid yet another mount option, yippee.) I'll take a look at the FreeBSD server, to see how it stores the create verifier these days, but I have no idea what all the other NFS servers out there do. 
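
To make the two eras concrete, here is a condensed sketch of the selection described above. This is illustrative pseudo-C, not the actual sys/fs/nfsclient code: the XXX_TOCLIENT/XXX_TOSERVER names are the same placeholders used earlier in this message, the VA_UTIMES_NULL spelling from the summary is assumed, and the struct is trimmed to the two fields the tests look at.

#include <sys/types.h>

enum settime_how { XXX_TOCLIENT, XXX_TOSERVER };  /* placeholder names, as above */

#define VA_UTIMES_NULL_SKETCH   0x01    /* stand-in for the real VA_UTIMES_NULL bit */

struct vattr_sketch {                   /* only the fields the two tests use */
        time_t  va_mtime_sec;
        u_long  va_vaflags;
};

/* FreeBSD 9.0 and earlier (pre-r245508): almost always ends up TOSERVER,
 * because va_mtime was just set to the current timestamp and only a clock
 * tick to the next second makes the comparison differ. */
static enum settime_how
choose_pre_r245508(const struct vattr_sketch *vap, time_t time_second)
{
        return (vap->va_mtime_sec != time_second ? XXX_TOCLIENT : XXX_TOSERVER);
}

/* After r245508: an exclusive create normally sends TOCLIENT, since the
 * caller supplied explicit times and the null-utimes flag is clear. */
static enum settime_how
choose_post_r245508(const struct vattr_sketch *vap)
{
        return ((vap->va_vaflags & VA_UTIMES_NULL_SKETCH) == 0 ?
            XXX_TOCLIENT : XXX_TOSERVER);
}

The fix suggested in the summary amounts to setting VA_UTIMES_NULL for the post-exclusive-create Setattr, which drops that second test back onto the TOSERVER branch without disturbing setutimes().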
Thanks for the good question Kostik, rick From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 14:39:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9BC0C97B; Thu, 25 Dec 2014 14:39:08 +0000 (UTC) Received: from mail-wi0-x22c.google.com (mail-wi0-x22c.google.com [IPv6:2a00:1450:400c:c05::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1F3D164C59; Thu, 25 Dec 2014 14:39:08 +0000 (UTC) Received: by mail-wi0-f172.google.com with SMTP id n3so15568999wiv.17; Thu, 25 Dec 2014 06:39:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=9l06avgz/LPv9kFOoL4n2buKoW2X1WMjU/BYno/RlFg=; b=AE3aMlBE4Mz7+UBjIjFSilJDGlG4WlS/uXW3OpqjB731XvSu1IoGlaKyzGKk2B32+4 aicQMbV9wS8qEzEwYaFF3QIpC2uOhToeduoO0ojbvRzBEyWMB2bhQ2YZgNN0OCV+PB3p 4J7ElRiumwYs36NvU//hyRgHXgOdHYvQU7uYEnFw+B/3y4BVWG1GWnilm9xMDtusNOW0 AY38a4MrPrDcxRdLI7QUnFZ2aEoVWuNf+Lgvl3PjA4Pk4FEpkR091hYRx2vsa2WrRrmg 5w2S/7PoVT3WorJsMEnjgqbfqSZaOYpBQGE7TDVFRf/vYj0shdd2BxOaJ+Bh6mgdgZRR txWQ== MIME-Version: 1.0 X-Received: by 10.194.187.79 with SMTP id fq15mr53426458wjc.2.1419518346409; Thu, 25 Dec 2014 06:39:06 -0800 (PST) Received: by 10.27.137.70 with HTTP; Thu, 25 Dec 2014 06:39:06 -0800 (PST) Date: Thu, 25 Dec 2014 16:39:06 +0200 Message-ID: Subject: LSI SAS 9300-8i weird ZFS checksum errors From: George Kontostanos To: "freebsd-fs@freebsd.org" , freebsd-hardware@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 14:39:08 -0000 Hello, list and Merry Christmas to all I am facing some weird checksum errors during scrub. The configuration is the following: Board: Supermicro Motherboard X10DRi-T4+ ( http://www.supermicro.com/products/motherboard/xeon/c600/x10dri-t4_.cfm) Controller: LSI SAS 9300-8i ( http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9300-8i.aspx) HDD: 21X6TB Western Digital WD60EFRX HDD: 2XIntel SATA 600GB Solid-State Drive SSDSC2BB600G401 DC S3500 (SWAP, ZIL, CACHE) Chassis: Supermicro 847BE1C-R1K28LPB 4U Storage Chassis RAM: 64 GB I installed initially FreeBSD 10.1-RELEASE created one pool consistent by 3 X7disk VDEVs in RAIDZ3. I used NFS to start copying some data. After copying around 3TB I initiated a scrub. The result was the following: http://pastebin.com/rswgCY2A and http://pastebin.com/DQ2urGXk I tried to flash the controller but the LSI utility did not recognize the controller. I installed FreeBSD 9.3-RELEASE and used LSI's mpslsi3 driver. I was able to flash the latest bios and firmware that way. LSI Corporation SAS3 Flash Utility Version 07.00.00.00 (2014.08.14) Copyright (c) 2008-2014 LSI Corporation. 
All rights reserved Adapter Selected is a LSI SAS: SAS3008(C0) Controller Number : 0 Controller : SAS3008(C0) PCI Address : 00:82:00:00 SAS Address : 500605b-0-06ce-27e0 NVDATA Version (Default) : 06.03.00.05 NVDATA Version (Persistent) : 06.03.00.05 Firmware Product ID : 0x2221 (IT) Firmware Version : 06.00.00.00 NVDATA Vendor : LSI NVDATA Product ID : SAS9300-8i BIOS Version : 08.13.00.00 UEFI BSD Version : 02.00.00.00 FCODE Version : N/A Board Name : SAS9300-8i Board Assembly : H3-25573-00E Board Tracer Number : SV32928040 I recreated the pool again and started writing data via NFS again. After 3 TB of data I started a scrub and I am still getting checksum errors though there are no messages regarding the drives anymore in /var/log/messages pool: Pool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'. see: http://illumos.org/msg/ZFS-8000-9P scan: scrub in progress since Thu Dec 25 08:46:21 2014 2.28T scanned out of 5.54T at 816M/s, 1h9m to go 11.9M repaired, 41.26% done config: NAME STATE READ WRITE CKSUM Pool ONLINE 0 0 0 raidz3-0 ONLINE 0 0 0 gpt/WD-WX41D94RN5A3 ONLINE 0 0 15 (repairing) gpt/WD-WX41D948YE1U ONLINE 0 0 14 (repairing) gpt/WD-WX41D94RN879 ONLINE 0 0 16 (repairing) gpt/WD-WX21D947NC83 ONLINE 0 0 24 (repairing) gpt/WD-WX21D947NT77 ONLINE 0 0 15 (repairing) gpt/WD-WX41D948YAKV ONLINE 0 0 19 (repairing) gpt/WD-WX21D9421SCV ONLINE 0 0 20 (repairing) raidz3-1 ONLINE 0 0 0 gpt/WD-WX21D9421F6F ONLINE 0 0 16 (repairing) gpt/WD-WX41D948YPN4 ONLINE 0 0 14 (repairing) gpt/WD-WX21D947NE2K ONLINE 0 0 22 (repairing) gpt/WD-WX41D948Y2PX ONLINE 0 0 19 (repairing) gpt/WD-WX41D94RNAX7 ONLINE 0 0 17 (repairing) gpt/WD-WX21D947N1RP ONLINE 0 0 12 (repairing) gpt/WD-WX21D94216X7 ONLINE 0 0 20 (repairing) raidz3-2 ONLINE 0 0 0 gpt/WD-WX41D948YAHP ONLINE 0 0 25 (repairing) gpt/WD-WX21D947N06F ONLINE 0 0 18 (repairing) gpt/WD-WX21D947N3T1 ONLINE 0 0 21 (repairing) gpt/WD-WX41D94RNT7D ONLINE 0 0 5 (repairing) gpt/WD-WX41D948Y9VV ONLINE 0 0 18 (repairing) gpt/WD-WX41D94RNS62 ONLINE 0 0 24 (repairing) gpt/WD-WX21D9421ZP9 ONLINE 0 0 28 (repairing) logs mirror-3 ONLINE 0 0 0 gpt/zil0 ONLINE 0 0 0 gpt/zil1 ONLINE 0 0 0 cache gpt/cache0 ONLINE 0 0 0 gpt/cache1 ONLINE 0 0 0 errors: No known data errors This is really driving me crazy since smartmon tools do not display any errors on the drives. Any suggestions are most welcomed!!! 
Thank you for your time, -- George Kontostanos --- From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 19:31:33 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8E87C376 for ; Thu, 25 Dec 2014 19:31:33 +0000 (UTC) Received: from mail-wi0-f181.google.com (mail-wi0-f181.google.com [209.85.212.181]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 2339664798 for ; Thu, 25 Dec 2014 19:31:32 +0000 (UTC) Received: by mail-wi0-f181.google.com with SMTP id r20so15920843wiv.8 for ; Thu, 25 Dec 2014 11:31:25 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=8zCkVobR2PIPpQs5SHUsqYmgoiHhabpDKEUcoNiQnTU=; b=Eky25uJmWhAy7r7qPQp8g//ahLavb/eLa2PI4t3iD2zJoyvs203GQO8arjAKtuPUj4 z7epLpHDv+OBAl63sTtX0a2X2J7OwG32+0xvvbDgXzSmFQpvBW1CfQVLUV/hgEq8SeBZ +wFiAtUbIFfloW/g0sYaOjyCpodh583YFFq0r4KgKLt2ViTWkT/xO0YF3uhJBjVDgJCc vU8eRlxjW3DTKn3zq/QUiucK5FUrXCrmnPh1Eof7F7ExVYqJYdOfaymnqy0hq2AiAlE7 WlIxDgkoxfqSstinueOcqWpd2UoLaHXg3ZHgmjIxVwYo+DEsi02XxRREHyiJ6ULlQ3Mz WG+w== X-Gm-Message-State: ALoCoQnt0YbcWwEcAEIljTC2JAbmOJZHUb+esNlQ/l67SWikgutZ0hJVcZkmWIo7oBonNH24pdRE X-Received: by 10.194.79.199 with SMTP id l7mr76187099wjx.136.1419535885084; Thu, 25 Dec 2014 11:31:25 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. [82.69.141.170]) by mx.google.com with ESMTPSA id gf6sm36132249wjc.11.2014.12.25.11.31.24 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 25 Dec 2014 11:31:24 -0800 (PST) Message-ID: <549C65FF.4010702@multiplay.co.uk> Date: Thu, 25 Dec 2014 19:31:11 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: LSI SAS 9300-8i weird ZFS checksum errors References: In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 19:31:33 -0000 On 25/12/2014 14:39, George Kontostanos wrote: > Hello, list and Merry Christmas to all > > I am facing some weird checksum errors during scrub. The configuration is > the following: > > Board: Supermicro Motherboard X10DRi-T4+ ( > http://www.supermicro.com/products/motherboard/xeon/c600/x10dri-t4_.cfm) > Controller: LSI SAS 9300-8i ( > http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9300-8i.aspx) > HDD: 21X6TB Western Digital WD60EFRX > HDD: 2XIntel SATA 600GB Solid-State Drive SSDSC2BB600G401 DC S3500 > (SWAP, ZIL, CACHE) > Chassis: Supermicro 847BE1C-R1K28LPB 4U Storage Chassis > RAM: 64 GB > > I installed initially FreeBSD 10.1-RELEASE created one pool consistent by 3 > X7disk VDEVs in RAIDZ3. I used NFS to start copying some data. After > copying around 3TB I initiated a scrub. 
> The result was the following: http://pastebin.com/rswgCY2A and > http://pastebin.com/DQ2urGXk > > I tried to flash the controller but the LSI utility did not recognize the > controller. I installed FreeBSD 9.3-RELEASE and used LSI's mpslsi3 driver. > I was able to flash the latest bios and firmware that way. > > LSI Corporation SAS3 Flash Utility > Version 07.00.00.00 (2014.08.14) > Copyright (c) 2008-2014 LSI Corporation. All rights reserved > > Adapter Selected is a LSI SAS: SAS3008(C0) > > Controller Number : 0 > Controller : SAS3008(C0) > PCI Address : 00:82:00:00 > SAS Address : 500605b-0-06ce-27e0 > NVDATA Version (Default) : 06.03.00.05 > NVDATA Version (Persistent) : 06.03.00.05 > Firmware Product ID : 0x2221 (IT) > Firmware Version : 06.00.00.00 > NVDATA Vendor : LSI > NVDATA Product ID : SAS9300-8i > BIOS Version : 08.13.00.00 > UEFI BSD Version : 02.00.00.00 > FCODE Version : N/A > Board Name : SAS9300-8i > Board Assembly : H3-25573-00E > Board Tracer Number : SV32928040 > > I recreated the pool again and started writing data via NFS again. After 3 > TB of data I started a scrub and I am still getting checksum errors though > there are no messages regarding the drives anymore in /var/log/messages > > pool: Pool > state: ONLINE > status: One or more devices has experienced an unrecoverable error. An > attempt was made to correct the error. Applications are unaffected. > action: Determine if the device needs to be replaced, and clear the errors > using 'zpool clear' or replace the device with 'zpool replace'. > see: http://illumos.org/msg/ZFS-8000-9P > > scan: scrub in progress since Thu Dec 25 08:46:21 2014 > 2.28T scanned out of 5.54T at 816M/s, 1h9m to go > 11.9M repaired, 41.26% done > config: > > NAME STATE READ WRITE CKSUM > Pool ONLINE 0 0 0 > raidz3-0 ONLINE 0 0 0 > gpt/WD-WX41D94RN5A3 ONLINE 0 0 15 (repairing) > gpt/WD-WX41D948YE1U ONLINE 0 0 14 (repairing) > gpt/WD-WX41D94RN879 ONLINE 0 0 16 (repairing) > gpt/WD-WX21D947NC83 ONLINE 0 0 24 (repairing) > gpt/WD-WX21D947NT77 ONLINE 0 0 15 (repairing) > gpt/WD-WX41D948YAKV ONLINE 0 0 19 (repairing) > gpt/WD-WX21D9421SCV ONLINE 0 0 20 (repairing) > raidz3-1 ONLINE 0 0 0 > gpt/WD-WX21D9421F6F ONLINE 0 0 16 (repairing) > gpt/WD-WX41D948YPN4 ONLINE 0 0 14 (repairing) > gpt/WD-WX21D947NE2K ONLINE 0 0 22 (repairing) > gpt/WD-WX41D948Y2PX ONLINE 0 0 19 (repairing) > gpt/WD-WX41D94RNAX7 ONLINE 0 0 17 (repairing) > gpt/WD-WX21D947N1RP ONLINE 0 0 12 (repairing) > gpt/WD-WX21D94216X7 ONLINE 0 0 20 (repairing) > raidz3-2 ONLINE 0 0 0 > gpt/WD-WX41D948YAHP ONLINE 0 0 25 (repairing) > gpt/WD-WX21D947N06F ONLINE 0 0 18 (repairing) > gpt/WD-WX21D947N3T1 ONLINE 0 0 21 (repairing) > gpt/WD-WX41D94RNT7D ONLINE 0 0 5 (repairing) > gpt/WD-WX41D948Y9VV ONLINE 0 0 18 (repairing) > gpt/WD-WX41D94RNS62 ONLINE 0 0 24 (repairing) > gpt/WD-WX21D9421ZP9 ONLINE 0 0 28 (repairing) > logs > mirror-3 ONLINE 0 0 0 > gpt/zil0 ONLINE 0 0 0 > gpt/zil1 ONLINE 0 0 0 > cache > gpt/cache0 ONLINE 0 0 0 > gpt/cache1 ONLINE 0 0 0 > > errors: No known data errors > > This is really driving me crazy since smartmon tools do not display any > errors on the drives. > > Any suggestions are most welcomed!!! > Check for bad hardware, first guess would be memory, next would be hotswap backplane. 
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 21:03:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3551D8C3 for ; Thu, 25 Dec 2014 21:03:10 +0000 (UTC) Received: from mail-wg0-x232.google.com (mail-wg0-x232.google.com [IPv6:2a00:1450:400c:c00::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A8C5F32C8 for ; Thu, 25 Dec 2014 21:03:09 +0000 (UTC) Received: by mail-wg0-f50.google.com with SMTP id a1so13413491wgh.23 for ; Thu, 25 Dec 2014 13:03:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=F/1wT3uuznvh/8XszdnNb7R0T/DnLoSoMqeuWcAFPrU=; b=hz6Yj042Ur9dg3XMbvTnRmwEFXaklPn9WMHIONpDSQG05ZwFfwdBuspfcc+1ThFXmr o+5bscz1u10TZQItZGdJVLw85mq7jfI5BdwLxqp0+XsPs2N8nKfRdkEIAoTUa5yMvQR4 sTDLDzaOzwvUfltr6jOPeg22iY5GXiMMHekJVAajhKEXMTxYbcw0G9XFCDsWccPhvmOr 2K9sAoa7LlFbbqkq31PTAaxtCLsBumsr2EYjOGBAHFVuul+YbNwsDZ8PBZtnbi3KNHGz qvzWv5u/kjkSu3svsvxkyt1nStbS16p5RwB7yxS6io8wEZdJ0hBQMLmvVFLTMmcwhAdT EgKw== MIME-Version: 1.0 X-Received: by 10.181.13.242 with SMTP id fb18mr63339606wid.1.1419541388069; Thu, 25 Dec 2014 13:03:08 -0800 (PST) Received: by 10.27.137.70 with HTTP; Thu, 25 Dec 2014 13:03:08 -0800 (PST) In-Reply-To: <549C65FF.4010702@multiplay.co.uk> References: <549C65FF.4010702@multiplay.co.uk> Date: Thu, 25 Dec 2014 23:03:08 +0200 Message-ID: Subject: Re: LSI SAS 9300-8i weird ZFS checksum errors From: George Kontostanos To: Steven Hartland Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 21:03:10 -0000 On Thu, Dec 25, 2014 at 9:31 PM, Steven Hartland wrote: > > On 25/12/2014 14:39, George Kontostanos wrote: > >> Hello, list and Merry Christmas to all >> >> I am facing some weird checksum errors during scrub. The configuration is >> the following: >> >> Board: Supermicro Motherboard X10DRi-T4+ ( >> http://www.supermicro.com/products/motherboard/xeon/c600/x10dri-t4_.cfm) >> Controller: LSI SAS 9300-8i ( >> http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9300-8i.aspx) >> HDD: 21X6TB Western Digital WD60EFRX >> HDD: 2XIntel SATA 600GB Solid-State Drive SSDSC2BB600G401 DC S3500 >> (SWAP, ZIL, CACHE) >> Chassis: Supermicro 847BE1C-R1K28LPB 4U Storage Chassis >> RAM: 64 GB >> >> I installed initially FreeBSD 10.1-RELEASE created one pool consistent by >> 3 >> X7disk VDEVs in RAIDZ3. I used NFS to start copying some data. After >> copying around 3TB I initiated a scrub. >> The result was the following: http://pastebin.com/rswgCY2A and >> http://pastebin.com/DQ2urGXk >> >> I tried to flash the controller but the LSI utility did not recognize the >> controller. I installed FreeBSD 9.3-RELEASE and used LSI's mpslsi3 driver. >> I was able to flash the latest bios and firmware that way. >> >> LSI Corporation SAS3 Flash Utility >> Version 07.00.00.00 (2014.08.14) >> Copyright (c) 2008-2014 LSI Corporation. 
All rights reserved >> >> Adapter Selected is a LSI SAS: SAS3008(C0) >> >> Controller Number : 0 >> Controller : SAS3008(C0) >> PCI Address : 00:82:00:00 >> SAS Address : 500605b-0-06ce-27e0 >> NVDATA Version (Default) : 06.03.00.05 >> NVDATA Version (Persistent) : 06.03.00.05 >> Firmware Product ID : 0x2221 (IT) >> Firmware Version : 06.00.00.00 >> NVDATA Vendor : LSI >> NVDATA Product ID : SAS9300-8i >> BIOS Version : 08.13.00.00 >> UEFI BSD Version : 02.00.00.00 >> FCODE Version : N/A >> Board Name : SAS9300-8i >> Board Assembly : H3-25573-00E >> Board Tracer Number : SV32928040 >> >> I recreated the pool again and started writing data via NFS again. After 3 >> TB of data I started a scrub and I am still getting checksum errors though >> there are no messages regarding the drives anymore in /var/log/messages >> >> pool: Pool >> state: ONLINE >> status: One or more devices has experienced an unrecoverable error. An >> attempt was made to correct the error. Applications are unaffected. >> action: Determine if the device needs to be replaced, and clear the errors >> using 'zpool clear' or replace the device with 'zpool replace'. >> see: http://illumos.org/msg/ZFS-8000-9P >> >> scan: scrub in progress since Thu Dec 25 08:46:21 2014 >> 2.28T scanned out of 5.54T at 816M/s, 1h9m to go >> 11.9M repaired, 41.26% done >> config: >> >> NAME STATE READ WRITE CKSUM >> Pool ONLINE 0 0 0 >> raidz3-0 ONLINE 0 0 0 >> gpt/WD-WX41D94RN5A3 ONLINE 0 0 15 (repairing) >> gpt/WD-WX41D948YE1U ONLINE 0 0 14 (repairing) >> gpt/WD-WX41D94RN879 ONLINE 0 0 16 (repairing) >> gpt/WD-WX21D947NC83 ONLINE 0 0 24 (repairing) >> gpt/WD-WX21D947NT77 ONLINE 0 0 15 (repairing) >> gpt/WD-WX41D948YAKV ONLINE 0 0 19 (repairing) >> gpt/WD-WX21D9421SCV ONLINE 0 0 20 (repairing) >> raidz3-1 ONLINE 0 0 0 >> gpt/WD-WX21D9421F6F ONLINE 0 0 16 (repairing) >> gpt/WD-WX41D948YPN4 ONLINE 0 0 14 (repairing) >> gpt/WD-WX21D947NE2K ONLINE 0 0 22 (repairing) >> gpt/WD-WX41D948Y2PX ONLINE 0 0 19 (repairing) >> gpt/WD-WX41D94RNAX7 ONLINE 0 0 17 (repairing) >> gpt/WD-WX21D947N1RP ONLINE 0 0 12 (repairing) >> gpt/WD-WX21D94216X7 ONLINE 0 0 20 (repairing) >> raidz3-2 ONLINE 0 0 0 >> gpt/WD-WX41D948YAHP ONLINE 0 0 25 (repairing) >> gpt/WD-WX21D947N06F ONLINE 0 0 18 (repairing) >> gpt/WD-WX21D947N3T1 ONLINE 0 0 21 (repairing) >> gpt/WD-WX41D94RNT7D ONLINE 0 0 5 (repairing) >> gpt/WD-WX41D948Y9VV ONLINE 0 0 18 (repairing) >> gpt/WD-WX41D94RNS62 ONLINE 0 0 24 (repairing) >> gpt/WD-WX21D9421ZP9 ONLINE 0 0 28 (repairing) >> logs >> mirror-3 ONLINE 0 0 0 >> gpt/zil0 ONLINE 0 0 0 >> gpt/zil1 ONLINE 0 0 0 >> cache >> gpt/cache0 ONLINE 0 0 0 >> gpt/cache1 ONLINE 0 0 0 >> >> errors: No known data errors >> >> This is really driving me crazy since smartmon tools do not display any >> errors on the drives. >> >> Any suggestions are most welcomed!!! >> >> Check for bad hardware, first guess would be memory, next would be > hotswap backplane. > > Regards > Steve > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > Hi Steve, Memory looks good in memtest. I am not sure what you mean regarding hotswap backplane. 
-- George Kontostanos --- From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 21:37:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5BAF44B3 for ; Thu, 25 Dec 2014 21:37:30 +0000 (UTC) Received: from mail-wi0-f176.google.com (mail-wi0-f176.google.com [209.85.212.176]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D4E2A3726 for ; Thu, 25 Dec 2014 21:37:29 +0000 (UTC) Received: by mail-wi0-f176.google.com with SMTP id ex7so15996603wid.15 for ; Thu, 25 Dec 2014 13:37:28 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :cc:subject:references:in-reply-to:content-type; bh=fqU1Acs80SPYMg1yfiBDt70errxMMJksx655V0Fcyi0=; b=Na+yc7Gp4CmQtWnYLbDIdonhwjc0NrljEQrxSH51X460ih2oMomB+2aE7zfZo8SAib C/B8Kh710t82gsYJT7CrXq6Vwps3tUp0aDvVcGMUq2oM3meH5n8TfGmSiUZE54k481z7 ZCD8YUklIdIuOB+ChJVaKBUKWDFgsn0Cx4c1EIHa9yzosAcI/uWDpy+ibsh6wdymtLDQ 1c7tvff4VBz0obY/yyAL0cl01PFtepUxbBWAZzK58O6XooPwHuKBGjtmem/I3v6+O5W6 N3q2HGFLgvWTRnMeypN19UAanNFCy1dO+dfVlj4T+WrAtwB0JFjASzRY0OmACVVw/9hm q+Hg== X-Gm-Message-State: ALoCoQnhjwvalsgs2BEUMP71tOxIX8Fii7IDgJA+6/EE8ZTSzVsX0RBflpJA4VFZYXFiKKLobZMG X-Received: by 10.181.8.66 with SMTP id di2mr62392095wid.49.1419543448299; Thu, 25 Dec 2014 13:37:28 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. [82.69.141.170]) by mx.google.com with ESMTPSA id dr3sm26048709wib.4.2014.12.25.13.37.27 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 25 Dec 2014 13:37:27 -0800 (PST) Message-ID: <549C838B.1070302@multiplay.co.uk> Date: Thu, 25 Dec 2014 21:37:15 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: George Kontostanos Subject: Re: LSI SAS 9300-8i weird ZFS checksum errors References: <549C65FF.4010702@multiplay.co.uk> In-Reply-To: Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 21:37:30 -0000 On 25/12/2014 21:03, George Kontostanos wrote: > > > On Thu, Dec 25, 2014 at 9:31 PM, Steven Hartland > > wrote: > > > On 25/12/2014 14:39, George Kontostanos wrote: > > Hello, list and Merry Christmas to all > > I am facing some weird checksum errors during scrub. The > configuration is > the following: > > Board: Supermicro Motherboard X10DRi-T4+ ( > http://www.supermicro.com/products/motherboard/xeon/c600/x10dri-t4_.cfm) > Controller: LSI SAS 9300-8i ( > http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9300-8i.aspx) > HDD: 21X6TB Western Digital WD60EFRX > HDD: 2XIntel SATA 600GB Solid-State Drive > SSDSC2BB600G401 DC S3500 > (SWAP, ZIL, CACHE) > Chassis: Supermicro 847BE1C-R1K28LPB 4U Storage Chassis > RAM: 64 GB > > I installed initially FreeBSD 10.1-RELEASE created one pool > consistent by 3 > X7disk VDEVs in RAIDZ3. I used NFS to start copying some data. 
> After > copying around 3TB I initiated a scrub. > The result was the following: http://pastebin.com/rswgCY2A and > http://pastebin.com/DQ2urGXk > > I tried to flash the controller but the LSI utility did not > recognize the > controller. I installed FreeBSD 9.3-RELEASE and used LSI's > mpslsi3 driver. > I was able to flash the latest bios and firmware that way. > > LSI Corporation SAS3 Flash Utility > Version 07.00.00.00 (2014.08.14) > Copyright (c) 2008-2014 LSI Corporation. All rights reserved > > Adapter Selected is a LSI SAS: SAS3008(C0) > > Controller Number : 0 > Controller : SAS3008(C0) > PCI Address : 00:82:00:00 > SAS Address : 500605b-0-06ce-27e0 > NVDATA Version (Default) : 06.03.00.05 > NVDATA Version (Persistent) : 06.03.00.05 > Firmware Product ID : 0x2221 (IT) > Firmware Version : 06.00.00.00 > NVDATA Vendor : LSI > NVDATA Product ID : SAS9300-8i > BIOS Version : 08.13.00.00 > UEFI BSD Version : 02.00.00.00 > FCODE Version : N/A > Board Name : SAS9300-8i > Board Assembly : H3-25573-00E > Board Tracer Number : SV32928040 > > I recreated the pool again and started writing data via NFS > again. After 3 > TB of data I started a scrub and I am still getting checksum > errors though > there are no messages regarding the drives anymore in > /var/log/messages > > pool: Pool > state: ONLINE > status: One or more devices has experienced an unrecoverable > error. An > attempt was made to correct the error. Applications are > unaffected. > action: Determine if the device needs to be replaced, and > clear the errors > using 'zpool clear' or replace the device with 'zpool replace'. > see: http://illumos.org/msg/ZFS-8000-9P > > scan: scrub in progress since Thu Dec 25 08:46:21 2014 > 2.28T scanned out of 5.54T at 816M/s, 1h9m to go > 11.9M repaired, 41.26% done > config: > > NAME STATE READ WRITE CKSUM > Pool ONLINE 0 0 0 > raidz3-0 ONLINE 0 0 0 > gpt/WD-WX41D94RN5A3 ONLINE 0 0 15 (repairing) > gpt/WD-WX41D948YE1U ONLINE 0 0 14 (repairing) > gpt/WD-WX41D94RN879 ONLINE 0 0 16 (repairing) > gpt/WD-WX21D947NC83 ONLINE 0 0 24 (repairing) > gpt/WD-WX21D947NT77 ONLINE 0 0 15 (repairing) > gpt/WD-WX41D948YAKV ONLINE 0 0 19 (repairing) > gpt/WD-WX21D9421SCV ONLINE 0 0 20 (repairing) > raidz3-1 ONLINE 0 0 0 > gpt/WD-WX21D9421F6F ONLINE 0 0 16 (repairing) > gpt/WD-WX41D948YPN4 ONLINE 0 0 14 (repairing) > gpt/WD-WX21D947NE2K ONLINE 0 0 22 (repairing) > gpt/WD-WX41D948Y2PX ONLINE 0 0 19 (repairing) > gpt/WD-WX41D94RNAX7 ONLINE 0 0 17 (repairing) > gpt/WD-WX21D947N1RP ONLINE 0 0 12 (repairing) > gpt/WD-WX21D94216X7 ONLINE 0 0 20 (repairing) > raidz3-2 ONLINE 0 0 0 > gpt/WD-WX41D948YAHP ONLINE 0 0 25 (repairing) > gpt/WD-WX21D947N06F ONLINE 0 0 18 (repairing) > gpt/WD-WX21D947N3T1 ONLINE 0 0 21 (repairing) > gpt/WD-WX41D94RNT7D ONLINE 0 0 5 (repairing) > gpt/WD-WX41D948Y9VV ONLINE 0 0 18 (repairing) > gpt/WD-WX41D94RNS62 ONLINE 0 0 24 (repairing) > gpt/WD-WX21D9421ZP9 ONLINE 0 0 28 (repairing) > logs > mirror-3 ONLINE 0 0 0 > gpt/zil0 ONLINE 0 0 0 > gpt/zil1 ONLINE 0 0 0 > cache > gpt/cache0 ONLINE 0 0 0 > gpt/cache1 ONLINE 0 0 0 > > errors: No known data errors > > This is really driving me crazy since smartmon tools do not > display any > errors on the drives. > > Any suggestions are most welcomed!!! > > Check for bad hardware, first guess would be memory, next would be > hotswap backplane. 
> > Regards > Steve > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org > " > > > Hi Steve, > > Memory looks good in memtest. I am not sure what you mean > regarding hotswap backplane. How are the disks attached? The most common way is your controller being attached to a hotswap backplane, which you then plug the disks into. Unfortunately these backplanes are one of the most common sources of issues, especially at higher speeds and even more so if they aren't direct passthrough i.e. they are actually expanders which processing of their own. You report the chassis is a 847BE1C-R1K28LPB which includes such expanders, specifically BPN-SAS3-846EL1 and BPN-SAS3-826EL1. If this is how you are connecting the disk I would strongly advise eliminating this from the equation by connecting the disks direct to the LSI controller. You can also check to see if there are any firmware updates for the expanders. Regards Steve From owner-freebsd-fs@FreeBSD.ORG Thu Dec 25 23:47:11 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E247DD0A for ; Thu, 25 Dec 2014 23:47:10 +0000 (UTC) Received: from vps.rulingia.com (vps.rulingia.com [103.243.244.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "vps.rulingia.com", Issuer "CAcert Class 3 Root" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 914092876 for ; Thu, 25 Dec 2014 23:47:09 +0000 (UTC) Received: from server.rulingia.com (c220-239-242-83.belrs5.nsw.optusnet.com.au [220.239.242.83]) by vps.rulingia.com (8.14.9/8.14.9) with ESMTP id sBPNbmWl036116 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK) for ; Fri, 26 Dec 2014 10:37:54 +1100 (AEDT) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.9/8.14.9) with ESMTP id sBPNbgx6004733 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO) for ; Fri, 26 Dec 2014 10:37:43 +1100 (AEDT) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.9/8.14.9/Submit) id sBPNbglm004732 for freebsd-fs@freebsd.org; Fri, 26 Dec 2014 10:37:42 +1100 (AEDT) (envelope-from peter) Date: Fri, 26 Dec 2014 10:37:42 +1100 From: Peter Jeremy To: freebsd-fs@freebsd.org Subject: "panic: len 0" on NFS read Message-ID: <20141225233742.GA3385@server.rulingia.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="x+6KMIRAuhnl3hBn" Content-Disposition: inline X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 25 Dec 2014 23:47:11 -0000 --x+6KMIRAuhnl3hBn Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Whilst trying to debug a RPC issue with a NFS tunneling tool, I mounted a NFS filesystem onto the same host and got a panic when I tried to access 
it. I'm running FreeBSD/amd64 10-stable r276177. I mounted the filesystem with: # mount -o udp,nfsv3 $(hostname):/tank/src92 /dist (/tank/src92 and / are ZFS) And then ran: $ grep zzzz /dist/* And got: panic: len 0 cpuid =3D 3 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0861448= f30 kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe0861448fe0 vpanic() at vpanic+0x126/frame 0xfffffe0861449020 kassert_panic() at kassert_panic+0x139/frame 0xfffffe0861449090 nfsm_mbufuio() at nfsm_mbufuio+0x9c/frame 0xfffffe08614490f0 nfsrpc_read() at nfsrpc_read+0x584/frame 0xfffffe08614492d0 ncl_readrpc() at ncl_readrpc+0xa5/frame 0xfffffe08614493e0 ncl_doio() at ncl_doio+0x228/frame 0xfffffe0861449480 ncl_bioread() at ncl_bioread+0xb44/frame 0xfffffe08614495f0 VOP_READ_APV() at VOP_READ_APV+0xf1/frame 0xfffffe0861449620 vn_read() at vn_read+0x211/frame 0xfffffe0861449690 vn_io_fault_doio() at vn_io_fault_doio+0x22/frame 0xfffffe08614496d0 vn_io_fault1() at vn_io_fault1+0x7c/frame 0xfffffe0861449830 vn_io_fault() at vn_io_fault+0x18b/frame 0xfffffe08614498b0 dofileread() at dofileread+0x95/frame 0xfffffe0861449900 kern_readv() at kern_readv+0x68/frame 0xfffffe0861449950 sys_read() at sys_read+0x63/frame 0xfffffe08614499a0 amd64_syscall() at amd64_syscall+0x22e/frame 0xfffffe0861449ab0 Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe0861449ab0 --- syscall (3, FreeBSD ELF64, sys_read), rip =3D 0x800fd3cba, rsp =3D 0x7f= ffffffe048, rbp =3D 0x7fffffffe090 --- I have a crashdump that looks sane and relevant bits around nfsm_mbufuio() = are: #4 0xffffffff8041e63c in nfsm_mbufuio (nd=3D0xfffffe08614491b0, uiop=3D0xf= ffffe0861449420, siz=3D0x4000) at /usr/src/sys/fs/nfs/nfs_commonsubs.c:222 (kgdb) p mp $1 =3D 0xfffff80053bab500 (kgdb) p *mp $2 =3D { m_hdr =3D { mh_next =3D 0xfffff8023433dc00,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff80053bab57c "=EF=BF=BD=EF=BF=BD=EF=BF=BD"..., mh_len =3D 0x0,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x2 },=20 =2E.. (kgdb) p *nd $4 =3D { nd_md =3D 0xfffff8005366c500,=20 nd_dpos =3D 0xfffff80562d92068 "=EF=BF=BD=EF=BF=BD=EF=BF=BD"..., =2E.. (kgdb) p *nd->nd_md $5 =3D { m_hdr =3D { mh_next =3D 0xfffff80486b05b00,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff80562d92000 "",=20 mh_len =3D 0x68,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. (kgdb) p *$5.m_hdr.mh_next $11 =3D { m_hdr =3D { mh_next =3D 0xfffff8005325e400,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff80234291800 "=EF=BF=BD",=20 mh_len =3D 0x800,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. (kgdb) p *$11.m_hdr.mh_next $12 =3D { m_hdr =3D { mh_next =3D 0xfffff80486b02400,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff8023453c000 "\t",=20 mh_len =3D 0x800,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. (kgdb) p *$12.m_hdr.mh_next $13 =3D { m_hdr =3D { mh_next =3D 0xfffff8023433f800,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff80562d92800 "its",=20 mh_len =3D 0x800,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. (kgdb) p *$13.m_hdr.mh_next $14 =3D { m_hdr =3D { mh_next =3D 0xfffff80020f36500,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff8058cb1b000 "sbconfig",=20 mh_len =3D 0x800,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. (kgdb) p *$14.m_hdr.mh_next $15 =3D { m_hdr =3D { mh_next =3D 0xfffff800533d5e00,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff8041b423800 "",=20 mh_len =3D 0x800,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. 
(kgdb) p *$15.m_hdr.mh_next $16 =3D { m_hdr =3D { mh_next =3D 0xfffff80053182600,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff8023429a800 "ilters",=20 mh_len =3D 0x800,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. (kgdb) p *$16.m_hdr.mh_next $17 =3D { m_hdr =3D { mh_next =3D 0xfffff8005379b200,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff8058cb1e000 "",=20 mh_len =3D 0x800,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. (kgdb) p *$17.m_hdr.mh_next $18 =3D { m_hdr =3D { mh_next =3D 0xfffff80053bab500,=20 mh_nextpkt =3D 0x0,=20 mh_data =3D 0xfffff8058cb1c800 "\002",=20 mh_len =3D 0x760,=20 mh_type =3D 0x1,=20 mh_flags =3D 0x1 },=20 =2E.. Which is points to mp. I gather the first mbuf is NFS RPC metadata (since it's skipped). The remaining mbufs are the start of a 3.9MB binary file (an identifier database). Any suggestions as to what has gone wrong? --=20 Peter Jeremy --x+6KMIRAuhnl3hBn Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJUnJ/GXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFRUIyOTg2QzMwNjcxRTc0RTY1QzIyN0Ux NkE1OTdBMEU0QTIwQjM0AAoJEBall6Dkogs0qaUP/RjjDxHDYT2REtWcB79gnEt5 ixElylaIhpbND/uH3rqoJrKRon1WkaSX9TL1hLLT2zaktyWH2UPg3JNTSP5c3LGa iipm71i0QUeaM60y6IwnkgRNUVLRDBO8t67DN7eLQudhZkdq7ew8VoaPCw23cN94 9QSOBTPC2rnVSE0/dHagw2i1+BOmR4XkynQYp/GtWxlvaRP/7lseDE73Jk5D9f5v L1QTJabmfoA1LfgHa783T2Dvo+bxnRkeQFjUVwxMahXBbZYjXmRiWMLgC9jTZg1V +HlbEvLpE9rJDGLPBxnMSf7/SYePg3B+iV81nmxT8/6n2fOsf1SmjWiaL7riQ9OO gW1V5tDhBzeGqmwvRCjOYoZpn2yPdiVhFbPK4j+a3Bkkh0KbmgNJf0cuoNuQkGQF qfY+a4wauLSWmWseadnkNXwk2gQjoL6HXiLsN+Z8sxo+lwJM6xn8TYkkt1pBovFp fC3/CFGzuNTf5HSvVQ9I+yu+MwAQskR83jW5AL9pKs4Sphs6RsDQuZq5Knt1aLVx eZQjNOTN0G5/HYX/R4eq5E1IISk/C/F5sI5kAGr6X90jnmVuHCDSRh+pvNHQqFqD 5aUfovtZFDGPXbv3y1RMBB+EgQw5ki9xSttHQ33yPHgCpPALZs9H/D0Y6SnLST+w VIK21pknaW3pWFeT7g3k =dyix -----END PGP SIGNATURE----- --x+6KMIRAuhnl3hBn-- From owner-freebsd-fs@FreeBSD.ORG Fri Dec 26 01:08:28 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C7DA198A; Fri, 26 Dec 2014 01:08:28 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 6CE6014CB; Fri, 26 Dec 2014 01:08:27 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AsQEAIu0nFSDaFve/2dsb2JhbABchDSDAMcIgk4CgR0BAQEBAX2EDAEBAQMBIwRSBRQCGAICDRkCWQaINwiWQpxolQgBAQEBAQEEAQEBAQEBARuBIY4iNAeCaIFBBYlLjkqFGIsrIoQMIIF2fgEBAQ X-IronPort-AV: E=Sophos;i="5.07,645,1413259200"; d="scan'208";a="181071312" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 25 Dec 2014 20:08:21 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id EF73CAEA46; Thu, 25 Dec 2014 20:08:20 -0500 (EST) Date: Thu, 25 Dec 2014 20:08:20 -0500 (EST) From: Rick Macklem To: Peter Jeremy Message-ID: <1000783981.2374019.1419556100933.JavaMail.root@uoguelph.ca> In-Reply-To: <20141225233742.GA3385@server.rulingia.com> Subject: Re: "panic: len 0" on NFS read MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: 
freebsd-fs@freebsd.org, benno@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 26 Dec 2014 01:08:28 -0000 Peter Jeremy wrote: > Whilst trying to debug a RPC issue with a NFS tunneling tool, I > mounted a > NFS filesystem onto the same host and got a panic when I tried to > access it. >=20 > I'm running FreeBSD/amd64 10-stable r276177. >=20 > I mounted the filesystem with: > # mount -o udp,nfsv3 $(hostname):/tank/src92 /dist >=20 > (/tank/src92 and / are ZFS) >=20 > And then ran: > $ grep zzzz /dist/* >=20 > And got: > panic: len 0 r275941 in head changed this KASSERT to allow a 0 length mbuf, so I don't think the panic is meaningful. Maybe r275941 should be MFC'd? (I've cc'd benno, who did the commit.) rick > cpuid =3D 3 > KDB: stack backtrace: > db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame > 0xfffffe0861448f30 > kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe0861448fe0 > vpanic() at vpanic+0x126/frame 0xfffffe0861449020 > kassert_panic() at kassert_panic+0x139/frame 0xfffffe0861449090 > nfsm_mbufuio() at nfsm_mbufuio+0x9c/frame 0xfffffe08614490f0 > nfsrpc_read() at nfsrpc_read+0x584/frame 0xfffffe08614492d0 > ncl_readrpc() at ncl_readrpc+0xa5/frame 0xfffffe08614493e0 > ncl_doio() at ncl_doio+0x228/frame 0xfffffe0861449480 > ncl_bioread() at ncl_bioread+0xb44/frame 0xfffffe08614495f0 > VOP_READ_APV() at VOP_READ_APV+0xf1/frame 0xfffffe0861449620 > vn_read() at vn_read+0x211/frame 0xfffffe0861449690 > vn_io_fault_doio() at vn_io_fault_doio+0x22/frame 0xfffffe08614496d0 > vn_io_fault1() at vn_io_fault1+0x7c/frame 0xfffffe0861449830 > vn_io_fault() at vn_io_fault+0x18b/frame 0xfffffe08614498b0 > dofileread() at dofileread+0x95/frame 0xfffffe0861449900 > kern_readv() at kern_readv+0x68/frame 0xfffffe0861449950 > sys_read() at sys_read+0x63/frame 0xfffffe08614499a0 > amd64_syscall() at amd64_syscall+0x22e/frame 0xfffffe0861449ab0 > Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe0861449ab0 > --- syscall (3, FreeBSD ELF64, sys_read), rip =3D 0x800fd3cba, rsp =3D > 0x7fffffffe048, rbp =3D 0x7fffffffe090 --- >=20 > I have a crashdump that looks sane and relevant bits around > nfsm_mbufuio() are: >=20 > #4 0xffffffff8041e63c in nfsm_mbufuio (nd=3D0xfffffe08614491b0, > uiop=3D0xfffffe0861449420, siz=3D0x4000) > at /usr/src/sys/fs/nfs/nfs_commonsubs.c:222 > (kgdb) p mp > $1 =3D 0xfffff80053bab500 > (kgdb) p *mp > $2 =3D { > m_hdr =3D { > mh_next =3D 0xfffff8023433dc00, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff80053bab57c "=EF=BF=BD=EF=BF=BD=EF=BF=BD"..., > mh_len =3D 0x0, > mh_type =3D 0x1, > mh_flags =3D 0x2 > }, > ... > (kgdb) p *nd > $4 =3D { > nd_md =3D 0xfffff8005366c500, > nd_dpos =3D 0xfffff80562d92068 "=EF=BF=BD=EF=BF=BD=EF=BF=BD"..., > ... > (kgdb) p *nd->nd_md > $5 =3D { > m_hdr =3D { > mh_next =3D 0xfffff80486b05b00, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff80562d92000 "", > mh_len =3D 0x68, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... > (kgdb) p *$5.m_hdr.mh_next > $11 =3D { > m_hdr =3D { > mh_next =3D 0xfffff8005325e400, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff80234291800 "=EF=BF=BD", > mh_len =3D 0x800, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... > (kgdb) p *$11.m_hdr.mh_next > $12 =3D { > m_hdr =3D { > mh_next =3D 0xfffff80486b02400, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff8023453c000 "\t", > mh_len =3D 0x800, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... 
> (kgdb) p *$12.m_hdr.mh_next > $13 =3D { > m_hdr =3D { > mh_next =3D 0xfffff8023433f800, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff80562d92800 "its", > mh_len =3D 0x800, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... > (kgdb) p *$13.m_hdr.mh_next > $14 =3D { > m_hdr =3D { > mh_next =3D 0xfffff80020f36500, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff8058cb1b000 "sbconfig", > mh_len =3D 0x800, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... > (kgdb) p *$14.m_hdr.mh_next > $15 =3D { > m_hdr =3D { > mh_next =3D 0xfffff800533d5e00, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff8041b423800 "", > mh_len =3D 0x800, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... > (kgdb) p *$15.m_hdr.mh_next > $16 =3D { > m_hdr =3D { > mh_next =3D 0xfffff80053182600, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff8023429a800 "ilters", > mh_len =3D 0x800, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... > (kgdb) p *$16.m_hdr.mh_next > $17 =3D { > m_hdr =3D { > mh_next =3D 0xfffff8005379b200, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff8058cb1e000 "", > mh_len =3D 0x800, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... > (kgdb) p *$17.m_hdr.mh_next > $18 =3D { > m_hdr =3D { > mh_next =3D 0xfffff80053bab500, > mh_nextpkt =3D 0x0, > mh_data =3D 0xfffff8058cb1c800 "\002", > mh_len =3D 0x760, > mh_type =3D 0x1, > mh_flags =3D 0x1 > }, > ... >=20 > Which is points to mp. >=20 > I gather the first mbuf is NFS RPC metadata (since it's skipped). > The > remaining mbufs are the start of a 3.9MB binary file (an identifier > database). >=20 > Any suggestions as to what has gone wrong? >=20 > -- > Peter Jeremy >=20 From owner-freebsd-fs@FreeBSD.ORG Fri Dec 26 01:47:31 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 617AE7B; Fri, 26 Dec 2014 01:47:31 +0000 (UTC) Received: from vps.rulingia.com (vps.rulingia.com [103.243.244.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "vps.rulingia.com", Issuer "CAcert Class 3 Root" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id D11932915; Fri, 26 Dec 2014 01:47:29 +0000 (UTC) Received: from server.rulingia.com (c220-239-242-83.belrs5.nsw.optusnet.com.au [220.239.242.83]) by vps.rulingia.com (8.14.9/8.14.9) with ESMTP id sBQ1gS5B036442 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 26 Dec 2014 12:42:34 +1100 (AEDT) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.9/8.14.9) with ESMTP id sBQ1gKO2002041 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 26 Dec 2014 12:42:20 +1100 (AEDT) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.9/8.14.9/Submit) id sBQ1gKIs002040; Fri, 26 Dec 2014 12:42:20 +1100 (AEDT) (envelope-from peter) Date: Fri, 26 Dec 2014 12:42:20 +1100 From: Peter Jeremy To: Rick Macklem Subject: Re: "panic: len 0" on NFS read Message-ID: <20141226014220.GA2001@server.rulingia.com> References: <20141225233742.GA3385@server.rulingia.com> <1000783981.2374019.1419556100933.JavaMail.root@uoguelph.ca> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="1yeeQ81UyVL57Vl7" Content-Disposition: inline In-Reply-To: 
<1000783981.2374019.1419556100933.JavaMail.root@uoguelph.ca> X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org, benno@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 26 Dec 2014 01:47:31 -0000 --1yeeQ81UyVL57Vl7 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2014-Dec-25 20:08:20 -0500, Rick Macklem wrote: >Peter Jeremy wrote: >> Whilst trying to debug a RPC issue with a NFS tunneling tool, I >> mounted a >> NFS filesystem onto the same host and got a panic when I tried to >> access it. >>=20 >> I'm running FreeBSD/amd64 10-stable r276177. >>=20 >> I mounted the filesystem with: >> # mount -o udp,nfsv3 $(hostname):/tank/src92 /dist >>=20 >> (/tank/src92 and / are ZFS) >>=20 >> And then ran: >> $ grep zzzz /dist/* >>=20 >> And got: >> panic: len 0 >r275941 in head changed this KASSERT to allow a 0 length mbuf, so I >don't think the panic is meaningful. >Maybe r275941 should be MFC'd? (I've cc'd benno, who did the commit.) Thanks. I've tried MFCing r275941 and can't reproduce the panic by following the above (though without knowing the exact reason for having a 0-byte mbuf in the chain originally, I can't be sure that I'm getting a 0-byte mbuf in the chain now). --=20 Peter Jeremy --1yeeQ81UyVL57Vl7 Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJUnLz8XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFRUIyOTg2QzMwNjcxRTc0RTY1QzIyN0Ux NkE1OTdBMEU0QTIwQjM0AAoJEBall6Dkogs0QAEP/2BA5fzMDAuPmUVxN5eGdWVm khrwTYwYCmV19KDrS9TnHCwcATN62bCkryM1dvSiRjEBXQCgSLwVWP8gwOQxpdxk 4KvOyVLQkjh9FGkWO2IIV7X9CUgibTULqPU7UDwm7bNwW0duB5q/iKRFLuPwWZX2 RK8Mt2RpiM+MJOhJualOTb+zob/ijqAtSJ5OtECqztDKtMFcwJNii23d4SfCyQu3 A6wBChnXg5a9JG4OLzCleNB5RLS8OpnGR+3rPTzARKpUxMKFwUrj3gRPvCOIgc4c SJc0Ua91WiX8+zp9xkyAZ+l7gF00ZKHs0XvoDKOsUggENS8kxIQW1CJyf5Zr+hI6 bMEukO2EaSQ46wQaynbMrOJUYz4925qNUf4jRatY2bRl1Za86pz0TnXeJh0/bo0Z bKGRZkgCdWYEjIBc1tQJ96R6e5zTlni5JmpRBfYjS12cP/j46AFCyEaAASm3G3Sa dzIgTVfiVttVATG1ENiihIpMWX7bUL7+D4NCyIkjzNYiZAngnlFt1UwkLNZobxP6 YfLwRedXIL/UHXeeHzotFH6wOUT8jxwibpX8mXklzi4vRhl+pX7FnuFdkAYfEIq/ CSMFCoDTwD86+9vCvwzuQQIeVoYDKUjHjIwQdNgvilsnCiwSQ4SgpN0CSSVK9Asf 7/3VLDtqWZYfbgP7QOf7 =MxsQ -----END PGP SIGNATURE----- --1yeeQ81UyVL57Vl7-- From owner-freebsd-fs@FreeBSD.ORG Fri Dec 26 02:04:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DC7993DB; Fri, 26 Dec 2014 02:04:56 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 929D72BFA; Fri, 26 Dec 2014 02:04:55 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AtUEAKjBnFSDaFve/2dsb2JhbABchDAEgwDJVgKBHwEBAQEBfYQNAQUjVhsYAgINGQJZBhOILLMtlQUBAQEBAQEBAQEBAQEBAQEBAQEagSGOIjQHgmiBQQWJS45KhRiLKyKEDCAxgUV+AQEB X-IronPort-AV: E=Sophos;i="5.07,646,1413259200"; d="scan'208";a="179286200" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 25 Dec 2014 21:04:48 -0500 Received: from zcs3.mail.uoguelph.ca 
(localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 84C55AEA37; Thu, 25 Dec 2014 21:04:48 -0500 (EST) Date: Thu, 25 Dec 2014 21:04:48 -0500 (EST) From: Rick Macklem To: Peter Jeremy Message-ID: <1511059720.2385651.1419559488456.JavaMail.root@uoguelph.ca> In-Reply-To: <20141226014220.GA2001@server.rulingia.com> Subject: Re: "panic: len 0" on NFS read MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: freebsd-fs@freebsd.org, benno@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 26 Dec 2014 02:04:57 -0000 Peter Jeremy wrote: > On 2014-Dec-25 20:08:20 -0500, Rick Macklem > wrote: > >Peter Jeremy wrote: > >> Whilst trying to debug a RPC issue with a NFS tunneling tool, I > >> mounted a > >> NFS filesystem onto the same host and got a panic when I tried to > >> access it. > >> > >> I'm running FreeBSD/amd64 10-stable r276177. > >> > >> I mounted the filesystem with: > >> # mount -o udp,nfsv3 $(hostname):/tank/src92 /dist > >> > >> (/tank/src92 and / are ZFS) > >> > >> And then ran: > >> $ grep zzzz /dist/* > >> > >> And got: > >> panic: len 0 > >r275941 in head changed this KASSERT to allow a 0 length mbuf, so I > >don't think the panic is meaningful. > >Maybe r275941 should be MFC'd? (I've cc'd benno, who did the > >commit.) > > Thanks. I've tried MFCing r275941 and can't reproduce the panic by > following the above (though without knowing the exact reason for > having a > 0-byte mbuf in the chain originally, I can't be sure that I'm getting > a > 0-byte mbuf in the chain now). > Well, NFSM_DISSECT() behaves a little like m_pullup(), in that it can copy data from one mbuf to another to create a large enough contiguous data area. It is conceivable that the mbuf being copied from could be reduced to m_len == 0. The code wouldn't remove this mbuf from the chain. 
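
A minimal userland sketch of the mechanism described above, as an illustration only: it assumes a dissect-style copy that drains source mbufs into a contiguous buffer, and it is not the actual sys/fs/nfs/nfs_commonsubs.c code. The struct and helper names here are invented for the model.

/*
 * Userland model only: the real NFSM_DISSECT() copies within the mbuf
 * chain to build a contiguous region; here we just drain source mbufs
 * into 'dst' to show the m_len == 0 leftover that stays linked.
 */
#include <stdio.h>
#include <string.h>

struct mbuf {			/* minimal stand-in for the kernel struct mbuf */
	struct mbuf	*m_next;
	char		*m_data;
	int		 m_len;
	char		 m_buf[64];
};

/*
 * Copy 'need' contiguous bytes into 'dst', consuming data from the front
 * of the chain.  An mbuf that is emptied is NOT unlinked or freed, so a
 * later walk of the chain sees m_len == 0 entries.
 */
static void
dissect(struct mbuf *m, char *dst, int need)
{
	int take;

	while (need > 0 && m != NULL) {
		take = m->m_len < need ? m->m_len : need;
		memcpy(dst, m->m_data, take);
		dst += take;
		m->m_data += take;
		m->m_len -= take;	/* may reach 0: mbuf stays in the chain */
		need -= take;
		if (m->m_len == 0)
			m = m->m_next;	/* advance, but do not unlink */
	}
}

int
main(void)
{
	struct mbuf m2 = { NULL, NULL, 8, "payload!" };
	struct mbuf m1 = { &m2,  NULL, 4, "hdr." };
	char out[16] = { 0 };

	m1.m_data = m1.m_buf;
	m2.m_data = m2.m_buf;

	dissect(&m1, out, 6);	/* 4 bytes from m1 plus 2 from m2 */

	/* m1 is now a zero-length mbuf that is still linked ahead of m2. */
	printf("copied \"%s\", m1.m_len=%d (still chained)\n", out, m1.m_len);
	return (0);
}

Running this prints the copied bytes and leaves m1 with m_len == 0 but still on the chain, which is the same state the pre-r275941 "len 0" KASSERT in nfsm_mbufuio() would trip over.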
rick > -- > Peter Jeremy > From owner-freebsd-fs@FreeBSD.ORG Fri Dec 26 10:21:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D44542BD for ; Fri, 26 Dec 2014 10:21:15 +0000 (UTC) Received: from mail-wi0-x236.google.com (mail-wi0-x236.google.com [IPv6:2a00:1450:400c:c05::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4C47E64D85 for ; Fri, 26 Dec 2014 10:21:15 +0000 (UTC) Received: by mail-wi0-f182.google.com with SMTP id h11so16799264wiw.3 for ; Fri, 26 Dec 2014 02:21:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=s6A1cCHU4PheoPGMzlZc0mSDbCQPdXva1QiQ3vU0jf4=; b=0kQTbzkJCWMBQOReLRbgWkhbwp3Cjc3eQb/xWC1TaSbHjawxSzbRiRQJQDCA9LlW7W uIZQYkYJmsqubxLQecvehozwktSG7Jzek7ksqDos0gwtTuFAO+VhiKTcGnvqlJd9PSBo PLfvH8HWxYykQ3a+YyHB+HULGI1yeYBKTXgrsjv/JtgZ5WSHiRb2VfN9n4D3RyJb0+Dl CKIbC9Z62XkP3isMv/kD5/i+TFIhrbvmYkpSNvHXvXyOIOegXnpZbGSBsvcMo/eUYTKk H+BCnBgbHq1FDtE4MT21hEUgQ6vrkhkObZkNRtHvhebuByWOdBn+ftoX1zqEj4EXwFGH RNEg== MIME-Version: 1.0 X-Received: by 10.180.205.163 with SMTP id lh3mr68713224wic.63.1419589273599; Fri, 26 Dec 2014 02:21:13 -0800 (PST) Received: by 10.27.137.70 with HTTP; Fri, 26 Dec 2014 02:21:13 -0800 (PST) In-Reply-To: <549C838B.1070302@multiplay.co.uk> References: <549C65FF.4010702@multiplay.co.uk> <549C838B.1070302@multiplay.co.uk> Date: Fri, 26 Dec 2014 12:21:13 +0200 Message-ID: Subject: Re: LSI SAS 9300-8i weird ZFS checksum errors From: George Kontostanos To: Steven Hartland Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 26 Dec 2014 10:21:15 -0000 On Thu, Dec 25, 2014 at 11:37 PM, Steven Hartland wrote: > > On 25/12/2014 21:03, George Kontostanos wrote: > > > > On Thu, Dec 25, 2014 at 9:31 PM, Steven Hartland > wrote: > >> >> On 25/12/2014 14:39, George Kontostanos wrote: >> >>> Hello, list and Merry Christmas to all >>> >>> I am facing some weird checksum errors during scrub. The configuration is >>> the following: >>> >>> Board: Supermicro Motherboard X10DRi-T4+ ( >>> http://www.supermicro.com/products/motherboard/xeon/c600/x10dri-t4_.cfm) >>> Controller: LSI SAS 9300-8i ( >>> http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9300-8i.aspx >>> ) >>> HDD: 21X6TB Western Digital WD60EFRX >>> HDD: 2XIntel SATA 600GB Solid-State Drive SSDSC2BB600G401 DC >>> S3500 >>> (SWAP, ZIL, CACHE) >>> Chassis: Supermicro 847BE1C-R1K28LPB 4U Storage Chassis >>> RAM: 64 GB >>> >>> I installed initially FreeBSD 10.1-RELEASE created one pool consistent >>> by 3 >>> X7disk VDEVs in RAIDZ3. I used NFS to start copying some data. After >>> copying around 3TB I initiated a scrub. >>> The result was the following: http://pastebin.com/rswgCY2A and >>> http://pastebin.com/DQ2urGXk >>> >>> I tried to flash the controller but the LSI utility did not recognize the >>> controller. I installed FreeBSD 9.3-RELEASE and used LSI's mpslsi3 >>> driver. 
>>> I was able to flash the latest bios and firmware that way. >>> >>> LSI Corporation SAS3 Flash Utility >>> Version 07.00.00.00 (2014.08.14) >>> Copyright (c) 2008-2014 LSI Corporation. All rights reserved >>> >>> Adapter Selected is a LSI SAS: SAS3008(C0) >>> >>> Controller Number : 0 >>> Controller : SAS3008(C0) >>> PCI Address : 00:82:00:00 >>> SAS Address : 500605b-0-06ce-27e0 >>> NVDATA Version (Default) : 06.03.00.05 >>> NVDATA Version (Persistent) : 06.03.00.05 >>> Firmware Product ID : 0x2221 (IT) >>> Firmware Version : 06.00.00.00 >>> NVDATA Vendor : LSI >>> NVDATA Product ID : SAS9300-8i >>> BIOS Version : 08.13.00.00 >>> UEFI BSD Version : 02.00.00.00 >>> FCODE Version : N/A >>> Board Name : SAS9300-8i >>> Board Assembly : H3-25573-00E >>> Board Tracer Number : SV32928040 >>> >>> I recreated the pool again and started writing data via NFS again. After >>> 3 >>> TB of data I started a scrub and I am still getting checksum errors >>> though >>> there are no messages regarding the drives anymore in /var/log/messages >>> >>> pool: Pool >>> state: ONLINE >>> status: One or more devices has experienced an unrecoverable error. An >>> attempt was made to correct the error. Applications are unaffected. >>> action: Determine if the device needs to be replaced, and clear the >>> errors >>> using 'zpool clear' or replace the device with 'zpool replace'. >>> see: http://illumos.org/msg/ZFS-8000-9P >>> >>> scan: scrub in progress since Thu Dec 25 08:46:21 2014 >>> 2.28T scanned out of 5.54T at 816M/s, 1h9m to go >>> 11.9M repaired, 41.26% done >>> config: >>> >>> NAME STATE READ WRITE CKSUM >>> Pool ONLINE 0 0 0 >>> raidz3-0 ONLINE 0 0 0 >>> gpt/WD-WX41D94RN5A3 ONLINE 0 0 15 (repairing) >>> gpt/WD-WX41D948YE1U ONLINE 0 0 14 (repairing) >>> gpt/WD-WX41D94RN879 ONLINE 0 0 16 (repairing) >>> gpt/WD-WX21D947NC83 ONLINE 0 0 24 (repairing) >>> gpt/WD-WX21D947NT77 ONLINE 0 0 15 (repairing) >>> gpt/WD-WX41D948YAKV ONLINE 0 0 19 (repairing) >>> gpt/WD-WX21D9421SCV ONLINE 0 0 20 (repairing) >>> raidz3-1 ONLINE 0 0 0 >>> gpt/WD-WX21D9421F6F ONLINE 0 0 16 (repairing) >>> gpt/WD-WX41D948YPN4 ONLINE 0 0 14 (repairing) >>> gpt/WD-WX21D947NE2K ONLINE 0 0 22 (repairing) >>> gpt/WD-WX41D948Y2PX ONLINE 0 0 19 (repairing) >>> gpt/WD-WX41D94RNAX7 ONLINE 0 0 17 (repairing) >>> gpt/WD-WX21D947N1RP ONLINE 0 0 12 (repairing) >>> gpt/WD-WX21D94216X7 ONLINE 0 0 20 (repairing) >>> raidz3-2 ONLINE 0 0 0 >>> gpt/WD-WX41D948YAHP ONLINE 0 0 25 (repairing) >>> gpt/WD-WX21D947N06F ONLINE 0 0 18 (repairing) >>> gpt/WD-WX21D947N3T1 ONLINE 0 0 21 (repairing) >>> gpt/WD-WX41D94RNT7D ONLINE 0 0 5 (repairing) >>> gpt/WD-WX41D948Y9VV ONLINE 0 0 18 (repairing) >>> gpt/WD-WX41D94RNS62 ONLINE 0 0 24 (repairing) >>> gpt/WD-WX21D9421ZP9 ONLINE 0 0 28 (repairing) >>> logs >>> mirror-3 ONLINE 0 0 0 >>> gpt/zil0 ONLINE 0 0 0 >>> gpt/zil1 ONLINE 0 0 0 >>> cache >>> gpt/cache0 ONLINE 0 0 0 >>> gpt/cache1 ONLINE 0 0 0 >>> >>> errors: No known data errors >>> >>> This is really driving me crazy since smartmon tools do not display any >>> errors on the drives. >>> >>> Any suggestions are most welcomed!!! >>> >>> Check for bad hardware, first guess would be memory, next would be >> hotswap backplane. >> >> Regards >> Steve >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > > Hi Steve, > > Memory looks good in memtest. 
I am not sure what you mean > regarding hotswap backplane. > > How are the disks attached? > > The most common way is your controller being attached to a hotswap > backplane, which you then plug the disks into. > > Unfortunately these backplanes are one of the most common sources of > issues, especially at higher speeds and even more so if they aren't direct > passthrough i.e. they are actually expanders which processing of their own. > > You report the chassis is a 847BE1C-R1K28LPB which includes such > expanders, specifically BPN-SAS3-846EL1 and BPN-SAS3-826EL1. > > If this is how you are connecting the disk I would strongly advise > eliminating this from the equation by connecting the disks direct to the > LSI controller. > > You can also check to see if there are any firmware updates for the > expanders. > > Regards > Steve > Thanks for your reply Steve. Unfortunately I am thousands of miles away from the DC. In another continent actually! I have contacted SuperMicro support to see if they do have any firmware updates. I might also need to find someone to go to the DC and physically attach the disks directly to the controller. Best! -- George Kontostanos --- From owner-freebsd-fs@FreeBSD.ORG Fri Dec 26 14:49:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7FB11F71 for ; Fri, 26 Dec 2014 14:49:46 +0000 (UTC) Received: from smtprelay02.ispgateway.de (smtprelay02.ispgateway.de [80.67.18.14]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 176663609 for ; Fri, 26 Dec 2014 14:49:45 +0000 (UTC) Received: from [78.35.187.37] (helo=fabiankeil.de) by smtprelay02.ispgateway.de with esmtpsa (TLSv1.2:AES128-GCM-SHA256:128) (Exim 4.84) (envelope-from ) id 1Y4W6h-0000Ij-Gb for freebsd-fs@freebsd.org; Fri, 26 Dec 2014 15:43:11 +0100 Date: Fri, 26 Dec 2014 15:43:12 +0100 From: Fabian Keil To: Subject: Panic after vdev loss: assert: zap_update([...]) == 0 (0x6 == 0x0), [...]/zfs/dsl_scan.c, line: 41 Message-ID: <51ee5a33.776435f0@fabiankeil.de> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/xZAQEilkmebo3PF2ju4WqC4"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 26 Dec 2014 14:49:46 -0000 --Sig_/xZAQEilkmebo3PF2ju4WqC4 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Yesterday I got a panic after a zpool that was being scrubbed lost its (only) vdev: [6507] GEOM_ELI: g_eli_read_done() failed (error=3D5) label/extreme.eli[REA= D(offset=3D2507528704, length=3D12800)] [6507] GEOM_ELI: g_eli_read_done() failed (error=3D5) label/extreme.eli[REA= D(offset=3D2507494912, length=3D33792)] [6507] (da1:umass-sim1:1:0:0): READ(10). CDB: 28 00 00 4a bb 06 00 00 05 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Retrying command [6507] (da1:umass-sim1:1:0:0): READ(10). 
CDB: 28 00 00 4a bb 06 00 00 05 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Retrying command [6507] (da1:umass-sim1:1:0:0): READ(10). CDB: 28 00 00 4a bb 06 00 00 05 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Retrying command [6507] (da1:umass-sim1:1:0:0): READ(10). CDB: 28 00 00 4a bb 06 00 00 05 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Retrying command [6507] (da1:umass-sim1:1:0:0): READ(10). CDB: 28 00 00 4a bb 06 00 00 05 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Error 5, Retries exhausted [6507] GEOM_ELI: g_eli_read_done() failed (error=3D5)(da1:umass-sim1:1:0:0)= : READ(10). CDB: 28 00 00 37 25 ad 00 00 6a 00=20 [6507] label/extreme.eli[READ(offset=3D4022607872, length=3D8192)](da1:uma= ss-sim1:1:0:0): CAM status: CCB request completed with an error [6507]=20 [6507] (da1:GEOM_ELIumass-sim1:1:: g_eli_read_done() failed (error=3D5)0: 0= ): label/extreme.eli[READ(offset=3D4022870016, length=3D8192)]Retrying comm= and [6507]=20 [6507] GEOM_ELI: g_eli_read_done() failed (error=3D5) label/extreme.eli[REA= D(offset=3D270336, length=3D8192)](da1:umass-sim1:1:0:0): READ(10). CDB: 28= 00 00 37 25 ad 00 00 6a 00=20 [6507]=20 [6507] GEOM_ELI(da1:umass-sim1:1:0:0): CAM status: CCB request completed wi= th an error [6507] : g_eli_read_done() failed (error=3D5)(da1: umass-sim1:1:label/extre= me.eli[READ(offset=3D2507541504, length=3D2560)]0: [6507] 0): Retrying command [6507] (da1:umass-sim1:1:0:0): READ(10). CDB: 28 00 00 37 25 ad 00 00 6a 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Retrying command [6507] (da1:umass-sim1:1:0:0): READ(10). CDB: 28 00 00 37 25 ad 00 00 6a 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Retrying command [6507] (da1:umass-sim1:1:0:0): READ(10). 
CDB: 28 00 00 37 25 ad 00 00 6a 00= =20 [6507] (da1:umass-sim1:1:0:0): CAM status: CCB request completed with an er= ror [6507] (da1:umass-sim1:1:0:0): Error 5, Retries exhausted [6507] GEOM_ELI: g_eli_read_done() failed (error=3D5) label/extreme.eli[REA= D(offset=3D1850432000, length=3D54272)] [6507] da1 at umass-sim1 bus 1 scbus3 target 0 lun 0 [6507] da1: s/n 8123201007= 08 detached [6507] pass3 at umass-sim1 bus 1 scbus3 target 0 lun 0 [6507] pass3: s/n 81232010= 0708 detached [6507] (pass3:umass-sim1:1:0:0): Periph destroyed [6507] panic: solaris assert: zap_update(scn->scn_dp->dp_meta_objset, 1, "s= can", sizeof (uint64_t), (sizeof (dsl_scan_phys_t) / sizeof (uint64_t)), &s= cn->scn_phys, tx) =3D=3D 0 (0x6 =3D=3D 0x0), file: /usr/src/sys/cddl/contri= b/opensolaris/uts/common/fs/zfs/dsl_scan.c, line: 41 [6507] cpuid =3D 0 [6507] KDB: stack backtrace: [6507] db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe= 0095157520 [6507] kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe00951575d0 [6507] panic() at panic+0x1c1/frame 0xfffffe0095157690 [6507] assfail3() at assfail3+0x2f/frame 0xfffffe00951576b0 [6507] dsl_scan_sync() at dsl_scan_sync+0xa83/frame 0xfffffe0095157a00 [6507] spa_sync() at spa_sync+0x5c1/frame 0xfffffe0095157ae0 [6507] txg_sync_thread() at txg_sync_thread+0x3a6/frame 0xfffffe0095157bb0 [6507] fork_exit() at fork_exit+0x9a/frame 0xfffffe0095157bf0 [6507] fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0095157bf0 [6507] --- trap 0, rip =3D 0, rsp =3D 0xfffffe0095157cb0, rbp =3D 0 --- [6507] KDB: enter: panic The assertion in dsl_scan_sync_state() seems to expect that the pool is available and apparently spa->spa_state was still POOL_STATE_ACTIVE. Additional details: http://www.fabiankeil.de/bilder/freebsd/kernel-panic-r275748-zfs/ Fabian --Sig_/xZAQEilkmebo3PF2ju4WqC4 Content-Type: application/pgp-signature Content-Description: OpenPGP digital signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlSddAAACgkQBYqIVf93VJ22ugCfcO8r7FQhgz2c0yvRlh1V5SC9 YegAoK7ghP6yhMubF4bpQFHxJPEch34x =R2Jn -----END PGP SIGNATURE----- --Sig_/xZAQEilkmebo3PF2ju4WqC4-- From owner-freebsd-fs@FreeBSD.ORG Fri Dec 26 23:49:33 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 89175789 for ; Fri, 26 Dec 2014 23:49:33 +0000 (UTC) Received: from mail.jrv.org (rrcs-24-73-246-106.sw.biz.rr.com [24.73.246.106]) by mx1.freebsd.org (Postfix) with ESMTP id 5700C66748 for ; Fri, 26 Dec 2014 23:49:32 +0000 (UTC) Received: from localhost (localhost.localdomain [127.0.0.1]) by mail.jrv.org (Postfix) with ESMTP id 0F1FC235B3E for ; Fri, 26 Dec 2014 17:43:29 -0600 (CST) Received: from mail.jrv.org ([127.0.0.1]) by localhost (zimbra64.housenet.jrv [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id xOYnPmefxCzb for ; Fri, 26 Dec 2014 17:43:19 -0600 (CST) Received: from localhost (localhost.localdomain [127.0.0.1]) by mail.jrv.org (Postfix) with ESMTP id 1BBFD235B3B for ; Fri, 26 Dec 2014 17:43:19 -0600 (CST) X-Virus-Scanned: amavisd-new at zimbra64.housenet.jrv Received: from mail.jrv.org ([127.0.0.1]) by localhost (zimbra64.housenet.jrv [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 13Utu-4Z11ld for ; Fri, 26 Dec 2014 17:43:18 -0600 (CST) Received: from [192.168.138.128] (BMX.housenet.jrv [192.168.3.140]) by mail.jrv.org (Postfix) with ESMTPSA id E5D4F235B36 for 
; Fri, 26 Dec 2014 17:43:18 -0600 (CST) Message-ID: <549DF2B1.3030909@jrv.org> Date: Fri, 26 Dec 2014 17:43:45 -0600 From: "James R. Van Artsdalen" User-Agent: Mozilla/5.0 (Windows NT 5.0; rv:12.0) Gecko/20120428 Thunderbird/12.0.1 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org Subject: ZFS: FreeBSD 10.1 can't import/mount FreeBSD 9 pool Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 26 Dec 2014 23:49:33 -0000 FreeBSD bigtex.housenet.jrv 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #2 r273476M: Thu Oct 23 20:39:40 CDT 2014 james@bigtex.housenet.jrv:/usr/obj/usr/src/sys/GENERIC amd64 A pool created by a FreeBSD 9 system was imported into FreeBSD 10.1 but failed to create the recursive mountpoints as shown below. What's especially interesting is that the free space reported by zpool(1) and zfs(1) are wildly different, even though there are no reservations. Note that I was able to do a zpool upgrade, but that zfs upgrade failed on the children datasets. # zpool import SAS01 cannot mount '/SAS01/t03': failed to create mountpoint cannot mount '/SAS01/t04': failed to create mountpoint cannot mount '/SAS01/t05': failed to create mountpoint cannot mount '/SAS01/t06': failed to create mountpoint cannot mount '/SAS01/t07': failed to create mountpoint cannot mount '/SAS01/t08': failed to create mountpoint cannot mount '/SAS01/t12': failed to create mountpoint cannot mount '/SAS01/t13': failed to create mountpoint # zpool list SAS01 NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT SAS01 43.5T 42.6T 948G - 0% 97% 1.00x ONLINE - # zfs list -p SAS01 NAME USED AVAIL REFER MOUNTPOINT SAS01 33279222543840 0 314496 /SAS01 # zpool get all SAS01 NAME PROPERTY VALUE SOURCE SAS01 size 43.5T - SAS01 capacity 97% - SAS01 altroot - default SAS01 health ONLINE - SAS01 guid 1341452135 default SAS01 version - default SAS01 bootfs - default SAS01 delegation on default SAS01 autoreplace off default SAS01 cachefile - default SAS01 failmode wait default SAS01 listsnapshots off default SAS01 autoexpand off default SAS01 dedupditto 0 default SAS01 dedupratio 1.00x - SAS01 free 948G - SAS01 allocated 42.6T - SAS01 readonly off - SAS01 comment - default SAS01 expandsize - - SAS01 freeing 0 default SAS01 fragmentation 0% - SAS01 leaked 0 default SAS01 feature@async_destroy enabled local SAS01 feature@empty_bpobj active local SAS01 feature@lz4_compress active local SAS01 feature@multi_vdev_crash_dump enabled local SAS01 feature@spacemap_histogram active local SAS01 feature@enabled_txg active local SAS01 feature@hole_birth active local SAS01 feature@extensible_dataset enabled local SAS01 feature@embedded_data active local SAS01 feature@bookmarks enabled local SAS01 feature@filesystem_limits enabled local # zfs get all SAS01 NAME PROPERTY VALUE SOURCE SAS01 type filesystem - SAS01 creation Tue Dec 23 2:51 2014 - SAS01 used 30.3T - SAS01 available 0 - SAS01 referenced 307K - SAS01 compressratio 1.00x - SAS01 mounted yes - SAS01 quota none default SAS01 reservation none default SAS01 recordsize 128K default SAS01 mountpoint /SAS01 default SAS01 sharenfs off default SAS01 checksum on default SAS01 compression off default SAS01 atime on default SAS01 devices on default SAS01 exec on default SAS01 setuid on default SAS01 readonly off default SAS01 jailed off default SAS01 snapdir hidden default SAS01 
aclmode discard default SAS01 aclinherit restricted default SAS01 canmount on default SAS01 xattr off temporary SAS01 copies 1 default SAS01 version 5 - SAS01 utf8only off - SAS01 normalization none - SAS01 casesensitivity sensitive - SAS01 vscan off default SAS01 nbmand off default SAS01 sharesmb off default SAS01 refquota none default SAS01 refreservation none default SAS01 primarycache all default SAS01 secondarycache all default SAS01 usedbysnapshots 0 - SAS01 usedbydataset 307K - SAS01 usedbychildren 30.3T - SAS01 usedbyrefreservation 0 - SAS01 logbias latency default SAS01 dedup off default SAS01 mlslabel - SAS01 sync standard default SAS01 refcompressratio 1.00x - SAS01 written 307K - SAS01 logicalused 30.2T - SAS01 logicalreferenced 12K - SAS01 volmode default default SAS01 filesystem_limit none default SAS01 snapshot_limit none default SAS01 filesystem_count none default SAS01 snapshot_count none default SAS01 redundant_metadata all default # zpool status SAS01 pool: SAS01 state: ONLINE scan: scrub repaired 0 in 20h26m with 0 errors on Thu Dec 25 20:57:34 2014 config: NAME STATE READ WRITE CKSUM SAS01 ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 da45 ONLINE 0 0 0 da44 ONLINE 0 0 0 da47 ONLINE 0 0 0 da43 ONLINE 0 0 0 da42 ONLINE 0 0 0 da46 ONLINE 0 0 0 da41 ONLINE 0 0 0 da40 ONLINE 0 0 0 errors: No known data errors # zfs upgrade -r SAS01 cannot set property for 'SAS01/t03': out of space cannot set property for 'SAS01/t04': out of space cannot set property for 'SAS01/t05': out of space cannot set property for 'SAS01/t06': out of space cannot set property for 'SAS01/t07': out of space cannot set property for 'SAS01/t08': out of space cannot set property for 'SAS01/t12': out of space cannot set property for 'SAS01/t13': out of space 0 filesystems upgraded 1 filesystems already at this version # From owner-freebsd-fs@FreeBSD.ORG Fri Dec 26 23:54:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 54C94906 for ; Fri, 26 Dec 2014 23:54:47 +0000 (UTC) Received: from mail-wg0-x22f.google.com (mail-wg0-x22f.google.com [IPv6:2a00:1450:400c:c00::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D7CEB66855 for ; Fri, 26 Dec 2014 23:54:46 +0000 (UTC) Received: by mail-wg0-f47.google.com with SMTP id n12so15141199wgh.34 for ; Fri, 26 Dec 2014 15:54:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:content-type:content-transfer-encoding:subject:message-id:date :to:mime-version; bh=zV+qiClw2e+3Eropc+YySebfFrToNLKkU7V3Wy6kR3A=; b=izZzfvaBu4GNNhtsnl/QTTwIXnNw+21okMTR8Yi6Vn7b6kkq3z8XwjJKDNFPlwNwyp Dfb5Mm2XHRw+qJvopDvuI7ctHQN9Ja+n++A4JTwXu+co3vgjdj/JF1c03e3egXuwktO/ JmQJg6SOlDMwWinLqawjeP55scCtORUgDsFpkwmKG+qNM9r/l7VgeUYVpqqGUyxSuPDC ksy5ERLNrDC2L3RIG+tjB0o1CzP6yqTSX8fR4bt12+ip6erXgkgnSHv4wZkS2sJ5nwza b2mqNu+wk1jUPGNvkhl9kt7ucSMC0lH9nLQvD7QtBCHPFVmvdfIxr1mpFagvEuuh5ERO G4Xw== X-Received: by 10.180.21.133 with SMTP id v5mr65745430wie.44.1419638085155; Fri, 26 Dec 2014 15:54:45 -0800 (PST) Received: from [10.0.1.108] (cpc15-stav13-2-0-cust197.17-3.cable.virginm.net. 
[77.100.102.198]) by mx.google.com with ESMTPSA id ju2sm6735280wid.7.2014.12.26.15.54.44 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 26 Dec 2014 15:54:44 -0800 (PST) From: Paul Chakravarti Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Subject: ZFS: Mount partition from ZVOL with volmode=dev Message-Id: <91E1211B-7E84-472B-8098-630AE8C97251@gmail.com> Date: Fri, 26 Dec 2014 23:54:42 +0000 To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Mac OS X Mail 8.1 \(1993\)) X-Mailer: Apple Mail (2.1993) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 26 Dec 2014 23:54:47 -0000 Hello, I am using a ZVOL configured with =E2=80=98volmode=3Ddev=E2=80=99 as the = virtio disk device for a bhyve instance (which works fine) but was = trying to workout whether there was any way of mounting the underlying = partitions on the host system - the partitions don=E2=80=99t show up = under /dev/zvol as separate devices with =E2=80=98volmode=3Ddev=E2=80=99 = so was wondering is there is any other way of getting at these other = than mounting in a bhyve instance? Thanks, Paul=20= From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 00:06:24 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D5B27BA1 for ; Sat, 27 Dec 2014 00:06:24 +0000 (UTC) Received: from mail-wg0-f45.google.com (mail-wg0-f45.google.com [74.125.82.45]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 62B156752F for ; Sat, 27 Dec 2014 00:06:23 +0000 (UTC) Received: by mail-wg0-f45.google.com with SMTP id b13so15161491wgh.32 for ; Fri, 26 Dec 2014 16:06:14 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=v+VJ+OVrAj3AzE0VbHdSU5ylezsLPTvrCSwDOzHhRxA=; b=aV7mRLkEbKB8puGPmihqqG2rJoStntGUvuP1+WUkqUeZDyKnzn5WMjZF8U2N5Py7Xk BI9XAnorQuiQlZZqhbJeg/wmsiJW8nDvJ7BPJwgJuQck+0FzColqCCUB1SmyE/8xL0tP F1eIpR74al9cZe4jmSsfmdSnKPruCaCHq26ujWcQsrJwu2/OeIEPrjj+/yc2kIsXFayd u4Lq/lyH8TpVLqlkI/ghrFTXy09OEsTmjXI5TVBKg2CXbhWhZJs2X5neFcNDNkXti35z V5gy/bG5oUtNM7+NACv5KfY/AwiUMaLR5LEr1MknlANqJFAjuLHMktcQQKTZe5cUCtr/ oZgA== X-Gm-Message-State: ALoCoQnlwdQOGkphJ1WIES7WMaVZ/QiVUU7lCNVvaYGHhEJUdqHZK1aealalS7fb7NF5RAnjfL2A X-Received: by 10.180.81.7 with SMTP id v7mr72636062wix.74.1419638774709; Fri, 26 Dec 2014 16:06:14 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. 
[82.69.141.170]) by mx.google.com with ESMTPSA id dr3sm29904776wib.4.2014.12.26.16.06.13 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 26 Dec 2014 16:06:13 -0800 (PST) Message-ID: <549DF7EB.1080308@multiplay.co.uk> Date: Sat, 27 Dec 2014 00:06:03 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS: FreeBSD 10.1 can't import/mount FreeBSD 9 pool References: <549DF2B1.3030909@jrv.org> In-Reply-To: <549DF2B1.3030909@jrv.org> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 00:06:24 -0000 Later versions reserve space for deletions etc, so if your volume is too full could fail in this manor. The fix would be to clear down space so this is no longer an issue. On 26/12/2014 23:43, James R. Van Artsdalen wrote: > FreeBSD bigtex.housenet.jrv 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #2 > r273476M: Thu Oct 23 20:39:40 CDT 2014 > james@bigtex.housenet.jrv:/usr/obj/usr/src/sys/GENERIC amd64 > > A pool created by a FreeBSD 9 system was imported into FreeBSD 10.1 but > failed to create the recursive mountpoints as shown below. > > What's especially interesting is that the free space reported by > zpool(1) and zfs(1) are wildly different, even though there are no > reservations. > > Note that I was able to do a zpool upgrade, but that zfs upgrade failed > on the children datasets. > > # zpool import SAS01 > cannot mount '/SAS01/t03': failed to create mountpoint > cannot mount '/SAS01/t04': failed to create mountpoint > cannot mount '/SAS01/t05': failed to create mountpoint > cannot mount '/SAS01/t06': failed to create mountpoint > cannot mount '/SAS01/t07': failed to create mountpoint > cannot mount '/SAS01/t08': failed to create mountpoint > cannot mount '/SAS01/t12': failed to create mountpoint > cannot mount '/SAS01/t13': failed to create mountpoint > # zpool list SAS01 > NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT > SAS01 43.5T 42.6T 948G - 0% 97% 1.00x ONLINE - > # zfs list -p SAS01 > NAME USED AVAIL REFER MOUNTPOINT > SAS01 33279222543840 0 314496 /SAS01 > # zpool get all SAS01 > NAME PROPERTY VALUE SOURCE > SAS01 size 43.5T - > SAS01 capacity 97% - > SAS01 altroot - default > SAS01 health ONLINE - > SAS01 guid 1341452135 default > SAS01 version - default > SAS01 bootfs - default > SAS01 delegation on default > SAS01 autoreplace off default > SAS01 cachefile - default > SAS01 failmode wait default > SAS01 listsnapshots off default > SAS01 autoexpand off default > SAS01 dedupditto 0 default > SAS01 dedupratio 1.00x - > SAS01 free 948G - > SAS01 allocated 42.6T - > SAS01 readonly off - > SAS01 comment - default > SAS01 expandsize - - > SAS01 freeing 0 default > SAS01 fragmentation 0% - > SAS01 leaked 0 default > SAS01 feature@async_destroy enabled local > SAS01 feature@empty_bpobj active local > SAS01 feature@lz4_compress active local > SAS01 feature@multi_vdev_crash_dump enabled local > SAS01 feature@spacemap_histogram active local > SAS01 feature@enabled_txg active local > SAS01 feature@hole_birth active local > SAS01 feature@extensible_dataset enabled local > SAS01 feature@embedded_data active local > SAS01 feature@bookmarks enabled local > SAS01 feature@filesystem_limits enabled 
local > # zfs get all SAS01 > NAME PROPERTY VALUE SOURCE > SAS01 type filesystem - > SAS01 creation Tue Dec 23 2:51 2014 - > SAS01 used 30.3T - > SAS01 available 0 - > SAS01 referenced 307K - > SAS01 compressratio 1.00x - > SAS01 mounted yes - > SAS01 quota none default > SAS01 reservation none default > SAS01 recordsize 128K default > SAS01 mountpoint /SAS01 default > SAS01 sharenfs off default > SAS01 checksum on default > SAS01 compression off default > SAS01 atime on default > SAS01 devices on default > SAS01 exec on default > SAS01 setuid on default > SAS01 readonly off default > SAS01 jailed off default > SAS01 snapdir hidden default > SAS01 aclmode discard default > SAS01 aclinherit restricted default > SAS01 canmount on default > SAS01 xattr off temporary > SAS01 copies 1 default > SAS01 version 5 - > SAS01 utf8only off - > SAS01 normalization none - > SAS01 casesensitivity sensitive - > SAS01 vscan off default > SAS01 nbmand off default > SAS01 sharesmb off default > SAS01 refquota none default > SAS01 refreservation none default > SAS01 primarycache all default > SAS01 secondarycache all default > SAS01 usedbysnapshots 0 - > SAS01 usedbydataset 307K - > SAS01 usedbychildren 30.3T - > SAS01 usedbyrefreservation 0 - > SAS01 logbias latency default > SAS01 dedup off default > SAS01 mlslabel - > SAS01 sync standard default > SAS01 refcompressratio 1.00x - > SAS01 written 307K - > SAS01 logicalused 30.2T - > SAS01 logicalreferenced 12K - > SAS01 volmode default default > SAS01 filesystem_limit none default > SAS01 snapshot_limit none default > SAS01 filesystem_count none default > SAS01 snapshot_count none default > SAS01 redundant_metadata all default > # zpool status SAS01 > pool: SAS01 > state: ONLINE > scan: scrub repaired 0 in 20h26m with 0 errors on Thu Dec 25 20:57:34 2014 > config: > > NAME STATE READ WRITE CKSUM > SAS01 ONLINE 0 0 0 > raidz2-0 ONLINE 0 0 0 > da45 ONLINE 0 0 0 > da44 ONLINE 0 0 0 > da47 ONLINE 0 0 0 > da43 ONLINE 0 0 0 > da42 ONLINE 0 0 0 > da46 ONLINE 0 0 0 > da41 ONLINE 0 0 0 > da40 ONLINE 0 0 0 > > errors: No known data errors > # zfs upgrade -r SAS01 > cannot set property for 'SAS01/t03': out of space > cannot set property for 'SAS01/t04': out of space > cannot set property for 'SAS01/t05': out of space > cannot set property for 'SAS01/t06': out of space > cannot set property for 'SAS01/t07': out of space > cannot set property for 'SAS01/t08': out of space > cannot set property for 'SAS01/t12': out of space > cannot set property for 'SAS01/t13': out of space > 0 filesystems upgraded > 1 filesystems already at this version > # > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 00:17:12 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 31AC8E3B for ; Sat, 27 Dec 2014 00:17:12 +0000 (UTC) Received: from mail-wi0-f180.google.com (mail-wi0-f180.google.com [209.85.212.180]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id DD4A0677A1 for ; Sat, 27 Dec 2014 00:17:11 +0000 (UTC) Received: by 
mail-wi0-f180.google.com with SMTP id n3so17934771wiv.1 for ; Fri, 26 Dec 2014 16:17:03 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=WFeT9ShhAE+rU/Ckge8Zf+9qyu8oHwr99xtXIY/+MH8=; b=YNdBvCsXr1d7as58YK3UhZgQAHVZrNERS5EuNxT7cnTgpJtPbjEY75umdNF/T7dPEP 2uJsslKOymuLcn5nuYN81EI5zXFwV4/IwrLpp7yTl4MTVwDaZuMaVOhTzGKHEBTN8SLH +xN/Ihwded9hnW6ua40l0hT0h0rbz9OgZQsWaI44NeFKb+/VNl7jkAF8vwteV7PP/WUG 4v7jGsJ8+aDf+V0qLsk0P16kOAIQN7sGvv/IqfJKTyyCbRxZMcXvCibf+pri2RtuRDIM nYrCDO9THNH/hb6G5CNfCosI1saQi0NXuzlHRz4nbeNCKmsB2FM6qbSLhtPwlmAgT2Ac TjjA== X-Gm-Message-State: ALoCoQm2qQwCCzYO0voff3ff9OZgkyAdui7JTga1rJYvmrVyRfhKfUssQmcd5GYtpzX+OG/WWPYC X-Received: by 10.194.189.138 with SMTP id gi10mr88541575wjc.86.1419639009851; Fri, 26 Dec 2014 16:10:09 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. [82.69.141.170]) by mx.google.com with ESMTPSA id s4sm29892358wiy.13.2014.12.26.16.10.09 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 26 Dec 2014 16:10:09 -0800 (PST) Message-ID: <549DF8D7.8000008@multiplay.co.uk> Date: Sat, 27 Dec 2014 00:09:59 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS: Mount partition from ZVOL with volmode=dev References: <91E1211B-7E84-472B-8098-630AE8C97251@gmail.com> In-Reply-To: <91E1211B-7E84-472B-8098-630AE8C97251@gmail.com> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 00:17:12 -0000 I cant reproduce this on HEAD r276067 zfs create -V 8192 -o volmode=dev tank/tvol root@head:src> ls -l /dev/zvol/tank/ total 0 crw-r----- 1 root operator 0x85 Dec 27 00:08 tvol Regards Steve On 26/12/2014 23:54, Paul Chakravarti wrote: > Hello, > > I am using a ZVOL configured with ‘volmode=dev’ as the virtio disk device for a bhyve instance (which works fine) but was trying to workout whether there was any way of mounting the underlying partitions on the host system - the partitions don’t show up under /dev/zvol as separate devices with ‘volmode=dev’ so was wondering is there is any other way of getting at these other than mounting in a bhyve instance? 
> > Thanks, Paul > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 02:34:33 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DBEA2EBA for ; Sat, 27 Dec 2014 02:34:33 +0000 (UTC) Received: from mail.jrv.org (adsl-70-243-84-11.dsl.austtx.swbell.net [70.243.84.11]) by mx1.freebsd.org (Postfix) with ESMTP id A51E5669BF for ; Sat, 27 Dec 2014 02:34:32 +0000 (UTC) Received: from localhost (localhost.localdomain [127.0.0.1]) by mail.jrv.org (Postfix) with ESMTP id 7F8B3236014; Fri, 26 Dec 2014 20:25:31 -0600 (CST) Received: from mail.jrv.org ([127.0.0.1]) by localhost (zimbra64.housenet.jrv [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id uzq0KBBHaEOk; Fri, 26 Dec 2014 20:25:21 -0600 (CST) Received: from localhost (localhost.localdomain [127.0.0.1]) by mail.jrv.org (Postfix) with ESMTP id 7B228236011; Fri, 26 Dec 2014 20:25:21 -0600 (CST) X-Virus-Scanned: amavisd-new at zimbra64.housenet.jrv Received: from mail.jrv.org ([127.0.0.1]) by localhost (zimbra64.housenet.jrv [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id FCe0bG-X0nVh; Fri, 26 Dec 2014 20:25:21 -0600 (CST) Received: from [192.168.138.128] (BMX.housenet.jrv [192.168.3.140]) by mail.jrv.org (Postfix) with ESMTPSA id 5708123600C; Fri, 26 Dec 2014 20:25:21 -0600 (CST) Message-ID: <549E18AB.8060708@jrv.org> Date: Fri, 26 Dec 2014 20:25:47 -0600 From: "James R. Van Artsdalen" User-Agent: Mozilla/5.0 (Windows NT 5.0; rv:12.0) Gecko/20120428 Thunderbird/12.0.1 MIME-Version: 1.0 To: Steven Hartland Subject: Re: ZFS: FreeBSD 10.1 can't import/mount FreeBSD 9 pool References: <549DF2B1.3030909@jrv.org> <549DF7EB.1080308@multiplay.co.uk> In-Reply-To: <549DF7EB.1080308@multiplay.co.uk> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 02:34:33 -0000 Oops - this will break every single one of my archival pools. If there is no userland ability to enable backwards compatibility, can you tell me where it is in the source or about when it was added? On 12/26/2014 6:06 PM, Steven Hartland wrote: > Later versions reserve space for deletions etc, so if your volume is > too full could fail in this manor. > > The fix would be to clear down space so this is no longer an issue. > > On 26/12/2014 23:43, James R. Van Artsdalen wrote: >> FreeBSD bigtex.housenet.jrv 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #2 >> r273476M: Thu Oct 23 20:39:40 CDT 2014 >> james@bigtex.housenet.jrv:/usr/obj/usr/src/sys/GENERIC amd64 >> >> A pool created by a FreeBSD 9 system was imported into FreeBSD 10.1 but >> failed to create the recursive mountpoints as shown below. >> >> What's especially interesting is that the free space reported by >> zpool(1) and zfs(1) are wildly different, even though there are no >> reservations. >> >> Note that I was able to do a zpool upgrade, but that zfs upgrade failed >> on the children datasets. 
>> >> # zpool import SAS01 >> cannot mount '/SAS01/t03': failed to create mountpoint >> cannot mount '/SAS01/t04': failed to create mountpoint >> cannot mount '/SAS01/t05': failed to create mountpoint >> cannot mount '/SAS01/t06': failed to create mountpoint >> cannot mount '/SAS01/t07': failed to create mountpoint >> cannot mount '/SAS01/t08': failed to create mountpoint >> cannot mount '/SAS01/t12': failed to create mountpoint >> cannot mount '/SAS01/t13': failed to create mountpoint >> # zpool list SAS01 >> NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH >> ALTROOT >> SAS01 43.5T 42.6T 948G - 0% 97% 1.00x ONLINE - >> # zfs list -p SAS01 >> NAME USED AVAIL REFER MOUNTPOINT >> SAS01 33279222543840 0 314496 /SAS01 >> # zpool get all SAS01 >> NAME PROPERTY VALUE >> SOURCE >> SAS01 size 43.5T - >> SAS01 capacity 97% - >> SAS01 altroot - >> default >> SAS01 health ONLINE - >> SAS01 guid 1341452135 >> default >> SAS01 version - >> default >> SAS01 bootfs - >> default >> SAS01 delegation on >> default >> SAS01 autoreplace off >> default >> SAS01 cachefile - >> default >> SAS01 failmode wait >> default >> SAS01 listsnapshots off >> default >> SAS01 autoexpand off >> default >> SAS01 dedupditto 0 >> default >> SAS01 dedupratio 1.00x - >> SAS01 free 948G - >> SAS01 allocated 42.6T - >> SAS01 readonly off - >> SAS01 comment - >> default >> SAS01 expandsize - - >> SAS01 freeing 0 >> default >> SAS01 fragmentation 0% - >> SAS01 leaked 0 >> default >> SAS01 feature@async_destroy enabled >> local >> SAS01 feature@empty_bpobj active >> local >> SAS01 feature@lz4_compress active >> local >> SAS01 feature@multi_vdev_crash_dump enabled >> local >> SAS01 feature@spacemap_histogram active >> local >> SAS01 feature@enabled_txg active >> local >> SAS01 feature@hole_birth active >> local >> SAS01 feature@extensible_dataset enabled >> local >> SAS01 feature@embedded_data active >> local >> SAS01 feature@bookmarks enabled >> local >> SAS01 feature@filesystem_limits enabled >> local >> # zfs get all SAS01 >> NAME PROPERTY VALUE SOURCE >> SAS01 type filesystem - >> SAS01 creation Tue Dec 23 2:51 2014 - >> SAS01 used 30.3T - >> SAS01 available 0 - >> SAS01 referenced 307K - >> SAS01 compressratio 1.00x - >> SAS01 mounted yes - >> SAS01 quota none default >> SAS01 reservation none default >> SAS01 recordsize 128K default >> SAS01 mountpoint /SAS01 default >> SAS01 sharenfs off default >> SAS01 checksum on default >> SAS01 compression off default >> SAS01 atime on default >> SAS01 devices on default >> SAS01 exec on default >> SAS01 setuid on default >> SAS01 readonly off default >> SAS01 jailed off default >> SAS01 snapdir hidden default >> SAS01 aclmode discard default >> SAS01 aclinherit restricted default >> SAS01 canmount on default >> SAS01 xattr off temporary >> SAS01 copies 1 default >> SAS01 version 5 - >> SAS01 utf8only off - >> SAS01 normalization none - >> SAS01 casesensitivity sensitive - >> SAS01 vscan off default >> SAS01 nbmand off default >> SAS01 sharesmb off default >> SAS01 refquota none default >> SAS01 refreservation none default >> SAS01 primarycache all default >> SAS01 secondarycache all default >> SAS01 usedbysnapshots 0 - >> SAS01 usedbydataset 307K - >> SAS01 usedbychildren 30.3T - >> SAS01 usedbyrefreservation 0 - >> SAS01 logbias latency default >> SAS01 dedup off default >> SAS01 mlslabel - >> SAS01 sync standard default >> SAS01 refcompressratio 1.00x - >> SAS01 written 307K - >> SAS01 logicalused 30.2T - >> SAS01 logicalreferenced 12K - >> SAS01 volmode default default >> SAS01 
filesystem_limit none default >> SAS01 snapshot_limit none default >> SAS01 filesystem_count none default >> SAS01 snapshot_count none default >> SAS01 redundant_metadata all default >> # zpool status SAS01 >> pool: SAS01 >> state: ONLINE >> scan: scrub repaired 0 in 20h26m with 0 errors on Thu Dec 25 >> 20:57:34 2014 >> config: >> >> NAME STATE READ WRITE CKSUM >> SAS01 ONLINE 0 0 0 >> raidz2-0 ONLINE 0 0 0 >> da45 ONLINE 0 0 0 >> da44 ONLINE 0 0 0 >> da47 ONLINE 0 0 0 >> da43 ONLINE 0 0 0 >> da42 ONLINE 0 0 0 >> da46 ONLINE 0 0 0 >> da41 ONLINE 0 0 0 >> da40 ONLINE 0 0 0 >> >> errors: No known data errors >> # zfs upgrade -r SAS01 >> cannot set property for 'SAS01/t03': out of space >> cannot set property for 'SAS01/t04': out of space >> cannot set property for 'SAS01/t05': out of space >> cannot set property for 'SAS01/t06': out of space >> cannot set property for 'SAS01/t07': out of space >> cannot set property for 'SAS01/t08': out of space >> cannot set property for 'SAS01/t12': out of space >> cannot set property for 'SAS01/t13': out of space >> 0 filesystems upgraded >> 1 filesystems already at this version >> # >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 02:41:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 923E2154 for ; Sat, 27 Dec 2014 02:41:38 +0000 (UTC) Received: from mail-wg0-f41.google.com (mail-wg0-f41.google.com [74.125.82.41]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 2335066AF6 for ; Sat, 27 Dec 2014 02:41:37 +0000 (UTC) Received: by mail-wg0-f41.google.com with SMTP id y19so15378710wgg.0 for ; Fri, 26 Dec 2014 18:41:30 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :cc:subject:references:in-reply-to:content-type :content-transfer-encoding; bh=KSYXCQ15tvLsUI/Dr2yTnd45644zEhfrzSC5hTBCd0c=; b=lz4f12WvDu43J0jw1oQDCGvbl6pLF85qpdWXQLFO8A8ghOjmwx+BmIhJ1tH1xA/DPh C6Pc6Tmu96YP2lTg01FWNyDthXKwYvNpHIxbseUyXl5n9Bx/Ikc6nNoEFHn0/62FYn5K 6UOKbXy97y8/h4L4l2KbxQhgsgv2GeGnm5TaTSI+gCnVCNJgsXHBtpnBftzo8iII72aj Tm2wwg3JhOcdQAkeiOGS1qWIJFyOkfSarFHkBTa5IW2zRXYmh2rRp1yfYi4Rjtm5VnKd ldJtX7vwqQYJA9hC/aJ0zIOF6ts8t0qVtCPBPRWilLUu1Gp+f3dHasRxHBt8WrsWecgu Z0aQ== X-Gm-Message-State: ALoCoQknrZitoNWnaxGzWaNEh0hUz8m8ahYYSSSAMK2h/lPr/Y14AwdPhqaFlK78FqKtbJWEd3Qe X-Received: by 10.194.174.72 with SMTP id bq8mr84027920wjc.120.1419648090098; Fri, 26 Dec 2014 18:41:30 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. 
[82.69.141.170]) by mx.google.com with ESMTPSA id hz9sm40642432wjb.17.2014.12.26.18.41.28 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 26 Dec 2014 18:41:29 -0800 (PST) Message-ID: <549E1C4F.7090400@multiplay.co.uk> Date: Sat, 27 Dec 2014 02:41:19 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: "James R. Van Artsdalen" Subject: Re: ZFS: FreeBSD 10.1 can't import/mount FreeBSD 9 pool References: <549DF2B1.3030909@jrv.org> <549DF7EB.1080308@multiplay.co.uk> <549E18AB.8060708@jrv.org> In-Reply-To: <549E18AB.8060708@jrv.org> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 02:41:38 -0000 It was introduced by: https://svnweb.freebsd.org/base?view=revision&revision=268473 Tuning of it was added by: https://svnweb.freebsd.org/base?view=revision&revision=274674 Hope this helps. Regards Steve On 27/12/2014 02:25, James R. Van Artsdalen wrote: > Oops - this will break every single one of my archival pools. > > If there is no userland ability to enable backwards compatibility, can > you tell me where it is in the source or about when it was added? > > On 12/26/2014 6:06 PM, Steven Hartland wrote: >> Later versions reserve space for deletions etc, so if your volume is >> too full could fail in this manor. >> >> The fix would be to clear down space so this is no longer an issue. >> >> On 26/12/2014 23:43, James R. Van Artsdalen wrote: >>> FreeBSD bigtex.housenet.jrv 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #2 >>> r273476M: Thu Oct 23 20:39:40 CDT 2014 >>> james@bigtex.housenet.jrv:/usr/obj/usr/src/sys/GENERIC amd64 >>> >>> A pool created by a FreeBSD 9 system was imported into FreeBSD 10.1 but >>> failed to create the recursive mountpoints as shown below. >>> >>> What's especially interesting is that the free space reported by >>> zpool(1) and zfs(1) are wildly different, even though there are no >>> reservations. >>> >>> Note that I was able to do a zpool upgrade, but that zfs upgrade failed >>> on the children datasets. 
>>> >>> # zpool import SAS01 >>> cannot mount '/SAS01/t03': failed to create mountpoint >>> cannot mount '/SAS01/t04': failed to create mountpoint >>> cannot mount '/SAS01/t05': failed to create mountpoint >>> cannot mount '/SAS01/t06': failed to create mountpoint >>> cannot mount '/SAS01/t07': failed to create mountpoint >>> cannot mount '/SAS01/t08': failed to create mountpoint >>> cannot mount '/SAS01/t12': failed to create mountpoint >>> cannot mount '/SAS01/t13': failed to create mountpoint >>> # zpool list SAS01 >>> NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH >>> ALTROOT >>> SAS01 43.5T 42.6T 948G - 0% 97% 1.00x ONLINE - >>> # zfs list -p SAS01 >>> NAME USED AVAIL REFER MOUNTPOINT >>> SAS01 33279222543840 0 314496 /SAS01 >>> # zpool get all SAS01 >>> NAME PROPERTY VALUE >>> SOURCE >>> SAS01 size 43.5T - >>> SAS01 capacity 97% - >>> SAS01 altroot - >>> default >>> SAS01 health ONLINE - >>> SAS01 guid 1341452135 >>> default >>> SAS01 version - >>> default >>> SAS01 bootfs - >>> default >>> SAS01 delegation on >>> default >>> SAS01 autoreplace off >>> default >>> SAS01 cachefile - >>> default >>> SAS01 failmode wait >>> default >>> SAS01 listsnapshots off >>> default >>> SAS01 autoexpand off >>> default >>> SAS01 dedupditto 0 >>> default >>> SAS01 dedupratio 1.00x - >>> SAS01 free 948G - >>> SAS01 allocated 42.6T - >>> SAS01 readonly off - >>> SAS01 comment - >>> default >>> SAS01 expandsize - - >>> SAS01 freeing 0 >>> default >>> SAS01 fragmentation 0% - >>> SAS01 leaked 0 >>> default >>> SAS01 feature@async_destroy enabled >>> local >>> SAS01 feature@empty_bpobj active >>> local >>> SAS01 feature@lz4_compress active >>> local >>> SAS01 feature@multi_vdev_crash_dump enabled >>> local >>> SAS01 feature@spacemap_histogram active >>> local >>> SAS01 feature@enabled_txg active >>> local >>> SAS01 feature@hole_birth active >>> local >>> SAS01 feature@extensible_dataset enabled >>> local >>> SAS01 feature@embedded_data active >>> local >>> SAS01 feature@bookmarks enabled >>> local >>> SAS01 feature@filesystem_limits enabled >>> local >>> # zfs get all SAS01 >>> NAME PROPERTY VALUE SOURCE >>> SAS01 type filesystem - >>> SAS01 creation Tue Dec 23 2:51 2014 - >>> SAS01 used 30.3T - >>> SAS01 available 0 - >>> SAS01 referenced 307K - >>> SAS01 compressratio 1.00x - >>> SAS01 mounted yes - >>> SAS01 quota none default >>> SAS01 reservation none default >>> SAS01 recordsize 128K default >>> SAS01 mountpoint /SAS01 default >>> SAS01 sharenfs off default >>> SAS01 checksum on default >>> SAS01 compression off default >>> SAS01 atime on default >>> SAS01 devices on default >>> SAS01 exec on default >>> SAS01 setuid on default >>> SAS01 readonly off default >>> SAS01 jailed off default >>> SAS01 snapdir hidden default >>> SAS01 aclmode discard default >>> SAS01 aclinherit restricted default >>> SAS01 canmount on default >>> SAS01 xattr off temporary >>> SAS01 copies 1 default >>> SAS01 version 5 - >>> SAS01 utf8only off - >>> SAS01 normalization none - >>> SAS01 casesensitivity sensitive - >>> SAS01 vscan off default >>> SAS01 nbmand off default >>> SAS01 sharesmb off default >>> SAS01 refquota none default >>> SAS01 refreservation none default >>> SAS01 primarycache all default >>> SAS01 secondarycache all default >>> SAS01 usedbysnapshots 0 - >>> SAS01 usedbydataset 307K - >>> SAS01 usedbychildren 30.3T - >>> SAS01 usedbyrefreservation 0 - >>> SAS01 logbias latency default >>> SAS01 dedup off default >>> SAS01 mlslabel - >>> SAS01 sync standard default >>> SAS01 refcompressratio 1.00x - >>> 
SAS01 written 307K - >>> SAS01 logicalused 30.2T - >>> SAS01 logicalreferenced 12K - >>> SAS01 volmode default default >>> SAS01 filesystem_limit none default >>> SAS01 snapshot_limit none default >>> SAS01 filesystem_count none default >>> SAS01 snapshot_count none default >>> SAS01 redundant_metadata all default >>> # zpool status SAS01 >>> pool: SAS01 >>> state: ONLINE >>> scan: scrub repaired 0 in 20h26m with 0 errors on Thu Dec 25 >>> 20:57:34 2014 >>> config: >>> >>> NAME STATE READ WRITE CKSUM >>> SAS01 ONLINE 0 0 0 >>> raidz2-0 ONLINE 0 0 0 >>> da45 ONLINE 0 0 0 >>> da44 ONLINE 0 0 0 >>> da47 ONLINE 0 0 0 >>> da43 ONLINE 0 0 0 >>> da42 ONLINE 0 0 0 >>> da46 ONLINE 0 0 0 >>> da41 ONLINE 0 0 0 >>> da40 ONLINE 0 0 0 >>> >>> errors: No known data errors >>> # zfs upgrade -r SAS01 >>> cannot set property for 'SAS01/t03': out of space >>> cannot set property for 'SAS01/t04': out of space >>> cannot set property for 'SAS01/t05': out of space >>> cannot set property for 'SAS01/t06': out of space >>> cannot set property for 'SAS01/t07': out of space >>> cannot set property for 'SAS01/t08': out of space >>> cannot set property for 'SAS01/t12': out of space >>> cannot set property for 'SAS01/t13': out of space >>> 0 filesystems upgraded >>> 1 filesystems already at this version >>> # >>> _______________________________________________ >>> freebsd-fs@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 13:18:36 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C6C9EDCB for ; Sat, 27 Dec 2014 13:18:36 +0000 (UTC) Received: from mail-wg0-x22c.google.com (mail-wg0-x22c.google.com [IPv6:2a00:1450:400c:c00::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 55CA56424B for ; Sat, 27 Dec 2014 13:18:36 +0000 (UTC) Received: by mail-wg0-f44.google.com with SMTP id b13so15998885wgh.31 for ; Sat, 27 Dec 2014 05:18:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date :content-transfer-encoding:message-id:references:to; bh=wyUcLNmth3vBpeyIUmPR6zCs2e7o86ZVtrC3Hy1I7eo=; b=Jxu8SRYBLagw2jDFBa0sqyuI8m3SWG+OkbSu7k4Co1y+ycupg1i4/mT31eLs4riQfj sSWUwgzvZ432hkqSpD6j+LbUgC/ghSMTDCFvUi9lLxYmPDek8KBPtFVzuOwl3E6KUt18 VeWFXO7bUXG6RNpmlHGgMOX/d5Fv1FT+6ni/B25BlWzpYeWTVgIVN9JecoPTFp11eqVf 8dcEAGTB7R2MgBGZbeKTEJflIK5NPM4iSk/CSOKaptALqJPzGQUvXVHrKJGNo0jXY0gt xyyc/PIk5HAgNVpiICyJLcvMEH2AQCgtGQiplYJoTPGJKlTlL1tIjMN8lKv00wv/29O8 s84Q== X-Received: by 10.194.52.37 with SMTP id q5mr88819927wjo.39.1419686314796; Sat, 27 Dec 2014 05:18:34 -0800 (PST) Received: from [10.0.1.108] (cpc15-stav13-2-0-cust197.17-3.cable.virginm.net. 
[77.100.102.198]) by mx.google.com with ESMTPSA id gb10sm17024971wjb.21.2014.12.27.05.18.33 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 27 Dec 2014 05:18:34 -0800 (PST) Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 8.1 \(1993\)) Subject: Re: ZFS: Mount partition from ZVOL with volmode=dev From: Paul Chakravarti In-Reply-To: <91E1211B-7E84-472B-8098-630AE8C97251@gmail.com> Date: Sat, 27 Dec 2014 13:18:32 +0000 Content-Transfer-Encoding: quoted-printable Message-Id: <32BEFAB7-936E-42F0-AE75-FB978C13885C@gmail.com> References: <91E1211B-7E84-472B-8098-630AE8C97251@gmail.com> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1993) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 13:18:36 -0000 >On 26/12/2014 23:54, Paul Chakravarti wrote: >> Hello, >> >> I am using a ZVOL configured with 'volmode=dev' as the virtio disk device for >> a bhyve instance (which works fine) but was trying to workout whether there >> was any way of mounting the underlying partitions on the host system - the >> partitions don’t show up under /dev/zvol as separate devices with >> 'volmode=dev' so was wondering is there is any other way of getting at these >> other than mounting in a bhyve instance? >> >> Thanks, Paul > I cant reproduce this on HEAD r276067 > > zfs create -V 8192 -o volmode=dev tank/tvol > root at head:src> ls -l /dev/zvol/tank/ > total 0 > crw-r----- 1 root operator 0x85 Dec 27 00:08 tvol > > Regards > Steve Hi, Sorry - I should have been clearer. The zvol shows up on the host system but the partitions aren’t exposed to geom. On the host system: # zfs create -V10G -o volmode=dev tank/vps/vm0 # zfs list -o name,used,volmode tank/vps/vm0 NAME USED VOLMODE tank/vps/vm0 10.3G dev # ls -l /dev/zvol/tank/vps/ total 0 crw-r----- 1 root operator 0x7b Dec 27 13:30 vm0 The zvol mounted on the bhyve guest: # bhyveload -c /dev/nmdm0A -m 512M -d /dev/zvol/tank/vps/vm0 vm0 # bhyve -c 2 -m 512M -A -H -P -s 0:0,hostbridge -s 1:0,virtio-net,tap0 -s 2:0,lpc -s 3:0,virtio-blk,/dev/zvol/tank/vps/vm0 -l com1,/dev/nmdm0A vm0 On the bhyve guest this shows up as a geom device: root@vm0:~ # geom disk list Geom name: vtbd0 Providers: 1. Name: vtbd0 Mediasize: 10737418240 (10G) Sectorsize: 512 Mode: r2w2e3 descr: (null) ident: BHYVE-747A-2A76-FAC fwsectors: 0 fwheads: 0 And is partitioned in the guest as follows: root@vm0:~ # gpart show -p => 34 20971453 vtbd0 GPT (10G) 34 1024 vtbd0p1 freebsd-boot (512K) 1058 19919872 vtbd0p2 freebsd-ufs (9.5G) 19920930 1048576 vtbd0p3 freebsd-swap (512M) 20969506 1981 - free - (991K) What I am trying to work out is whether there is any way I can mount the guest UFS partition on the host with volmode=guest - with volmode=default (ie. geom when vfs.zfs.vol.mode=1) the device does show up as a geom provider and you can just mount the partition from /dev/vol directly. The ZFS man page suggests that you can’t but given that you can clearly mount from within a VM was wondering is there is any way round this on the host (I am trying to clone a disk device to run multiple bhyve instances but want to mount and modify some of the rc.conf parameters before passing to bhyve) volmode=default | geom | dev | none This property specifies how volumes should be exposed to the OS. Setting it to geom exposes volumes as geom(4) providers, providing maximal functionality. Setting it to dev exposes volumes only as cdev device in devfs. Such volumes can be accessed only as raw disk device files, i.e. they can not be partitioned, mounted, participate in RAIDs, etc, but they are faster, and in some use scenarios with untrusted consumer, such as NAS or VM storage, can be more safe. Volumes with property set to none are not exposed outside ZFS, but can be snapshoted, cloned, replicated, etc, that can be suitable for backup purposes. Value default means that volumes exposition is controlled by system-wide sysctl/tunable vfs.zfs.vol.mode, where geom, dev and none are encoded as 1, 2 and 3 respectively. The default values is geom. This property can be changed any time, but so far it is processed only during volume creation and pool import. Using volmode=default (geom - vfs.zfs.vol.mode=1) causes the installer to fail when you try to create a UFS filesystem under bhyve - it is possible to get round this by creating the partitions manually but my preference would be to use volmode=dev. Paul
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 14:19:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1CB2D883 for ; Sat, 27 Dec 2014 14:19:18 +0000 (UTC) Received: from mail-wg0-f53.google.com (mail-wg0-f53.google.com [74.125.82.53]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A57FF66525 for ; Sat, 27 Dec 2014 14:19:17 +0000 (UTC) Received: by mail-wg0-f53.google.com with SMTP id l18so15884027wgh.40 for ; Sat, 27 Dec 2014 06:19:15 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=/zZWYchlsfS4eUgUxBLQJr6/N8imykw9k9ln4hp+mRM=; b=CbxlQSbnpTKzhaZYDWhUiOfVUdU+Zkl+kW+cqGQvtT7R/MkY/pPCxE+YYumgctFKXr 6wX0/gbXL5lWFw4U3k4swmTiRNT6/qbx10dfxyP5OONxGFgtz7+MwgMqC4fiXuQrXwAn BO09COvFPh034XT6ayJrGv0iK1iANpJeaBHtmZOTKggZDDY3yG+GRbs4RkZw9e7OL7HY E2qXQCI85IRv6jlrNNVdwsO8ra48Xl2gP2GYiruLschZC//ziIyslZqr/uBJrka4KptP COB6+VFWv3WCj2AsKTeA/3L2KG6zq9XWv6BnJ/GfqxCuAk4OkO5jHEwcD2oQPGnO3fwX /WkA== X-Gm-Message-State: ALoCoQllc7bmMj2dDGdSgdNP5KhTHxzPZSVseyQpS0CLlxYHn8zSMXYCBvy1vUrEBRQeJvY9/sc1 X-Received: by 10.180.9.241 with SMTP id d17mr78445387wib.13.1419689955654; Sat, 27 Dec 2014 06:19:15 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk.
[82.69.141.170]) by mx.google.com with ESMTPSA id d2sm42393707wjs.32.2014.12.27.06.19.14 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sat, 27 Dec 2014 06:19:14 -0800 (PST) Message-ID: <549EBFD9.8090407@multiplay.co.uk> Date: Sat, 27 Dec 2014 14:19:05 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS: Mount partition from ZVOL with volmode=dev References: <91E1211B-7E84-472B-8098-630AE8C97251@gmail.com> <32BEFAB7-936E-42F0-AE75-FB978C13885C@gmail.com> In-Reply-To: <32BEFAB7-936E-42F0-AE75-FB978C13885C@gmail.com> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 14:19:18 -0000 On 27/12/2014 13:18, Paul Chakravarti wrote: >> On 26/12/2014 23:54, Paul Chakravarti wrote: >>> Hello, >>> >>> I am using a ZVOL configured with 'volmode=dev' as the virtio disk device for >>> a bhyve instance (which works fine) but was trying to workout whether there >>> was any way of mounting the underlying partitions on the host system - the >>> partitions don’t show up under /dev/zvol as separate devices with >>> 'volmode=dev' so was wondering is there is any other way of getting at these >>> other than mounting in a bhyve instance? >>> >>> Thanks, Paul >> I cant reproduce this on HEAD r276067 >> >> zfs create -V 8192 -o volmode=dev tank/tvol >> root at head:src> ls -l /dev/zvol/tank/ >> total 0 >> crw-r----- 1 root operator 0x85 Dec 27 00:08 tvol >> >> Regards >> Steve > Hi, > > Sorry - I should have been clearer. The zvol shows up on the host > system but the partitions aren’t exposed to geom. That's exactly what volmode=dev does, if you want geom ones don't specify volmode. > On the host system: > > # zfs create -V10G -o volmode=dev tank/vps/vm0 > > # zfs list -o name,used,volmode tank/vps/vm0 > NAME USED VOLMODE > tank/vps/vm0 10.3G dev > > # ls -l /dev/zvol/tank/vps/ > total 0 > crw-r----- 1 root operator 0x7b Dec 27 13:30 vm0 > > The zvol mounted on the bhyve guest: > > # bhyveload -c /dev/nmdm0A -m 512M -d /dev/zvol/tank/vps/vm0 vm0 > # bhyve -c 2 -m 512M -A -H -P -s 0:0,hostbridge -s 1:0,virtio-net,tap0 -s 2:0,lpc -s 3:0,virtio-blk,/dev/zvol/tank/vps/vm0 -l com1,/dev/nmdm0A vm0 > > On the bhyve guest this shows up as a geom device: > > root@vm0:~ # geom disk list > Geom name: vtbd0 > Providers: > 1. Name: vtbd0 > Mediasize: 10737418240 (10G) > Sectorsize: 512 > Mode: r2w2e3 > descr: (null) > ident: BHYVE-747A-2A76-FAC > fwsectors: 0 > fwheads: 0 > > And is partitioned in the guest as follows: > > root@vm0:~ # gpart show -p > => 34 20971453 vtbd0 GPT (10G) > 34 1024 vtbd0p1 freebsd-boot (512K) > 1058 19919872 vtbd0p2 freebsd-ufs (9.5G) > 19920930 1048576 vtbd0p3 freebsd-swap (512M) > 20969506 1981 - free - (991K) > > What I am trying to work out is whether there is any way I can mount the guest > UFS partition on the host with volmode=guest - with volmode=default (ie. geom There is no volmode=guest, where did you get that from? > when vfs.zfs.vol.mode=1) the device does show up as a geom provider and you can > just mount the partition from /dev/vol directly. 
The ZFS man page suggests that > you can’t but given that you can clearly mount from within a VM was wondering > is there is any way round this on the host (I am trying to clone a disk device > to run multiple bhyve instances but want to mount and modify some of the > rc.conf parameters before passing to bhyve) > > volmode=default | geom | dev | none > This property specifies how volumes should be exposed to the OS. > Setting it to geom exposes volumes as geom(4) providers, providing > maximal functionality. Setting it to dev exposes volumes only as > cdev device in devfs. Such volumes can be accessed only as raw disk > device files, i.e. they can not be partitioned, mounted, participate > in RAIDs, etc, but they are faster, and in some use scenarios with > untrusted consumer, such as NAS or VM storage, can be more safe. > Volumes with property set to none are not exposed outside ZFS, but > can be snapshoted, cloned, replicated, etc, that can be suitable for > backup purposes. Value default means that volumes exposition is con- > trolled by system-wide sysctl/tunable vfs.zfs.vol.mode, where geom, > dev and none are encoded as 1, 2 and 3 respectively. The default > values is geom. This property can be changed any time, but so far it > is processed only during volume creation and pool import. > > Using volmode=default (geom - vfs.zfs.vol.mode=1) causes the installer to fail > when you try to create a UFS filesystem under bhyve - it is possible to get > round this by creating the partitions manually but my preference would be to > use volmode=dev. What error do you get? Regards Steve From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 14:28:39 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BFE8CA9D; Sat, 27 Dec 2014 14:28:39 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 5CD456662B; Sat, 27 Dec 2014 14:28:38 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AtYEAJDBnlSDaFve/2dsb2JhbABbhDSDAccEglACgR0BAQEBAX2EEyMEgRYZAgRVBgGIPrQtlQoBAQEBAQUBAQEBAQEBARqPQxkigmiBQQWJS4YViDWNCoM5IoQMIIF2fgEBAQ X-IronPort-AV: E=Sophos;i="5.07,651,1413259200"; d="scan'208";a="181380897" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 27 Dec 2014 09:28:16 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 1E833AEA36; Sat, 27 Dec 2014 09:28:16 -0500 (EST) Date: Sat, 27 Dec 2014 09:28:16 -0500 (EST) From: Rick Macklem To: FreeBSD Filesystems , John Baldwin , Konstantin Belousov Message-ID: <1190766207.2826601.1419690496079.JavaMail.root@uoguelph.ca> In-Reply-To: <1894262154.2825656.1419690232046.JavaMail.root@uoguelph.ca> Subject: RFC: new NFS mount option or restore old behaviour for Solaris server bug? 
MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_2826599_1031187847.1419690496077" X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 14:28:40 -0000 ------=_Part_2826599_1031187847.1419690496077 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Hi, The FreeBSD9.1 and earlier NFS clients almost always (unless the tod clock ticked to next second while the operation was in progress) set the mtime to the server's time (xx_TOSERVER) for exclusive open. Starting with FreeBSD9.2, the mtime would be set to the client's time due to r245508, which fixed the code for utimes() to use VA_UTIMES_NULL. This change tickled a bug in recent Solaris servers, which return NFS_OK to the Setattr RPC but don't actually set the file's mode bits. (The bug isn't tickled when mtime is set to the server's time.) I have patches to work around this in two ways: 1 - Add a new "useservertime" mount option that forces xx_TOSERVER. (This patch would force xx_TOSERVER for exclusive open.) It permits the man page to document why it is needed-->broken Solaris servers. 2 - Use xx_TOSERVER for exclusive open always. Since this was the normal behaviour until FreeBSD9.2, I don't think this would cause problems or be a POLA violation, but I can't be sure? I am leaning towards #2, since it avoids yet another mount option. However, I'd like other people's opinions on which option is better, or any other suggestions? Thanks in advance for your comments, rick ps: The trivial patch for #2 is attached, in case you are interested. 
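For readers following along, here is a minimal C sketch of the idea behind option #2, assuming the change sits in the NFS client routine that fills in attributes before the exclusive-open Setattr (the placement in nfscl_checksattr() in fs/nfsclient/nfs_clport.c is an assumption here; the attached patch below is the authoritative version):

    if (vap->va_mtime.tv_sec == VNOVAL) {
            /* No mtime supplied; pick one so a Setattr is generated. */
            vfs_timestamp(&vap->va_mtime);
            /*
             * Mark it VA_UTIMES_NULL so the Setattr is built with
             * xx_TOSERVER and the server's clock is used, as pre-9.2
             * clients did, avoiding the Solaris server bug described
             * above.
             */
            vap->va_vaflags |= VA_UTIMES_NULL;
    }
    if (vap->va_atime.tv_sec == VNOVAL)
            vap->va_atime = vap->va_mtime;

The Setattr that overwrites the exclusive-create verifier should still be issued as before; only the source of the timestamp changes.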
------=_Part_2826599_1031187847.1419690496077 Content-Type: text/x-patch; name=setservertime.patch Content-Disposition: attachment; filename=setservertime.patch Content-Transfer-Encoding: base64 LS0tIGZzL25mc2NsaWVudC9uZnNfY2xwb3J0LmMuc2F2CTIwMTQtMTItMjUgMTI6NTQ6MjUuMDAw MDAwMDAwIC0wNTAwCisrKyBmcy9uZnNjbGllbnQvbmZzX2NscG9ydC5jCTIwMTQtMTItMjUgMTI6 NTU6NDkuMDAwMDAwMDAwIC0wNTAwCkBAIC0xMDk2LDkgKzEwOTYsMTYgQEAgbmZzY2xfY2hlY2tz YXR0cihzdHJ1Y3QgdmF0dHIgKnZhcCwgc3RydQogCSAqIHVzIHRvIGRvIGEgU0VUQVRUUiBSUEMu IEZyZWVCU0Qgc2VydmVycyBzdG9yZSB0aGUgdmVyaWZpZXIKIAkgKiBpbiBhdGltZSwgYnV0IHdl IGNhbid0IHJlYWxseSBhc3N1bWUgdGhhdCBhbGwgc2VydmVycyB3aWxsCiAJICogc28gd2UgZW5z dXJlIHRoYXQgb3VyIFNFVEFUVFIgc2V0cyBib3RoIGF0aW1lIGFuZCBtdGltZS4KKwkgKiBTZXQg dGhlIFZBX1VUSU1FU19OVUxMIGZsYWcgZm9yIHRoaXMgY2FzZSwgc28gdGhhdAorCSAqIHRoZSBz ZXJ2ZXIncyB0aW1lIHdpbGwgYmUgdXNlZC4gIFRoaXMgaXMgbmVlZGVkIHRvCisJICogd29yayBh cm91bmQgYSBidWcgaW4gc29tZSBTb2xhcmlzIHNlcnZlcnMsIHdoZXJlCisJICogc2V0dGluZyB0 aGUgdGltZSBUT0NMSUVOVCBjYXVzZXMgdGhlIFNldGF0dHIgUlBDCisJICogdG8gcmV0dXJuIE5G U19PSywgYnV0IG5vdCBzZXQgdmFfbW9kZS4KIAkgKi8KLQlpZiAodmFwLT52YV9tdGltZS50dl9z ZWMgPT0gVk5PVkFMKQorCWlmICh2YXAtPnZhX210aW1lLnR2X3NlYyA9PSBWTk9WQUwpIHsKIAkJ dmZzX3RpbWVzdGFtcCgmdmFwLT52YV9tdGltZSk7CisJCXZhcC0+dmFfdmFmbGFncyB8PSBWQV9V VElNRVNfTlVMTDsKKwl9CiAJaWYgKHZhcC0+dmFfYXRpbWUudHZfc2VjID09IFZOT1ZBTCkKIAkJ dmFwLT52YV9hdGltZSA9IHZhcC0+dmFfbXRpbWU7CiAJcmV0dXJuICgxKTsK ------=_Part_2826599_1031187847.1419690496077-- From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 14:38:32 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CC037C2F for ; Sat, 27 Dec 2014 14:38:32 +0000 (UTC) Received: from mail-wi0-f179.google.com (mail-wi0-f179.google.com [209.85.212.179]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5532266735 for ; Sat, 27 Dec 2014 14:38:32 +0000 (UTC) Received: by mail-wi0-f179.google.com with SMTP id ex7so18773470wid.0 for ; Sat, 27 Dec 2014 06:38:25 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to :subject:references:in-reply-to:content-type :content-transfer-encoding; bh=R8hXejziig/oyAYVzgsWzKJskyv+ej+yvq7QgJgd9xQ=; b=aKNXURfkmCk3geMLwSRvi5fSAjlvVV6E0lKjeIC48IkA8ZsEpXJPN3l02bdcX8gPSM +hJ2BUqi21Wd5zQFcZHTyKtGIk+dCnAOV5SofpQ72bKGRZhtHJss1595Lp5nPEdvLk6s TUFpWu+RmM2hpaV6eyEcr1bqajMaHQL/VEUIdvP9YBhvVV0KqV1x/LEewd4kLLoZc889 VEhmMedHw6KwYJYwQyJx4G4l1qKMBkd8z5jj2qlss7RgX6vEFONPWGJ/QGGhyu6COZyM V5bZmwFr+uHaEBb7e4i6Z1y5SRiCBYh4qL/pZqzZMhM+XjcAp1wG0/dBp6D5i+/Kzaks ZW0A== X-Gm-Message-State: ALoCoQm6YApsC+qtxDqlvw4fxdad7SeMpKyR7IOyW/BuPPNQtAV60cUeZvrPVtlI6znWwDjP33al X-Received: by 10.180.108.143 with SMTP id hk15mr79570682wib.6.1419691105184; Sat, 27 Dec 2014 06:38:25 -0800 (PST) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. 
[82.69.141.170]) by mx.google.com with ESMTPSA id fp2sm29485145wib.8.2014.12.27.06.38.24 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sat, 27 Dec 2014 06:38:24 -0800 (PST) Message-ID: <549EC457.6010509@multiplay.co.uk> Date: Sat, 27 Dec 2014 14:38:15 +0000 From: Steven Hartland User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: Craig Yoshioka , "freebsd-fs@FreeBSD.ORG" Subject: Re: ZFS: FreeBSD 10.1 can't import/mount FreeBSD 9 pool References: <549DF2B1.3030909@jrv.org> <549DF7EB.1080308@multiplay.co.uk> <549E18AB.8060708@jrv.org> <549E1C4F.7090400@multiplay.co.uk> In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 14:38:32 -0000 This was an upstream change so I couldn't comment on the details. The intention is clear: to prevent a pool getting into a state where it has no space and from which it can't recover, but why so much of the pool is reserved is unclear, particularly for large pools, e.g. on an 8 TB pool 256GB is reserved. I can see the benefit of making this configurable on a pool-by-pool basis, or at least capping it to a reasonable value, as well as making it backwards compatible so scenarios like this don't occur, but there may be implementation details which prevent this; I'm not sure, as I've not looked into the details. If this is where your mind is too then I would suggest looking to raise the issue upstream. On 27/12/2014 05:04, Craig Yoshioka wrote: > I brought this up before, but I will again. This was not a great change. I also have archival drives which can now give me problems. Why was this reserved space not implemented as a user configurable FS option? > > Sent from my iPhone > >> On Dec 26, 2014, at 8:41 PM, Steven Hartland wrote: >> >> It was introduced by: >> https://svnweb.freebsd.org/base?view=revision&revision=268473 >> >> Tuning of it was added by: >> https://svnweb.freebsd.org/base?view=revision&revision=274674 >> >> Hope this helps. >> >> Regards >> Steve >> >>> On 27/12/2014 02:25, James R. Van Artsdalen wrote: >>> Oops - this will break every single one of my archival pools. >>> >>> If there is no userland ability to enable backwards compatibility, can >>> you tell me where it is in the source or about when it was added? >>> >>>> On 12/26/2014 6:06 PM, Steven Hartland wrote: >>>> Later versions reserve space for deletions etc, so if your volume is >>>> too full could fail in this manor. >>>> >>>> The fix would be to clear down space so this is no longer an issue. >>>> >>>>> On 26/12/2014 23:43, James R. Van Artsdalen wrote: >>>>> FreeBSD bigtex.housenet.jrv 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #2 >>>>> r273476M: Thu Oct 23 20:39:40 CDT 2014 >>>>> james@bigtex.housenet.jrv:/usr/obj/usr/src/sys/GENERIC amd64 >>>>> >>>>> A pool created by a FreeBSD 9 system was imported into FreeBSD 10.1 but >>>>> failed to create the recursive mountpoints as shown below. >>>>> >>>>> What's especially interesting is that the free space reported by >>>>> zpool(1) and zfs(1) are wildly different, even though there are no >>>>> reservations. >>>>> >>>>> Note that I was able to do a zpool upgrade, but that zfs upgrade failed >>>>> on the children datasets.
>>>>> >>>>> # zpool import SAS01 >>>>> cannot mount '/SAS01/t03': failed to create mountpoint >>>>> cannot mount '/SAS01/t04': failed to create mountpoint >>>>> cannot mount '/SAS01/t05': failed to create mountpoint >>>>> cannot mount '/SAS01/t06': failed to create mountpoint >>>>> cannot mount '/SAS01/t07': failed to create mountpoint >>>>> cannot mount '/SAS01/t08': failed to create mountpoint >>>>> cannot mount '/SAS01/t12': failed to create mountpoint >>>>> cannot mount '/SAS01/t13': failed to create mountpoint >>>>> # zpool list SAS01 >>>>> NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH >>>>> ALTROOT >>>>> SAS01 43.5T 42.6T 948G - 0% 97% 1.00x ONLINE - >>>>> # zfs list -p SAS01 >>>>> NAME USED AVAIL REFER MOUNTPOINT >>>>> SAS01 33279222543840 0 314496 /SAS01 >>>>> # zpool get all SAS01 >>>>> NAME PROPERTY VALUE >>>>> SOURCE >>>>> SAS01 size 43.5T - >>>>> SAS01 capacity 97% - >>>>> SAS01 altroot - >>>>> default >>>>> SAS01 health ONLINE - >>>>> SAS01 guid 1341452135 >>>>> default >>>>> SAS01 version - >>>>> default >>>>> SAS01 bootfs - >>>>> default >>>>> SAS01 delegation on >>>>> default >>>>> SAS01 autoreplace off >>>>> default >>>>> SAS01 cachefile - >>>>> default >>>>> SAS01 failmode wait >>>>> default >>>>> SAS01 listsnapshots off >>>>> default >>>>> SAS01 autoexpand off >>>>> default >>>>> SAS01 dedupditto 0 >>>>> default >>>>> SAS01 dedupratio 1.00x - >>>>> SAS01 free 948G - >>>>> SAS01 allocated 42.6T - >>>>> SAS01 readonly off - >>>>> SAS01 comment - >>>>> default >>>>> SAS01 expandsize - - >>>>> SAS01 freeing 0 >>>>> default >>>>> SAS01 fragmentation 0% - >>>>> SAS01 leaked 0 >>>>> default >>>>> SAS01 feature@async_destroy enabled >>>>> local >>>>> SAS01 feature@empty_bpobj active >>>>> local >>>>> SAS01 feature@lz4_compress active >>>>> local >>>>> SAS01 feature@multi_vdev_crash_dump enabled >>>>> local >>>>> SAS01 feature@spacemap_histogram active >>>>> local >>>>> SAS01 feature@enabled_txg active >>>>> local >>>>> SAS01 feature@hole_birth active >>>>> local >>>>> SAS01 feature@extensible_dataset enabled >>>>> local >>>>> SAS01 feature@embedded_data active >>>>> local >>>>> SAS01 feature@bookmarks enabled >>>>> local >>>>> SAS01 feature@filesystem_limits enabled >>>>> local >>>>> # zfs get all SAS01 >>>>> NAME PROPERTY VALUE SOURCE >>>>> SAS01 type filesystem - >>>>> SAS01 creation Tue Dec 23 2:51 2014 - >>>>> SAS01 used 30.3T - >>>>> SAS01 available 0 - >>>>> SAS01 referenced 307K - >>>>> SAS01 compressratio 1.00x - >>>>> SAS01 mounted yes - >>>>> SAS01 quota none default >>>>> SAS01 reservation none default >>>>> SAS01 recordsize 128K default >>>>> SAS01 mountpoint /SAS01 default >>>>> SAS01 sharenfs off default >>>>> SAS01 checksum on default >>>>> SAS01 compression off default >>>>> SAS01 atime on default >>>>> SAS01 devices on default >>>>> SAS01 exec on default >>>>> SAS01 setuid on default >>>>> SAS01 readonly off default >>>>> SAS01 jailed off default >>>>> SAS01 snapdir hidden default >>>>> SAS01 aclmode discard default >>>>> SAS01 aclinherit restricted default >>>>> SAS01 canmount on default >>>>> SAS01 xattr off temporary >>>>> SAS01 copies 1 default >>>>> SAS01 version 5 - >>>>> SAS01 utf8only off - >>>>> SAS01 normalization none - >>>>> SAS01 casesensitivity sensitive - >>>>> SAS01 vscan off default >>>>> SAS01 nbmand off default >>>>> SAS01 sharesmb off default >>>>> SAS01 refquota none default >>>>> SAS01 refreservation none default >>>>> SAS01 primarycache all default >>>>> SAS01 secondarycache all default >>>>> SAS01 usedbysnapshots 0 - >>>>> SAS01 
usedbydataset 307K - >>>>> SAS01 usedbychildren 30.3T - >>>>> SAS01 usedbyrefreservation 0 - >>>>> SAS01 logbias latency default >>>>> SAS01 dedup off default >>>>> SAS01 mlslabel - >>>>> SAS01 sync standard default >>>>> SAS01 refcompressratio 1.00x - >>>>> SAS01 written 307K - >>>>> SAS01 logicalused 30.2T - >>>>> SAS01 logicalreferenced 12K - >>>>> SAS01 volmode default default >>>>> SAS01 filesystem_limit none default >>>>> SAS01 snapshot_limit none default >>>>> SAS01 filesystem_count none default >>>>> SAS01 snapshot_count none default >>>>> SAS01 redundant_metadata all default >>>>> # zpool status SAS01 >>>>> pool: SAS01 >>>>> state: ONLINE >>>>> scan: scrub repaired 0 in 20h26m with 0 errors on Thu Dec 25 >>>>> 20:57:34 2014 >>>>> config: >>>>> >>>>> NAME STATE READ WRITE CKSUM >>>>> SAS01 ONLINE 0 0 0 >>>>> raidz2-0 ONLINE 0 0 0 >>>>> da45 ONLINE 0 0 0 >>>>> da44 ONLINE 0 0 0 >>>>> da47 ONLINE 0 0 0 >>>>> da43 ONLINE 0 0 0 >>>>> da42 ONLINE 0 0 0 >>>>> da46 ONLINE 0 0 0 >>>>> da41 ONLINE 0 0 0 >>>>> da40 ONLINE 0 0 0 >>>>> >>>>> errors: No known data errors >>>>> # zfs upgrade -r SAS01 >>>>> cannot set property for 'SAS01/t03': out of space >>>>> cannot set property for 'SAS01/t04': out of space >>>>> cannot set property for 'SAS01/t05': out of space >>>>> cannot set property for 'SAS01/t06': out of space >>>>> cannot set property for 'SAS01/t07': out of space >>>>> cannot set property for 'SAS01/t08': out of space >>>>> cannot set property for 'SAS01/t12': out of space >>>>> cannot set property for 'SAS01/t13': out of space >>>>> 0 filesystems upgraded >>>>> 1 filesystems already at this version >>>>> # >>>>> _______________________________________________ >>>>> freebsd-fs@freebsd.org mailing list >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>>> _______________________________________________ >>>> freebsd-fs@freebsd.org mailing list >>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 22:01:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 30163433; Sat, 27 Dec 2014 22:01:30 +0000 (UTC) Received: from chez.mckusick.com (chez.mckusick.com [IPv6:2001:5a8:4:7e72:4a5b:39ff:fe12:452]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1014064DE1; Sat, 27 Dec 2014 22:01:30 +0000 (UTC) Received: from chez.mckusick.com (localhost [127.0.0.1]) by chez.mckusick.com (8.14.3/8.14.3) with ESMTP id sBRM1QYp062205; Sat, 27 Dec 2014 14:01:27 -0800 (PST) (envelope-from mckusick@chez.mckusick.com) Message-Id: <201412272201.sBRM1QYp062205@chez.mckusick.com> To: Rick Macklem Subject: Re: RFC: new NFS mount option or restore old behaviour for Solaris server bug? 
In-reply-to: <1190766207.2826601.1419690496079.JavaMail.root@uoguelph.ca> Date: Sat, 27 Dec 2014 14:01:26 -0800 From: Kirk McKusick Cc: FreeBSD Filesystems , Konstantin Belousov X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 22:01:30 -0000 > Date: Sat, 27 Dec 2014 09:28:16 -0500 (EST) > From: Rick Macklem > To: FreeBSD Filesystems , > John Baldwin , Konstantin Belousov > Subject: RFC: new NFS mount option or restore old behaviour for Solaris > server bug? > > Hi, > > The FreeBSD9.1 and earlier NFS clients almost always (unless the > tod clock ticked to next second while the operation was in progress) > set the mtime to the server's time (xx_TOSERVER) for exclusive open. > Starting with FreeBSD9.2, the mtime would be set to the client's time > due to r245508, which fixed the code for utimes() to use VA_UTIMES_NULL. > > This change tickled a bug in recent Solaris servers, which return > NFS_OK to the Setattr RPC but don't actually set the file's mode bits. > (The bug isn't tickled when mtime is set to the server's time.) > I have patches to work around this in two ways: > 1 - Add a new "useservertime" mount option that forces xx_TOSERVER. > (This patch would force xx_TOSERVER for exclusive open.) > It permits the man page to document why it is needed-->broken Solaris servers. > 2 - Use xx_TOSERVER for exclusive open always. Since this was the normal > behaviour until FreeBSD9.2, I don't think this would cause problems or > be a POLA violation, but I can't be sure? > > I am leaning towards #2, since it avoids yet another mount option. > However, I'd like other people's opinions on which option is better, > or any other suggestions? > > Thanks in advance for your comments, rick I lean towards solution #2. It tracks historic practice and avoids yet another mount flag. 
Kirk McKusick From owner-freebsd-fs@FreeBSD.ORG Sat Dec 27 23:32:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 632E8F0A; Sat, 27 Dec 2014 23:32:56 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 08FABEDA; Sat, 27 Dec 2014 23:32:55 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ApkGAIdAn1SDaFve/2dsb2JhbABcFoNCWAEDgwHDYYVzgR8BAQEBAX2ENgSBBwINGQJfAYg+Da8ZlHsBAQEBBgEBAQEBARyBIY4igyOBQQWJS4JjhSaGcYUugnyHaSKEDCAygUR+AQEB X-IronPort-AV: E=Sophos;i="5.07,653,1413259200"; d="scan'208";a="179638411" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 27 Dec 2014 18:32:49 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id A94DCB3F2B; Sat, 27 Dec 2014 18:32:48 -0500 (EST) Date: Sat, 27 Dec 2014 18:32:48 -0500 (EST) From: Rick Macklem To: FreeBSD Filesystems , Kirk McKusick , Gleb Kurtsou , Konstantin Belousov Message-ID: <1966344327.2961798.1419723168645.JavaMail.root@uoguelph.ca> Subject: patch that makes d_fileno 64bits MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 27 Dec 2014 23:32:56 -0000 Hi, Kirk and Gleb Kurtsou (plus some others) are working through the difficult job of changing ino_t to 64bits. (Changes to syscalls, libraries, etc.) This patch: http://people.freebsd.org/~rmacklem/64bitfileno.patch is somewhat tangential to the above, in that it changes the d_fileno field of "struct dirent" and va_fileid to uint64_t. It also includes adding a field called d_cookie to "struct dirent", which is the position of the next directory entry in the underlying file system. A majority of this patch are changes to the NFS code, but it includes a simple "new struct dirent"->"old struct dirent32" copy routine for getdirentries(2) and small changes to all the file systems so they fill in the "new struct dirent". This patch can be applied to head/current and the resultant kernel should work fine, although I've only been able to test some of the file systems. However, DO NOT propagate the changes to sys/sys/dirent.h out to userland (/usr/include/sys/dirent.h) and build a userland from it or things will get badly broken. I don't know if Kirk and/or Gleb will find some of this useful for their updates to project/ino64, but it will allow people to test these changes. (It modifies the NFS server so that it no longer uses the "cookie" args to VOP_READDIR(), but that part can easily be removed from the patch.) If folks can test this patch, I think it would be helpful for the effort of changing ino_t to 64bits. Have fun with it, rick
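As a rough illustration of the "new struct dirent"->"old struct dirent32" copy routine mentioned above, a hedged C sketch follows. The old layout and the names old_dirent32 and dirent_copy32 are assumptions made up for illustration only; the patch at the URL above contains the real code.

    /* Assumed pre-change layout: 32-bit d_fileno, no d_cookie. */
    struct old_dirent32 {
            uint32_t d_fileno;
            uint16_t d_reclen;
            uint8_t  d_type;
            uint8_t  d_namlen;
            char     d_name[255 + 1];
    };

    /*
     * Copy a new-style kernel dirent (64-bit d_fileno plus d_cookie)
     * into the old layout for existing getdirentries(2) callers.
     * d_cookie has no equivalent in the old layout and is dropped;
     * d_fileno is simply truncated to 32 bits.
     */
    static void
    dirent_copy32(const struct dirent *dp, struct old_dirent32 *dp32)
    {
            dp32->d_fileno = (uint32_t)dp->d_fileno;
            dp32->d_type = dp->d_type;
            dp32->d_namlen = dp->d_namlen;
            memcpy(dp32->d_name, dp->d_name, dp->d_namlen);
            dp32->d_name[dp->d_namlen] = '\0';
            /* The record length must be recomputed for the smaller struct. */
            dp32->d_reclen = roundup2(offsetof(struct old_dirent32, d_name) +
                dp->d_namlen + 1, sizeof(uint32_t));
    }

A shim along these lines is what would let existing userland keep working while the kernel-internal dirent grows, which is presumably why the warning above about not propagating the new sys/sys/dirent.h to /usr/include matters.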