Date:      Fri, 25 Nov 2016 12:54:07 +0000
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Konstantin Belousov <kostikbel@gmail.com>
Cc:        Alan Somers <asomers@freebsd.org>, FreeBSD CURRENT <freebsd-current@freebsd.org>
Subject:   Re: NFSv4 performance degradation with 12.0-CURRENT client
Message-ID:  <YTXPR01MB018969FB21212700C4043AEADD890@YTXPR01MB0189.CANPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <20161125084106.GX54029@kib.kiev.ua>
References:  <CAOtMX2jJ2XoQyVG1c04QL7NTJn1pg38s=XEgecE38ea0QoFAOw@mail.gmail.com> <20161124090811.GO54029@kib.kiev.ua> <YTXPR01MB0189E0B1DB5B16EE6B388B7DDDB60@YTXPR01MB0189.CANPRD01.PROD.OUTLOOK.COM> <CAOtMX2hBXAJN_udED-u5%2B6UznR2%2BW88xgb=RqKSZL65Z3%2BcKOw@mail.gmail.com> <YTXPR01MB0189C3E11821E4F7B7DF1814DDB60@YTXPR01MB0189.CANPRD01.PROD.OUTLOOK.COM>, <20161125084106.GX54029@kib.kiev.ua>

Konstantin Belousov wrote:

>On Thu, Nov 24, 2016 at 10:45:51PM +0000, Rick Macklem wrote:
>> asomers@gmail.com wrote:
>> >OpenOwner     Opens LockOwner     Locks    Delegs  LocalOwn LocalOpen LocalLOwn
>> >     5638    141453         0         0         0         0         0         0
>> Ok, I think this shows us the problem. 141453 opens is a lot and the client would have
>> to check these every time another open is done (there goes all that CPU;-).
>>
>> Now, why has this occurred?
>> Well, the NFSv4 client can't close NFSv4 Opens on a vnode until that vnode's
>> v_usecount goes to 0. This is because mmap'd files might do I/O after the file
>> descriptor is closed.
>> Now, hopefully Kostik will know something about nullfs and can help with this.
>> My guess is that nullfs ends up acquiring a refcnt on the NFS vnode so the
>> v_usecount doesn't go to 0 and, therefore, the client never closes the NFSv4 Opens.
>> Kostik, do you know if this is the case and whether or not it can be changed?
>You are absolutely right. Nullfs vnode keeps a reference to the lower
>vnode which is below the nullfs one, i.e. to the nfs vnode in this case.
>If cache option is specified for the nullfs mount (default), the nullfs
>vnodes are cached normally to avoid the cost of creating and destroying
>nullfs vnode on each operation, and related cost of the exclusive locks
>on the lower vnode.
>
>An answer to my question in the previous mail, to try with the nocache
>option, would confirm this. Initially I suspected that v_hash
>is calculated differently for NFSv3 and v4 mounts, but opens
>accumulating until the use ref is dropped would explain things as well.
Hopefully Alan can test this and let us know if "nocache" on the nullfs mount
fixes the problem.
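(For the test, it should just be a matter of remounting the nullfs layer with
that option, something like "mount -t nullfs -o nocache <nfsv4-mount> <null-mount>"
with the real paths filled in, and then watching whether the Opens count reported
by "nfsstat -e -c" stops growing.)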

>Assuming your diagnosis is correct, are you in fact stating that the
>current VFS KPI is flawed?  It sounds as if either another callback
>or a counter needs to exist to track the number of mapping references to the
>vm object of the vnode, in addition to VOP_OPEN/VOP_CLOSE?
>
>Currently a rough estimation of the number of mappings, which is sometimes
>slightly wrong, can be obtained by the expression
>        vp->v_object->ref_count - vp->v_object->shadow_count
Well, ideally there would be a VOP_MMAPDONE() or something like that, which
would tell the NFSv4 client that I/O is done on the vnode so it can close it.
If there were some way for the NFSv4 VOP_CLOSE() to be able to tell if the file
has been mmap'd, that would help, since it could close the ones that are not
mmap'd on the last descriptor close.
(A counter wouldn't be as useful, since NFSv4 would have to keep checking it to
 see if it can do the close yet, but it might still be doable.)
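Just to make that concrete, here is a rough, untested sketch of the sort of check
I have in mind, using the ref_count - shadow_count estimate you quoted above.
(ncl_close_sketch() and ncl_release_opens() are made-up names for illustration,
not anything in the tree, and locking is elided.)

/*
 * Sketch only: guess whether the vnode is still mmap'd and, if not,
 * release the NFSv4 Opens on the last descriptor close instead of
 * waiting for v_usecount to reach 0 and VOP_INACTIVE() to run.
 */
#include <sys/param.h>
#include <sys/vnode.h>
#include <vm/vm.h>
#include <vm/vm_object.h>

static void ncl_release_opens(struct vnode *vp);	/* hypothetical helper */

static int
ncl_close_sketch(struct vop_close_args *ap)
{
	struct vnode *vp = ap->a_vp;
	vm_object_t obj = vp->v_object;
	int mappings;

	/* Rough (sometimes slightly wrong) count of mapping references. */
	mappings = (obj != NULL) ? obj->ref_count - obj->shadow_count : 0;

	if (mappings <= 0)
		ncl_release_opens(vp);
	return (0);
}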
>
>> >LocalLock
>> >        0
>> >Rpc Info:
>> >TimedOut   Invalid X Replies   Retries  Requests
>> >        0         0         0         0       662
>> >Cache Info:
>> >Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
>> >     1275        58       837       121         0         0         0         0
>> >BioRLHits    Misses BioD Hits    Misses DirE Hits    Misses
>> >        1         0         6         0         1         0
>> >
>> [more stuff snipped]
>> >What role could nullfs be playing?
>> As noted above, my hunch is that nullfs is acquiring a refcnt on the NFS client vnode such
>> that the v_usecount doesn't go to zero (at least for a long time) and, without
>> a VOP_INACTIVE() on the NFSv4 vnode, the NFSv4 Opens don't get closed and
>> accumulate.
>> (If that isn't correct, it is somehow interfering with the client Closing the NFSv4 Opens
>>  in some other way.)
>>
>The following patch should automatically unset the cache option for nullfs
>mounts over NFSv4 filesystems.
>
>diff --git a/sys/fs/nfsclient/nfs_clvfsops.c b/sys/fs/nfsclient/nfs_clvfsops.c
>index 524a372..a7e9fe3 100644
>--- a/sys/fs/nfsclient/nfs_clvfsops.c
>+++ b/sys/fs/nfsclient/nfs_clvfsops.c
>@@ -1320,6 +1320,8 @@ out:
>                MNT_ILOCK(mp);
>                mp->mnt_kern_flag |= MNTK_LOOKUP_SHARED | MNTK_NO_IOPF |
>                    MNTK_USES_BCACHE;
>+               if ((VFSTONFS(mp)->nm_flag & NFSMNT_NFSV4) != 0)
>+                       mp->mnt_kern_flag |= MNTK_NULL_NOCACHE;
>                MNT_IUNLOCK(mp);
>        }
>        return (error);
>diff --git a/sys/fs/nullfs/null_vfsops.c b/sys/fs/nullfs/null_vfsops.c
>index 49bae28..de05e8b 100644
>--- a/sys/fs/nullfs/null_vfsops.c
>+++ b/sys/fs/nullfs/null_vfsops.c
>@@ -188,7 +188,8 @@ nullfs_mount(struct mount *mp)
>        }
>
>       xmp->nullm_flags |= NULLM_CACHE;
>-       if (vfs_getopt(mp->mnt_optnew, "nocache", NULL, NULL) == 0)
>+       if (vfs_getopt(mp->mnt_optnew, "nocache", NULL, NULL) == 0 ||
>+           (xmp->nullm_vfs->mnt_kern_flag & MNTK_NULL_NOCACHE) != 0)
>                xmp->nullm_flags &= ~NULLM_CACHE;
>
>        MNT_ILOCK(mp);
>diff --git a/sys/sys/mount.h b/sys/sys/mount.h
>index 94cabb6..b6f9fec 100644
>--- a/sys/sys/mount.h
>+++ b/sys/sys/mount.h
>@@ -370,7 +370,8 @@ void          __mnt_vnode_markerfree_active(struct vnode **mvp, struct mount *);
> #define        MNTK_SUSPEND    0x08000000      /* request write suspension */
> #define        MNTK_SUSPEND2   0x04000000      /* block secondary writes */
> #define        MNTK_SUSPENDED  0x10000000      /* write operations are suspended */
>-#define        MNTK_UNUSED1    0x20000000
>+#define        MNTK_NULL_NOCACHE       0x20000000 /* auto disable cache for nullfs
>+                                             mounts over this fs */
> #define MNTK_LOOKUP_SHARED     0x40000000 /* FS supports shared lock lookups */
> #define        MNTK_NOKNOTE    0x80000000      /* Don't send KNOTEs from VOP hooks */
If the "nocache" option fixes Alan's problem, then I think a patch like thi=
s is a good
idea.

Does unionfs suffer from the same issue?
- I just took a glance and it doesn't have a "nocache" mount option.
I can probably do a little test later today to see if unionfs seems to suffer from
the same "accumulating opens" issue.

Thanks for looking at this, rick



