Date:      Sun, 27 Dec 2020 17:50:20 +0200
From:      Konstantin Belousov <kostikbel@gmail.com>
To:        Rick Macklem <rmacklem@uoguelph.ca>
Cc:        J David <j.david.lists@gmail.com>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: Major issues with nfsv4
Message-ID:  <X+itPGhDz6Vldqh+@kib.kiev.ua>
In-Reply-To: <YQXPR0101MB096897CA4344DFDC8D22DFE9DDDB0@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>
References:  <YQXPR0101MB0968B17010B3B36C8C41FDE1DDC90@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <X9Q9GAhNHbXGbKy7@kib.kiev.ua> <YQXPR0101MB0968C7629D57CA21319E50C2DDC90@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <X9UDArKjUqJVS035@kib.kiev.ua> <CABXB=RRNnW9nNhFCJS1evNUTEX9LNnzyf2gOmZHHGkzAoQxbPw@mail.gmail.com> <YQXPR0101MB0968B120A417AF69CEBB6A12DDC80@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <X9aGwshgh7Cwiv8p@kib.kiev.ua> <CABXB=RTFSAEZvp+moiF+rE9vpEjJVacLYa6G=yP641f9oHJ1zw@mail.gmail.com> <YQXPR0101MB09681D2CB8FBD5DDE907D5A5DDC40@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM> <YQXPR0101MB096897CA4344DFDC8D22DFE9DDDB0@YQXPR0101MB0968.CANPRD01.PROD.OUTLOOK.COM>

On Sat, Dec 26, 2020 at 11:10:01PM +0000, Rick Macklem wrote:
> Although you have not posted the value for
> vfs.deferred_inact, if that value has become
> relatively large when the problem occurs,
> it might support this theory w.r.t. how this
> could happen.
> 
> Two processes in different jails do "stat()" or
> similar on the same file in the NFS file system
> at basically the same time.
> --> They both get shared locked nullfs vnodes,
>       both of which hold shared locks on the
>       same lowervp (the NFS client one).
> --> They both do vput() on these nullfs vnodes
>       concurrently.
> 
> If both call vput_final() concurrently, I think both
> could have the VOP_LOCK(vp, LK_UPGRADE | LK_INTERLOCK |
>    LK_NOWAIT) at line #3147 fail, since this will call null_lock()
> for both nullfs vnodes and then both null_lock() calls will
> do VOP_LOCK(lvp, flags); at line #705.
> --> The call fails for both processes, since the other one still
>       holds the shared lock on the NFS client vnode.
> 
> If I have this right, then both processes end up calling
> vdefer_inactive() for the upper nullfs vnodes.
> 
> If this is what is happening, then when does the VOP_INACTIVE()
> get called for the lowervp?
> 
> I see vfs_deferred_inactive() in sys/kern/vfs_subr.c, but I do not
> know when/how it gets called?
Right, vfs_deferred_inactive() is one of the ways missed inactivations
are handled. If upon vput() the lock is only held shared and the upgrade
fails, the vnode is marked VI_OWEINACT and put onto the 'lazy' list,
which is processed by vfs_sync(MNT_LAZY). That is typically called from
the syncer, i.e. every 60 seconds. There, if the vnode is still
unreferenced, it is inactivated.

Another place where inactivation can occur is reclamation. There, in
vgonel(), we call VOP_INACTIVE() if VI_OWEINACT is set. In principle
this is redundant, because a correct filesystem must do the same cleanup
(and more) at reclamation as at inactivation.  But we also call
VOP_CLOSE(FNONBLOCK) before VOP_RECLAIM().

Looking at this from another angle: if inactivation for NFSv4 vnodes is
not performed for longer than 2 minutes, perhaps there is a reference
leak. It would not be due to VFS forgetting about a due VOP_INACTIVE()
call.


