Date:      Wed, 20 Mar 2013 21:14:37 -0400 (EDT)
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Konstantin Belousov <kostikbel@gmail.com>
Cc:        Jeremy Chadwick <jdc@koitsu.org>, Michael Landin Hostbaek <mich@FreeBSD.org>, freebsd-stable@FreeBSD.org, John Baldwin <jhb@FreeBSD.org>, Andriy Gapon <avg@FreeBSD.org>
Subject:   Re: Core Dump / panic sleeping thread
Message-ID:  <223607151.4129243.1363828477555.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <20130320173100.GF3794@kib.kiev.ua>

Konstantin Belousov wrote:
> On Wed, Mar 20, 2013 at 11:37:56AM -0400, Rick Macklem wrote:
> > Konstantin Belousov wrote:
> > > On Wed, Mar 20, 2013 at 12:13:05PM +0100, Michael Landin Hostbaek
> > > wrote:
> > > >
> > > > On Mar 20, 2013, at 10:49 AM, Konstantin Belousov
> > > > <kostikbel@gmail.com> wrote:
> > > > >
> > > > > I do not like it. As I said in the previous response to Andrey,
> > > > > I think that moving the vnode_pager_setsize() after the unlock is
> > > > > better, since it reduces races with another thread seeing a
> > > > > half-done attribute update or making an attribute change
> > > > > simultaneously.
> > > >
> > > > OK - so should I wait for another patch - or?
> > >
> > > I think the following is what I mean. As an additional note, why
> > > does the nfs client not trim the buffers when the server reports a
> > > node size change?
> > >
> > > diff --git a/sys/fs/nfsclient/nfs_clport.c b/sys/fs/nfsclient/nfs_clport.c
> > > index a07a67f..4fe2e35 100644
> > > --- a/sys/fs/nfsclient/nfs_clport.c
> > > +++ b/sys/fs/nfsclient/nfs_clport.c
> > > @@ -361,6 +361,8 @@ nfscl_loadattrcache(struct vnode **vpp, struct nfsvattr *nap, void *nvaper,
> > >          struct nfsnode *np;
> > >          struct nfsmount *nmp;
> > >          struct timespec mtime_save;
> > > +        u_quad_t nsize;
> > > +        int setnsize;
> > >
> > >          /*
> > >           * If v_type == VNON it is a new node, so fill in the v_type,
> > > @@ -418,6 +420,7 @@ nfscl_loadattrcache(struct vnode **vpp, struct nfsvattr *nap, void *nvaper,
> > >          } else
> > >                  vap->va_fsid = vp->v_mount->mnt_stat.f_fsid.val[0];
> > >          np->n_attrstamp = time_second;
> > > +        setnsize = 0;
> > >          if (vap->va_size != np->n_size) {
> > >                  if (vap->va_type == VREG) {
> > >                          if (dontshrink && vap->va_size < np->n_size) {
> > > @@ -444,10 +447,13 @@ nfscl_loadattrcache(struct vnode **vpp, struct nfsvattr *nap, void *nvaper,
> > >                                          np->n_size = vap->va_size;
> > >                                          np->n_flag |= NSIZECHANGED;
> > >                                  }
> > > -                        vnode_pager_setsize(vp, np->n_size);
> > >                  } else {
> > >                          np->n_size = vap->va_size;
> > >                  }
> > > +                if (vap->va_type == VREG || vap->va_type == VDIR) {
> > > +                        setnsize = 1;
> > > +                        nsize = vap->va_size;
> > I might have used np->n_size here, since that is what is passed
> > as the argument in the pre-patched version, but since np->n_size
> > should equal vap->va_size (they are set the same for all cases in
> > the code at this point), it doesn't really matter.
> >
> > I have no idea what the implications of doing vnode_pager_setsize()
> > for VDIR are, but Kostik is much more conversant than I am on this,
> > so if he thinks it's ok, that's fine with me.
> > > +                }
> > >          }
> > >          /*
> > >           * The following checks are added to prevent a race between (say)
> > > @@ -480,6 +486,8 @@ nfscl_loadattrcache(struct vnode **vpp, struct nfsvattr *nap, void *nvaper,
> > >          KDTRACE_NFS_ATTRCACHE_LOAD_DONE(vp, vap, 0);
> > >  #endif
> > >          NFSUNLOCKNODE(np);
> > > +        if (setnsize)
> > > +                vnode_pager_setsize(vp, nsize);
> > >          return (0);
> > >  }
> > Yes, I think Kostik's version of the patch is good. I had thought
> > of doing it that way, but went for the "minimal change" version.
> > I agree that avoiding unlocking/relocking the mutex is a good idea,
> > although I didn't see anything after the relock that I thought
> > might be an issue if something changed while unlocked.
> If parallel calls to nfscl_loadattrcache() are possible, then
> IMHO at least n_attrstamp could be cleared needlessly.
> 
And that could result in an attribute cache miss, since setting
n_attrstamp = 0 invalidates the cached attributes.
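
(To spell that out for the archives: the cache hit/miss test is keyed off
n_attrstamp, roughly like the sketch below. This is simplified from memory
rather than the exact code, and "timeo" is just a stand-in for whatever
attribute cache timeout applies for the mount.)

        /*
         * Simplified sketch of the attribute cache validity check.
         * A cleared n_attrstamp (== 0) can never pass the age test,
         * so the cached attributes are ignored and another Getattr
         * has to go to the server.
         */
        if (np->n_attrstamp == 0 ||
            (time_second - np->n_attrstamp) >= timeo)
                return (ENOENT);                /* cache miss */
        *vaper = np->n_vattr.na_vattr;          /* cache hit */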

I agree that your patch is preferred. I'll admit I was mostly lazy
(and a little afraid of moving the vnode_pager_setsize()) when I
did the patch.

> >
> > Kostik, thanks for posting this version, rick
> > ps: Michael, I'd suggest you try this patch instead of mine.
> Still, my patch has the issue I noted for head as well: the buffers
> are not destroyed if the size of the vnode is decreased. I would be
> inclined to suggest the following change on top of my patch, but I am
> sure that it does not work, since the vnode is generally not locked
> in nfs_loadattrcache(), I think:
> 
Well, read/write sharing of files over NFS is pretty rare, so I suspect
a truncation of a file by another client (or locally on the NFS server)
is a rare event. As such, not invalidating the buffers here doesn't seem
like a big issue? (The client uses np->n_size to determine EOF.)
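
(As a rough illustration of that last point, the buffered read path clips
at the cached size, something like the sketch below. Again, this is
simplified and not the exact ncl_bioread() code.)

        /*
         * Simplified sketch: a read at or beyond the cached file size
         * just returns 0 bytes, i.e. np->n_size is what the client
         * treats as EOF, whether or not stale buffers still exist
         * beyond it.
         */
        NFSLOCKNODE(np);
        if (uio->uio_offset >= np->n_size) {
                NFSUNLOCKNODE(np);
                return (0);             /* at EOF per the cached size */
        }
        NFSUNLOCKNODE(np);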

Also, I think close-to-open consistency will typically throw away the
buffers on the next open when it sees that the mtime has changed. (Yes,
there won't necessarily be another open, but...)
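
(For reference, the open-time check is roughly the sketch below;
simplified, not the exact nfs_open() code, and "ap" here is just the
usual VOP open argument structure.)

        /*
         * Close-to-open: at open time, fetch fresh attributes and, if
         * the modify time no longer matches what we remembered, toss
         * the cached buffers before using them again.
         */
        error = VOP_GETATTR(vp, &vattr, ap->a_cred);
        if (error == 0 &&
            timespeccmp(&np->n_mtime, &vattr.va_mtime, !=)) {
                error = ncl_vinvalbuf(vp, V_SAVE, curthread, 1);
                if (error == 0)
                        np->n_mtime = vattr.va_mtime;
        }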

As you point out, I don't think the vnode will be locked when
nfscl_loadattrcache() is called for reads/writes being done by the
nfsiod threads, and there will only be a shared vnode lock for other
reads.

I think your patch without the below addition is just fine, rick

> diff --git a/sys/fs/nfsclient/nfs_clport.c b/sys/fs/nfsclient/nfs_clport.c
> index 4fe2e35..3a08424 100644
> --- a/sys/fs/nfsclient/nfs_clport.c
> +++ b/sys/fs/nfsclient/nfs_clport.c
> @@ -487,7 +487,7 @@ nfscl_loadattrcache(struct vnode **vpp, struct nfsvattr *nap, void *nvaper,
>  #endif
>          NFSUNLOCKNODE(np);
>          if (setnsize)
> -                vnode_pager_setsize(vp, nsize);
> +                vtruncbuf(vp, NOCRED, nsize, vp->v_bufobj.bo_bsize);
>          return (0);
>  }
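
Just to spell out for the archives what the locking problem means in
practice: if it were safe, the combined change would presumably keep the
vnode_pager_setsize() call and only trim buffers when the file shrank,
something like the untested sketch below ("shrinking" is a made-up flag
meaning the new size is smaller than the old np->n_size).

        /*
         * Untested sketch only: resize the pager as in Kostik's patch
         * and, when the file shrank, also throw away the buffers past
         * the new EOF.  vtruncbuf() needs the vnode lock, which is
         * exactly why this can't simply be done after NFSUNLOCKNODE()
         * in nfscl_loadattrcache().
         */
        if (setnsize) {
                vnode_pager_setsize(vp, nsize);
                if (shrinking)
                        (void)vtruncbuf(vp, NOCRED, nsize,
                            vp->v_bufobj.bo_bsize);
        }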


