Date:      Sat, 4 Aug 2007 09:40:20 +0300
From:      Kostik Belousov <kostikbel@gmail.com>
To:        Pawel Jakub Dawidek <pjd@freebsd.org>
Cc:        howard0su@gmail.com, kib@freebsd.org, Dmitry Morozovsky <marck@rinet.ru>, current@freebsd.org
Subject:   Re: contemporary -current panic: locking against myself
Message-ID:  <20070804064020.GN2738@deviant.kiev.zoral.com.ua>
In-Reply-To: <20070803102019.GG37984@garage.freebsd.pl>
References:  <20070802155317.X50347@woozle.rinet.ru> <20070803102019.GG37984@garage.freebsd.pl>


On Fri, Aug 03, 2007 at 12:20:19PM +0200, Pawel Jakub Dawidek wrote:
> On Thu, Aug 02, 2007 at 03:58:26PM +0400, Dmitry Morozovsky wrote:
> > 
> > Hi there colleagues,
> > 
> > FreeBSD/i386 on Athlon X2, HEAD without WITNESS. 4G of RAM. tmpfs used for
> > 'make release'.
> > 
> > 
> > panic: lockmgr: locking against myself
> > cpuid = 0
> > KDB: enter: panic
> > [thread pid 19396 tid 100245 ]
> > Stopped at      kdb_enter+0x32: leave
> > 
> > db> tr
> > Tracing pid 19396 tid 100245 td 0xce194220
> > kdb_enter(c066f664,0,c066dca9,e92799cc,0,...) at kdb_enter+0x32
> > panic(c066dca9,e92799dc,c0559cc7,e9279ac0,ca2f7770,...) at panic+0x124
> > _lockmgr(ca2f77c8,3002,ca2f77f8,ce194220,c0675afc,...) at _lockmgr+0x401
> > vop_stdlock(e9279a5c,ce194220,3002,ca2f7770,e9279a80,...) at vop_stdlock+0x40
> > VOP_LOCK1_APV(d06417e0,e9279a5c,e9279bc0,0,c8d00330,...) at VOP_LOCK1_APV+0x46
> > _vn_lock(ca2f7770,3002,ce194220,c0675afc,7f3,...) at _vn_lock+0x166
> > vget(ca2f7770,1000,ce194220,0,e9279b98,...) at vget+0x114
> > vm_object_reference(d1c70348,e9279b30,c063f81d,c0c71000,e381d000,...) at vm_object_reference+0x12a
> > kern_execve(ce194220,e9279c5c,0,28204548,282045d8,e381d000,e381d000,e381d015,e381d4dc,e385d000,3fb24,3,20) at kern_execve+0x31a
> > execve(ce194220,e9279cfc,c,ce194220,e9279d2c,...) at execve+0x4c
> > syscall(e9279d38) at syscall+0x345
> > Xint0x80_syscall() at Xint0x80_syscall+0x20
> > --- syscall (59, FreeBSD ELF32, execve), eip = 0x28146a47, esp = 0xbfbfe4cc,
> > ebp = 0xbfbfe4e8 ---
> > 
> > db> show lockedvnods
> > Locked vnodes
> > 
> > 0xca2f7770: tag tmpfs, type VREG
> >     usecount 1, writecount 0, refcount 4 mountedhere 0
> >     flags ()
> >     v_object 0xd1c70348 ref 1 pages 19
> >      lock type tmpfs: EXCL (count 1) by thread 0xce194220 (pid 19396) with 1 pending
> > tag VT_TMPFS, tmpfs_node 0xd177f9d4, flags 0x0, links 9
> >         mode 0555, owner 0, group 0, size 76648, status 0x0
> > 
> > It seems there is some locking problem in tmpfs.
> > 
> > What other info should I provide to help resolve the problem?
> 
> Here you can find two patches, which may or may not fix your problem.
> The first one is actually only meant to improve debugging.
> 
> This patch adds all vnode flags to the output, because I believe you
> have VI_OWEINACT set, but not printed:
> 
> 	http://people.freebsd.org/~pjd/patches/vfs_subr.c.4.patch
> 
> The problem here is that vm_object_reference() calls vget() without any
> lock flag, and vget() locks the vnode exclusively when the VI_OWEINACT
> flag is set. vget() should probably be fixed too, but jeff@'s opinion is
> that it shouldn't happen in this case, so this may be a tmpfs bug.
> 
> The patch below fixes some locking problems in tmpfs:
> 
> 	http://people.freebsd.org/~pjd/patches/tmpfs.patch
> 
> The problems are:
> - tmpfs_root() should honour its 'flags' argument, and not always lock
>   the vnode exclusively;
> - tmpfs_lookup() should lock the vnode using cnp->cn_lkflags, and not
>   always do it exclusively;
> - in the ".." case, when we unlock the directory vnode to avoid a
>   deadlock, we should relock it using the same type of lock it held
>   before, and not always relock it exclusively.
> 
> Note that this patch wasn't even compile-tested.

It may be that vget() should check whether the vnode is already locked
by the current thread. On the other hand, I do not see how this scenario
could be realized (note that the usecount is already > 0).

tmpfs may operate on random vnodes due to the lack of synchronization
between vnode reclamation and vnode attachment to the tmpfs node. I have
already discussed this with delphij@.



