Date:      Thu, 29 Jul 2010 16:39:02 -0700
From:      mdf@FreeBSD.org
To:        freebsd-hackers@freebsd.org
Subject:   sched_pin() versus PCPU_GET
Message-ID:  <AANLkTikY20TxyeyqO5zP3zC-azb748kV-MdevPfm+8cq@mail.gmail.com>

We've seen a few instances at work where witness_warn() in ast()
reports that the sched lock is still held, but the place it claims
acquired the lock is somewhere that cannot possibly still be holding
it, like:

	thread_lock(td);
	td->td_flags &= ~TDF_SELECT;
	thread_unlock(td);

What I was wondering is: even though the assembly I see in objdump -S
for witness_warn() has the increment of td_pinned before the PCPU_GET:

ffffffff802db210:	65 48 8b 1c 25 00 00 	mov    %gs:0x0,%rbx
ffffffff802db217:	00 00
ffffffff802db219:	ff 83 04 01 00 00    	incl   0x104(%rbx)
	 * Pin the thread in order to avoid problems with thread migration.
	 * Once that all verifies are passed about spinlocks ownership,
	 * the thread is in a safe path and it can be unpinned.
	 */
	sched_pin();
	lock_list = PCPU_GET(spinlocks);
ffffffff802db21f:	65 48 8b 04 25 48 00 	mov    %gs:0x48,%rax
ffffffff802db226:	00 00
	if (lock_list != NULL && lock_list->ll_count != 0) {
ffffffff802db228:	48 85 c0             	test   %rax,%rax
	 * Pin the thread in order to avoid problems with thread migration.
	 * Once that all verifies are passed about spinlocks ownership,
	 * the thread is in a safe path and it can be unpinned.
	 */
	sched_pin();
	lock_list = PCPU_GET(spinlocks);
ffffffff802db22b:	48 89 85 f0 fe ff ff 	mov    %rax,-0x110(%rbp)
ffffffff802db232:	48 89 85 f8 fe ff ff 	mov    %rax,-0x108(%rbp)
	if (lock_list != NULL && lock_list->ll_count != 0) {
ffffffff802db239:	0f 84 ff 00 00 00    	je     ffffffff802db33e
<witness_warn+0x30e>
ffffffff802db23f:	44 8b 60 50          	mov    0x50(%rax),%r12d

is it possible for the hardware to do any re-ordering here?
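
For reference, sched_pin() on stable/7 is, as far as I can recall (a
from-memory sketch of sys/sched.h, not a verbatim copy), nothing more
than a plain increment of td_pinned, with no compiler or CPU barrier:

	static __inline void
	sched_pin(void)
	{

		curthread->td_pinned++;
	}

which matches the "incl 0x104(%rbx)" above, and is why I'm wondering
whether anything actually orders it with respect to the PCPU_GET()
load.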

The reason I'm suspicious is not just that the code doesn't leak a
lock at the indicated point, but that in one instance I can see in the
dump that the lock_list local in witness_warn() came from the pcpu
structure for CPU 0 (and the warning was about sched lock 0), while
the CPU id recorded in panic_cpu is 2.  So clearly the thread was
migrated right around panic time.

This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.

So... do we need some kind of barrier in sched_pin() for it to really
do what it claims?  Could the hardware have re-ordered the
"mov    %gs:0x48,%rax" load from PCPU_GET to before the sched_pin()
increment?
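
For illustration only -- not a patch, just a sketch of the kind of
barrier I mean, assuming the stable/7 definition above -- a compiler
barrier would at least keep the compiler from moving memory accesses
across the increment; if a real StoreLoad fence turned out to be
needed on the hardware side, that would take something like an mfence
or a locked instruction instead:

	static __inline void
	sched_pin(void)
	{

		curthread->td_pinned++;
		/* Compiler barrier only; does not constrain the CPU. */
		__asm __volatile("" : : : "memory");
	}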

Thanks,
matthew


