Date:      Wed, 4 Aug 2010 01:46:16 +0000
From:      mdf@FreeBSD.org
To:        freebsd-hackers@freebsd.org
Subject:   Re: sched_pin() versus PCPU_GET
Message-ID:  <AANLkTinp7278ZD1L8s616seQET=OQBx1RZ4eHx=e+pD5@mail.gmail.com>
In-Reply-To: <201007301031.34266.jhb@freebsd.org>
References:  <AANLkTikY20TxyeyqO5zP3zC-azb748kV-MdevPfm+8cq@mail.gmail.com> <201007301008.22501.jhb@freebsd.org> <201007301031.34266.jhb@freebsd.org>

On Fri, Jul 30, 2010 at 2:31 PM, John Baldwin <jhb@freebsd.org> wrote:
> On Friday, July 30, 2010 10:08:22 am John Baldwin wrote:
>> On Thursday, July 29, 2010 7:39:02 pm mdf@freebsd.org wrote:
>> > We've seen a few instances at work where witness_warn() in ast()
>> > indicates the sched lock is still held, but the place it claims
>> > acquired the lock sometimes cannot possibly still be holding it, like:
>> >
>> >     thread_lock(td);
>> >     td->td_flags &= ~TDF_SELECT;
>> >     thread_unlock(td);
>> > What I was wondering is, even though the assembly I see in objdump -S
>> > for witness_warn has the increment of td_pinned before the PCPU_GET:
>> >
>> > ffffffff802db210:   65 48 8b 1c 25 00 00    mov    %gs:0x0,%rbx
>> > ffffffff802db217:   00 00
>> > ffffffff802db219:   ff 83 04 01 00 00       incl   0x104(%rbx)
>> >      * Pin the thread in order to avoid problems with thread migration.
>> >      * Once that all verifies are passed about spinlocks ownership,
>> >      * the thread is in a safe path and it can be unpinned.
>> >      */
>> >     sched_pin();
>> >     lock_list = PCPU_GET(spinlocks);
>> > ffffffff802db21f:   65 48 8b 04 25 48 00    mov    %gs:0x48,%rax
>> > ffffffff802db226:   00 00
>> >     if (lock_list != NULL && lock_list->ll_count != 0) {
>> > ffffffff802db228:   48 85 c0                test   %rax,%rax
>> >      * Pin the thread in order to avoid problems with thread migration.
>> >      * Once that all verifies are passed about spinlocks ownership,
>> >      * the thread is in a safe path and it can be unpinned.
>> >      */
>> >     sched_pin();
>> >     lock_list = PCPU_GET(spinlocks);
>> > ffffffff802db22b:   48 89 85 f0 fe ff ff    mov    %rax,-0x110(%rbp)
>> > ffffffff802db232:   48 89 85 f8 fe ff ff    mov    %rax,-0x108(%rbp)
>> >     if (lock_list != NULL && lock_list->ll_count != 0) {
>> > ffffffff802db239:   0f 84 ff 00 00 00       je     ffffffff802db33e <witness_warn+0x30e>
>> > ffffffff802db23f:   44 8b 60 50             mov    0x50(%rax),%r12d
>> >
>> > is it possible for the hardware to do any re-ordering here?
>> >
>> > The reason I'm suspicious is not just that the code doesn't have a
>> > lock leak at the indicated point, but in one instance I can see in the
>> > dump that the lock_list local from witness_warn is from the pcpu
>> > structure for CPU 0 (and I was warned about sched lock 0), but the
>> > thread id in panic_cpu is 2.  So clearly the thread was being migrated
>> > right around panic time.
>> >
>> > This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
>> > of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.
>> >
>> > So... do we need some kind of barrier in the code for sched_pin() for
>> > it to really do what it claims?  Could the hardware have re-ordered
>> > the "mov    %gs:0x48,%rax" PCPU_GET to before the sched_pin()
>> > increment?
>>
>> Hmmm, I think it might be able to because they refer to different locations.
>>
>> Note this rule in section 8.2.2 of Volume 3A:
>>
>>   • Reads may be reordered with older writes to different locations but not
>>     with older writes to the same location.
>>
>> It is certainly true that sparc64 could reorder with RMO.  I believe ia64
>> could reorder as well.  Since sched_pin/unpin are frequently used to provide
>> this sort of synchronization, we could use memory barriers in pin/unpin
>> like so:
>>
>> sched_pin()
>> {
>>         td->td_pinned = atomic_load_acq_int(&td->td_pinned) + 1;
>> }
>>
>> sched_unpin()
>> {
>>         atomic_store_rel_int(&td->td_pinned, td->td_pinned - 1);
>> }
>>
>> We could also just use atomic_add_acq_int() and atomic_sub_rel_int(), but
>> they are slightly more heavyweight, though it would be more clear what is
>> happening, I think.
>
> However, to actually get a race you'd have to have an interrupt fire and
> migrate you so that the speculative read was from the other CPU.  However, I
> don't think the speculative read would be preserved in that case.  The CPU
> has to return to a specific PC when it returns from the interrupt and it has
> no way of storing the state for what speculative reordering it might be
> doing, so presumably it is thrown away?  I suppose it is possible that it
> actually retires both instructions (but reordered) and then returns to the PC
> value after the read of listlocks after the interrupt.  However, in that case
> the scheduler would not migrate as it would see td_pinned != 0.  To get the
> race you have to have the interrupt take effect prior to modifying td_pinned,
> so I think the processor would have to discard the reordered read of
> listlocks so it could safely resume execution at the 'incl' instruction.
>
> The other nit there on x86 at least is that the incl instruction is doing
> both a read and a write, and another rule in section 8.2.2 is this:
>
>  • Reads are not reordered with other reads.
>
> That would seem to prevent the read of listlocks from passing the read of
> td_pinned in the incl instruction on x86.

I wonder how that's interpreted in the microcode, though?  I.e. if the
incl instruction decodes to load, add, store, does the h/w allow the
later reads to pass the final store?
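To spell out what I mean (a hypothetical micro-op view, purely for
illustration, not taken from any manual):

	incl   0x104(%rbx)        # td_pinned++, presumably:
	                          #   load  tmp <- 0x104(%rbx)
	                          #   add   tmp <- tmp + 1
	                          #   store 0x104(%rbx) <- tmp
	mov    %gs:0x48,%rax      # PCPU_GET(spinlocks)

If the store half is still sitting in the store buffer when the later mov's
load executes, that looks like the "reads may be reordered with older writes
to different locations" case, unless the read-read rule is enforced between
the incl's load micro-op and the mov.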

I added the following:

 	sched_pin();
 	lock_list = PCPU_GET(spinlocks);
 	if (lock_list != NULL && lock_list->ll_count != 0) {
+		/* XXX debug for bug 67957 */
+		mfence();
+		lle = PCPU_GET(spinlocks);
+		if (lle != lock_list) {
+			panic("Bug 67957: had lock list %p, now %p\n",
+			    lock_list, lle);
+		}
+		/* XXX end debug */
 		sched_unpin();

 		/*

... and the panic triggered.  I think it's more likely that some
barrier is needed in sched_pin() than that %gs is getting corrupted
yet still pointing at dereferenceable memory.
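
(For reference, mfence() above is the amd64 <machine/cpufunc.h> inline,
which also acts as a compiler barrier via its "memory" clobber, assuming
the stock definition:

	static __inline void
	mfence(void)
	{

		__asm __volatile("mfence" : : : "memory");
	}

so the compiler itself can't hoist the second PCPU_GET above the fence.)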

An mfence() at the end of sched_pin() would be sufficient, but it
seems like overkill since all we really need is to prevent instruction
re-ordering.  As I said above, on PowerPC this would be isync; what is
the equivalent on x86?  I can try it out and see if this panic goes
away.
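
Concretely, the heavyweight version of that experiment would be something
like the sketch below, assuming sched_pin() is still the trivial inline
from sys/sys/sched.h and using the mfence() inline mentioned above; the
fence is a diagnostic, not a proposed fix:

	static __inline void
	sched_pin(void)
	{

		curthread->td_pinned++;
		mfence();	/* XXX diagnostic: keep later loads behind the increment */
	}

If the panic goes away with this in place, that points at ordering rather
than at a corrupted %gs.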

Thanks,
matthew


