Date:      Mon, 16 May 2011 16:46:03 -0400
From:      John Baldwin <jhb@freebsd.org>
To:        Max Laier <max@love2party.net>
Cc:        FreeBSD current <freebsd-current@freebsd.org>, Peter Grehan <grehan@freebsd.org>, Andriy Gapon <avg@freebsd.org>, neel@freebsd.org
Subject:   Re: proposed smp_rendezvous change
Message-ID:  <201105161646.03338.jhb@freebsd.org>
In-Reply-To: <201105161630.44577.max@love2party.net>
References:  <4DCD357D.6000109@FreeBSD.org> <201105161421.27665.jhb@freebsd.org> <201105161630.44577.max@love2party.net>

On Monday, May 16, 2011 4:30:44 pm Max Laier wrote:
> On Monday 16 May 2011 14:21:27 John Baldwin wrote:
> > Yes, we need to fix that.  Hmm, it doesn't preempt when you do a
> > critical_exit() though?  Or do you use a hand-rolled critical exit that
> > doesn't do a deferred preemption?
> 
> Right now I just did a manual td_critnest++/--, but I guess ...

Ah, ok, so you would "lose" a preemption.  That's not really ideal.
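
For reference, here's a minimal standalone sketch (simplified, not the
verbatim kernel code) of why a bare td_critnest-- loses the preemption
while critical_exit() does not:

    #include <stdio.h>

    /* Simplified model of the relevant fields from struct thread. */
    struct thread {
        int td_critnest;        /* critical section nesting depth */
        int td_owepreempt;      /* preemption deferred while nested */
    };

    /* Stand-in for the involuntary context switch (mi_switch()). */
    void
    switch_stub(void)
    {
        printf("deferred preemption happens here\n");
    }

    /*
     * Sketch of critical_exit(): on the final exit, honor a preemption
     * that was deferred while the critical section was held.
     */
    void
    critical_exit_sketch(struct thread *td)
    {
        if (td->td_critnest == 1 && td->td_owepreempt) {
            td->td_critnest = 0;
            td->td_owepreempt = 0;
            switch_stub();
        } else
            td->td_critnest--;
    }

    int
    main(void)
    {
        struct thread td = { .td_critnest = 1, .td_owepreempt = 1 };

        td.td_critnest--;          /* hand-rolled exit: switch is lost */
        td.td_critnest = 1;
        critical_exit_sketch(&td); /* prints the deferred switch */
        return (0);
    }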

> > Actually, I'm curious how the spin unlock inside the IPI could yield the
> > CPU.  Oh, is rmlock doing a wakeup inside the IPI handler?  I guess that is
> > ok as long as the critical_exit() just defers the preemption to the end of
> > the IPI handler.
> 
> ... the earliest point where it is safe to preempt is after doing the 
> 
>    atomic_add_int(&smp_rv_waiters[2], 1);
> 
> so that we can start other IPIs again.  However, since we don't accept new 
> IPIs until we signal EOI in the MD code (on amd64), this might still not be a 
> good place to do the yield?!?

Hmm, yeah, you would want to do the EOI before you yield.  However, we could
actually move the EOI up before calling the MI code so long as we leave
interrupts disabled for the duration of the handler (which we do).
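
Concretely, the handler would then be shaped something like this (a
sketch of the ordering only; the names below are illustrative, not the
actual amd64 entry points):

    /* Stand-ins for the MD/MI pieces; names are hypothetical. */
    void lapic_eoi_stub(void)             { /* ack the local APIC */ }
    void smp_rendezvous_action_stub(void) { /* run the MI handler */ }

    /*
     * EOI first, while interrupts are still disabled, so this CPU can
     * accept the next IPI as soon as the handler returns (or yields).
     */
    void
    ipi_rendezvous_sketch(void)
    {
        lapic_eoi_stub();             /* EOI up front */
        smp_rendezvous_action_stub(); /* bumps smp_rv_waiters[2] and may
                                         defer a preemption */
        /*
         * A critical_exit() inside the action may now safely yield:
         * the EOI has already been signaled, so switching out here
         * cannot block further IPIs to this CPU.
         */
    }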

> The spin unlock boils down to a critical_exit(), and unless we did a 
> critical_enter() at some point during the rendezvous setup, we will yield() 
> if td_owepreempt is set.  I'm not quite sure how that can happen, but it 
> seems like there is a path that allows the scheduler to set it from a 
> foreign CPU.

No, td_owepreempt is only set on curthread, by curthread.  This is actually
my main question.  I've no idea how this could happen unless the rmlock code
is actually triggering a wakeup or sched_add() in its rendezvous handler.
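
For context, the path I'd expect is the scheduler's preemption check,
roughly like this (heavily simplified; the real maybe_preempt() also
accounts for the spin lock held at that point):

    struct thread {
        int td_critnest;        /* critical section nesting depth */
        int td_owepreempt;      /* deferred preemption flag */
    };

    /*
     * Sketch of maybe_preempt(): a wakeup/sched_add() on this CPU that
     * finds curthread inside a critical section defers the switch by
     * flagging curthread -- the only place td_owepreempt is set.
     */
    int
    maybe_preempt_sketch(struct thread *curtd, int higher_prio)
    {
        if (!higher_prio)
            return (0);
        if (curtd->td_critnest > 0) {
            curtd->td_owepreempt = 1; /* defer until critical_exit() */
            return (0);
        }
        /* ... otherwise mi_switch() to the new thread right away ... */
        return (1);
    }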

I don't see anything in rm_cleanIPI() that would do that, however.

I wonder if your original issue was really fixed just by the first patch
you had, which fixed the race in smp_rendezvous()?

-- 
John Baldwin


