Date:      Mon, 10 May 2004 03:03:02 +0200
From:      Thomas Moestl <tmm@FreeBSD.org>
To:        Julian Elischer <julian@elischer.org>
Cc:        FreeBSD current users <current@freebsd.org>
Subject:   Re: sparc64 kernel code question..
Message-ID:  <20040510010301.GA6829@timesink.dyndns.org>
In-Reply-To: <Pine.BSF.4.21.0405091542380.24403-100000@InterJet.elischer.org>
References:  <Pine.BSF.4.21.0405091542380.24403-100000@InterJet.elischer.org>

On Sun, 2004/05/09 at 15:44:40 -0700, Julian Elischer wrote:
> in vm_machdep.c the sparc64  code has
> void
> cpu_sched_exit(struct thread *td)
> {
>         struct vmspace *vm;
>         struct pcpu *pc;
>         struct proc *p;
> 
>         mtx_assert(&sched_lock, MA_OWNED);
>  
>         p = td->td_proc;
>         vm = p->p_vmspace;
>         if (vm->vm_refcnt > 1)
>                 return;
>         SLIST_FOREACH(pc, &cpuhead, pc_allcpu) {
>                 if (pc->pc_vmspace == vm) {
>                         vm->vm_pmap.pm_active &= ~pc->pc_cpumask;
>                         vm->vm_pmap.pm_context[pc->pc_cpuid] = -1;
>                         pc->pc_vmspace = NULL;
>                 }
>         }
> }
> 
> 
> 
> This is the only architecture that has this..
> What does it do? And what does it have to do with the scheduler?

To quote from the commit log:
  date: 2002/06/24 15:48:01;  author: jake;  state: Exp;  lines: +1 -0
  Add an MD callout like cpu_exit, but which is called after sched_lock is
  obtained, when all other scheduling activity is suspended.  This is needed
  on sparc64 to deactivate the vmspace of the exiting process on all cpus.
  Otherwise if another unrelated process gets the exact same vmspace structure
  allocated to it (same address), its address space will not be activated
  properly.  This seems to fix some spontaneous signal 11 problems with smp
  on sparc64.

To elaborate on that a bit:
The sparc64 cpu_switch() has an optimization to avoid needlessly
invalidating TLB entries: when we switch to a kernel thread, we need
not switch VM contexts at all and can make do with whatever vmspace
was active before. When we switch to a thread whose vmspace is the
one already in use on this CPU, we need not load a new context
register value (which is analogous to flushing the TLB).
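
To make that concrete, here is a rough C-level sketch of that check
(the real sparc64 cpu_switch() is written in assembly; the P_SYSTEM
test and the stxa_context() helper are stand-ins of mine, only the
pcpu and pmap fields match the code quoted above):

    /*
     * Sketch only: decide whether the context register must be
     * reloaded when switching to thread "new".
     */
    static void
    sketch_switch_context(struct thread *new)
    {
            struct vmspace *vm = new->td_proc->p_vmspace;

            if ((new->td_proc->p_flag & P_SYSTEM) != 0) {
                    /* Kernel thread: keep whatever context is live. */
                    return;
            }
            if (PCPU_GET(vmspace) == vm) {
                    /*
                     * Compared by pointer: this vmspace is already
                     * active on this CPU, so keep the current context
                     * register value and thus the cached TLB entries.
                     */
                    return;
            }
            /* Load the new context; old TLB entries go unused. */
            stxa_context(vm->vm_pmap.pm_context[PCPU_GET(cpuid)]);
            PCPU_SET(vmspace, vm);
    }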

We identify vmspaces by their pointers for this purpose, so there can
be a race between freeing the struct vmspace (by wait()ing, on
another processor) and switching to another thread (on the first
processor). Specifically, the first processor could be switching to a
thread of a newly created process that was allocated the very struct
vmspace that was just freed (i.e. the same address), so we would
mistakenly assume that we need not bother loading the context
register, and would continue using outdated TLB entries.
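
Schematically (an invented interleaving; P and Q are hypothetical
processes, and the CPU numbering is arbitrary):

    CPU 0                             CPU 1
    -----                             -----
    last thread of P exits;           parent wait()s for P;
    pc_vmspace still points           P's struct vmspace is freed
    to P's vmspace
                                      new process Q is created and
                                      happens to be allocated the
                                      same struct vmspace address
    switches to a thread of Q:
    Q's vmspace pointer equals
    the stale pc_vmspace, so the
    context register is not
    reloaded and stale TLB
    entries are reused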

To prevent this, cpu_sched_exit() zeroes the per-CPU pointers to the
active vmspace on all CPUs when that vmspace is about to be
destroyed, so the stale pointer can never match any vmspace during a
later cpu_switch().

	- Thomas

-- 
Thomas Moestl	<t.moestl@tu-bs.de>	http://www.tu-bs.de/~y0015675/
		<tmm@FreeBSD.org>	http://people.FreeBSD.org/~tmm/
"I try to make everyone's day a little more surreal."
						-- Calvin and Hobbes


