Date:      Mon, 17 May 2004 17:32:35 -0700 (PDT)
From:      Julian Elischer <julian@elischer.org>
To:        sparc64@freebsd.org
Subject:   Re: (again)  sparc64 kernel code question.. (fwd)
Message-ID:  <Pine.BSF.4.21.0405171731550.27448-100000@InterJet.elischer.org>


Sorry about cross-posting, but I meant to send this to sparc64@

---------- Forwarded message ----------
Date: Mon, 17 May 2004 17:29:50 -0700 (PDT)
From: Julian Elischer <julian@elischer.org>
To: Thomas Moestl <t.moestl@tu-bs.de>
Cc: FreeBSD current users <current@freebsd.org>
Subject: Re: (again)  sparc64 kernel code question..


Sorry for the long delay but...

On Mon, 10 May 2004, Thomas Moestl wrote:

> On Sun, 2004/05/09 at 15:44:40 -0700, Julian Elischer wrote:
> > in vm_machdep.c the sparc64  code has
> > void
> > cpu_sched_exit(struct thread *td)
> > {
> >         struct vmspace *vm;
> >         struct pcpu *pc;
> >         struct proc *p;
> > 
> >         mtx_assert(&sched_lock, MA_OWNED);
> >  
> >         p = td->td_proc;
> >         vm = p->p_vmspace;
> >         if (vm->vm_refcnt > 1)
> >                 return;
> >         SLIST_FOREACH(pc, &cpuhead, pc_allcpu) {
> >                 if (pc->pc_vmspace == vm) {
> >                         vm->vm_pmap.pm_active &= ~pc->pc_cpumask;
> >                         vm->vm_pmap.pm_context[pc->pc_cpuid] = -1;
> >                         pc->pc_vmspace = NULL;
> >                 }
> >         }
> > }
> > 
> > 
> > 
> > This is the only architecture that has this..
> > What does it do? And what does it have to do with the scheduler?
> 
> To quote from the commit log:
>   date: 2002/06/24 15:48:01;  author: jake;  state: Exp;  lines: +1 -0
>   Add an MD callout like cpu_exit, but which is called after sched_lock is
>   obtained, when all other scheduling activity is suspended.  This is needed
>   on sparc64 to deactivate the vmspace of the exiting process on all cpus.
>   Otherwise if another unrelated process gets the exact same vmspace structure
>   allocated to it (same address), its address space will not be activated
>   properly.  This seems to fix some spontaneous signal 11 problems with smp
>   on sparc64.
> 
> To elaborate on that a bit:
> The sparc64 cpu_switch() has an optimization to avoid needlessly
> invalidating TLB entries: when we switch to a kernel thread, we need
> not switch VM contexts at all, and can make do with whatever vmspace
> was active before. When we switch to a thread whose vmspace is already
> the one currently in use, we need not load a new context register
> value (which is analogous to flushing the TLB).
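
(For illustration, the check Thomas describes looks roughly like the
following in C.  The real sparc64 cpu_switch() is assembly in swtch.S,
so the function name, the kernel-thread test and load_new_context()
below are only schematic placeholders, not the actual code.)

static void
switch_vmspace_sketch(struct pcpu *pc, struct thread *newtd)
{
        struct proc *newp = newtd->td_proc;
        struct vmspace *newvm = newp->p_vmspace;

        if (newp->p_flag & P_SYSTEM)
                return;         /* kernel thread: keep the old context */
        if (pc->pc_vmspace == newvm)
                return;         /* same vmspace: context register still valid */

        /*
         * Otherwise load a new context register value, which gives up
         * the TLB entries tagged with the old context.
         */
        load_new_context(pc, &newvm->vm_pmap);  /* placeholder helper */
        pc->pc_vmspace = newvm;
}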
> 
> We identify vmspaces by their pointers for this purpose, so there can
> be a race between freeing the struct vmspace by wait()ing (on another
> processor) and switching to another thread (on the first
> processor). Specifically, the first processor could be switching to a
> newly created thread that has the same struct vmspace that was just
> freed, so we would mistakenly assume that we need not bother loading
> the context register, and continue using outdated TLB entries.
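
(Spelled out, the interleaving that makes the bare pointer comparison
unsafe looks like this; "V" stands for the address of the recycled
struct vmspace.)

/*
 *      CPU 0                           CPU 1
 *      -----                           -----
 *      last thread of process A        pc_vmspace == V (stale), TLB
 *      exits; wait() frees A's         still holds A's entries tagged
 *      vmspace at address V            with A's context number
 *
 *      new process B is created and
 *      the allocator hands it the
 *      same address V for its vmspace
 *
 *                                      cpu_switch() to B's thread:
 *                                      pc_vmspace == V == B's vmspace,
 *                                      so the context register is not
 *                                      reloaded and B runs on A's
 *                                      stale translations
 *
 * cpu_sched_exit() breaks this by clearing pc_vmspace (and the context
 * number) on every CPU that still caches V, before V can be recycled.
 */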
> 
> To prevent this, cpu_sched_exit() zeros the respective PCPU variables
> holding the active vmspace if it is going to be destroyed, so it will
> never match any other during the next cpu_switch().
> 

This seems to be the wrong answer to me..

Surely the answer is to accurately reference count the
vmspace so that it is never recycled while another entity is still
using it? The right fix would be for wait() to free the vmspace only
when the last reference is dropped, not to lie about it.. This sort of
hack requires insider knowledge to understand and is not robust across
kernel changes..
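
A minimal sketch of that alternative, assuming the cached pc_vmspace
pointer itself holds a reference (pcpu_cache_vmspace() is hypothetical,
atomic_add_int() stands in for whatever protection vm_refcnt really
needs, and the sketch glosses over whether dropping the final reference
via vmspace_free() is safe from switch context with sched_lock held):

static void
pcpu_cache_vmspace(struct pcpu *pc, struct vmspace *newvm)
{
        struct vmspace *oldvm = pc->pc_vmspace;

        if (oldvm == newvm)
                return;
        /*
         * Pin the new vmspace while this CPU caches a pointer to it,
         * so the struct can never be recycled while a comparison
         * against that pointer is still meaningful.
         */
        atomic_add_int(&newvm->vm_refcnt, 1);
        pc->pc_vmspace = newvm;
        if (oldvm != NULL)
                vmspace_free(oldvm);    /* drop the old cached reference */
}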


> 	- Thomas



