Date:      Tue, 26 Jan 2010 12:26:27 -0800
From:      Marcel Moolenaar <xcllnt@mac.com>
To:        "M. Warner Losh" <imp@bsdimp.com>
Cc:        src-committers@freebsd.org, jhb@freebsd.org, svn-src-all@freebsd.org, attilio@freebsd.org, marcel@freebsd.org, svn-src-head@freebsd.org
Subject:   Re: svn commit: r202889 - head/sys/kern
Message-ID:  <3023270A-755A-4BCF-AC9A-C1F290052279@mac.com>
In-Reply-To: <20100126.130932.722022410132669562.imp@bsdimp.com>
References:  <3bbf2fe11001260058i65604619l664bd0e49c1dbbd@mail.gmail.com> <3bbf2fe11001260339u7a694069m6a2bb7e18b2c546a@mail.gmail.com> <C6A8F7A7-F0A9-4F63-B61E-DDC5332DC495@mac.com> <20100126.130932.722022410132669562.imp@bsdimp.com>


On Jan 26, 2010, at 12:09 PM, M. Warner Losh wrote:
> cpu_switch(struct thread *old, struct thread *new, struct mutex *mtx)
> {
> 	/* Save the registers to the pcb */
> 	old->td_lock = mtx;
> #if defined(SMP) && defined(SCHED_ULE)
> 	/* s/long/int/ if sizeof(long) != sizeof(void *) */
> 	/* as we have no 'void *' version of the atomics */
> 	while (atomic_load_acq_long(&new->td_lock) == (long)&blocked_lock)
> 		continue;
> #endif
> 	/* Switch to new context */
> }

Ok. So this is what ia64 already has, except for the atomic_load()
in the while loop. Since td_lock is volatile, I don't think we need
atomic_load(). To be explicit, ia64 has:

		old->td_lock = mtx;
#if defined(SCHED_ULE) && defined(SMP)
		/* td_lock is volatile */
		while (new->td_lock == &blocked_lock)
			;
#endif

Am I right, or am I missing a critical aspect of using atomic load?

> I also think that we should have that code somewhere for reference.

Since ia64 has a C implementation of cpu_switch(), we could make
that the reference implementation?

-- 
Marcel Moolenaar
xcllnt@mac.com
