Date:      Fri, 14 Mar 2008 10:08:23 +0800
From:      David Xu <davidxu@FreeBSD.org>
To:        Jeff Roberson <jroberson@chesapeake.net>
Cc:        arch@FreeBSD.org, Peter Wemm <peter@wemm.org>
Subject:   Re: amd64 cpu_switch in C.
Message-ID:  <47D9DE17.7030605@freebsd.org>
In-Reply-To: <20080313132152.Y1091@desktop>
References:  <20080310161115.X1091@desktop> <47D758AC.2020605@freebsd.org> <e7db6d980803120125y41926333hb2724ecd07c0ac92@mail.gmail.com> <20080313124213.J31200@delplex.bde.org> <20080312211834.T1091@desktop> <20080313230809.W32527@delplex.bde.org> <20080313132152.Y1091@desktop>

Jeff Roberson wrote:

>> Ugh, this is from spinlocks bogusly masking interrupts.  More than half
>> the cycles have interrupts masked.  This at least shows that lots of
>> time is being spent near cpu_switch() with a spinlock held.
>>
> 
> I'm not sure why you feel masking interrupts in spinlocks is bogus.  
> It's central to our SMP strategy.  Unless you think we should do it 
> lazily like we do with critical_*.  I know jhb had that working at one 
> point but it was abandoned.
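
For context, the pattern being defended looks roughly like the sketch
below. The names (my_spinlock, intr_disable_save, intr_restore) are
illustrative stand-ins, not FreeBSD's actual spinlock_enter()/
mtx_lock_spin() code: interrupts must be masked before spinning,
because an interrupt handler that preempts the lock holder on the same
CPU and then takes the same lock would spin forever.

/*
 * Minimal sketch of a spin lock that masks interrupts.  All names
 * here are illustrative, not FreeBSD's implementation.
 */
#include <stdatomic.h>

struct my_spinlock {
	atomic_flag locked;	/* initialize with ATOMIC_FLAG_INIT */
};

/* Stand-ins for the privileged save-flags/cli and popf sequences. */
static unsigned long
intr_disable_save(void)
{
	return (0);		/* pretend: save RFLAGS, then cli */
}

static void
intr_restore(unsigned long flags)
{
	(void)flags;		/* pretend: popf restores the IF bit */
}

static unsigned long
my_spin_lock(struct my_spinlock *sl)
{
	/*
	 * Mask interrupts first: a handler that preempted us past
	 * this point and then tried to take this lock could never
	 * succeed -- a one-CPU deadlock.
	 */
	unsigned long flags = intr_disable_save();

	while (atomic_flag_test_and_set_explicit(&sl->locked,
	    memory_order_acquire))
		;		/* busy-wait: holder is on another CPU */
	return (flags);
}

static void
my_spin_unlock(struct my_spinlock *sl, unsigned long flags)
{
	atomic_flag_clear_explicit(&sl->locked, memory_order_release);
	intr_restore(flags);	/* the expensive part: restoring flags */
}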

It may be that the general (adaptive) mutex already does the spinning,
so a spin lock is needed only where interrupts genuinely must be
disabled and re-enabled, which is the expensive part. I don't know how
many spin locks in the CURRENT source code are abused this way.
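
By contrast, an adaptive mutex can take the lock with a single atomic
compare-and-swap and never touches the interrupt flag: it spins only
while the owner is running on another CPU, and blocks otherwise. A
minimal sketch, again with assumed helper names (owner_is_running,
sleep_on) rather than real kernel interfaces:

/*
 * Sketch of adaptive-mutex spinning: spin only while the owner is
 * on a CPU, never touch the interrupt flag.  owner_is_running() and
 * sleep_on() are placeholders, not actual kernel interfaces.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct thread;

static bool
owner_is_running(struct thread *owner)
{
	(void)owner;
	return (true);	/* placeholder: real code checks the run state */
}

static void
sleep_on(void *chan)
{
	(void)chan;	/* placeholder: real code blocks the thread */
}

struct my_mtx {
	_Atomic(struct thread *) owner;
};

static void
my_mtx_lock(struct my_mtx *m, struct thread *self)
{
	struct thread *expected;

	for (;;) {
		expected = NULL;
		/* Uncontested case: one CAS, no interrupt masking. */
		if (atomic_compare_exchange_weak(&m->owner, &expected,
		    self))
			return;
		if (owner_is_running(expected))
			continue;	/* owner on CPU: keep spinning */
		sleep_on(m);		/* owner off CPU: block instead */
	}
}

The spin lock's extra price relative to this is exactly the interrupt
save/restore, so it should be reserved for locks that interrupt
handlers really take.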

Regards,
David Xu
