Date:      Fri, 24 Jan 2003 10:59:22 -0800 (PST)
From:      Julian Elischer <julian@elischer.org>
To:        Bosko Milekic <bmilekic@unixdaemons.com>
Cc:        Jeff Roberson <jroberson@chesapeake.net>, arch@FreeBSD.ORG
Subject:   Re: New scheduler
Message-ID:  <Pine.BSF.4.21.0301241057290.78261-100000@InterJet.elischer.org>
In-Reply-To: <20030123190706.A79935@unixdaemons.com>



On Thu, 23 Jan 2003, Bosko Milekic wrote:

> 
> On Thu, Jan 23, 2003 at 06:19:39PM -0500, Jeff Roberson wrote:
> [...]
> > >   OK, after looking over the code, I'm curious: why does everything
> > >   still seem to be protected by the sched_lock?  Can you not now protect
> > >   the per-CPU runqueues with their own spinlocks?  I'm assuming that the
> > >   primary reason for not switching to the finer-grained model is
> > >   complications related to the sched_lock protecting necessarily
> > >   unpreemptable sections of code elsewhere in the kernel... notably,
> > >   switching to a finer-grained model would involve changes in the
> > >   context switching code and I think we would have to teach some MD code
> > >   about the per-CPU runqueues, which would make this less "pluggable" than
> > >   it was intended, correct?
> > 
> > stand -> walk -> run :-)  I didn't want to make it any more invasive than
> > it currently is, as that would require either desupporting the current
> > scheduler or using it only on UP.  Also, it's a lot of extra effort and a
> > lot of extra bugs.  I doubt there is much sched lock contention today.

KSE is relying too much on sched_lock at the moment. If you remove
sched_lock then KSE will probably explode.
We are planning a "lock-a-thon" at some stage where we rationalise all
the locks that have organically grown.
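
For concreteness, per-CPU runqueue locking of the sort discussed above
might look something like the sketch below (hypothetical; the struct
and function names are illustrative, not the actual kernel API):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/runq.h>

    /*
     * Hypothetical per-CPU run queue guarded by its own spin mutex,
     * so picking the next thread never touches the global sched_lock.
     * Names are illustrative only.
     */
    struct kseq {
            struct mtx      ksq_lock;       /* protects this queue only */
            struct runq     ksq_runq;       /* this CPU's run queue */
    };

    static struct kseq kseq_cpu[MAXCPU];

    static struct kse *
    kseq_choose(struct kseq *ksq)
    {
            struct kse *ke;

            mtx_lock_spin(&ksq->ksq_lock);
            ke = runq_choose(&ksq->ksq_runq);
            mtx_unlock_spin(&ksq->ksq_lock);
            return (ke);
    }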

> 
>   Oh, that makes sense.  I just wanted to make sure that the possibility
>   existed to move this way at some point in the future, assuming
>   that the additional complexity (if any) is not too costly when
>   compared to the [measured] performance gains (again, if they are
>   measurable, and I am sure they will be - the same cache locality
>   arguments you apply below undoubtedly also apply to splitting a global
>   lock into per-CPU locks).
> 
> > >   I think that one of the main advantages of this thing is the reduction
> > >   of the contention on the sched_lock.  If that can be achieved then
> > >   scheduling any thread, including interrupt threads, would already be
> > >   cheaper than it currently is (assuming you could go through a context
> > >   switch without the global sched_lock, and I don't see why with this
> > >   code you could not).
> > 
> > I'd like to reorg the mi_switch/cpu_switch path.  I'd like to pick the
> > new thread in mi_switch and hand it off to cpu_switch instead of calling
> > back into sched_choose().  This will make all of this slightly cleaner.
> 
>   Good idea.  This would make other things easier to implement, too
>   (including lightweight interrupt threads, should we decide to pursue
>   that at some point again).

Be aware that KSE has its fingers in there too.
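
For illustration only, the reorganized path might be shaped like the
sketch below (it assumes cpu_switch() grows old/new thread arguments;
this is the shape of the idea, not the actual code):

    /*
     * Sketch: mi_switch() picks the next thread in MI code and hands
     * both threads to cpu_switch(), so the MD code is a pure context
     * swap instead of calling back into the scheduler.
     */
    void
    mi_switch(void)
    {
            struct thread *old = curthread;
            struct thread *new;

            new = choosethread();   /* MI: pick the winner here */
            cpu_switch(old, new);   /* MD: just swap contexts */
    }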


> 
> > >   Finally, I have one question regarding your results.  Obviously, 35%
> > >   and 10% are noteworthy numbers.  What do you attribute the speedup to,
> > >   primarily, given that this is still all under a global sched_lock?
> > >
> > >   Thanks again for all your work.
> > >
> > 
> > There are a few factors.  Most notably, CPU affinity.  The caches are
> > thrashing so much on SMP with the old scheduler that it's actually slower
> > than UP in some cases.  Also, since the balancing is currently pooched,
> > the memory bus is contended for less.  So the 35% will probably get a bit
> > smaller, but hopefully the real time will too.
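
(To make the affinity point concrete: with per-CPU queues, a thread
goes back on the queue of the CPU it last ran on, so its cache
footprint is still warm when it next runs.  A hypothetical sketch,
reusing the illustrative kseq structure from earlier; the td_lastcpu
and td_kse fields are assumptions about the thread structure:)

    static void
    kseq_add(struct thread *td)
    {
            /* Queue the thread on the CPU it last ran on. */
            struct kseq *ksq = &kseq_cpu[td->td_lastcpu];

            mtx_lock_spin(&ksq->ksq_lock);
            runq_add(&ksq->ksq_runq, td->td_kse);
            mtx_unlock_spin(&ksq->ksq_lock);
    }
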
> > 
> > The new scheduler is also algorithmically cheaper.  10 times a second
> > schedcpu() would run on the old scheduler and pollute your cache.  With
> > lots of processes this code could take a while too.
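
(For context, the old scheduler's periodic pass is shaped roughly like
the simplified sketch below; it is not the exact 4BSD code:)

    /*
     * Simplified sketch of the old schedcpu(): each pass walks every
     * process in the system to decay CPU estimates and recompute
     * priorities, so the cost grows with the process count and the
     * walk pollutes the cache.  Re-armed periodically via a timeout.
     */
    static void
    schedcpu(void *arg)
    {
            struct proc *p;

            sx_slock(&allproc_lock);
            LIST_FOREACH(p, &allproc, p_list) {
                    /* decay p_estcpu and recompute priority here */
            }
            sx_sunlock(&allproc_lock);
    }
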
> 
>   Both of these are what I was looking for, thanks.  I totally believe
>   the cache locality argument, especially given that the slight
>   performance improvements I've seen when doing mb_alloc were also
>   attributed to that.
> 
> > Cheers,
> > Jeff
> 
> Regards,
> -- 
> Bosko Milekic * bmilekic@unixdaemons.com * bmilekic@FreeBSD.org

