Date:      Thu, 03 Oct 1996 12:45:31 -0500
From:      "Chris Csanady" <ccsanady@friley216.res.iastate.edu>
To:        Peter Wemm <peter@spinner.dialix.com>
Cc:        freebsd-smp@freebsd.org
Subject:   Re: Scheduling and idle loops.. (Was Re: cvs commit: sys/kern . . ) 
Message-ID:  <199610031745.MAA03088@friley216.res.iastate.edu>
In-Reply-To: Your message of Thu, 03 Oct 1996 20:39:32 +0800. <199610031239.UAA05805@spinner.DIALix.COM> 


>"Chris Csanady" wrote:
>> Should we perhaps set the var hw.ncpu early on, and get rid of the NCPU
>> config variable?  It seems to be there for nothing right now..
>
>NCPU is rather important at the moment, it defines how much room we 
>reserve in various tables, and really it should be called "MAXCPU" which 
>would be more accurate.  There should be nothing stopping somebody 
>configuring NCPU to 10 at present, it will only create the idle procs for 
>the number of active cpus (providing there's enough available slots of 
>course)

I was more thinking of the hw.ncpu sysctl variable that just sits there.  ;)
It's quite nice that this all gets started automatically now.

>
>> >  
>> >  Also... shudder.. It fires up the alternate cpus as soon as the idle proc
>> >  is scheduled, ie: at boot time immediately after init.  No more need to
>> >  set kern.smp_active to 2.  In theory, if you have 4 cpus, smp_active
>> >  should end up with the value "4".  Raising and lowering it will probably
>> >  enable and disable the appropriate numbered cpus.  Setting smp_active to
>> >  1 should cause the system to effectively run uniprocessor.
>> 
>> On this topic, the smp.todo mentions we want to get rid of the idle
>> processes?!  I have been looking at the scheduling code, and although I am
>> currently somewhat confused, it does not seem that this would be possible
>> until we have a threaded kernel.  Am I incorrect in assuming this?  The
>> current idle loop in the UP code is in the kernel, so it doesn't seem as if
>> we could do it that way without some degree of threading.
>
>This is equal top of my "todo" list.  I think we'll get the tlb 
>invalidation going next though, since it's a showstopper.
>
>> More generally though, is anyone looking at rewriting the scheduler?  And
>> based upon what?  I was going to try and work on this, but currently, I
>> haven't gotten a complete understanding of everything going on, and how to
>> go about it the right way.
>
>At present, we have 32 run queues with 4 (8?) priority levels in each 
>queue.  When the scheduler runs, it picks the first process off the top of 
>the highest priority queue.

Currently, the processes are just taken off the run queues from the idle loops,
correct?  Anyway, what guarantees that the idle loops are running on their
respective processors?  Or perhaps it does not even matter.  I'm still unclear
as to how both cpus run things.  (I mean, at the lowest possible level, what
tells the secondary cpu to actually run something?)
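If I follow the 32-queue scheme you describe, selection might look roughly like
the sketch below: a bitmask marks the nonempty queues, and ffs() finds the best
one.  (All the names, and the 4-priorities-per-queue mapping, are my guesses at
the idea, not the actual kernel code.)

```c
#include <stddef.h>
#include <strings.h>            /* ffs() */

#define NQUEUES 32              /* 128 priorities / 4 levels per queue */

struct proc {
    struct proc *p_next;
    int          p_priority;    /* 0 (best) .. 127 */
};

struct runq {
    unsigned int rq_bits;           /* bit q set => rq_head[q] nonempty */
    struct proc *rq_head[NQUEUES];
    struct proc *rq_tail[NQUEUES];
};

/* Append p to the queue for its priority band. */
static void
runq_add(struct runq *rq, struct proc *p)
{
    int q = p->p_priority / 4;

    p->p_next = NULL;
    if (rq->rq_bits & (1u << q))
        rq->rq_tail[q]->p_next = p;
    else
        rq->rq_head[q] = p;
    rq->rq_tail[q] = p;
    rq->rq_bits |= 1u << q;
}

/* Pick the first process off the highest-priority nonempty queue. */
static struct proc *
runq_choose(struct runq *rq)
{
    int q = ffs(rq->rq_bits);   /* lowest set bit = best band */
    struct proc *p;

    if (q == 0)
        return NULL;            /* nothing runnable: go idle */
    q--;
    p = rq->rq_head[q];
    if ((rq->rq_head[q] = p->p_next) == NULL)
        rq->rq_bits &= ~(1u << q);  /* queue drained */
    return p;
}
```

The nice property is that picking the next process is O(1) regardless of how
many processes are runnable, if I'm reading it right.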

>Something has to be done here since there seems to be no real way to bias 
>processes to attempt to give them preference for a single cpu to get some 
>advantage of the on-cpu cache.  Otherwise, the processes seem to bounce 
>backwards and forwards from one cpu to the other and so on.

I have been thinking about this as well..

>I don't know an easy answer offhand..

Me neither..  Also, it would be nice to balance the load evenly.  I recall
there being a lot of discussion of this in Schimmel's book, but I don't have a
copy of that right now. :(  Perhaps just having 2 entirely different run
queues?  Depending on relative cpu idle times, processors could steal processes
from the other's run queue.  So in a relatively stable situation, none would
move at all.  (i.e., if no processors are bored, nothing happens :)
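In outline, the steal-when-idle policy I have in mind is just this (a toy
sketch with invented names and plain lists, no locking, which a real kernel
would obviously need):

```c
#include <stddef.h>

struct task {
    struct task *next;
};

struct cpu_runq {
    struct task *head;          /* per-cpu stack of runnable tasks */
};

static void
runq_push(struct cpu_runq *q, struct task *t)
{
    t->next = q->head;
    q->head = t;
}

/* Prefer our own queue; only when it is empty do we go looking at the
 * peer's queue, so nothing migrates while both cpus stay busy. */
static struct task *
next_task(struct cpu_runq *self, struct cpu_runq *peer)
{
    struct cpu_runq *q = (self->head != NULL) ? self : peer;
    struct task *t = q->head;

    if (t != NULL)
        q->head = t->next;      /* pop; may be a steal from peer */
    return t;
}
```

The point of the policy is in next_task(): migration (and the cache damage
that goes with it) only happens when a cpu would otherwise sit idle.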

Chris

>If we maintain this strategy, perhaps we'd need to have one set of 32 run 
>queues for each cpu as well as the common one.  The scheduler could look 
>in the per-cpu queue first and if there's a process available that's "near 
>enough" to the head of the "real" run queue then choose that.  That starts 
>to get messy though, perhaps a simple list of "recently run processes" to 
>look in the run queues for may be enough.
>
>> I don't know..  Perhaps I should just stay away from some stuff. :)
>
>Well, if anybody comes up with a better strategy that fits the kernel 
>without too much trauma (that's also understandable), it's worth hearing.
>
>> Chris Csanady
>
>Cheers,
>-Peter
>
>
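
Your "recently run processes" idea above might come out something like this
sketch: check a small per-cpu list first, and accept a cache-warm process if
its priority is "near enough" to the head of the real run queue.  (The
function name, the window constant, and the fields are all made up here.)

```c
#include <stddef.h>

#define AFFINITY_WINDOW 4       /* how far behind the head is "near enough" */

struct proc {
    struct proc *p_next;
    int          p_priority;    /* lower is better */
    int          p_runnable;
};

/* global_head: head of the shared run queue (best priority first).
 * recent[]:    processes that last ran on this cpu, possibly stale. */
static struct proc *
choose_with_affinity(struct proc *global_head,
                     struct proc *recent[], int nrecent)
{
    int i;

    if (global_head == NULL)
        return NULL;
    for (i = 0; i < nrecent; i++) {
        struct proc *p = recent[i];

        if (p != NULL && p->p_runnable &&
            p->p_priority - global_head->p_priority <= AFFINITY_WINDOW)
            return p;           /* cache-warm and close enough in priority */
    }
    return global_head;         /* fall back to the real head */
}
```

The messy part you mention would be keeping recent[] honest as processes
sleep, exit, or get picked up by the other cpu; the sketch ignores all that.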



