Date:      Fri, 13 Aug 1999 15:11 -0600
From:      "Brian McGroarty" <BMCGROARTY@high-voltage.com>
To:        "Peter Wemm" <peter@netplex.com.au>, "Alan Cox" <alc@cs.rice.edu>
Cc:        "Josep Maria M. Blanquer" <blanquer@cs.ucsb.edu>, "freebsd-smp" <freebsd-smp@FreeBSD.ORG>
Subject:   RE:  Re: Questions 
Message-ID:  <6E585390B950D31186D50008C7333C82@high-voltage.com>

Does this selection mode skew overall processor allocation measurably?

I imagine allocation is severely skewed by roundrobin() between schedcpu()
calls, but then the high p_estcpu bumps the favored task down a run queue
level, compensating with a brief spell of processor starvation.

The worst case would then be a high priority task with a good-sized pool of
others a few run queue levels down. The process would race and then bob down,
float back up as p_estcpu is averaged down by successive schedcpu() calls,
soon climb a level back up on account of its higher priority, then race and
pitch downward again.
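
Roughly the feedback loop I have in mind, as a quick userland sketch. It
assumes the classic 4.4BSD formulas from schedcpu()/resetpriority(); the
load, tick count and clamps are illustrative, not the exact kernel values:

#include <stdio.h>

#define PUSER 50

int
main(void)
{
        double load = 1.0, estcpu = 0.0;
        int nice = 0, sec;

        for (sec = 0; sec < 10; sec++) {
                double decay = (2.0 * load) / (2.0 * load + 1.0);
                int prio, queue;

                estcpu += 100.0;                 /* ticks accrued while hogging a cpu */
                estcpu = decay * estcpu + nice;  /* schedcpu(): decay once per second */
                prio = PUSER + (int)(estcpu / 4.0) + 2 * nice;
                if (prio > 127)
                        prio = 127;              /* clamp to MAXPRI */
                queue = prio >> 2;               /* setrunqueue(): 4 priorities/queue */
                printf("t=%2ds estcpu=%6.1f p_priority=%3d queue=%2d\n",
                    sec, estcpu, prio, queue);
        }
        return (0);
}

The task's queue index drifts down a level or two as estcpu builds, then
drifts back up as the decay catches up -- the bobbing I describe above.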


 -----Original Message-----
From: Peter Wemm [mailto:peter@netplex.com.au]
Sent: Friday, August 13, 1999 4:55 AM
To: Brian McGroarty; Alan Cox
Cc: Josep Maria M. Blanquer; freebsd-smp
Subject: Re: Questions

> Try asking John Dyson (dyson@iquest.net).  I think he has
> experimented with some limited forms of affinity scheduling.

I've done this BTW and have it currently running.  I've turned up a bug or
two that look awfully like something is changing p->p_priority of processes
while they are on run queues.

Even doing trivial affinity makes a big difference here.  Trivial meaning
that when selecting a process to run, walk the current run queue level and
find the first process with a matching lastcpu id rather than just the
first in the queue.  If no match, then take the head.  This is what John
did, but I rewrote setrunqueue, remrq in C and moved the process selection
out of i386/swtch.s and into C.  The compiler generates surprisingly similar
code to the assembler version, but when you turn on the U/V pipeline
scheduling and the cpu-specific code generation options (e.g. use cmove etc.)
then it seems to do slightly better than the original assembler code.
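
In outline the selection ends up looking something like this.  Just a sketch,
not the actual code: the real run queues are the prochd lists indexed by
p_priority >> 2 plus the whichqs bitmask, not TAILQs, and the struct and
field names here are only illustrative:

#include <sys/queue.h>
#include <strings.h>                    /* ffs() */
#include <stddef.h>

#define NQS 32                          /* one run queue per 4 priority levels */

struct proc {
        TAILQ_ENTRY(proc) p_procq;      /* run queue linkage */
        int               p_lastcpu;    /* cpu this process last ran on */
};

TAILQ_HEAD(rq, proc);

static struct rq    qs[NQS];
static unsigned int whichqs;            /* bit n set => qs[n] non-empty */

/*
 * Pick the next process for "mycpu": scan the highest-priority non-empty
 * queue for a process that last ran here, otherwise take the head.
 */
struct proc *
chooseproc(int mycpu)
{
        struct proc *p;
        int q;

        if (whichqs == 0)
                return (NULL);                  /* nothing runnable */
        q = ffs(whichqs) - 1;                   /* best non-empty queue */

        for (p = TAILQ_FIRST(&qs[q]); p != NULL; p = TAILQ_NEXT(p, p_procq))
                if (p->p_lastcpu == mycpu)
                        break;                  /* affinity hit */
        if (p == NULL)
                p = TAILQ_FIRST(&qs[q]);        /* no match: take the head */

        TAILQ_REMOVE(&qs[q], p, p_procq);
        if (TAILQ_EMPTY(&qs[q]))
                whichqs &= ~(1U << q);
        return (p);
}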

This dramatically reduces the complexity of cpu_switch() and swtch.s
and moves the run queue management to MI code.  All that is left in
cpu_switch() is the actual context switch code.

