Date:      Thu, 26 Feb 2009 09:19:35 -0800
From:      Tim Kientzle <kientzle@freebsd.org>
To:        Robert Watson <rwatson@freebsd.org>
Cc:        Siddharth Prakash Singh <spsneo@gmail.com>, freebsd-hackers@freebsd.org
Subject:   Re: Google SoC 2009 Idea
Message-ID:  <49A6CF27.3000203@freebsd.org>
In-Reply-To: <alpine.BSF.2.00.0902261620100.41191@fledge.watson.org>
References:  <e8e9f3930902240943o2e2f4b1bh34916b775692a26f@mail.gmail.com> <49A5D6FC.1090800@freebsd.org> <alpine.BSF.2.00.0902261620100.41191@fledge.watson.org>

Robert Watson wrote:
> On Wed, 25 Feb 2009, Tim Kientzle wrote:
> 
>>> I have not gone through the process scheduler code of FreeBSD.
>>> Hence, I am not yet aware of its current support for multicore
>>> architectures.
>>
>> Since you posted to a lot of different lists, I think you probably 
>> don't already use FreeBSD. (If you did, why would you post to NetBSD 
>> and DragonflyBSD lists?)  Scheduler work is quite complex and 
>> interacts heavily with the rest of the system; it may not be a good 
>> choice for someone who doesn't already have a lot of experience with 
>> FreeBSD.
> 
> All the things you say are true, but let's not be too hard on the new 
> guy -- many of our GSoC students don't have previous FreeBSD 
> kernel-hacking experience.  However, it does mean that they have to pick 
> project ideas that are well-suited to a significant warmup and 
> investigation period on the front end of the project.

I apologize to Siddharth and others if I came off overly
harsh.  My intention was to caution him that he should
plan for a lot of work prior to GSoC if he wants to
tackle something that's at the core of the OS like this.

> I'm also not convinced that a scheduler project along these lines would 
> be the most successful, but I wonder if a more experimentally-oriented 
> proposal -- investigating poor scheduling decisions using dtrace, adding 
> instrumentation and metrics to help us understand performance on NUMA 
> systems, and exploring the impact of heuristics -- might go a long way.

That's a good idea.  The thing that's always impressed
me about scheduling work is how very difficult it is to
test.  It's easy to change the scheduler code; it's
much harder to measure whether those changes have
made the scheduler better or not.

Some testing support would help.  Ideally, something
non-intrusive that could easily be run on a lot of
different machines to collect better information
about the impact of scheduler changes:
  * Load balancing:  How effectively are all cores being used?
  * CPU switching:  What percentage of the time does a thread
    stay on the same core?  (A rough DTrace sketch for this one
    follows below.)
  * NUMA statistics:  How often does a thread get scheduled on a
    different processor from its allocated memory?
  * Priority inversion:  How often is a higher-priority thread
    idle while a lower-priority thread is running?
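
To make the CPU-switching measurement concrete, here is a rough,
untested sketch of the sort of non-intrusive DTrace script I have in
mind.  It assumes the sched provider's on-cpu probe and the built-in
cpu and execname variables are available on the system; the
per-program breakdown is just one guess at useful output, not an
existing tool:

    #!/usr/sbin/dtrace -s
    /*
     * Sketch: estimate how often a thread comes back onto the same
     * core it last ran on, with the counts keyed by program name.
     */

    sched:::on-cpu
    /self->seen/
    {
            /* Count this run, and whether it landed on the previous core. */
            @stayed[execname] = sum(self->lastcpu == cpu ? 1 : 0);
            @runs[execname] = count();
    }

    sched:::on-cpu
    {
            /* Remember where this thread ran this time. */
            self->seen = 1;
            self->lastcpu = cpu;
    }

    END
    {
            printa("%-16s  same-core runs: %@d of %@d\n", @stayed, @runs);
    }

Run it as root under a workload for a while, then interrupt it; the
END clause prints, per program, how many times threads resumed on the
core they last used versus how many times they ran at all.  The other
metrics above would need more machinery (NUMA domain lookups, priority
snapshots), which is where the real project work would be.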

A student who built such a tool and then ran tests
across a variety of hardware and workloads could really
do a lot to advance scheduler development.  Eventually,
turning it into something that anyone could run, with
the results uploaded to a central collection site, would
be an even bigger step forward.

Certainly something to think about...

Tim


