Date:      Sun, 25 Feb 2007 10:51:31 +0000 (GMT)
From:      Robert Watson <rwatson@FreeBSD.org>
To:        Kris Kennaway <kris@obsecurity.org>
Cc:        smp@freebsd.org, hackers@freebsd.org, current@freebsd.org, cokane@cokane.org
Subject:   Re: Progress on scaling of FreeBSD on 8 CPU systems
Message-ID:  <20070225104709.S36322@fledge.watson.org>
In-Reply-To: <20070225054120.GA47059@xor.obsecurity.org>
References:  <20070224213111.GB41434@xor.obsecurity.org> <346a80220702242100i7ec22b5h4b25cc7d20d03e98@mail.gmail.com> <20070225054120.GA47059@xor.obsecurity.org>

On Sun, 25 Feb 2007, Kris Kennaway wrote:

> On Sat, Feb 24, 2007 at 10:00:35PM -0700, Coleman Kane wrote:
>
>> What does the performance curve look like for the in-CVS 7-CURRENT tree 
>> with 4BSD or ULE? How do those stand up against the Linux SMP scheduler 
>> for scalability? It would be nice to see a comparison showing what 
>> performance improvements the aforementioned patch realized. This would 
>> likely make a nice graphic for the SMPng project page, BTW...
>
> There are graphs of this on Jeff's blog, referenced in that URL. Fixing 
> filedesc locking makes a HUGE difference.

I think the real message of all this is that our locking strategy is basically 
pretty reasonable for the paths exercised by this workload (and quite a few 
others), but our low-level scheduler and locking primitives need a lot of 
refinement.  The next step here is to look at the impact of these changes 
(individually and together) on other hardware configurations and under other 
workloads.  On the hardware side, I'd very much like to see measurements done 
on that rather nasty generation of Intel P4 Xeons, where the cost of mutexes 
was astronomically out of proportion to other operation costs; historically 
that heavily pessimized ULE due to the additional locking it did (I don't 
know whether this still applies).
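
As a rough illustration of what "mutex cost" means here, a minimal userland 
sketch along the following lines, timing uncontended lock/unlock pairs, gives 
a per-CPU feel for primitive cost.  It exercises pthread mutexes rather than 
kernel mtx(9) locks, so treat it only as a loose proxy, but the relative 
numbers across CPU generations can be telling:

	#include <pthread.h>
	#include <stdio.h>
	#include <time.h>

	#define	ITERATIONS	10000000L

	int
	main(void)
	{
		/* Uncontended userland mutex: a proxy, not a kernel mtx(9). */
		pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
		struct timespec start, end;
		double ns;
		long i;

		clock_gettime(CLOCK_MONOTONIC, &start);
		for (i = 0; i < ITERATIONS; i++) {
			pthread_mutex_lock(&m);
			pthread_mutex_unlock(&m);
		}
		clock_gettime(CLOCK_MONOTONIC, &end);

		/* Average cost of one lock/unlock pair, in nanoseconds. */
		ns = (end.tv_sec - start.tv_sec) * 1e9 +
		    (end.tv_nsec - start.tv_nsec);
		printf("%.1f ns per lock/unlock pair\n", ns / ITERATIONS);
		return (0);
	}

Running something like that on the P4-era boxes next to newer hardware would 
at least quantify how lopsided the locking costs were.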

It would be really great if we could find "workload owners" who would maintain 
easy-to-run benchmark configurations, run them regularly on a fixed hardware 
configuration over a long period, publish results, and test patches.  Kris has 
done this for SQL benchmarks to great effect, giving a nicely controlled 
testing environment for a host of performance-related patches, but SQL is not 
the be-all and end-all of application workloads, so having others do the same 
with other benchmarks would be very helpful.

Robert N M Watson
Computer Laboratory
University of Cambridge


