Date:      Tue, 10 May 2005 13:32:32 -0700
From:      Bakul Shah <bakul@BitBlocks.com>
To:        Petri Helenius <pete@he.iki.fi>
Cc:        performance@freebsd.org
Subject:   Re: Regression testing (was Re: Performance issue) 
Message-ID:  <200505102032.j4AKWWcD073387@gate.bitblocks.com>
In-Reply-To: Your message of "Tue, 10 May 2005 22:51:46 +0300." <428110D2.8070004@he.iki.fi>

> This sounds somewhat similar to Solaris dtrace stuff?

Dtrace can be a (very useful) component for collecting
performance metrics.  What I am talking about is a framework
where you'd apply dtrace or other micro/system level
performance tests or benchmarks on a regular basis for a
variety of machines, loads etc. and collate results in a
usable form.
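
As a rough illustration (not anything that exists in the tree), such a
framework could start as small as a harness that times each benchmark
run and appends one record per host and revision, so results from many
machines can be collated later.  All names below are hypothetical:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a periodic benchmark harness: time one command
# and append a CSV row per run, tagged with host and kernel, so results
# from many machines and code revisions can be collated centrally.
import csv
import platform
import subprocess
import time
from pathlib import Path

def run_benchmark(cmd, results_file="perf_results.csv", label="unnamed"):
    """Run one benchmark command, time it, append the result as CSV."""
    start = time.monotonic()
    subprocess.run(cmd, check=True, capture_output=True)
    elapsed = time.monotonic() - start

    path = Path(results_file)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(
                ["timestamp", "host", "kernel", "benchmark", "seconds"])
        writer.writerow(
            [time.strftime("%Y-%m-%dT%H:%M:%S"), platform.node(),
             platform.release(), label, f"{elapsed:.3f}"])
    return elapsed

if __name__ == "__main__":
    # A trivial stand-in; a real setup would invoke dtrace scripts,
    # micro-benchmarks, or workload generators here instead.
    run_benchmark(["true"], label="noop")
```

A cron job running this on a variety of machines, plus a script that
graphs the collected CSVs per benchmark over time, would already give
the kind of ongoing view described below.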

The purpose is to provide an ongoing view of how performance
of various subsystems and the system as a whole changes for
various loads and configurations as the codebase evolves.

This gives an early warning of performance loss (as seen in
-5.x versus -4.x releases) as well as early confirmation of
improvements (as seen in -6.x versus -5.x).  Users can
provide early feedback without having to wait for a release.
It is difficult and time consuming for developers to measure
the impact of their changes across a variety of systems,
configurations and loads.  A centralized performance
measuring system can be very valuable here.  If you see that
e.g.  a new scheduler has a terrible impact on some systems
or loads, you'd either come up with something better or
provide a knob.  If you see that a nifty new feature has a
significant performance cost, you'd be less tempted to make
it the default (or at least others get a chance to scream
early on).


