Date:      Wed, 8 Jan 2014 13:27:51 -0700
From:      Alan Somers <asomers@freebsd.org>
To:        Erik Cederstrand <erik+lists@cederstrand.dk>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>, Alan Somers <asomers@freebsd.org>
Subject:   Re: Continual benchmarking / regression testing?
Message-ID:  <CAOtMX2iSDvzX1n_GsBePNsrijKf2KkNAfa2mQQ99m1BK2Bc_QQ@mail.gmail.com>
In-Reply-To: <513D4C78-D6FC-45D8-8B1F-CFD2C96E872F@cederstrand.dk>
References:  <lah8s3$8ur$1@ger.gmane.org> <CDBEEA8C-90FE-4E4B-B16E-8A5EF7685F51@cederstrand.dk> <CAOtMX2hiMAnZ5=-FC1TW07eML4p2s_f6iG+6KPofq9zxpbauNg@mail.gmail.com> <513D4C78-D6FC-45D8-8B1F-CFD2C96E872F@cederstrand.dk>

On Wed, Jan 8, 2014 at 1:13 PM, Erik Cederstrand
<erik+lists@cederstrand.dk> wrote:
> Den 08/01/2014 kl. 17.38 skrev Alan Somers <asomers@FreeBSD.org>:
>>
>> I like that you stored test results in a SQL database.  My SQL-foo is
>> poor, so ATM my framework is using CSV.  It also looks like you've got
>> code to generate a website.  Do you have any example output?
>
> Yes, there's a website to filter results, generate graphs, see commit
> messages between two data points, and show the hardware and software
> configuration of the client running the benchmark. A continuous
> benchmarking framework is only useful if it can assist you in analyzing
> the data, finding regressions and their cause.

I meant, do you have any generated html that you can share?

>
>> The PXE stuff, however, does not belong in the
>> benchmark framework, IMHO.  I think that the benchmark framework
>> should just include the benchmarking and system profiling aspect, not
>> system configuration.  Provisioning and configuring systems can be
>> done in a separate utility, one that can be shared, for example, with
>> the continuous Kyua tester.
>
> System configuration affects benchmark results, so that needs to be
> recorded along with the benchmark results. My work was intended as a
> complete continuous benchmarking system, with a build machine that
> produces OS installation images and tells clients what to install and
> what to run. But I agree that a benchmark framework contains many
> self-contained parts that could be shared among projects.

My strategy was to separate system configuration from system analysis.
That way, the user can configure the system however he likes, using
any tools.  Then the framework will analyze the system in as much
detail as possible.  It will determine the CPU type, CPU count, memory
type, memory amount, kernel version, network interface configuration,
filesystem, filesystem properties, zpool topology, hard disk model,
etc.  The analysis engine is a lot of work, but it's more robust and
more flexible than tying system configuration into the framework.
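
Roughly speaking, the analysis pass boils down to something like the
sketch below.  This is only an illustration, not the real engine; the
sysctl OIDs, commands, and output file name are just what I'd reach
for on FreeBSD:

  #!/bin/sh
  # Illustrative system-profiling pass: capture hardware, kernel, and
  # storage details so they can be stored next to the benchmark results.
  OUT=system-profile.txt
  {
    echo "kernel:  $(uname -srm)"
    echo "cpu:     $(sysctl -n hw.model)"
    echo "ncpu:    $(sysctl -n hw.ncpu)"
    echo "realmem: $(sysctl -n hw.realmem)"
    # Storage and network layout; zpool may be absent on UFS-only boxes.
    zpool status 2>/dev/null
    mount -p
    ifconfig -a
  } > "$OUT"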

This separation of analysis from configuration allows the
configuration aspect to be handled by a separate tool that knows
nothing of benchmarks.  The benchmarks themselves can then be
sequenced by a short sh script or something similar.
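
Something on the order of this sketch would do; the benchmark commands
and the CSV layout are purely illustrative, not what my framework
actually runs:

  #!/bin/sh
  # Hypothetical benchmark sequencer: run each command, time it, and
  # append one CSV row per run.
  RESULTS=results.csv
  echo "timestamp,benchmark,elapsed_seconds" > "$RESULTS"
  for bench in "dd if=/dev/zero of=/tmp/bigfile bs=1m count=1024" \
               "gzip -9 -c /tmp/bigfile > /dev/null"; do
      start=$(date +%s)
      sh -c "$bench" > /dev/null 2>&1
      end=$(date +%s)
      echo "$(date -u +%Y-%m-%dT%H:%M:%SZ),\"$bench\",$((end - start))" >> "$RESULTS"
  done
  rm -f /tmp/bigfile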

-Alan

>
> Erik


