From: Alan Somers
Date: Wed, 8 Jan 2014 13:27:51 -0700
Subject: Re: Continual benchmarking / regression testing?
To: Erik Cederstrand
Cc: "freebsd-hackers@freebsd.org", Alan Somers

On Wed, Jan 8, 2014 at 1:13 PM, Erik Cederstrand wrote:
> On 08/01/2014 at 17.38, Alan Somers wrote:
>>
>> I like that you stored test results in a SQL database. My SQL-foo is
>> poor, so ATM my framework is using CSV. It also looks like you've got
>> code to generate a website. Do you have any example output?
>
> Yes, there's a website to filter results, generate graphs, see commit
> messages between two data points, and show the hardware and software
> configuration of the client running the benchmark. A continuous
> benchmarking framework is only useful if it can assist you in
> analyzing the data and finding regressions and their cause.

I meant, do you have any generated HTML that you can share?

>> The PXE stuff, however, does not belong in the benchmark framework,
>> IMHO. I think that the benchmark framework should just include the
>> benchmarking and system profiling aspects, not system configuration.
>> Provisioning and configuring systems can be done in a separate
>> utility, one that can be shared, for example, with the continuous
>> Kyua tester.
>
> System configuration affects benchmark results, so that needs to be
> recorded along with the benchmark results.
> My work was intended as a complete continuous benchmarking system,
> with a build machine that produces OS installation images and tells
> clients what to install and what to run. But I agree that a benchmark
> framework contains many self-contained parts that could be shared
> among projects.

My strategy was to separate system configuration from system analysis.
That way, the user can configure the system however he likes, using any
tools. The framework will then analyze the system in as much detail as
possible: it will determine the CPU type, CPU count, memory type,
memory amount, kernel version, network interface configuration,
filesystem, filesystem properties, zpool topology, hard disk model,
etc. The analysis engine is a lot of work, but it's more robust and
flexible than tying system configuration into the framework. This
separation of analysis from configuration allows the configuration
aspect to be handled by a separate tool that knows nothing of
benchmarks. The benchmarks themselves can then be sequenced by a short
sh script or something similar; see the sketch below.
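Roughly, I'm picturing something like this. It's an untested sketch:
the sysctl, ifconfig, zpool, and camcontrol invocations are real
FreeBSD commands, but the file names, the benchmark names, and the
per-benchmark scripts are hypothetical placeholders, not part of any
existing framework.

#!/bin/sh
# Untested sketch: record a system profile, then sequence benchmarks.
# profile.txt, results.csv, and benchmarks/*.sh are hypothetical names.

PROFILE=profile.txt
RESULTS=results.csv

# --- System analysis: record the hardware/software configuration ---
{
    echo "cpu.model: $(sysctl -n hw.model)"
    echo "cpu.count: $(sysctl -n hw.ncpu)"
    echo "mem.bytes: $(sysctl -n hw.physmem)"
    echo "kernel:    $(uname -srm)"
    echo "== interfaces ==";  ifconfig
    echo "== filesystems =="; mount
    echo "== zpools ==";      zpool status 2>/dev/null
    echo "== disks ==";       camcontrol devlist 2>/dev/null
} > "$PROFILE"

# --- Benchmark sequencing: run each one, append a CSV result row ---
for bench in dd_write buildworld; do    # placeholder benchmark names
    start=$(date +%s)
    sh "benchmarks/${bench}.sh"         # assumed per-benchmark script
    end=$(date +%s)
    echo "$(hostname),${bench},$((end - start))" >> "$RESULTS"
done

The point is that nothing in the script cares how the system got into
its current configuration; it only records what it finds and then runs
the benchmarks in order.

-Alan

>
> Erik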