From: "Benjeman J. Meekhof" <bmeekhof@umich.edu>
Date: Mon, 24 Mar 2008 21:57:20 -0400
To: freebsd-performance@freebsd.org
Subject: performance tuning on perc6 (LSI) controller
Message-ID: <47E85C00.4010601@umich.edu>

Hello,

I think this might be useful information, and I am also hoping for a little input. We've been doing some FreeBSD benchmarking on Dell PE2950 systems with Perc6 controllers (dual quad-core Xeons, 16 GB RAM, the Perc6 is an LSI card using the mfi driver, 7.0-RELEASE). There are two controllers in each system, and each has two MD1000 disk shelves attached via its two 4x SAS interfaces (so 30 physical disks (PD) are available to each controller, 60 PD on the system).

My baseline was this: on Linux 2.6.20 we're doing 800 MB/s write and faster reads with this configuration: two RAID6 volumes striped into a RAID0 volume using Linux software RAID, with an XFS filesystem on top. Each RAID6 is a volume on one controller using 30 PD. We've spent time tuning that setup, more than I have with FreeBSD so far.

Initially I was getting strangely poor read results. Here is one example (before launching into quicker dd tests, I already had similarly bad results from some more complete iozone tests):

time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
10737418240 bytes transferred in 26.473629 secs (405589209 bytes/sec)

time dd if=/test/deletafile of=/dev/null bs=1M count=10240
10737418240 bytes transferred in 157.700367 secs (68087465 bytes/sec)

To make a very long story short, much better results were achieved in the end simply by increasing the filesystem block size to the maximum (same dd commands). I'm running a more thorough test on this setup using iozone:

#gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
#newfs -U -b 65536 /dev/stripe/test
#write: 19.240875 secs (558052492 bytes/sec)
#read: 20.000606 secs (536854644 bytes/sec)

I also set the following in /boot/loader.conf. It didn't affect any test very much, but the settings seemed reasonable so I kept them:

kern.geom.stripe.fast=1
vfs.hirunningspace=5242880
vfs.read_max=32

Any other suggestions to get the best throughput? There is also the HW RAID stripe size, which could be adjusted larger or smaller. ZFS is also on the list for testing. Should I perhaps be running -CURRENT or -STABLE to get the best results with ZFS? (I've appended below the rough sequence of commands I'm using for these tests, in case anyone wants to reproduce the numbers.)

-Ben

--
Benjeman Meekhof - UM ATLAS/AGLT2 Computing
bmeekhof@umich.edu
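
For reference, here is roughly the sequence collected into one script. The device names (/dev/mfid0, /dev/mfid2), the 128k stripe size, the 64k filesystem block size, and the /test mount point are from my setup; the mkdir/mount glue is just the obvious boilerplate, so adjust for your own hardware before running it.

#!/bin/sh
# Rough consolidation of the test sequence above (FreeBSD 7.0-RELEASE, mfi driver).
# Device names, block sizes, and mount point match my setup -- adjust to taste.

STRIPE=test                     # gstripe label name
DISKS="/dev/mfid0 /dev/mfid2"   # one RAID6 volume per Perc6 controller
MNT=/test

# Build a 128k geom stripe across the two controller volumes
gstripe label -v -s 128k ${STRIPE} ${DISKS}

# UFS2 with soft updates and the maximum (64k) block size
newfs -U -b 65536 /dev/stripe/${STRIPE}

mkdir -p ${MNT}
mount /dev/stripe/${STRIPE} ${MNT}

# Same quick 10 GB sequential write/read check as above
# (umount and remount between the two if you want to keep the
# buffer cache from helping the read)
time dd if=/dev/zero of=${MNT}/deletafile bs=1M count=10240
time dd if=${MNT}/deletafile of=/dev/null bs=1M count=10240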