From owner-freebsd-current@FreeBSD.ORG Fri Sep 14 03:16:50 2012
Message-ID: <50529FEA.4090604@gmail.com>
Date: Thu, 13 Sep 2012 20:09:30 -0700
From: matt <sendtomatt@gmail.com>
To: Garrett Cooper
Cc: Achim Patzner, Andrey Zonov, freebsd-current@freebsd.org
Subject: Re: mfi driver performance
List-Id: Discussions about the use of FreeBSD-current

On 09/13/12 13:13, Garrett Cooper wrote:
> On Thu, Sep 13, 2012 at 12:54 PM, matt wrote:
>> On 09/10/12 19:31, Garrett Cooper wrote:
> ...
>
>> It seems hw.mfi.max_cmds is read-only. The performance is pretty close to
>> expected with no NVRAM or BBU on this card and commodity disks from 1.5
>> years ago, as far as I'm concerned. I'd love better write performance, but
>> it's probably being held back by the single platter in the mirror when it
>> is writing far from its edge.
> Try loader.conf:
>
> $ grep -r hw.mfi.max_cmds /sys/dev/mfi/
> /sys/dev/mfi/mfi.c:TUNABLE_INT("hw.mfi.max_cmds", &mfi_max_cmds);
>
> Cheers,
> -Garrett

Here are the results for differing values of max_cmds, with the same test
conditions as against mps.

Original mfi performance (max_cmds=128):

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatline.local  32G   125  99 71443  24 53177  21   317  99 220280  33 255.3  52
Latency               533ms     566ms    1134ms   86565us     357ms     252ms
Version  1.96       ------Sequential Create------ --------Random Create--------
flatline.local      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22347  94 12389  30 16804 100 18729  99 27798  99  5317  99
Latency             33818us     233ms     558us   26581us      75us   12319us
1.96,1.96,flatline.local,1,1347329123,32G,,125,99,71443,24,53177,21,317,99,220280,33,255.3,52,16,,,,,22347,94,12389,30,16804,100,18729,99,27798,99,5317,99,533ms,566ms,1134ms,86565us,357ms,252ms,33818us,233ms,558us,26581us,75us,12319us

max_cmds=256:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatline.local  32G   125  99 70856  24 53503  21   327  98 232650  33 265.1  60
Latency               637ms     522ms    1050ms     121ms     318ms     339ms
Version  1.96       ------Sequential Create------ --------Random Create--------
flatline.local      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 17126  76 11865  31 17134  99 18265  99 27169 100  5006  99
Latency               114ms     522ms     875us   24250us      87us   14324us
1.96,1.96,flatline.local,1,1347580235,32G,,125,99,70856,24,53503,21,327,98,232650,33,265.1,60,16,,,,,17126,76,11865,31,17134,99,18265,99,27169,100,5006,99,637ms,522ms,1050ms,121ms,318ms,339ms,114ms,522ms,875us,24250us,87us,14324us

max_cmds=64:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatline.local  32G   125  99 71161  24 54035  21   288  90 229860  34 254.2  62
Latency               310ms     378ms     809ms     567ms     308ms     447ms
Version  1.96       ------Sequential Create------ --------Random Create--------
flatline.local      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22570  95 14243  35 13170  99 23503  99 +++++ +++ 22225  99
Latency             18111us     282ms    1165us   24786us     117us      80us
1.96,1.96,flatline.local,1,1347584224,32G,,125,99,71161,24,54035,21,288,90,229860,34,254.2,62,16,,,,,22570,95,14243,35,13170,99,23503,99,+++++,+++,22225,99,310ms,378ms,809ms,567ms,308ms,447ms,18111us,282ms,1165us,24786us,117us,80us

Still digesting the differences, but 256 seems to get more random seeks and
better sequential reads at the expense of higher latencies (some are probably
identical within noise). With lots of small files, as in a buildworld, 64
looks like it would do slightly better than 128, though the differences
between 64 and 128 are less extreme than those between 128 and 256.
Interestingly, sequential read appears better at both 64 and 256 than at 128,
but I assume this is a testing fluke...the sample set is small.

Matt
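P.S. For anyone who wants to repeat this: since hw.mfi.max_cmds is a boot-time
tunable (TUNABLE_INT) rather than a writable sysctl, it has to be set in
/boot/loader.conf before boot. A minimal sketch (the value 256 here is just an
example; use whichever value you're testing):

```
# /boot/loader.conf
# hw.mfi.max_cmds is read at driver attach; changing it requires a reboot
hw.mfi.max_cmds="256"
```

After rebooting, `sysctl hw.mfi.max_cmds` should report the new value.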