From owner-freebsd-performance@FreeBSD.ORG Tue May 15 21:02:17 2007
Message-ID: <464A1FD6.8070104@freebsd.org>
Date: Tue, 15 May 2007 16:02:14 -0500
From: Eric Anderson <anderson@freebsd.org>
User-Agent: Thunderbird 2.0.0.0 (X11/20070420)
To: freebsd-performance@freebsd.org
References: <464474F0.3040306@tomjudge.com> <7579f7fb0705110735h1a65ef7atcab00bdbc25224d6@mail.gmail.com> <4649A71C.4030302@skylinecorp.com> <4649CC6B.1050203@tomjudge.com> <000301c7970e$6dbdcc80$9501a8c0@skylinecorp.net>
In-Reply-To: <000301c7970e$6dbdcc80$9501a8c0@skylinecorp.net>
Subject: Re: PERC5i throughput [was: possible issues with Dell/Perc5 raid]

On 05/15/07 11:30, Kevin Kobb wrote:
> Tom Judge wrote:
>> Randy Schultz wrote:
>>> On Tue, 15 May 2007, Kevin Kobb spaketh thusly:
>>>
>>> -}These reports on poor performance using mpt seem to be on SATA
>>> -}drives. Has anybody been seeing this using SAS drives?
>>> -}
>>> -}We are testing Dell PE840s with hot-swap SAS drives, and seem to get decent
>>> -}performance, though I haven't run any benchmarks yet. Any opinions if the
>>> -}PERC5i/mfi is a better choice than the SAS5iR/mpt combination?
>>>
>>> Would it be possible for you to install blogbench and give it a quick
>>> run while your system is idle? We will be ordering a new box in 1-2
>>> months for which we need high disk I/O capability. We've been looking
>>> at the PERC5i set up as RAID 5 with 4-6 SAS drives. I'd be interested
>>> in what you're seeing for throughput on the PERC5i.
>>>
>> I have attached some blogbench tests from the following configs:
>>
>> Perc5i: 4 * SAS 15K RPM 146 GB disks in RAID 5.
>>
>> Perc5e: 15 * SATA 7.2K RPM 500 GB disks in RAID 50 (3 RAID 5 volumes,
>> 5 disks each).
>>
>> Other configurations that I may be able to test are 2 SATA in RAID 1
>> and 2 SAS in RAID 1 (both on the Perc5i).
>
> OK, I installed blogbench (which I have not used before) and ran a
> couple of quick tests.
> I have a SAS 5iR controller (not a PERC5, though) on a PE840, with 2 GB
> RAM and two 146 GB 10K RPM hot-plug drives in a RAID 1.
>
> I ran the test and noticed:
> "mpt0: QUEUE FULL EVENT: Bus 0x00 Target 0x20 Depth 121" messages in my
> logs.
>
> Then I ran "camcontrol tags 0:0:0 -N 119 -v" and ran the test again.
>
> I didn't get any messages this time, and got the results indicated in
> test 2.
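The knob Kevin is turning there is camcontrol(8)'s tagged-queue-depth
control. A minimal sketch, reusing the 0:0:0 bus:target:lun and the
depth of 119 from his report (both are specific to his system and will
differ elsewhere):

   # Show the current number of tagged openings for the device
   camcontrol tags 0:0:0 -v

   # Cap the tag depth just below the depth from the QUEUE FULL event
   camcontrol tags 0:0:0 -N 119 -v

Querying first is just a sanity check; the second command is the one
from the quoted message.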
Just for an additional data point, here's what a 2Gb Fibre Channel
connected array with 16 750GB SATA disks looks like:

Frequency = 10 secs
Scratch dir = [/vol3/test/]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.

Nb blogs  R articles  W articles  R pictures  W pictures  R comments  W comments
      48       85428        2630       60484        2865       46462        8899
      78       77246        1670       56118        1547       48798        5714
     101       68730        1634       47639        1103       41230        5108
     127       64230        1663       43522        1422       35531        4517
     150       64072        1326       42968        1330       35485        4165
     168       39332        1163       26511         993       20236        2697
     194       53474        1527       35969        1137       32142        4251
     215       55310        1362       37140        1274       30882        4401
     232       49766        1203       32995        1046       29133        3979
     251       38767        1061       27122         909       23652        3130
     272       40820        1344       29009         920       23557        3728
     285       23580         771       15778         746       14036        2406
     300       26545         979       18758         853       14721        3001
     323       34491        1319       23422        1222       20155        4197
     333       15418         732       10068         738        7867        2324
     361       21929        1340       15030        1505       12914        5008
     373       13122         524        9329         743        7842        2145
     396       24737         871       17423        1177       14929        3182
     404        8858         323        6651         345        5864        1165
     433       25465        1450       18236        1472       15915        3982
     438        9460         379        6477         259        4627        1284
     454       15580         869       10450        1087        9702        3297
     470       11886         874        8590         603        7578        2541
     487       16931        1088       11651         803        9846        3231
     496       11122         559        7517         580        5341        2065
     517       13609         883        9288        1150        8291        3611
     545        8043        1610        5581        1022        4973        3771
     557        5049         686        3784         542        2195        2383
     578        9534        1451        6703        1334        6541        5202
     593        4383         834        3230        1011        3432        1381

Final score for writes:   593
Final score for reads : 18671

My system is also pretty busy doing an rsync to/from some other arrays,
so the numbers are lower than what I'd get on an idle system. I'll try
this again on a faster system if I get the chance.

Eric
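For anyone who wants to reproduce a run like the one above: blogbench
is in the FreeBSD ports tree, and a minimal invocation looks something
like the sketch below. The benchmarks/blogbench port path and the -d/-i
flags are from memory of the port, and /vol3/test stands in for
whatever filesystem you want to exercise. The writer/rewriter/commenter/
reader counts in the output above appear to be blogbench's defaults, so
no extra flags should be needed for those.

   # Install blogbench from the ports tree
   cd /usr/ports/benchmarks/blogbench && make install clean

   # Run 30 iterations against a scratch directory on the array under test
   mkdir -p /vol3/test
   blogbench -d /vol3/test -i 30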