Date:      Tue, 15 May 2007 16:02:14 -0500
From:      Eric Anderson <anderson@freebsd.org>
To:        freebsd-performance@freebsd.org
Subject:   Re: PERC5i throughput [was: possible issues with Dell/Perc5 raid]
Message-ID:  <464A1FD6.8070104@freebsd.org>
In-Reply-To: <000301c7970e$6dbdcc80$9501a8c0@skylinecorp.net>
References:  <Pine.BSF.4.64.0705101545340.34925@tdream.lly.earlham.edu>	<464474F0.3040306@tomjudge.com>	<7579f7fb0705110735h1a65ef7atcab00bdbc25224d6@mail.gmail.com>	<4649A71C.4030302@skylinecorp.com>	<Pine.BSF.4.64.0705150948390.61298@tdream.lly.earlham.edu>	<4649CC6B.1050203@tomjudge.com> <000301c7970e$6dbdcc80$9501a8c0@skylinecorp.net>

On 05/15/07 11:30, Kevin Kobb wrote:
> Tom Judge wrote:
>> Randy Schultz wrote:
>>> On Tue, 15 May 2007, Kevin Kobb spaketh thusly:
>>>
>>> -}These reports on poor performance using mpt seem to be on SATA
>>> -}drives. Has anybody been seeing this using SAS drives?
>>> -}
>>> -}We are testing Dell PE840s with hot swap SAS drives, and seem to
>>> -}get decent performance, though I haven't run any benchmarks yet.
>>> -}Any opinions if the PERC5i/mfi is a better choice than the
>>> -}SAS5iR/mpt combination?
>>>
>>> Would it be possible for you to install blogbench and give it a quick
>>> run while your system is idle?  We will be ordering a new box in 1-2
>>> months for which we need high disk I/O capability.  We've been
>>> looking at the PERC5i set up with RAID 5 and 4-6 SAS drives.  I'd be
>>> interested in what you're seeing for throughput on the PERC5i.
>>>
>> I have attached some blogbench tests from the following configs:
>>
>> Perc5i 4 * SAS 15K RPM 146 Gig disks in raid 5.
>>
>>
>> Perc5e 15 * SATA 7.2K RPM 500 Gig disks in raid 50 (3 raid 5 volumes
>> 5 disks each) 
>>
>>
>> Other configurations that I may be able to test are 2 SATA in raid 1
>> and 2 SAS in raid 1 (both on perc5i).
> 
> Ok, I installed blogbench (which I have not used before) and ran a
> couple of quick tests.
> I have a SAS 5iR controller (not a PERC5, though) on a PE840, with 2 GB
> RAM and two 146 GB 10K RPM hot-plug drives in a RAID 1.
> 
> I ran the first test and noticed
> "mpt0: QUEUE FULL EVENT: Bus 0x00 Target 0x20 Depth 121" messages in my
> logs.
> Then I ran "camcontrol tags 0:0:0 -N 119 -v" and ran the test again.
> 
> This time I didn't get any messages, and got the results indicated in
> test 2.
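For reference, the tag-depth tuning described above can be sketched as follows. The device names and the depth value are examples taken from this thread; they will differ per system, and the right -N value is one just below the depth reported in the QUEUE FULL message:

```shell
# Show the current tagged-command (queue depth) settings for a device.
camcontrol tags da0 -v

# Cap outstanding tagged commands at 119, just under the "Depth 121"
# reported in the "mpt0: QUEUE FULL EVENT" log message, so the
# controller's queue no longer overflows under load.
camcontrol tags 0:0:0 -N 119 -v
```

Note that this setting is not persistent across reboots, so it would need to be reapplied (e.g. from an rc script) if it helps.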

Just for an additional data point, here's what a 2 Gb Fibre 
Channel-connected array with 16 750 GB SATA disks looks like:

Frequency = 10 secs
Scratch dir = [/vol3/test/]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.

   Nb blogs   R articles   W articles   R pictures   W pictures   R comments   W comments
         48        85428         2630        60484         2865        46462         8899
         78        77246         1670        56118         1547        48798         5714
        101        68730         1634        47639         1103        41230         5108
        127        64230         1663        43522         1422        35531         4517
        150        64072         1326        42968         1330        35485         4165
        168        39332         1163        26511          993        20236         2697
        194        53474         1527        35969         1137        32142         4251
        215        55310         1362        37140         1274        30882         4401
        232        49766         1203        32995         1046        29133         3979
        251        38767         1061        27122          909        23652         3130
        272        40820         1344        29009          920        23557         3728
        285        23580          771        15778          746        14036         2406
        300        26545          979        18758          853        14721         3001
        323        34491         1319        23422         1222        20155         4197
        333        15418          732        10068          738         7867         2324
        361        21929         1340        15030         1505        12914         5008
        373        13122          524         9329          743         7842         2145
        396        24737          871        17423         1177        14929         3182
        404         8858          323         6651          345         5864         1165
        433        25465         1450        18236         1472        15915         3982
        438         9460          379         6477          259         4627         1284
        454        15580          869        10450         1087         9702         3297
        470        11886          874         8590          603         7578         2541
        487        16931         1088        11651          803         9846         3231
        496        11122          559         7517          580         5341         2065
        517        13609          883         9288         1150         8291         3611
        545         8043         1610         5581         1022         4973         3771
        557         5049          686         3784          542         2195         2383
        578         9534         1451         6703         1334         6541         5202
        593         4383          834         3230         1011         3432         1381

Final score for writes:           593
Final score for reads :         18671
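For anyone who wants to compare numbers, a run like the one above can be reproduced roughly as follows. The scratch directory is an example, and the exact install command depends on your release; with no options beyond -d, blogbench uses its defaults (100 readers, 3 writers, 1 rewriter, 5 commenters, 30 iterations), which match the "Spawning ..." lines above:

```shell
# Install blogbench from packages (or build benchmarks/blogbench from ports).
pkg_add -r blogbench

# Point it at a scratch directory on the filesystem under test and let
# it run with its default worker counts; it reports results every 10
# seconds and prints the final read/write scores at the end.
mkdir -p /vol3/test
blogbench -d /vol3/test
```

As noted below, background load (such as a concurrent rsync) will depress the scores, so an idle system gives the most comparable numbers.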


My system is also pretty busy doing an rsync to/from some other arrays, 
so the numbers are lower than what I'd get on an idle system.

I'll try this again on a faster system if I get the chance.

Eric
