Date:      Sun, 20 Jun 2010 13:27:43 -0600
From:      Scott Long <scottl@samsco.org>
To:        Artem Belevich <fbsdlist@src.cx>
Cc:        oizs <oizs@freemail.hu>, freebsd-current@freebsd.org
Subject:   Re: Dell Perc 5/i Performance issues
Message-ID:  <63849665-E1D3-417F-B6BD-5601E5361315@samsco.org>
In-Reply-To: <AANLkTinYsUMVdlFNYnw4RFMPWdWqhzFUlQYIznNTq2PP@mail.gmail.com>
References:  <4C1AB4C0.4020604@freemail.hu> <A594C946-32C0-4C4A-AA37-0E81D270162A@mac.com> <4C1B3792.9000007@freemail.hu> <AANLkTimsHZLREByndqXEjt2yjdvOYVV7Rnw8AMjqxYIl@mail.gmail.com> <4C1C0ED9.8090103@freemail.hu> <2F904ED8-BC95-459F-8536-A889ADDA8D31@samsco.org> <4C1E4722.3050506@freemail.hu> <AANLkTinYsUMVdlFNYnw4RFMPWdWqhzFUlQYIznNTq2PP@mail.gmail.com>

Yeah, there's no value in using the /dev/random devices for testing disk
I/O.  Use /dev/zero instead.  I've known of hardware RAID engines in the
past that could recognize certain repeating I/O benchmark patterns and
optimize for them, but I have no idea whether the LSI controllers do this,
though based on your results it's probably safe to say that they don't.
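
For example, a rough sequential test with /dev/zero might look like this
(the scratch-file path is made up; writing straight to /dev/mfid0 also
works, but it destroys whatever is on the array):

```shell
# Rough sequential-write check using /dev/zero as the data source.
# /tmp/ddtest is a hypothetical scratch file; put it on the array under test.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64

# Read it back for a rough read number, then clean up.
dd if=/tmp/ddtest of=/dev/null bs=1M
rm /tmp/ddtest
```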

Scott

On Jun 20, 2010, at 1:09 PM, Artem Belevich wrote:

> /dev/random and /dev/urandom are relatively slow and are not suitable
> as the source of data for testing modern hard drives' sequential
> throughput.
>
> On my 3GHz dual-core amd64 box both /dev/random and /dev/urandom max
> out at ~80MB/s while consuming 100% CPU time on one of the processor
> cores.
> That would not be enough to saturate a single disk with sequential writes.
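
(Easy enough to confirm on any given box -- time the generator by itself:

```shell
# Read a fixed amount from /dev/urandom and discard it; dd reports the rate.
# This measures the generator's speed, not the disk's.
dd if=/dev/urandom of=/dev/null bs=1M count=256
```

-- Scott)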
>
> --Artem
>
> On Sun, Jun 20, 2010 at 9:51 AM, oizs <oizs@freemail.hu> wrote:
>> I've tried almost everything now.
>> The battery is probably fine:
>> mfiutil show battery
>> mfi0: Battery State:
>>  Manufacture Date: 7/25/2009
>>    Serial Number: 3716
>>     Manufacturer: SMP-PA1.9
>>            Model: DLFR463
>>        Chemistry: LION
>>  Design Capacity: 1800 mAh
>>   Design Voltage: 3700 mV
>>   Current Charge: 99%
>>
>> My results:
>> Settings:
>> Raid5:
>> Stripe: 64k
>> mfiutil cache 0
>> mfi0 volume mfid0 cache settings:
>>      I/O caching: writes
>>    write caching: write-back
>>       read ahead: none
>> drive write cache: default
>> Raid0:
>> Stripe: 64k
>> mfiutil cache 0
>> mfi0 volume mfid0 cache settings:
>>      I/O caching: writes
>>    write caching: write-back
>>       read ahead: none
>> drive write cache: default
>>
>> Tried to play around with this as well, with almost no difference.
>>=20
>> Raid5
>> read:
>> dd if=/dev/mfid0 of=/dev/null bs=10M
>> 1143+0 records in
>> 1143+0 records out
>> 11985223680 bytes transferred in 139.104134 secs (86160083 bytes/sec)
>> write:
>> dd if=/dev/random of=/dev/mfid0 bs=64K
>> 22747+0 records in
>> 22747+0 records out
>> 1490747392 bytes transferred in 23.921103 secs (62319342 bytes/sec)
>>
>> Raid0
>> read:
>> dd if=/dev/mfid0 of=/dev/null bs=64K
>> 92470+0 records in
>> 92470+0 records out
>> 6060113920 bytes transferred in 47.926007 secs (126447294 bytes/sec)
>> write:
>> dd if=/dev/random of=/dev/mfid0 bs=64K
>> 16441+0 records in
>> 16441+0 records out
>> 1077477376 bytes transferred in 17.232486 secs (62525939 bytes/sec)
>>
>> I'm writing directly to the device, so I'm not sure any slice issues could
>> cause the problems.
>>=20
>> -zsozso
>> On 2010.06.20. 4:53, Scott Long wrote:
>>>
>>> Two big things can affect RAID-5 performance:
>>>
>>> 1. Battery backup.  If you don't have a working battery attached to the
>>> card, it will turn off the write-back cache, no matter what you do.  Check
>>> this.  If you're unsure, use the mfiutil tool that I added to FreeBSD a few
>>> months ago and send me the output.
>>>
>>> 2. Partition alignment.  If you're using classic MBR slices, everything
>>> gets misaligned by 63 sectors, making it impossible for the controller to
>>> optimize both reads and writes.  If the array is used for secondary storage,
>>> simply don't use an MBR scheme.  If it's used for primary storage, try using
>>> GPT instead and setting up your partitions so that they are aligned to large
>>> power-of-2 boundaries.
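
(For reference, a hypothetical gpart recipe for an aligned GPT layout;
the device name and partition size are made up, and the -b value assumes
512-byte sectors:

```shell
# WARNING: this writes a new GPT to the array device, destroying its contents.
gpart create -s gpt mfid0
# Start the partition at sector 2048 = 1 MB, a large power-of-2 boundary.
gpart add -t freebsd-ufs -b 2048 -s 100G mfid0
```

-- Scott)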
>>>
>>> Scott
>>>=20
>>> On Jun 18, 2010, at 6:27 PM, oizs wrote:
>>>=20
>>
>> _______________________________________________
>> freebsd-current@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-current
>> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>>



