Date:      Sun, 20 Jun 2010 12:09:25 -0700
From:      Artem Belevich <fbsdlist@src.cx>
To:        oizs <oizs@freemail.hu>
Cc:        freebsd-current@freebsd.org
Subject:   Re: Dell Perc 5/i Performance issues
Message-ID:  <AANLkTinYsUMVdlFNYnw4RFMPWdWqhzFUlQYIznNTq2PP@mail.gmail.com>
In-Reply-To: <4C1E4722.3050506@freemail.hu>
References:  <4C1AB4C0.4020604@freemail.hu> <A594C946-32C0-4C4A-AA37-0E81D270162A@mac.com> <4C1B3792.9000007@freemail.hu> <AANLkTimsHZLREByndqXEjt2yjdvOYVV7Rnw8AMjqxYIl@mail.gmail.com> <4C1C0ED9.8090103@freemail.hu> <2F904ED8-BC95-459F-8536-A889ADDA8D31@samsco.org> <4C1E4722.3050506@freemail.hu>

/dev/random and /dev/urandom are relatively slow and not suitable as a
data source when testing the sequential throughput of modern hard
drives.

On my 3GHz dual-core amd64 box, both /dev/random and /dev/urandom max
out at ~80MB/s while consuming 100% of one CPU core. That is not enough
to saturate even a single disk with sequential writes.
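
A quick way to see that ceiling on your own box (a sketch; the count is
arbitrary):

# Time the random device on its own; a dd write test that reads from
# it can never go faster than this.
dd if=/dev/urandom of=/dev/null bs=1M count=1024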

--Artem



On Sun, Jun 20, 2010 at 9:51 AM, oizs <oizs@freemail.hu> wrote:
> I've tried almost everything now.
> The battery is probably fine:
> mfiutil show battery
> mfi0: Battery State:
>  Manufacture Date: 7/25/2009
>    Serial Number: 3716
>     Manufacturer: SMP-PA1.9
>            Model: DLFR463
>        Chemistry: LION
>  Design Capacity: 1800 mAh
>   Design Voltage: 3700 mV
>   Current Charge: 99%
>
> My results:
> Settings:
> Raid5:
> Stripe: 64k
> mfiutil cache 0
> mfi0 volume mfid0 cache settings:
>      I/O caching: writes
>    write caching: write-back
>       read ahead: none
> drive write cache: default
> Raid0:
> Stripe: 64k
> mfiutil cache 0
> mfi0 volume mfid0 cache settings:
>      I/O caching: writes
>    write caching: write-back
>       read ahead: none
> drive write cache: default
>
> Tried to play around with this as well, with almost no difference.
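
Note that your settings show "read ahead: none" and caching for writes
only, which will cost you on sequential reads. A sketch of what to try,
if your mfiutil supports it (check mfiutil(8) for the exact subcommands
before relying on this):

mfiutil cache mfid0 read-ahead adaptive  # let the controller prefetch
mfiutil cache mfid0 read-writes          # cache reads as well as writes
mfiutil cache mfid0                      # display the resulting settings
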
>
> Raid5
> read:
> dd if=/dev/mfid0 of=/dev/null bs=10M
> 1143+0 records in
> 1143+0 records out
> 11985223680 bytes transferred in 139.104134 secs (86160083 bytes/sec)
> write:
> dd if=/dev/random of=/dev/mfid0 bs=64K
> 22747+0 records in
> 22747+0 records out
> 1490747392 bytes transferred in 23.921103 secs (62319342 bytes/sec)
>
> Raid0
> read:
> dd if=/dev/mfid0 of=/dev/null bs=64K
> 92470+0 records in
> 92470+0 records out
> 6060113920 bytes transferred in 47.926007 secs (126447294 bytes/sec)
> write:
> dd if=/dev/random of=/dev/mfid0 bs=64K
> 16441+0 records in
> 16441+0 records out
> 1077477376 bytes transferred in 17.232486 secs (62525939 bytes/sec)
>
> I'm writing directly to the device, so I'm not sure any slice issues
> could be causing the problems.
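
Your write numbers are nearly identical for RAID0 and RAID5, which is
exactly what you'd expect if /dev/random, not the array, is the
bottleneck. A sketch of a retest with a source that isn't CPU-bound
(this overwrites the volume, as your existing tests already do):

dd if=/dev/zero of=/dev/mfid0 bs=1M count=2048
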
>
> -zsozso
> On 2010.06.20. 4:53, Scott Long wrote:
>>
>> Two big things can affect RAID-5 performance:
>>
>> 1. Battery backup. If you don't have a working battery attached to
>> the card, it will turn off the write-back cache, no matter what you
>> do. Check this. If you're unsure, use the mfiutil tool that I added
>> to FreeBSD a few months ago and send me the output.
>>
>> 2. Partition alignment. If you're using classic MBR slices,
>> everything gets misaligned by 63 sectors, making it impossible for
>> the controller to optimize both reads and writes. If the array is
>> used for secondary storage, simply don't use an MBR scheme. If it's
>> used for primary storage, try using GPT instead and setting up your
>> partitions so that they are aligned to large power-of-2 boundaries.
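
For reference, a minimal sketch of an aligned GPT setup (the device
name comes from the output above; -b 2048 starts the first partition
at 1 MiB, assuming 512-byte sectors):

gpart create -s gpt mfid0
gpart add -b 2048 -t freebsd-ufs mfid0   # start at sector 2048 = 1 MiB
gpart show mfid0                         # verify the offsets
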
>>
>> Scott
>>
>> On Jun 18, 2010, at 6:27 PM, oizs wrote:
>>
>
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>


