Date:      Thu, 08 Mar 2007 06:13:51 +0100
From:      Fluffles <etc@fluffles.net>
To:        Ivan Voras <ivoras@fer.hr>
Cc:        freebsd-stable@freebsd.org, freebsd-geom@freebsd.org
Subject:   Re: Some Unix benchmarks for those who are interested
Message-ID:  <45EF9B8F.4000201@fluffles.net>
In-Reply-To: <45EF253B.8030909@fer.hr>
References:  <20070306020826.GA18228@nowhere>	<45ECF00D.3070101@samsco.org><20070306050312.GA2437@nowhere><008101c75fcc$210c74a0$0c00a8c0@Artem>	<esk9vq$uhh$1@sea.gmane.org><001a01c7601d$5d635ee0$0c00a8c0@Artem>	<eskka8$adn$1@sea.gmane.org><001801c7603a$5339e020$0c00a8c0@Artem>	<eskpd1$sm4$1@sea.gmane.org>	<20070307105144.1d4a382f@daydream.goid.lan><002801c760e2$5cb5eb50$0c00a8c0@Artem>	<esmvnp$khs$1@sea.gmane.org><005b01c760e6$9a798bf0$0c00a8c0@Artem>	<esn2s6$1i9$1@sea.gmane.org>	<001601c760ee$f76fa300$0c00a8c0@Artem>	<45EF2252.1000202@fluffles.net> <45EF253B.8030909@fer.hr>

Ivan Voras wrote:
> Fluffles wrote:
>
>   
>> If you use dd on the raw device (meaning no UFS/VFS) there is no
>> read-ahead. This means that the first dd command below will give a
>> lower STR read than the second:
>>
>> no read-ahead:
>> dd if=/dev/mirror/data of=/dev/null bs=1m count=1000
>> read-ahead and multiple I/O queue depth:
>> dd if=/mounted/mirror/volume of=/dev/null bs=1m count=1000
>>     
>
> I'd agree in theory, but bonnie++ gives WORSE results than raw device:
>   

On what hardware is this? Are you using any form of GEOM software RAID?

The low Per Char results would lead me to believe it's a very slow CPU;
maybe a VIA C3 or some old Pentium? Modern systems should get 100MB/s+ in
the per-char bonnie benchmark, even a Sempron 2600+ (1.6GHz, 128KB cache)
which costs about $39. It might then be logical that dd gets higher
results, since raw reads are 'easier' for the CPU to handle. The VFS/UFS
layer adds potential for nice performance increases, but it does take its
toll in the form of CPU-time overhead. If your CPU is very slow, I can
imagine these optimizations having a detrimental effect instead. Just
guessing here.

Also, check out the benchmark results I posted in response to Andrei Kolu,
in particular the geom_raid5 benchmark; there the UFS/VFS layer causes 25%
lower write performance, due to CPU bottlenecks (and some UFS inefficiency
with regard to max blocks per cylinder). So for all I know it may just be
your CPU that is limiting sequential performance somewhat.
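One quick way to test the CPU-bottleneck hypothesis (a sketch, not part of the original thread: `/tmp/strtest` is a hypothetical scratch file standing in for the real device/volume) is to read the same data with a tiny block size versus a large one. Small blocks mean one read() syscall per 512 bytes, which stresses the CPU much like bonnie's per-char test; if the rates differ wildly, the CPU rather than the disk is the limit:

```shell
# Create a 64MB scratch file (substitute your real device or mounted volume).
dd if=/dev/zero of=/tmp/strtest bs=1M count=64 2>/dev/null

# Tiny blocks: heavy syscall overhead, CPU-bound on slow processors.
dd if=/tmp/strtest of=/dev/null bs=512

# Large blocks: minimal per-call overhead; dd prints the rate on stderr.
dd if=/tmp/strtest of=/dev/null bs=1M

rm /tmp/strtest
```

Note that GNU dd requires an uppercase `bs=1M`; FreeBSD's dd accepts the lowercase `bs=1m` used elsewhere in this thread as well.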

Regards,

- Veronica
