From: Fluffles <etc@fluffles.net>
Date: Thu, 08 Mar 2007 06:13:51 +0100
To: Ivan Voras
Cc: freebsd-stable@freebsd.org, freebsd-geom@freebsd.org
Subject: Re: Some Unix benchmarks for those who are interested

Ivan Voras wrote:
> Fluffles wrote:
>
>> If you use dd on the raw device (meaning no UFS/VFS) there is no
>> read-ahead.
>> This means that the first of the following dd commands will give a
>> lower STR (sequential transfer rate) read than the second:
>>
>> no read-ahead:
>> dd if=/dev/mirror/data of=/dev/null bs=1m count=1000
>>
>> read-ahead and multiple I/O queue depth:
>> dd if=/mounted/mirror/volume of=/dev/null bs=1m count=1000
>
> I'd agree in theory, but bonnie++ gives WORSE results than the raw device:

On what hardware is this? Are you using any form of GEOM software RAID?
The low per-char results lead me to believe it's a very slow CPU, maybe a
VIA C3 or some old Pentium. Modern systems should get 100 MB/s+ in the
per-char bonnie benchmark, even a Sempron 2600+ (1.6 GHz, 128 KB cache),
which costs about $39.

It might then be logical that dd gets higher results, since raw reads are
easier for the CPU to handle. The VFS/UFS layer adds the potential for
nice performance increases, but it takes its toll in the form of CPU-time
overhead. If your CPU is very slow, I can imagine these optimizations
having a detrimental effect instead. Just guessing here.

Also, check out the benchmark results I posted in response to Andrei
Kolu, in particular the geom_raid5 benchmark: there the UFS/VFS layer
causes 25% lower write performance, due to CPU bottlenecks (and some UFS
inefficiency with regard to max blocks per cylinder).

So for all I know it may just be your CPU that is limiting sequential
performance somewhat.

Regards,
- Veronica
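(A minimal, hedged sketch of the raw-vs-filesystem comparison discussed
above. The thread's commands read the GEOM mirror /dev/mirror/data
directly; since that device only exists on the poster's box, this version
uses a scratch file standing in for the mounted volume, and spells the
block size as bs=1048576 so it works with both BSD and GNU dd.)

```shell
# Sketch only: a temp file stands in for a file on the mounted mirror.
FILE=$(mktemp)

# Write 16 MiB of test data through the filesystem.
dd if=/dev/zero of="$FILE" bs=1048576 count=16 2>/dev/null

# Sequential read back: reading through a mounted filesystem engages the
# kernel's read-ahead, unlike reading the raw device node directly.
dd if="$FILE" of=/dev/null bs=1048576 2>/dev/null && echo "read ok"

rm -f "$FILE"
```

On FreeBSD the amount of filesystem read-ahead is tunable via the
vfs.read_max sysctl; timing the same dd against the raw device node
versus a file on the mounted volume shows the difference described above.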