From owner-freebsd-current@FreeBSD.ORG Sat Jul 30 17:14:12 2005
Date: Sat, 30 Jul 2005 18:15:36 +0100
From: Brian Candler
To: Julian Elischer
Cc: Poul-Henning Kamp, FreeBSD Current
Message-ID: <20050730171536.GA740@uk.tiscali.com>
In-Reply-To: <42EB5687.2070400@elischer.org>
References: <87711.1122534245@phk.freebsd.dk> <42EB5687.2070400@elischer.org>
Subject: Re: Apparent strange disk behaviour in 6.0

On Sat, Jul 30, 2005 at 03:29:27AM -0700, Julian Elischer wrote:
> >Please use gstat and look at the service times instead of the
> >busy percentage.
>
> The snapshot below is typical when doing tar from one drive to another..
> (tar c -C /disk1 f- .|tar x -C /disk2 -f - )
>
> dT: 1.052 flag_I 1000000us sizeof 240 i -1
>  L(q)  ops/s   r/s   kBps  ms/r   w/s   kBps  ms/w   d/s  kBps  ms/d  %busy Name
>     0    405   405   1057   0.2     0      0   0.0     0     0   0.0    9.8| ad0
>     0    405   405   1057   0.3     0      0   0.0     0     0   0.0   11.0| ad0s2
>     0    866     3     46   0.4   863   8459   0.7     0     0   0.0   63.8| da0
>    25    866     3     46   0.5   863   8459   0.8     0     0   0.0   66.1| da0s1
>     0    405   405   1057   0.3     0      0   0.0     0     0   0.0   12.1| ad0s2f
>   195    866     3     46   0.5   863   8459   0.8     0     0   0.0   68.1| da0s1d
>
> even though the process should be disk limited, neither of the disks is
> anywhere near 100%.

Are ad0 and da0 both arrays? One IDE disk doing 405 reads per second
(2.5ms per seek) is pretty good. A 7200rpm drive has a theoretical average
rotational latency of 1/(7200/60)/2 = 4.2ms, i.e. at most 7200/60*2 = 240
ops per second. It can do better with read-ahead caching.

But if it really is only 12.1% busy (which the 0.3 ms/r implies), that
means it would be capable of ~3350 operations per second... that's either
a seriously good drive array with tons of cache, or the stats are
borked :-)

With a single bog-standard IDE drive tarring up a directory containing
some large .iso images, and piping the output to /dev/null, I get:

 L(q)  ops/s   r/s   kBps  ms/r   w/s  kBps  ms/w  %busy Name
    1    389   388  49318   2.4     1    24   1.4   90.6| ad0s3d

And tarring up /usr/src (again piping to /dev/null) I get:

    1    564   564   5034   1.7     0     0   0.0   95.7| ad0s2e

This is with 5-STABLE as of 2005-05-13 (i.e. a bit after 5.4-RELEASE), and
an AMD 2500+ processor.
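
(In case it helps for comparison: those figures came from nothing more
sophisticated than something along the lines of the commands below, with
gstat left running in a second terminal.  The directory name in the first
command is only a placeholder, not the actual path on my box:

    # pure read load: tar up a directory of large .iso images and
    # throw the archive away, so no writes compete for the disk
    tar cf - -C /store/isos . > /dev/null

    # the same again for the many-small-files case
    tar cf - -C /usr/src . > /dev/null

    # in another terminal, watch the per-device service times
    gstat

The point is just that nothing gets written back, so all of the disk time
shows up as reads on the source drive.)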
Interestingly, I get a much higher kBps than your ad0 - although I'm not
actually writing the data out again.

Maybe it would be interesting to pipe the output of your tar to /dev/null
and see how the read performance from ad0 compares with the figures you
measured above? Then try just reading from da0?

Regards,

Brian.
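
P.S. In concrete terms I mean something along these lines -- /disk1 and
da0 are simply lifted from your mail, so substitute whatever is correct on
your box, and the dd block size and count are arbitrary:

    # the read-only half of your original pipeline, output discarded
    tar cf - -C /disk1 . > /dev/null

    # then a raw sequential read straight off da0, bypassing the filesystem
    dd if=/dev/da0 of=/dev/null bs=64k count=10000

with gstat running alongside in both cases, so you can see what ad0 and
da0 each manage when only one of them is being asked to do anything.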