Date: Tue, 19 May 2009 14:47:28 -0500
From: David Kelly
To: FreeBSD-Questions@FreeBSD.org
Subject: ATA transfer block sizes in 7.2?
Message-ID: <20090519194728.GA40036@Grumpy.DynDNS.org>

Back in 5.x or 6.x the maximum ATA transfer size was 127k or 128k, as witnessed with "systat -v" during big file reads or writes. In later versions of 6.x, and now in 7.2, this appears to be 63k or 64k. Is there a reason for the smaller transfer size? Disk throughput on my machine seems limited more by the number of transfers/sec than by bytes/sec.

Similar question: in addition to one "normal" drive I have two more configured as a geom striped volume. Transfers seem to be limited to 43k on these volumes. I'm guessing the volume was allocated with the wrong multiple of the stripe size, and/or started on the wrong block, or something along those lines. I/O rates are about half what they were on the same hardware under vinum.

I think this is my current geom config; it's dated March 2006:

    drive a device /dev/ad4s1d
    drive b device /dev/ad6s1d
    volume stripe
    plex org striped 279k
    sd drive a
    sd drive b

And I believe this was my vinum config (dated Sept 2004):

    drive vinumdrive1 device /dev/ad6s1d
    drive vinumdrive0 device /dev/ad4s1d
    volume vinum0
    plex name vinum0.p0 org striped 558s vol vinum0
    sd name vinum0.p0.s0 drive vinumdrive0 plex vinum0.p0 len 319571622s driveoffset 265s plexoffset 0s
    sd name vinum0.p0.s1 drive vinumdrive1 plex vinum0.p0 len 319571622s driveoffset 265s plexoffset 558s

S.M.A.R.T. reports that one of my striped drives is failing; new drives are in the mail, as it's also time for an upgrade. When I recreate the volume, how might I optimize it for performance? Stick with geom, or something else?

--
David Kelly N4HHE, dkelly@HiWAAY.net
========================================================================
Whom computers would destroy, they must first drive mad.
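
P.S. Unless someone talks me out of it, here is roughly what I had in mind for rebuilding with gstripe once the new drives arrive. The device names are placeholders (I won't know what the new drives probe as until they're in), and the 128k stripe size is only a guess at something that divides evenly into the transfer sizes systat reports:

    # load the stripe class (or set geom_stripe_load="YES" in loader.conf)
    kldload geom_stripe

    # create the striped volume; ad8s1d and ad10s1d stand in for the new drives
    gstripe label -v -s 131072 st0 /dev/ad8s1d /dev/ad10s1d

    # new filesystem with soft updates on the resulting device, then mount it
    newfs -U /dev/stripe/st0
    mount /dev/stripe/st0 /mnt

Is a power-of-two stripe like that the right idea given the 64k ATA transfers, or is there a better choice?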