From owner-freebsd-hackers Fri Nov 13 15:38:35 1998
Message-Id: <199811132336.PAA01117@dingo.cdrom.com>
To: Bernd Walter
cc: Mike Smith, Peter Jeremy, hackers@FreeBSD.ORG
Subject: Re: [Vinum] Stupid benchmark: newfsstone
In-reply-to: Your message of "Sat, 14 Nov 1998 00:25:23 +0100." <19981114002523.39363@cicely.de>
Date: Fri, 13 Nov 1998 15:36:14 -0800
From: Mike Smith

> On Fri, Nov 13, 1998 at 01:50:40PM -0800, Mike Smith wrote:
> > > Greg Lehey wrote:
> > > > And it's almost impossible to find
> > > > spindle synchronized disks nowadays.
> > >
> > > Seagate Barracudas support it; I assumed that the newer Seagates did
> > > as well. The impression I got was that all you had to do was wire the
> > > `spindle sync' lines from all the disks together and then designate
> > > all except one as a sync'd slave. Admittedly, I've never tried
> > > actually using it.
> >
> > Most modern "server class" SCSI disks support it. It's not useful
> > unless you turn off tagged queueing, caching, and most other drive
> > performance features.
>
> Where's the problem with having these options on when using spindle sync?
The whole point of spindle sync is to lock all the drives exactly together so that read/write activity can be coordinated. These features, in conjunction with sector sparing and quantum differences between disks, mean that synchronising spindles is a complete waste of time, as the disks won't be mimicking each other anyway.

> > > > Finally, aggregating involves a
> > > > scatter/gather approach which, unless I've missed something, is not
> > > > supported at a hardware level. Each request to the driver specifies
> > > > one buffer for the transfer, so the scatter/gather would have to be
> > > > done by allocating more memory, performing the transfer there (for
> > > > a read), and then copying to the correct place.
> > >
> > > Since the actual data transfer occurs to physical memory, whilst the
> > > kernel buffers are in VM, this should just require some imaginative
> > > juggling of the PTEs so that the physical pages (or actual
> > > scatter/gather requests) are de-interleaved (to match the data on
> > > each spindle).
> >
> > You'd have to cons up a new map and have it present the scattered target
> > area as a linear region. This is expensive, and the performance boost
> > is likely to be low to nonexistent for optimal stripe sizes.
> > Concatenation of multiple stripe reads is only a benefit if the stripe
> > is small (so that concatenation significantly lowers overhead).
>
> That's right - but you can't expect a large linear performance increase
> when using large stripes.

That depends on the application's read behaviour. If reads are larger than the stripe size, you still win.

-- 
\\  Sometimes you're ahead,        \\  Mike Smith
\\  sometimes you're behind.       \\  mike@smith.net.au
\\  The race is long, and in the   \\  msmith@freebsd.org
\\  end it's only with yourself.   \\  msmith@cdrom.com
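P.S. The "reads larger than the stripe win" point is easy to see from the stripe-mapping arithmetic. This is just an illustrative sketch, not Vinum's actual code; the function name and the byte-addressed round-robin layout are my own assumptions:

```python
def stripes_touched(offset, length, stripe_size, ndisks):
    """Map a byte-range request onto a round-robin striped plex.
    Returns the ordered list of (disk, stripe-on-disk) chunks covered.
    Illustrative only -- not Vinum's real mapping code."""
    chunks = []
    pos = offset
    end = offset + length
    while pos < end:
        stripe_no = pos // stripe_size           # global stripe index
        disk = stripe_no % ndisks                # round-robin disk choice
        chunks.append((disk, stripe_no // ndisks))
        pos = (stripe_no + 1) * stripe_size      # advance to next stripe boundary
    return chunks

# With a 64 KB stripe over 3 disks, a 192 KB read hits all three spindles
# in parallel, while a 32 KB read stays on a single disk:
K = 1024
print(stripes_touched(0, 192 * K, 64 * K, 3))   # [(0, 0), (1, 0), (2, 0)]
print(stripes_touched(0, 32 * K, 64 * K, 3))    # [(0, 0)]
```

So even with a large stripe, any read bigger than the stripe size still fans out over multiple drives.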