From owner-freebsd-current@FreeBSD.ORG Wed Jul 21 17:50:06 2004
Date: Wed, 21 Jul 2004 12:49:50 -0500
From: Bob Willcox <bob@immure.com>
To: jesk
cc: freebsd-current@freebsd.org
cc: Dan Nelson
Subject: Re: I/O or Threading Suffer
Message-ID: <20040721174950.GE89322@luke.immure.com>
In-Reply-To: <01f201c46f45$231aec10$45fea8c0@turbofresse>
User-Agent: Mutt/1.5.6i
On Wed, Jul 21, 2004 at 07:07:00PM +0200, jesk wrote:
> > Ah, now that's a different story.  You're out of the control of the
> > process scheduler and into the disk.  I don't suppose you're using an
> > IDE/ATA disk with no tagged queueing? :)  Run "dmesg | grep depth.queue"
> > to see how many requests can be queued up on your disk at once.
> >
> > That dd is stuffing lots of dirty data into the disk cache, and all the
> > other processes have to wait in line to get their I/Os done.  You'll
> > see much better results from a SCSI disk (with usual queue depths
> > between 32 and 64), and even better results from a multi-disk hardware
> > RAID array (which will have a large write cache).
> >
> > --
> > Dan Nelson
> > dnelson@allantgroup.com
>
> When the system stops responding because of heavy write operations on
> the disk, the reason shouldn't be blamed on the device configuration
> or on non-SCSI hardware ;)

I have to agree with Dan here.  I tried the simple dd test on one of my
5-CURRENT systems here, which has both IDE disks and an LSI 320-2 SCSI
RAID controller with 256 MB of write-back cache.  When running the dd to
one of the IDE drives, the delay was _very_ noticeable when attempting
simultaneous commands that did I/O to that same disk (often the command
didn't even seem to run until the dd completed).  However, when doing
the same thing to a filesystem on the LSI array there was no discernible
delay (though the command did take a bit longer to run).
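For anyone who wants to try the same thing, here is a rough sketch of the
test described above.  The file path is made up; point TARGET at a
filesystem on the disk you want to load (bs is spelled out in bytes so it
works with both BSD and GNU dd):

```shell
#!/bin/sh
# Hypothetical target path -- change it to a filesystem on the disk
# under test.
TARGET=/tmp/ddtest.dat

# Write 64 MB of dirty data in the background to load the disk:
dd if=/dev/zero of="$TARGET" bs=1048576 count=64 2>/dev/null &
DD_PID=$!

# While the dd runs, time a small command that must do I/O on the same
# disk; on a single untagged ATA disk it can stall until the dd is done:
time ls -l /tmp > /dev/null

wait "$DD_PID"
rm -f "$TARGET"
```

The interesting number is how long the `ls` takes while the dd is in
flight, compared with running it on an idle disk.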
So, in my case anyway, I was seeing what I'm convinced was I/O
starvation on the IDE disk, but not on the disk attached to the hardware
RAID controller.

Bob

--
Bob Willcox              Serocki's Stricture:
bob@immure.com             Marriage is always a bachelor's last option.
Austin, TX