From owner-freebsd-stable@FreeBSD.ORG Thu Mar 7 18:57:45 2013
From: "Steven Hartland" <killing@multiplay.co.uk>
To: "Karl Denninger", freebsd-stable@freebsd.org
References: <513524B2.6020600@denninger.net> <20130307072145.GA2923@server.rulingia.com> <5138A4C1.5090503@denninger.net>
Subject: Re: ZFS "stalls" -- and maybe we should be talking about defaults?
Date: Thu, 7 Mar 2013 18:57:41 -0000
List-Id: Production branch of FreeBSD source code

----- Original Message -----
From: "Karl Denninger"

> Where I am right now is this:
>
> 1. I *CANNOT* reproduce the spins on the test machine with Postgres
> stopped in any way. Even with multiple ZFS send/recv copies going on
> and the load average north of 20 (due to all the geli threads), the
> system doesn't stall or produce any notable pauses in throughput. Nor
> does the system RAM allocation get driven hard enough to force paging.
>
> This is with NO tuning hacks in /boot/loader.conf. I/O performance is
> both stable and solid.
>
> 2. WITH Postgres running as a connected hot spare (identical to the
> production machine), allocating ~1.5G of shared, wired memory, and
> running the same synthetic workload as in (1) above, I am getting SMALL
> versions of the misbehavior. However, while system RAM allocation gets
> driven pretty hard, and in some instances reaches down toward 100MB, it
> doesn't get driven hard enough to allocate swap. The "burstiness" is
> very evident in the iostat figures, with spates dropping into the
> single-digit MB/sec range from time to time, but it's not enough to
> drive the system to a full-on stall.
>
> There's pretty clearly a bad interaction here between Postgres wiring
> memory and the ARC when the latter is left alone and allowed to do what
> it wants. I'm continuing to work on replicating this on the test
> machine... just not completely there yet.

Another possibility to consider is how Postgres uses the filesystem. For
example, does it request sync I/O in ways not present in the system
without it, which could cause the filesystem, and possibly the underlying
disk system, to behave differently?

One other option to test, just to rule it out: what happens if you use
the 4BSD scheduler instead of ULE?

    Regards
    Steve

================================================
This e-mail is private and confidential between Multiplay (UK) Ltd. and
the person or entity to whom it is addressed. In the event of
misdirection, the recipient is prohibited from using, copying, printing
or otherwise disseminating it or any information contained in it.
In the event of misdirection, illegible or incomplete transmission please
telephone +44 845 868 1337 or return the e-mail to
postmaster@multiplay.co.uk.
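[Editor's illustration, not part of the original thread.] Steven's question about sync I/O can be made concrete with a small sketch. PostgreSQL fsync()s its write-ahead log on commit, so every commit forces the filesystem to push data to stable storage immediately rather than batching it (on ZFS, this means going through the ZIL instead of waiting for the next transaction group). The sketch below contrasts purely buffered writes with fsync-per-record writes; the record count and sizes are arbitrary illustrative values.

```python
# Sketch: buffered writes vs. fsync-per-record writes (the pattern a
# database WAL produces). Filenames and counts are hypothetical.
import os
import tempfile
import time


def write_records(path, n, sync_each):
    """Write n 512-byte records; optionally fsync() after each one,
    the way a database syncs its write-ahead log per commit."""
    with open(path, "wb") as f:
        start = time.perf_counter()
        for _ in range(n):
            f.write(b"x" * 512)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())  # force the data to stable storage
        return time.perf_counter() - start


with tempfile.TemporaryDirectory() as d:
    buffered = write_records(os.path.join(d, "buf.dat"), 200, sync_each=False)
    synced = write_records(os.path.join(d, "sync.dat"), 200, sync_each=True)
    print(f"buffered: {buffered:.4f}s  fsync-per-record: {synced:.4f}s")
```

On most systems the fsync-per-record run is markedly slower, which is exactly the kind of workload shift that could make the filesystem (and the disks beneath it) behave differently only when Postgres is running.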