From owner-freebsd-performance@FreeBSD.ORG  Thu Jul  7 05:55:43 2011
Date: Thu, 07 Jul 2011 07:55:41 +0200
From: "Hartmann, O." <ohartman@zedat.fu-berlin.de>
To: Arnaud Lacombe, FreeBSD Current, freebsd-performance@freebsd.org
Subject: Re: Heavy I/O blocks FreeBSD box for several seconds
Message-ID: <4E154A5D.8080009@zedat.fu-berlin.de>

On 07/07/11 06:29, Arnaud Lacombe wrote:
> Hi,
>
> On Wed, Jul 6, 2011 at 5:00 PM, Hartmann, O. wrote:
>> On 07/06/11 21:36, Steve Kargl wrote:
>>> On Wed, Jul 06, 2011 at 03:18:35PM -0400, Arnaud Lacombe wrote:
>>>> Hi,
>>>>
>>>> On Wed, Jul 6, 2011 at 12:28 PM, Steve Kargl wrote:
>>>>> On Wed, Jul 06, 2011 at 05:29:24PM +0200, O. Hartmann wrote:
>>>>>> I use SCHED_ULE on all machines, since it is supposed to perform
>>>>>> better on multicore boxes, but there are lots of suggestions to
>>>>>> switch back to the old SCHED_4BSD scheduler.
>>>>>>
>>>>> If you are using MPI in numerical codes, then you want
>>>>> to use SCHED_4BSD. I've posted numerous times about ULE
>>>>> and its very poor performance when using MPI.
>>>>>
>>>>> http://lists.freebsd.org/pipermail/freebsd-hackers/2008-October/026375.html
>>>>>
>>>> [sarcasm]
>>>> It is rather funny to see that the post you point to generated
>>>> exactly zero meaningful follow-ups then and, as you mention later
>>>> in this thread, the issue still remains today :-)
>>>> [/sarcasm]
>>>>
>>> Apparently, you are privy to my private email exchanges
>>> with jeffr.
>>>
>>> I'm also not sure why you're being sarcastic here. The
>>> issue was, and AFAIK still is, a problem for anyone using
>>> FreeBSD in an HPC cluster. ULE simply performs worse than
>>> 4BSD.
>>>
>> Well, I know only very few people using FreeBSD within an HPC
>> cluster or even for scientific purposes, except myself and some
>> people around here.
>>
> Well, quad-core CPUs and dual-socket machines are quite common these
> days, even in non-HPC systems.
> So, unless you understand the issue and ULE well enough to assert
> that this issue is tied to this workload only, I would assume it
> affects other ULE use cases and a broader user spectrum than "you and
> some people around".
>
>  - Arnaud

Maybe this is a little misunderstanding. I complained about the fact
that FreeBSD is more and more vanishing from HPC (it was very common in
the mid-90s and at the beginning of the 2000s). After the introduction
of Linux kernel 2.6, my former department banned all FreeBSD boxes due
to Linux's much better network performance and the availability of
64-bit HPC compilers. Nowadays the situation has turned even worse with
GPGPU. As you said, multicore machines are very common, so the
shortcomings of the multicore-aware ULE scheduler do not affect merely
a marginal group of users with a specific workload.
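
For anyone who wants to reproduce the ULE/4BSD comparison Steve
describes above: the scheduler is selected at kernel build time, so
switching means building a custom kernel. A minimal sketch, assuming a
standard source tree in /usr/src on amd64 and a hypothetical config
file named MYKERNEL copied from GENERIC:

    # sys/amd64/conf/MYKERNEL -- copy of GENERIC with the scheduler
    # option swapped:
    # options       SCHED_ULE       # default ULE scheduler, disabled
    options         SCHED_4BSD      # traditional 4BSD scheduler

    # Then rebuild, install, and reboot:
    #   cd /usr/src
    #   make buildkernel KERNCONF=MYKERNEL
    #   make installkernel KERNCONF=MYKERNEL
    #   shutdown -r now
    #
    # The scheduler compiled into the running kernel is reported by:
    #   sysctl kern.sched.name

Checking kern.sched.name before and after makes it easy to confirm
which scheduler a given benchmark run actually used.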
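
To illustrate the kind of workload in question: a tightly coupled MPI
job runs one compute-bound process per core, and each iteration ends in
a collective operation, so the whole job advances at the speed of the
slowest rank. The reported ULE problem was that two busy ranks would
occasionally share one core while another core sat idle, stalling every
rank at the collective. A toy C sketch of that pattern (hypothetical
code, not Steve's actual benchmark):

    /* Toy tightly coupled MPI job: each rank does a compute phase,
     * then all ranks synchronize in MPI_Allreduce, so the iteration
     * time is set by the slowest (worst-scheduled) rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local = 0.0, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double t0 = MPI_Wtime();
        for (int iter = 0; iter < 100; iter++) {
            /* compute phase: pure CPU-bound arithmetic */
            for (long i = 0; i < 10 * 1000 * 1000; i++)
                local += 1.0 / (double)(i + rank + 1);
            /* synchronization phase: everyone waits for the slowest
             * rank before the next iteration can start */
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%d ranks: %.2f s (sum %.6e)\n",
                   size, t1 - t0, global);
        MPI_Finalize();
        return 0;
    }

Built with mpicc and run with one rank per core (e.g. mpiexec -n 8
./toy on a dual quad-core box), the wall-clock time under each
scheduler gives a rough measure of the effect being discussed.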