Date:      Thu, 7 Jul 2011 00:55:37 -0700
From:      Case van Rij <case.vanrij@gmail.com>
To:        sgk@troutmask.apl.washington.edu
Cc:        "freebsd-performance@freebsd.org" <freebsd-performance@freebsd.org>, Arnaud Lacombe <lacombar@gmail.com>
Subject:   Re: Heavy I/O blocks FreeBSD box for several seconds
Message-ID:  <CAKEWRjNtnKF7Hg3RZnyAfBfhx31a=jHTxG3zrwsGrgBtF86zhA@mail.gmail.com>
In-Reply-To: <4E154A5D.8080009@zedat.fu-berlin.de>
References:  <4E1421D9.7080808@zedat.fu-berlin.de> <CALH631=F4bSgNDE4w0qcXGMgGxZRRwCP9n-H4M0c+1UEaqWr7Q@mail.gmail.com> <4E147F54.40908@zedat.fu-berlin.de> <20110706162811.GA68436@troutmask.apl.washington.edu> <CACqU3MVLr5VXRovs1uV+zHazJi2rrjE9Sp3XzsCPJ0Un06pmDQ@mail.gmail.com> <20110706193636.GA69550@troutmask.apl.washington.edu> <4E14CCE5.4050906@zedat.fu-berlin.de> <CACqU3MWBr7fsLN25wBPV=WDSu5oajkO=iFEYcXKSHz7UnwSWxA@mail.gmail.com> <4E154A5D.8080009@zedat.fu-berlin.de>

On Wed, Jul 6, 2011 at 10:55 PM, Hartmann, O.
<ohartman@zedat.fu-berlin.de> wrote:
> On 07/07/11 06:29, Arnaud Lacombe wrote:
>>
>> Hi,
>>
>> On Wed, Jul 6, 2011 at 5:00 PM, Hartmann, O.
>> <ohartman@zedat.fu-berlin.de> wrote:
>>>
>>> On 07/06/11 21:36, Steve Kargl wrote:
>>>>
>>>> On Wed, Jul 06, 2011 at 03:18:35PM -0400, Arnaud Lacombe wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On Wed, Jul 6, 2011 at 12:28 PM, Steve Kargl
>>>>> <sgk@troutmask.apl.washington.edu> wrote:
>>>>>>
>>>>>> On Wed, Jul 06, 2011 at 05:29:24PM +0200, O. Hartmann wrote:
>>>>>>>
>>>>>>> I use SCHED_ULE on all machines, since it is supposed to be
>>>>>>> performing
>>>>>>> better on multicore boxes, but there are lots of suggestions
>>>>>>> switching
>>>>>>> back to the old SCHED_4BSD scheduler.
>>>>>>>
>>>>>> If you are using MPI in numerical codes, then you want
>>>>>> to use SCHED_4BSD.  I've posted numerous times about ULE
>>>>>> and its very poor performance when using MPI.
>>>>>>
>>>>>> http://lists.freebsd.org/pipermail/freebsd-hackers/2008-October/026375.html

>>>>>>>With ULE, 2 Test_mpi jobs are always scheduled on the same core while one
>>>>>>>core remains idle.  Also, note the difference in the reported load averages.

While possibly not the same issue you're seeing, I noticed a similar problem
on 8- and 12-core machines with ULE: with a relatively small number of
runnable threads, some threads were stuck waiting on a busy core's run queue
while other cores sat completely idle.
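
If it helps, below is the kind of throwaway repro I'd use for that symptom
(my own busy-loop sketch, not the Test_mpi case from the thread; the file
name spin.c and the default of 2 workers are just my choices): start fewer
spinners than you have cores and watch per-CPU load with 'top -P' to see
whether two of them end up sharing a core.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Each worker is purely CPU-bound: it just spins. */
static void *
spin(void *arg)
{
        volatile unsigned long n = 0;

        (void)arg;
        for (;;)
                n++;
        /* NOTREACHED */
        return (NULL);
}

int
main(int argc, char **argv)
{
        pthread_t tid;
        int i, nthreads;

        /* Default to 2 spinners, like the 2 Test_mpi jobs quoted above. */
        nthreads = (argc > 1) ? atoi(argv[1]) : 2;
        for (i = 0; i < nthreads; i++) {
                if (pthread_create(&tid, NULL, spin, NULL) != 0) {
                        perror("pthread_create");
                        return (1);
                }
        }
        printf("spinning with %d thread(s); watch 'top -P'\n", nthreads);
        pause();        /* workers keep running until you kill the process */
        return (0);
}

Build and run with something like: cc -pthread spin.c -o spin && ./spin 2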

tdq_idled() won't steal threads from another CPU's run queue unless that
queue holds at least kern.sched.steal_thresh threads, where
steal_thresh = min(fls(mp_ncpus) - 1, 3); i.e. on an 8-core system a queue
needs 3 runnable threads before an idle core steals one.
Fortunately you can simply override steal_thresh at run time; 1 works
great for me, ymmv.
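
For reference, here's a tiny user-space sketch of how that default comes out
(my own rewrite for illustration, not the sched_ule.c code itself; fls() is
reimplemented inline so it builds anywhere):

#include <stdio.h>

/* Same idea as fls(3): 1-based index of the highest bit set, 0 for 0. */
static int
my_fls(int mask)
{
        int bit;

        for (bit = 0; mask != 0; bit++)
                mask >>= 1;
        return (bit);
}

int
main(void)
{
        int ncpus, thresh;

        for (ncpus = 2; ncpus <= 16; ncpus *= 2) {
                /* Mirrors: steal_thresh = min(fls(mp_ncpus) - 1, 3); */
                thresh = my_fls(ncpus) - 1;
                if (thresh > 3)
                        thresh = 3;
                printf("%2d CPUs -> default steal_thresh = %d\n",
                    ncpus, thresh);
        }
        return (0);
}

The runtime override is just

    sysctl kern.sched.steal_thresh=1

and putting kern.sched.steal_thresh=1 in /etc/sysctl.conf should make it
stick across reboots.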


