Date:      Wed, 6 Mar 2019 16:39:31 -0800
From:      Mark Millard <marklmi@yahoo.com>
To:        Justin Hibbits <chmeeedalf@gmail.com>
Cc:        freebsd-ppc <freebsd-ppc@freebsd.org>
Subject:   Re: head -r344018 powerpc64 variant on Powermac G5 (2 sockets, 2 cores each): [*buffer arena] shows up more . . .?
Message-ID:  <8668AAF7-9E6A-4278-9D1B-2ECDBD3804AA@yahoo.com>
In-Reply-To: <20190306151914.44ea831c@titan.knownspace>
References:  <D9B56EE2-35C7-44A2-9229-D9E4AECAD3E1@yahoo.com> <20190306151914.44ea831c@titan.knownspace>



On 2019-Mar-6, at 13:19, Justin Hibbits <chmeeedalf@gmail.com> wrote:

> On Mon, 4 Mar 2019 19:43:09 -0800
> Mark Millard via freebsd-ppc <freebsd-ppc@freebsd.org> wrote:
> 
>> [It is possible that the following is tied to my hack to
>> avoid threads ending up stuck-sleeping. But I ask about
>> an alternative that I see in the code.]
>> 
>> Context: using the modern powerpc64 VM_MAX_KERNEL_ADDRESS
>> and using usefdt=1 on an old Powermac G5 (2 sockets, 2 cores
>> each). Hacks are in use to provide fairly reliable booting
>> and to avoid threads getting stuck sleeping.
>> 
>> Before the modern VM_MAX_KERNEL_ADDRESS figure there were only
>> 2 or 3 bufspacedaemon-* threads, as I remember. Now there are 8
>> (plus bufdaemon and its worker), for example:
>> 
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.39 [bufdaemon/bufdaemon]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.07 [bufdaemon/bufspaced]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
>> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.56 [bufdaemon// worker]
>> 
>> I'm sometimes seeing processes showing [*buffer arena] that
>> seem to wait for a fairly long time with that status, not
>> something I'd seen historically for those same types of
>> processes under a similar overall load (not much). During such
>> times, trying to create processes to look around at what is
>> going on also seems to wait. (Probably with the same status?)
>> 
> 
> Hi Mark,
> 
> Can you try the attached patch?  It might be overkill in the
> synchronization, and I might be using the wrong barriers to be
> considered correct, but I think this should narrow the race down and
> synchronize the timebases to within a very small margin.  The real
> correct fix would be to suspend the timebase on all cores, which is
> feasible (there's a GPIO for the G4s, and i2c for the G5s), but
> that's non-trivial extra work.
> 
> Be warned: I haven't tested it, only compiled it (I don't have a
> G5 to test with anymore).
> 

Sure, I'll try it when the G5 is again available: it is doing
a time-consuming build.
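
As an aside, on the "suspend the timebase on all cores" idea: my
rough understanding of it, as a sketch only. tb_clock_freeze() and
tb_clock_unfreeze() are hypothetical stand-ins for the GPIO (G4)
or i2c (G5) mechanism you mention, and mttb() is the usual inline
from <machine/cpufunc.h>:

static void
frozen_timebase_sync(u_long tb)
{
        tb_clock_freeze();      /* hypothetical: stop the TB input clock */
        mttb(tb);               /* this core loads the shared value; the
                                   other cores would do the same while
                                   the clock is stopped */
        tb_clock_unfreeze();    /* hypothetical: restart the clock, so
                                   all cores tick in lockstep */
}

With the clock stopped there is no race window at all, which is why
that would be the real fix rather than just narrowing the race.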

I do see one possible oddity from tracing another
platform_smp_timebase_sync use in the code . . .

DEVMETHOD(cpufreq_drv_set,      pmufreq_set)

static int
pmufreq_set(device_t dev, const struct cf_setting *set)
{
. . .        
        error = pmu_set_speed(speed_sel);
. . .
}

int
pmu_set_speed(int low_speed)
{
. . .
        platform_sleep();
. . .
}

PLATFORMMETHOD(platform_sleep,          powermac_sleep),

void
powermac_sleep(platform_t platform)
{
        
        *(unsigned long *)0x80 = 0x100;
        cpu_sleep();
}

void
cpu_sleep()
{
. . .
        platform_smp_timebase_sync(timebase, 0);
. . .
}

PLATFORMMETHOD(platform_smp_timebase_sync, powermac_smp_timebase_sync),
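
For reference, my understanding of the glue behind that method (a
rough sketch after sys/powerpc/powerpc/platform.c and platform_if.m,
from memory, so treat it as approximate):

void
platform_smp_timebase_sync(u_long tb, int ap)
{

        PLATFORM_SMP_TIMEBASE_SYNC(plat_obj, tb, ap);
}

The second argument flags whether the calling CPU is an AP (1) or
the BSP (0), which is what the question below is about.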

The issue:

I do not see any matching platform_smp_timebase_sync(timebase, 1)
call, nor any other CPUs reaching powermac_smp_timebase_sync, in
this sequence.
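
For illustration, the sort of matched pair I would expect around a
timebase update, as a sketch only: tb_ready is an illustrative flag,
the barrier choice is hand-waved, and mttb()/powerpc_sync() are the
usual inlines from <machine/cpufunc.h>:

static volatile int tb_ready = 0;

static void
sketch_timebase_sync(u_long tb, int ap)
{
        if (ap) {
                while (!tb_ready)       /* AP: wait for the BSP's signal */
                        ;
                mttb(tb);               /* then load the shared timebase */
        } else {
                mttb(tb);               /* BSP: set its own timebase */
                powerpc_sync();         /* order the mttb before the flag */
                tb_ready = 1;           /* release the waiting APs */
        }
}

The (timebase, 0) call in cpu_sleep() takes only the BSP side of
such a rendezvous, so unless the APs make the matching
(timebase, 1) call somewhere, nothing performs the AP half.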

(If this makes testing the patch inappropriate, let me know.)

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)



