Date:      Wed, 6 Mar 2019 15:19:14 -0600
From:      Justin Hibbits <chmeeedalf@gmail.com>
To:        freebsd-ppc <freebsd-ppc@freebsd.org>
Subject:   Re: head -r344018 powerpc64 variant on Powermac G5 (2 sockets, 2 cores each): [*buffer arena] shows up more . . .?
Message-ID:  <20190306151914.44ea831c@titan.knownspace>
In-Reply-To: <D9B56EE2-35C7-44A2-9229-D9E4AECAD3E1@yahoo.com>
References:  <D9B56EE2-35C7-44A2-9229-D9E4AECAD3E1@yahoo.com>

--MP_/95kzJ17J+e.fHE.d+h6njCd
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

On Mon, 4 Mar 2019 19:43:09 -0800
Mark Millard via freebsd-ppc <freebsd-ppc@freebsd.org> wrote:

> [It is possible that the following is tied to my hack to
> avoid threads ending up stuck-sleeping. But I ask about
> an alternative that I see in the code.]
> 
> Context: using the modern powerpc64 VM_MAX_KERNEL_ADDRESS
> and using usefdt=1 on an old Powermac G5 (2 sockets, 2 cores
> each). Hacks are in use to provide fairly reliable booting
> and to avoid threads getting stuck sleeping.
> 
> Before the modern VM_MAX_KERNEL_ADDRESS figure there were only
> 2 or 3 bufspacedaemon-* threads as I remember. Now there are 8
> (plus bufdaemon and its worker), for example:
> 
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.39 [bufdaemon/bufdaemon]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.07 [bufdaemon/bufspaced]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
> root         23   0.0  0.0     0   288  -  DL   15:48     0:00.56 [bufdaemon// worker]
> 
> I'm sometimes seeing processes showing [*buffer arena] that
> seem to wait for a fairly long time with that status, not
> something I'd seen historically for those same types of
> processes under a similar overall load (not much). During such
> times, trying to create new processes to look around at what is
> going on also seems to wait (probably with the same status?).
> 

Hi Mark,

Can you try the attached patch?  It might be overkill in the
synchronization, and I might be using the wrong barriers for it to be
strictly correct, but I think it should narrow the race down and
synchronize the timebases to within a very small margin.  The real
correct fix would be to suspend the timebase on all cores, which is
feasible (there's a GPIO for the G4s, and i2c for the G5s), but that's
non-trivial extra work.
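
Roughly, the rendezvous the patch performs looks like the following
userland sketch (C11 atomics and pthreads standing in for the kernel's
atomic_*_int primitives and the AP/BSP roles; NCPUS, timebase_sync(),
and the thread setup are just illustrative, not part of the patch):

/*
 * Userland analogue of the rendezvous in the attached patch -- NOT the
 * kernel code itself.  Each "AP" checks in and spins until the "BSP"
 * has seen every CPU check in; then all of them load the reference
 * timebase value at (nearly) the same moment.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4			/* stand-in for mp_ncpus */

static atomic_int cpus;		/* how many CPUs have checked in */
static atomic_int unleash;	/* set by the BSP to release the APs */
static unsigned long timebase = 12345;	/* value every CPU should load */

static void
timebase_sync(int ap)
{
	if (ap) {
		/* AP: check in, then spin until the BSP releases everyone. */
		atomic_fetch_add(&cpus, 1);
		while (!atomic_load_explicit(&unleash, memory_order_acquire))
			;
	} else {
		/* BSP: wait until every CPU has checked in, then release. */
		atomic_fetch_add(&cpus, 1);
		while (atomic_load(&cpus) != NCPUS)
			;
		atomic_store_explicit(&unleash, 1, memory_order_release);
	}
	/* The real function does mttb(tb) here; we just read the value. */
	printf("%s loaded timebase %lu\n", ap ? "AP " : "BSP", timebase);
}

static void *
ap_thread(void *arg)
{
	(void)arg;
	timebase_sync(1);
	return (NULL);
}

int
main(void)
{
	pthread_t aps[NCPUS - 1];
	int i;

	for (i = 0; i < NCPUS - 1; i++)
		pthread_create(&aps[i], NULL, ap_thread, NULL);
	timebase_sync(0);	/* BSP path */
	for (i = 0; i < NCPUS - 1; i++)
		pthread_join(aps[i], NULL);
	return (0);
}

The acquire load on unleash pairs with the BSP's release store, so the
APs cannot run ahead of the release; the mttb() calls then happen
within roughly the spin-loop exit latency of each other.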

Be warned: I haven't tested it, only compiled it (I don't have a
G5 to test with anymore).

- Justin

--MP_/95kzJ17J+e.fHE.d+h6njCd
Content-Type: text/x-patch
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename=powermac_tb_sync.diff

diff --git a/sys/powerpc/powermac/platform_powermac.c b/sys/powerpc/powermac/platform_powermac.c
index fe818829dc7..b5d34ef90c3 100644
--- a/sys/powerpc/powermac/platform_powermac.c
+++ b/sys/powerpc/powermac/platform_powermac.c
@@ -41,6 +41,7 @@ __FBSDID("$FreeBSD$");
 #include <vm/pmap.h>
 
 #include <machine/altivec.h>	/* For save_vec() */
+#include <machine/atomic.h>
 #include <machine/bus.h>
 #include <machine/cpu.h>
 #include <machine/fpu.h>	/* For save_fpu() */
@@ -396,6 +397,19 @@ powermac_smp_start_cpu(platform_t plat, struct pcpu *pc)
 static void
 powermac_smp_timebase_sync(platform_t plat, u_long tb, int ap)
 {
+	static int cpus;
+	static int unleash;
+
+	if (ap) {
+		atomic_add_int(&cpus, 1);
+		while (!atomic_load_acq_int(&unleash))
+			;
+	} else {
+		atomic_add_int(&cpus, 1);
+		while (atomic_load_int(&cpus) != mp_ncpus)
+			;
+		atomic_store_rel_int(&unleash, 1);
+	}
 
 	mttb(tb);
 }

--MP_/95kzJ17J+e.fHE.d+h6njCd--


