Date:      Mon, 4 Mar 2019 19:43:09 -0800
From:      Mark Millard <marklmi@yahoo.com>
To:        FreeBSD PowerPC ML <freebsd-ppc@freebsd.org>
Subject:   head -r344018 powerpc64 variant on Powermac G5 (2 sockets, 2 cores each): [*buffer arena] shows up more . . .?
Message-ID:  <D9B56EE2-35C7-44A2-9229-D9E4AECAD3E1@yahoo.com>

[It is possible that the following is tied to my hack to
avoid threads ending up stuck-sleeping. But I ask about
an alternative that I see in the code.]

Context: using the modern powerpc64 VM_MAX_KERNEL_ADDRESS
and using usefdt=1 on an old Powermac G5 (2 sockets, 2 cores
each). Hacks are in use to provide fairly reliable booting
and to avoid threads getting stuck sleeping.

Before the modern VM_MAX_KERNEL_ADDRESS figure, there were only
2 or 3 bufspacedaemon-* threads, as I remember. Now there are 8
(plus bufdaemon and its worker), for example:

root         23   0.0  0.0     0   288  -  DL   15:48     0:00.39 [bufdaemon/bufdaemon]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.07 [bufdaemon/bufspaced]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.05 [bufdaemon/bufspaced]
root         23   0.0  0.0     0   288  -  DL   15:48     0:00.56 [bufdaemon// worker]

I'm sometimes seeing processes show [*buffer arena] and seemingly
sit in that state for a fairly long time, which is not something
I'd seen historically for those same types of processes under a
similar (light) overall load. During such times, trying to create
processes to look around at what is going on seems to wait as well.
(Probably in the same state?)

/usr/src/sys/vm/vm_init.c has:

        /*
         * Allocate the buffer arena.
         *
         * Enable the quantum cache if we have more than 4 cpus.  This
         * avoids lock contention at the expense of some fragmentation.
         */
        size = (long)nbuf * BKVASIZE;
        kmi->buffer_sva = firstaddr;
        kmi->buffer_eva = kmi->buffer_sva + size;
        vmem_init(buffer_arena, "buffer arena", kmi->buffer_sva, size,
            PAGE_SIZE, (mp_ncpus > 4) ? BKVASIZE * 8 : 0, 0);
        firstaddr += size;
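
As I read the vmem_init() prototype in /usr/src/sys/sys/vmem.h, the
argument that "BKVASIZE * 8" feeds is qcache_max, the quantum-cache
size limit. My annotated reading of that call (parameter names taken
from the prototype, so treat the comments as my interpretation):

        vmem_init(buffer_arena,  /* vm                                 */
            "buffer arena",      /* name                               */
            kmi->buffer_sva,     /* base                               */
            size,                /* size: nbuf * BKVASIZE              */
            PAGE_SIZE,           /* quantum                            */
            (mp_ncpus > 4) ? BKVASIZE * 8 : 0,
                                 /* qcache_max: if I count right, this
                                    G5 has mp_ncpus == 4, so the test
                                    fails and the quantum cache ends
                                    up disabled here anyway            */
            0);                  /* flags                              */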

I wonder if the use of "BKVASIZE * 8" should track the
bufspacedaemon-* thread count rather than just the mp_ncpus count,
or if the bufspacedaemon-* thread count should instead track
mp_ncpus (and so be smaller when there are only 4 "cpus" in FreeBSD
terms).
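
Roughly the first alternative, as a sketch only: here
"bufspace_daemon_count" is a hypothetical stand-in for however the
bufspacedaemon-* thread count is (or would be) exposed; I have not
chased down what the real hook would be:

        /*
         * Sketch: scale the quantum cache with the number of
         * bufspacedaemon-* threads rather than via a fixed mp_ncpus
         * test.  bufspace_daemon_count is a made-up name here, not
         * an existing kernel symbol as far as I know.
         */
        extern int bufspace_daemon_count;

        vmem_init(buffer_arena, "buffer arena", kmi->buffer_sva, size,
            PAGE_SIZE,
            (bufspace_daemon_count > 1) ?
                (vmem_size_t)BKVASIZE * bufspace_daemon_count : 0,
            0);

The other direction (fewer bufspacedaemon-* threads when mp_ncpus is
only 4) would presumably be a change on the vfs_bio.c side instead; I
have not looked at what currently drives that thread count.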

Or maybe [*buffer arena] is inherent in having the following
configuration. (The figures below are not from the time frame when
[*buffer arena] was showing up, nor even from just afterward: I've
not managed to see such figures during an episode and I've not
recorded any after one.)

real memory  = 17134088192 (16340 MB)
avail memory = 16385716224 (15626 MB)

hw.physmem: 17134088192
hw.usermem: 15232425984
hw.realmem: 17134088192

Virtual Memory:		(Total: 455052K Active: 413888K)
Real Memory:		(Total: 64736K Active: 62508K)
Shared Virtual Memory:	(Total: 56264K Active: 15232K)
Shared Real Memory:	(Total: 16416K Active: 14204K)
Free Memory:	14022736K

vm.kmem_size: 5482692608
vm.kmem_zmax: 65536
vm.kmem_size_min: 12582912
vm.kmem_size_max: 13743895347
vm.kmem_size_scale: 3
vm.kmem_map_size: 414158848
vm.kmem_map_free: 5068533760

vfs.bufspace: 1397690368
vfs.bufkvaspace: 559185920
vfs.bufmallocspace: 0
vfs.bufspacethresh: 1680538825
vfs.buffreekvacnt: 1007
vfs.bufdefragcnt: 0
vfs.buf_pager_relbuf: 0



===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)



