Date:      Tue, 22 May 2007 00:38:49 +1000 (EST)
From:      Bruce Evans <bde@optusnet.com.au>
To:        Attilio Rao <attilio@freebsd.org>
Cc:        arch@freebsd.org
Subject:   Re: sched_lock && thread_lock()
Message-ID:  <20070521225032.C57233@delplex.bde.org>
In-Reply-To: <4651FCB5.7070604@FreeBSD.org>
References:  <20070520155103.K632@10.0.0.1> <20070521113648.F86217@besplex.bde.org> <20070520213132.K632@10.0.0.1> <4651CAB8.8070007@FreeBSD.org> <4651CE2F.8080908@FreeBSD.org> <20070521022847.D679@10.0.0.1> <20070521195811.G56785@delplex.bde.org> <4651FCB5.7070604@FreeBSD.org>

On Mon, 21 May 2007, Attilio Rao wrote:

> Bruce Evans wrote:
>> We already have inconsistent accesses via PCPU_LAZY_INC(), and I think
>> all cases where you don't really care about the values can be handled
>> better by PCPU_LAZY_INC().  (Though I don't like PCPU_LAZY_INC(), since
>> it gives namespace pollution, a larger race window while combining the
>> values, complications, and missing support for combining the values
>> in dead kernels.)  Cases that can be handled by PCPU_LAZY_INC() are
>> distinguished by the variables being monotonically increasing with
>> time (until wrapping at UINT_MAX breaks them) and nothing except
>> userland looking at them.  The userland accesses are full of races,
>> but userland doesn't really care.  If accessing the pcpu is inefficient,
>> then PCPU_LAZY_INC() can always use atomic ops if that is less inefficient.
>
> Yes, but how do you manage to maintain per-cpu data without using per-cpu 
> fields? Do you want it to fit in a table?
> In that case you would probably need an extra lock for the table, which 
> would increase overhead.

Just change the PCPU_LAZY_INC() macro to access the non-per-cpu global,
using the same locking as you use for the non-pcpu access method.  I
do this in some (old) sys trees for the !SMP case.  This gives a small
unpessimization and fixes reading the values from dead kernels, but
only in the !SMP case.  I also remove pc_cnt from the pcpu data in the
!SMP case.  This is a smaller optimization and requires larger changes,
to remove the adding up of the pcpu data in vm_meter.c.  The adding up
can also be left unchanged, since it becomes a no-op when the data stays
all zeros.  In the old sys trees, there is no locking for individual accesses,
so PCPU_LAZY_INC(var) becomes (++(var)).  In the current tree,
PCPU_LAZY_INC(var) is (++(<pcpu var>)) written in asm for some arches to
ensure that it is atomic with respect to interrupts on those arches.
Atomicity with respect to interrupts is necessary and sufficient for
the non-per-cpu global !SMP case too.
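
Roughly, the two variants look like this (a sketch only, not the exact
macros from any tree; PCPU_PTR() is the usual pcpu accessor and "cnt"
the global struct vmmeter):

    /* SMP (current tree, generic version): bump the per-cpu counter.
     * Some arches replace this with asm so that the increment is a
     * single instruction, i.e., atomic w.r.t. interrupts. */
    #define PCPU_LAZY_INC(member)   (++*PCPU_PTR(member))

    /* !SMP (my old sys trees): drop the pcpu indirection and bump the
     * global directly.  A one-instruction "++" is atomic enough
     * w.r.t. interrupts here too. */
    #define PCPU_LAZY_INC(member)   (++(member))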

I now seem to remember some more history of this.  A few vmmeter variables,
e.g., v_intr, have no natural locking and were originally locked by
sched_lock or atomic accesses.  People didn't like the slowness from this,
and removed some of the sched_locking and/or switched to PCPU_LAZY_INC().
The remaining sched locking seems to be mostly accidental.

Bugs found while grepping for sched_locking of VMCNT fields:
- v_trap is supposed to be updated by PCPU_LAZY_INC(), but subr_trap.c
   updates it by VMCNT_ADD() (the difference is sketched after this
   list).  The direct access used to be perfectly unlocked and
   non-atomic, since it occurred immediately after releasing sched_lock
   and used a plain "++".  Now it is atomic and a micro-pessimization.
- v_trap's name is now hidden by the macro so grepping for it doesn't work.
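
To make the v_trap difference concrete (a sketch; VMCNT_ADD() as in the
patch under discussion):

    cnt.v_trap++;                  /* old: plain "++", unlocked */
    VMCNT_ADD(v_trap, 1);          /* now: atomic_add_int() underneath,
                                    * a locked bus op on SMP */
    PCPU_LAZY_INC(cnt.v_trap);     /* intended: per-cpu "++", no lock */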

Non(?)-bugs found while grepping for VMCNT:
- most updates of v_free_count are via pq->lcnt++ and pq->lcnt-- in
   vm_page.c and vm_pageq.c.  pq->lcnt is VMCNT_PTR(&cnt.v_free_count)
   for all pq.  The locking for this seems to be correct, except of
   course in the sysctls, since it uses vm_page_queue_free_mtx and
   never needed the VMCNT access methods (see the sketch below).  The
   unneeded VMCNT_GET()s for this alone comprise about 1/3 of the
   VMCNT_GET()s in vm.
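
That is, the effect is roughly this (not the literal vm_page.c code;
the real updates go through the pq->lcnt alias):

    mtx_lock(&vm_page_queue_free_mtx);
    cnt.v_free_count++;            /* what pq->lcnt++ amounts to */
    mtx_unlock(&vm_page_queue_free_mtx);

The mutex already serializes the updates, so the atomic VMCNT access
methods buy nothing here.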

Bruce


