Date:      Fri, 09 Apr 2004 17:53:58 +0900
From:      Seigo Tanimura <tanimura@tanimura.dyndns.org>
To:        Robert Watson <rwatson@FreeBSD.org>
Cc:        freebsd-arch@FreeBSD.org
Subject:   Re: mtx_lock_recurse/mtx_unlock_recurse functions (proof-of-concept).
Message-ID:  <200404090853.i398rwAq029617@urban>
In-Reply-To: <Pine.NEB.3.96L.1040408182241.54122B-100000@fledge.watson.org>
References:  <20040408213647.GL661@darkness.comp.waw.pl> <Pine.NEB.3.96L.1040408182241.54122B-100000@fledge.watson.org>

On Thu, 8 Apr 2004 18:33:00 -0400 (EDT),
  Robert Watson <rwatson@FreeBSD.org> said:

rwatson> On Thu, 8 Apr 2004, Pawel Jakub Dawidek wrote:

>> As was discussed, it will be helpful to have functions that are able to
>> acquire a lock recursively, even if the lock itself isn't recursable.
>> 
>> Here is a patch that should implement this functionality:
>> 
>> http://people.freebsd.org/~pjd/patches/mtx_lock_recurse.patch
>> 
>> I also added a KASSERT() to protect against mixed locking modes, e.g.: 

rwatson> As you know, this is of interest to me because several situations have
rwatson> come up in the network stack locking code where we've worked around
rwatson> locking issues by conditionally acquiring a lock based on whether it's not
rwatson> already held.  For example, in the rwatson_netperf branch, we
rwatson> conditionally acquire the socket buffer lock in the KQueue filter because
rwatson> we don't know if we're being called from a context that has already
rwatson> acquired the mutex:

rwatson> @@ -1875,8 +1915,11 @@
rwatson>  filt_sowrite(struct knote *kn, long hint)
rwatson>  {
rwatson>         struct socket *so = kn->kn_fp->f_data;
rwatson> -       int result;
rwatson> +       int needlock, result;
 
rwatson> +       needlock = !SOCKBUF_OWNED(&so->so_snd);
rwatson> +       if (needlock)
rwatson> +               SOCKBUF_LOCK(&so->so_snd);
rwatson>         kn->kn_data = sbspace(&so->so_snd);
rwatson>         if (so->so_state & SS_CANTSENDMORE) {
rwatson>                 kn->kn_flags |= EV_EOF;
rwatson> @@ -1891,6 +1934,8 @@
rwatson>                 result = (kn->kn_data >= kn->kn_sdata);
rwatson>         else
rwatson>                 result = (kn->kn_data >= so->so_snd.sb_lowat);
rwatson> +       if (needlock)
rwatson> +               SOCKBUF_UNLOCK(&so->so_snd);
rwatson>         return (result);
rwatson>  }

rwatson> This stems directly from the fact that in general we wanted to avoid
rwatson> recursion on socket buffer mutexes, but that in one case due to existing
rwatson> implementation constraints, we don't know what state the lock will be in.
rwatson> In this case, that happens because invocation of filt_sowrite() sometimes
rwatson> occurs directly from the KQueue code to poll a filter, and sometimes from
rwatson> the socket code during a wakeup that invokes KNOTE().  There are arguably
rwatson> at least two ways in which the current code might be modified to try and
rwatson> address this:

rwatson> (0) Use recursive mutexes so that this recursion is permitted.

rwatson> (1) Use separate APIs to enter the per-object filters so that the origins
rwatson>     of the invocation (and hence locking state) are more clear.

rwatson> (2) Avoid holding existing locks over calls into KNOTE(), which goes off
rwatson>     and does a lot of stuff.

rwatson> Neither of these is 100% desirable, and I guess I'd prefer (1) over (2) 
rwatson> since (2) conflicts with the use of locks as a form of reference counting. 
rwatson> However, the easy answer (0) is also undesirable because the socket code
rwatson> generally doesn't need recursion of the socket buffer mutex.  Given the
rwatson> ability to selectively say "Ok, recursion here is fine" we would change
rwatson> the above conditional locking into such a case.
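
With pjd's patch in place, the conditional locking above could presumably
collapse into an unconditional recursive acquire.  A rough sketch follows;
the SOCKBUF_LOCK_RECURSE()/SOCKBUF_UNLOCK_RECURSE() macros are hypothetical
wrappers that would apply mtx_lock_recurse()/mtx_unlock_recurse() to the
sockbuf mutex, and the checks left untouched by the patch are elided:

static int
filt_sowrite(struct knote *kn, long hint)
{
	struct socket *so = kn->kn_fp->f_data;
	int result;

	/*
	 * Recurse on the sockbuf mutex: correct whether or not the
	 * caller (e.g. a wakeup going through KNOTE()) already holds
	 * it, and without marking the mutex MTX_RECURSE globally.
	 */
	SOCKBUF_LOCK_RECURSE(&so->so_snd);
	kn->kn_data = sbspace(&so->so_snd);
	if (so->so_state & SS_CANTSENDMORE) {
		kn->kn_flags |= EV_EOF;
		/* ... remainder of the EV_EOF case as in the patch ... */
	}
	/* ... error/connection checks unchanged from the patch ... */
	if (kn->kn_sfflags & NOTE_LOWAT)
		result = (kn->kn_data >= kn->kn_sdata);
	else
		result = (kn->kn_data >= so->so_snd.sb_lowat);
	SOCKBUF_UNLOCK_RECURSE(&so->so_snd);
	return (result);
}

The filter then no longer has to guess its caller's locking state, at the
cost of allowing recursion at exactly this one site.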


(3) Let a dedicated kernel thread call the methods of struct
    filterops.  

The headache with kqueue is that KNOTE() ultimately calls back into the
subsystem that invoked it.  The tty code would suffer from the same pain
as well.

Assuming that a kqueue event need not be delivered synchronously with the
event itself, maybe we can attach an event queue to a knote list.  KNOTE()
would append an event to the queue and wake up a dedicated kernel thread
(kqueued), which would then dequeue the events and call the filters
*without* holding the lock of the event source (socket, tty, etc.).
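
Very roughly, and with all of the names made up for illustration (the
knote_event structure, knote_defer(), kqueued_thread() and the M_KQEVENT
malloc type are not existing code), the queue and the thread might look
like this:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/event.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/queue.h>

static MALLOC_DEFINE(M_KQEVENT, "kqevent", "deferred knote events");

struct knote_event {
	struct klist		*ke_list;	/* knote list of the source */
	long			 ke_hint;	/* hint passed to KNOTE() */
	STAILQ_ENTRY(knote_event) ke_link;
};

static STAILQ_HEAD(, knote_event) kqueued_events =
    STAILQ_HEAD_INITIALIZER(kqueued_events);
static struct mtx kqueued_mtx;		/* leaf mutex, held only here */
MTX_SYSINIT(kqueued_mtx, &kqueued_mtx, "kqueued", MTX_DEF);

/*
 * Called by the event source (sowwakeup(), ttwakeup(), ...) in place of
 * KNOTE(), possibly with the source's own lock held; it only enqueues
 * the event and wakes up kqueued, and never calls a filter itself.
 */
void
knote_defer(struct klist *list, long hint)
{
	struct knote_event *ke;

	ke = malloc(sizeof(*ke), M_KQEVENT, M_NOWAIT);
	if (ke == NULL)
		return;		/* or fall back to a synchronous KNOTE() */
	ke->ke_list = list;
	ke->ke_hint = hint;
	mtx_lock(&kqueued_mtx);
	STAILQ_INSERT_TAIL(&kqueued_events, ke, ke_link);
	mtx_unlock(&kqueued_mtx);
	wakeup(&kqueued_events);
}

/* Main loop of the dedicated kernel thread (kqueued). */
static void
kqueued_thread(void *arg)
{
	struct knote_event *ke;

	for (;;) {
		mtx_lock(&kqueued_mtx);
		while ((ke = STAILQ_FIRST(&kqueued_events)) == NULL)
			msleep(&kqueued_events, &kqueued_mtx, PWAIT,
			    "kqevt", 0);
		STAILQ_REMOVE_HEAD(&kqueued_events, ke_link);
		mtx_unlock(&kqueued_mtx);

		/*
		 * No socket/tty lock is held here, so the f_event
		 * methods are free to take it themselves.
		 */
		KNOTE(ke->ke_list, ke->ke_hint);
		free(ke, M_KQEVENT);
	}
}

The thread itself would be started at boot with kthread_create().  The
lifetime of the knote list between enqueue and delivery, and whether every
filter can really tolerate deferred delivery, are hand-waved here; the
point is only that the filters end up running with no event-source lock
held, so they can take the sockbuf or tty lock themselves.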

-- 
Seigo Tanimura <tanimura@tanimura.dyndns.org> <tanimura@FreeBSD.org>


