Date:      Thu, 13 Nov 2014 00:25:50 -0800
From:      John-Mark Gurney <>
To:        J David <>
Cc:        "" <>, "" <>
Subject:   Re: How thread-friendly is kevent?
Message-ID:  <>
In-Reply-To: <>
References:  <> <> <> <> <>

J David wrote this message on Wed, Nov 12, 2014 at 17:24 -0500:
> On Wed, Nov 12, 2014 at 3:49 AM, John-Mark Gurney <> wrote:
> > This is odd...  I would expect that the event w/o _ONESHOT and _DISPATCH
> > to be delivered many times...  Is it possible you have locks in your
> > userland side of things that make this less likely?
> Nope, the test code is (intentionally) entirely lock-free in userland.
> > I have an idea that should only be a few lines of changes that would
> > prevent all the threads waking up...  As we lock the kq before doing
> > the wakeup, we can change KQ_SLEEP from a flag to a count for how many
> > threads are sleeping for an event, and if non-zero, do a wakeup_one...
> > Then when kqueue_scan is about to exit, check to see if there are
> > still events and threads waiting, and then do another wakeup_one...
> This sounds like it could optimize some workloads at substantial
> penalties for others.  If pursued, maybe it needs its own flag.

It really wouldn't be a penalty, as the other thread couldn't make
progress while the kq lock was held and would be waiting for the kq
lock anyway...  The only penalty might be the delay in waking up,
but that'd be minor...  But there was already a penalty for the
cross-processor read to find out that the lock is held...

> > Currently, KQ_SLEEP is only a flag, so we have to do wakeup to make
> > sure everyone wakes up...
> >
> > Well, if you don't have _ONESHOT and _DISPATCH, any changes I make
> > should make it more reliable that all threads get the events dispatched
> > to them... :)
> Using _DISPATCH is no problem, although a solution that didn't require
> two kevent() calls per event would obviously be better when every
> syscall matters.  Though that is largely an issue on VMs, where the
> syscall penalty is artificially large.  In production, this will of
> course run on bare metal.

If you care about that, I'd recommend keeping a thread-local buffer
that you add the enable events to, and then when you get back to your
main loop, you submit all of these pending changes with your next
kevent() call...

> The other option is to wrap kevent() with a mutex on the user side.
> That's what Apache does with accept(), IIRC.

kevent effectively provides that lock internally...

> > But some of this is making sure you only run enough threads as
> > necessary...
> That's almost always true.  But, almost always, determining the
> correct value of "enough" requires a blood sacrifice. :)


  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
