Date:      Thu, 12 Apr 2007 16:00:26 -0400
From:      Mike Meyer <mwm-keyword-freebsdhackers2.e313df@mired.org>
To:        Daniel Taylor <daniel_h_taylor@yahoo.co.uk>
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: tcp connection splitter
Message-ID:  <17950.36826.926845.213901@bhuda.mired.org>
In-Reply-To: <20070412190849.63355.qmail@web27705.mail.ukl.yahoo.com>
References:  <20070412190849.63355.qmail@web27705.mail.ukl.yahoo.com>

In <20070412190849.63355.qmail@web27705.mail.ukl.yahoo.com>, Daniel Taylor <daniel_h_taylor@yahoo.co.uk> typed:
> data/second), a lot of memcpy()s, and doesn't scale
> very well.   Also, adding a packet to N queues is
> expensive because it needs to acquire and release
> N mutex locks (one for each client queue.)

You can't escape that with this architecture. In particular:

> Each
> enqueue bumps the refcount, each dequeue decreases it;
> when the refcount drops to 0, the packet is free()'d
> (by whomever happened to dequeue it last).

These operations have to be locked, so you have to acquire and release
1 mutex lock N+1 times.

The FSM model already suggested works well, though I tend to call it
the async I/O model, because all your I/O is done async. You track the
state of each socket, and events on the socket trigger state
transitions for that socket. The programming for a single execution
path is a bit more complicated, because the state has to be tracked
explicitly instead of being implicit in the PC, but *all* the
concurrency issues go away, so overall it's a win.

	<mike
-- 
Mike Meyer <mwm@mired.org>		http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.


