Date:      Sun, 21 Mar 2010 10:18:56 -0600
From:      Scott Long <scottl@samsco.org>
To:        Andriy Gapon <avg@icyb.net.ua>
Cc:        Alexander Motin <mav@freebsd.org>, freebsd-current@freebsd.org, Ivan Voras <ivoras@freebsd.org>, freebsd-arch@freebsd.org
Subject:   Re: Increasing MAXPHYS
Message-ID:  <082B2047-44AE-45DB-985B-D8928EBB4871@samsco.org>
In-Reply-To: <4BA633A0.2090108@icyb.net.ua>
References:  <1269109391.00231800.1269099002@10.7.7.3>	<1269120182.00231865.1269108002@10.7.7.3>	<1269120188.00231888.1269109203@10.7.7.3>	<1269123795.00231922.1269113402@10.7.7.3>	<1269130981.00231933.1269118202@10.7.7.3>	<1269130986.00231939.1269119402@10.7.7.3>	<1269134581.00231948.1269121202@10.7.7.3>	<1269134585.00231959.1269122405@10.7.7.3> <4BA6279E.3010201@FreeBSD.org> <4BA633A0.2090108@icyb.net.ua>

On Mar 21, 2010, at 8:56 AM, Andriy Gapon wrote:

> on 21/03/2010 16:05 Alexander Motin said the following:
>> Ivan Voras wrote:
>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>> barring specific class behaviour, it has a fair chance of working out of
>>> the box) but the incoming queue will need to also be broken up for
>>> greater effect.
>>
>> According to the "notes", it looks like there is a good chance of races, as
>> some places expect only one up thread and one down thread.
>
> I haven't given any deep thought to this issue, but I remember us discussing
> it over beer :-)
> I think one idea was making sure (somehow) that requests traveling over the
> same edge of a geom graph (in the same direction) do so using the same
> queue/thread.  Another idea was to bring in some netgraph-like optimization
> where some (carefully chosen) geom vertices pass requests along by a direct
> call instead of requeueing.
>

Ah, I see that we were thinking about similar things.  Another tactic, and one
that is easier to prototype and implement than moving GEOM to a graph, is to
allow separate but related bio's to be chained.  If a caller, like maybe physio
or the bufdaemon or even a middle geom transform, knows that it's going to send
multiple bio's at once, it chains them together into a single request, and that
request gets pipelined through the stack.  Each layer operates on the entire
chain before requeueing it to the next layer.  Layers/classes that can't
operate this way will get the bio's serialized automatically for them, breaking
the chain, but those won't be the common cases.  This will bring cache-locality
benefits, and is something that is known to benefit high-transaction-load
network applications.
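
To make the chaining idea a bit more concrete, here is a rough sketch of what a
caller might look like.  Everything chain-specific below is hypothetical: the
bio_chain_next link field and the g_io_request_chain() entry point don't exist
anywhere, and nio, chunk, base_offset, buf, my_done and cp are just
placeholders.  The point is only that the caller builds the list once and the
stack then moves it as a unit.

	/*
	 * Build a chain of related bio's and hand the whole thing down
	 * in one call.  bio_chain_next and g_io_request_chain() are
	 * hypothetical; g_new_bio() and the bio fields are the real ones.
	 */
	struct bio *head = NULL, **tailp = &head;
	int i;

	for (i = 0; i < nio; i++) {
		struct bio *bp = g_new_bio();
		if (bp == NULL)
			break;			/* issue what we have */
		bp->bio_cmd = BIO_WRITE;
		bp->bio_offset = base_offset + (off_t)i * chunk;
		bp->bio_length = chunk;
		bp->bio_data = (char *)buf + i * chunk;
		bp->bio_done = my_done;		/* per-bio completion */
		bp->bio_chain_next = NULL;	/* hypothetical link */
		*tailp = bp;
		tailp = &bp->bio_chain_next;
	}

	/* One call instead of nio calls to g_io_request(). */
	g_io_request_chain(head, cp);

A class that doesn't know about chains would have the framework walk the list
and issue ordinary g_io_request() calls on its behalf, which is the automatic
serialization fallback mentioned above.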

Scott



