Date:      Thu, 23 Apr 2009 15:08:38 +0200
From:      Ivan Voras <ivoras@freebsd.org>
To:        Robert Watson <rwatson@freebsd.org>
Cc:        svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org
Subject:   Re: svn commit: r191405 - in head/sys: amd64/amd64 i386/i386
Message-ID:  <9bbcef730904230608t7c91629erda244dbd38e617a8@mail.gmail.com>
In-Reply-To: <alpine.BSF.2.00.0904231326240.54334@fledge.watson.org>
References:  <200904222140.n3MLebn3068260@svn.freebsd.org> <alpine.BSF.2.00.0904231253140.54334@fledge.watson.org>  <9bbcef730904230501k26197958tb78d88958bd20654@mail.gmail.com>  <alpine.BSF.2.00.0904231326240.54334@fledge.watson.org>

2009/4/23 Robert Watson <rwatson@freebsd.org>:
>
> On Thu, 23 Apr 2009, Ivan Voras wrote:
>
>> 2009/4/23 Robert Watson <rwatson@freebsd.org>:
>>
>>> Do you have any ideas about ways to usefully represent and manage
>>> concepts like "pick a close CPU" or "the set of CPUs that are close"? =
=C2=A0For
>>> example, if I have available a flow identifier, hashing to one of a set=
 of
>>> available CPUs is easy, but what would you suggest as an efficient
>>> representation to hash from a set of close available CPUs rather than t=
he
>>> entire pool?
>>
>> Excuse me if I'm missing the point but isn't this already done by ULE an=
d
>> for almost the same reasons? Shouldn't the scheduler (or its topology
>> infrastructure if it's separated from the scheduler) be the best place t=
o do
>> it?
>
> Yes, the scheduler will presumably provide the abstractions we're interes=
ted
> in in order to implement this sort of policy. =C2=A0However, the schedule=
r's
> notion of "strictly ordered events" is represented by a thread, and threa=
ds
> scale to several thousand per machine. =C2=A0The network stack's notion o=
f
> "strictly ordered events" is represented by a flow, and we need to be abl=
e
> to handle millions of those at once. =C2=A0The mapping between flows and =
threads
> is something the network stack is best suited to do, since it will be
> passing around and ordering the work, but with the help of appropriate
> abstractions from the scheduler so that it knows how many and which threa=
ds
> exist to do the work, and so that it can use scheduler-provided metrics,
> such as CPU topology, to make reasonable choices about placement of work
> when there's flexibility in the ordering.

Thanks, it's much clearer now!
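
Just to check my understanding, is something like the sketch below the
kind of mapping you have in mind? Rough sketch only -- the names here
(struct cpu_group_map, flow_hash32(), flow_pick_cpu()) are made up for
illustration, not existing kernel APIs. The idea is that the scheduler
exports groups of "close" CPUs (say, a package or a shared cache), and
the stack hashes each flow ID over the workers of one group instead of
over the entire pool, so every packet of a flow stays strictly ordered
on one worker while placement still follows topology.

/*
 * Hypothetical sketch: hash a flow onto a CPU within one "close"
 * group supplied by the scheduler, rather than across all CPUs.
 */
#include <stdint.h>

#define MAX_GROUP_CPUS	8	/* assumed upper bound per group */

struct cpu_group_map {
	int	ncpus;			/* CPUs in this "close" group, >= 1 */
	int	cpu_ids[MAX_GROUP_CPUS];	/* one worker bound per CPU */
};

/* Simple 32-bit mixing hash; any decent flow hash would do here. */
static uint32_t
flow_hash32(uint32_t flowid)
{

	flowid ^= flowid >> 16;
	flowid *= 0x85ebca6b;
	flowid ^= flowid >> 13;
	return (flowid);
}

/*
 * Pick a CPU for a flow from the group close to the CPU that took
 * the packet: the modulo keeps all packets of one flow on the same
 * worker, preserving per-flow ordering.
 */
static int
flow_pick_cpu(const struct cpu_group_map *grp, uint32_t flowid)
{

	return (grp->cpu_ids[flow_hash32(flowid) % grp->ncpus]);
}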


