Date:      Sun, 20 Dec 2009 13:46:54 +0000
From:      "Robert N. M. Watson" <rwatson@FreeBSD.org>
To:        Harti Brandt <harti@freebsd.org>
Cc:        Ulrich Spörlein <uqs@spoerlein.net>, Hans Petter Selasky <hselasky@c2i.net>, freebsd-arch@freebsd.org
Subject:   Re: network statistics in SMP
Message-ID:  <5230C2B2-57A5-4982-928A-43756BF8C1C4@FreeBSD.org>
In-Reply-To: <20091220134738.V46221@beagle.kn.op.dlr.de>
References:  <20091215103759.P97203@beagle.kn.op.dlr.de> <200912151313.28326.jhb@freebsd.org> <20091219112711.GR55913@acme.spoerlein.net> <200912191244.17803.hselasky@c2i.net> <20091219232119.L1555@besplex.bde.org> <20091219164818.L1741@beagle.kn.op.dlr.de> <alpine.BSF.2.00.0912201202520.73550@fledge.watson.org> <20091220134738.V46221@beagle.kn.op.dlr.de>


On 20 Dec 2009, at 13:19, Harti Brandt wrote:

> RW>Frequent writes to the same cache line across multiple cores are remarkably
> RW>expensive, as they trigger the cache coherency protocol (mileage may vary).
> RW>For example, a single non-atomically incremented counter cut the performance
> RW>of gettimeofday() to 1/6th on an 8-core system when parallel system
> RW>calls were made across all cores.  On many current systems, the cost of an
> RW>"atomic" operation is now fairly reasonable as long as the cache line is held
> RW>exclusively by the current CPU.  However, if we can avoid them that has
> RW>value, as we update quite a few global stats on the way through the network
> RW>stack.
>
> Hmm. I'm not sure that gettimeofday() is comparable to forwarding an IP
> packet. I would expect that a single increment is a good percentage of
> the entire processing (in terms of the number of operations) for
> gettimeofday(), while in IP forwarding it is somewhere in the noise
> floor. In the simplest case the packet is acted upon by the receiving
> driver, the IP input function, the IP output function and the sending
> driver. Not talking about IP filters, firewalls, tunnels, dummynet and
> what else. The relative cost of the increment should be much less. But I
> may be wrong, of course.

If processing is occurring on multiple CPUs -- for example, you are
receiving UDP from two ithreads -- then 4-8 cache lines being contended
due to stats is a lot. Our goal should be (for 9.0) to avoid having any
contended cache lines in the common case when processing independent
streams on different CPUs.
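
To make that concrete, the sort of thing I have in mind looks roughly like
the sketch below (a userland C illustration only -- NCPU, CACHE_LINE and the
struct/function names are made up for the example, not a proposed kernel
API): each CPU gets its own cache-line-padded counter slot, the fast path
does a plain increment of its own slot with no atomics, and the read side
folds the slots together.

#include <stdint.h>
#include <stdio.h>

#define NCPU        8       /* assumed CPU count, for the example only */
#define CACHE_LINE  64      /* typical line size; varies by platform */

/* One counter per CPU, padded so two CPUs never share a cache line. */
struct pcpu_counter {
	uint64_t	val;
	char		pad[CACHE_LINE - sizeof(uint64_t)];
};

static struct pcpu_counter ip_forwarded[NCPU];

/* Update path: plain increment of this CPU's private slot, no atomics. */
static void
count_forwarded(int cpu)
{
	ip_forwarded[cpu].val++;
}

/* Report path: fold the per-CPU slots into one total when queried. */
static uint64_t
read_forwarded(void)
{
	uint64_t sum = 0;

	for (int i = 0; i < NCPU; i++)
		sum += ip_forwarded[i].val;
	return (sum);
}

int
main(void)
{
	count_forwarded(0);
	count_forwarded(3);
	printf("forwarded: %ju\n", (uintmax_t)read_forwarded());
	return (0);
}

The folded total can be slightly stale while other CPUs are still counting,
but for statistics that is the usual trade-off for keeping the update path
contention-free.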

> I would really like to sort that out before any kind of ABI freeze
> happens. Ideally all the statistics would be accessible via sysctl(), have
> a version number, and have all or most of the required statistics, with a
> simple way to add new fields without breaking anything. Also the field
> sizes (64 vs. 32 bit) should be correct on the kernel-user interface.
>
> My current feeling after reading this thread is that the low-level kernel
> side stuff is probably beyond what I could do with the time I have and
> would sidetrack me too far from the work on bsnmp. What I would like to do
> is to fix the kernel/user interface and let the people that know how to do
> it handle the low-level side.
>
> I would really not like to have to deal with a changing user/kernel
> interface in current if we go in several steps with the kernel stuff.

I think we should treat the statistics gathering and statistics
reporting interfaces as entirely separable problems. Statistics are
updated far more frequently than they are queried, so making the query
process a bit more expensive (reformatting from an efficient 'update'
format to an application-friendly 'report' format) should be fine.
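
As a rough illustration of that split (the struct names, fields and the
version constant below are invented for the example, not a proposed ABI):
the 'update' side is whatever is cheapest to bump per-CPU, and the query
path reformats it into a versioned, fixed-width 'report' structure for
export to userland.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NCPU		8	/* assumed CPU count, for the example only */
#define IPSTAT_VERSION	1	/* hypothetical report-format version */

/* Per-CPU 'update' format: whatever is cheapest to increment in the stack. */
struct ipstat_pcpu {
	uint64_t	ips_total;
	uint64_t	ips_forwarded;
	uint64_t	ips_delivered;
};

/* 'Report' format: versioned, fixed 64-bit fields, stable towards userland. */
struct ipstat_report {
	uint32_t	ipr_version;
	uint32_t	ipr_pad;
	uint64_t	ipr_total;
	uint64_t	ipr_forwarded;
	uint64_t	ipr_delivered;
};

static struct ipstat_pcpu ipstat[NCPU];

/* Query path: fold the per-CPU counters into the exported report. */
static void
ipstat_export(struct ipstat_report *rep)
{
	memset(rep, 0, sizeof(*rep));
	rep->ipr_version = IPSTAT_VERSION;
	for (int i = 0; i < NCPU; i++) {
		rep->ipr_total += ipstat[i].ips_total;
		rep->ipr_forwarded += ipstat[i].ips_forwarded;
		rep->ipr_delivered += ipstat[i].ips_delivered;
	}
}

int
main(void)
{
	struct ipstat_report rep;

	ipstat[0].ips_total = 10;	/* pretend CPU 0 counted some packets */
	ipstat[1].ips_total = 5;	/* ... and CPU 1 as well */
	ipstat_export(&rep);
	printf("version %u, total %ju\n", rep.ipr_version,
	    (uintmax_t)rep.ipr_total);
	return (0);
}

New fields can then be appended to the report structure and the version
bumped without disturbing how the kernel keeps its own counters.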

One question to think about is whether simple cross-CPU summaries
are sufficient, or whether we actually also want to be able to directly
monitor per-CPU statistics at the IP layer. The former would maintain
the status quo, making per-CPU behavior purely part of the 'update' step;
the latter would change the 'report' format as well. I've been focused
primarily on 'update', but at least for my work it would be quite
helpful to have per-CPU stats in the 'report' format as well.

> I will try to come up with a patch for the kernel/user interface in the
> mean time. This will be for 9.x only, obviously.

Sounds good -- and the kernel stats capture can "grow into" the full
report format as it matures.

> Doesn't this help for output only? For the input statistics there still
> will be per-ifnet statistics.

Most ifnet-layer stats should really be per-queue, both for input and
output, which may help.
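
Roughly what I mean, as a sketch only (the queue count and the struct and
function names are invented for the example): each receive/transmit queue
keeps its own counters, updated only by the thread that drives that queue,
and the traditional per-ifnet totals are computed by summing over the
queues when someone asks.

#include <stdint.h>
#include <stdio.h>

#define IFQ_MAX	4	/* assumed queue count, for the example only */

struct ifqueue_stats {
	uint64_t	ifq_packets;
	uint64_t	ifq_bytes;
	uint64_t	ifq_errors;
};

struct example_ifnet {
	struct ifqueue_stats	if_rxq[IFQ_MAX];
	struct ifqueue_stats	if_txq[IFQ_MAX];
};

/* Input path: the ithread driving queue 'q' touches only that queue's slots. */
static void
if_input_count(struct example_ifnet *ifp, int q, uint64_t len)
{
	ifp->if_rxq[q].ifq_packets++;
	ifp->if_rxq[q].ifq_bytes += len;
}

/* Report path: fold the per-queue counters into interface-wide totals. */
static void
if_sum_rx(const struct example_ifnet *ifp, struct ifqueue_stats *out)
{
	out->ifq_packets = out->ifq_bytes = out->ifq_errors = 0;
	for (int q = 0; q < IFQ_MAX; q++) {
		out->ifq_packets += ifp->if_rxq[q].ifq_packets;
		out->ifq_bytes += ifp->if_rxq[q].ifq_bytes;
		out->ifq_errors += ifp->if_rxq[q].ifq_errors;
	}
}

int
main(void)
{
	struct example_ifnet ifp = { 0 };
	struct ifqueue_stats total;

	if_input_count(&ifp, 0, 1500);
	if_input_count(&ifp, 2, 64);
	if_sum_rx(&ifp, &total);
	printf("rx packets %ju, bytes %ju\n",
	    (uintmax_t)total.ifq_packets, (uintmax_t)total.ifq_bytes);
	return (0);
}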

> An interesting question from the SNMP point of view is what happens to
> the statistics if you move interfaces around between vimages. In any case
> it would be good if we could abstract from all the complications while
> going kernel->userland.

At least for now, the interface is effectively recreated when it moves
between vimages, and only the current vimage is able to monitor it. That
could be considered a bug, but it might also be a simplifying assumption
or even a feature. Likewise, it's worth remembering that the ifnet index
space is per-vimage.

Robert


