Date:      Sun, 16 Mar 2003 15:48:34 -0800
From:      Terry Lambert <tlambert2@mindspring.com>
To:        Petri Helenius <pete@he.iki.fi>
Cc:        freebsd-current@FreeBSD.ORG
Subject:   Re: mbuf cache
Message-ID:  <3E750D52.FFA28DA2@mindspring.com>
References:  <0ded01c2e295$cbef0940$932a40c1@PHE> <20030304164449.A10136@unixdaemons.com> <0e1b01c2e29c$d1fefdc0$932a40c1@PHE> <20030304173809.A10373@unixdaemons.com> <0e2b01c2e2a3$96fd3b40$932a40c1@PHE> <20030304182133.A10561@unixdaemons.com> <0e3701c2e2a7$aaa2b180$932a40c1@PHE> <20030304190851.A10853@unixdaemons.com> <001201c2e2ee$54eedfb0$932a40c1@PHE> <20030307093736.A18611@unixdaemons.com> <008101c2e4ba$53d875a0$932a40c1@PHE> <3E68ECBF.E7648DE8@mindspring.com> <3E70813B.7040504@he.iki.fi>

Petri Helenius wrote:
> Terry Lambert wrote:
> >Ah.  You are receiver livelocked.  Try enabling polling; it will
> >help up to the first stall barrier (NETISR not getting a chance
> >to run protocol processing to completion because of interrupt
> >overhead); there are two other stall barriers after that, and
> >another in user space is possible depending on whether the
> >application layer is request/response.
> 
> Are you sure that polling would help even since the em driver is using
> interrupt regulation by default?

You mean hardware interrupt coalescing, not regulation.  Regulation
is where you prevent the card from generating interrupts during a
livelock situation, to permit the host to process the data it already
has in the pipeline.

It will help some.  Instead of livelocking because the interrupt
load never lets NETISR run, it will livelock where NETISR attempts
to push data to user space that is never read, because the user
space process never gets to run: interrupts, and now NETISR
processing, are taking all the CPU time.

You can get to this same point in -CURRENT, if you are using up to
date sources, by enabling direct dispatch, which disables NETISR.
This will help somewhat more than polling, since it removes the
normal timer latency between receipt of a packet and processing of
the packet through the network stack.  This should reduce overall
pool retention time for individual mbufs that don't end up on a
socket so_rcv queue.  Because interrupts on the card are not
acknowledged until the code runs to completion, this also tends to
regulate interrupt load.
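For reference, the knobs involved look roughly like this on a
2003-era box (sysctl names from memory; verify them against your
own sources before relying on them):

```shell
# Device polling (requires "options DEVICE_POLLING" and HZ tuning
# in the kernel config)
sysctl kern.polling.enable=1

# Direct dispatch: run protocol processing to completion in the
# interrupt context instead of queueing packets to NETISR
sysctl net.isr.enable=1
```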

This also has the desirable side effect that stack processing will
occur on the same CPU the interrupt processing occurred on.  This
avoids inter-CPU memory bus arbitration cycles, and ensures that
you won't engage in a lot of unnecessary L1 cache busting.  Hence
I prefer this method to polling.


> It might solve the livelock but it does
> probably not increase the performance of the mbuf allocator?

No, it does not increase the performance of the mbuf allocator.

The main problem with the mbuf allocator as it stands today is
that there is a tradeoff between how fast you can make it, and
whether or not it's SMP safe.

There is a researcher at the University of Kentucky, to whom I have
explained a number of obscure details of the VM system, who has
implemented a freelist allocator and gotten a 5 times performance
increase on his TCP stack.  I'm not sure if he'd be willing to
share his research with you or anyone else, but if you read back
over my own postings regarding mbuf allocators, you should be able
to repeat the code development that he has done.  Note that his
allocator is not SMP safe, and is probably antithetical to the
whole idea.

Personally, I'm coming to the conclusion that SMP systems should
be treated as NUMA machines, with separately allocated resources
and, potentially, even separate OS images.  Until memory and I/O
bus speeds catch up with CPU speeds again, the cost of resource
contention stalls is so incredibly high, because of the speed
multipliers, as to make it not really worth running SMP systems.
You will get much better load capacity scaling out of two cheaper
boxes, if you implement correctly, IMO.

-- Terry
