Date:      Wed, 19 Feb 2014 12:04:51 -0800
From:      Adrian Chadd <adrian@freebsd.org>
To:        Alexander Motin <mav@freebsd.org>
Cc:        Jeffrey Faden <jeffreyatw@gmail.com>, freebsd-current <freebsd-current@freebsd.org>, "freebsd-arch@freebsd.org" <freebsd-arch@freebsd.org>
Subject:   Re: [rfc] bind per-cpu timeout threads to each CPU
Message-ID:  <CAJ-Vmo=KFF_2tdyq1u=jNkWfEe1sR-89t3JNggf7MEvYsF+tQg@mail.gmail.com>
In-Reply-To: <53050D24.3020505@FreeBSD.org>
References:  <530508B7.7060102@FreeBSD.org> <CAJ-VmokQ_C=YVpk41_r-QakB46_RWRe0didq1_RrZBMS7hDX-A@mail.gmail.com> <53050D24.3020505@FreeBSD.org>

On 19 February 2014 11:59, Alexander Motin <mav@freebsd.org> wrote:

>> So if we're moving towards supporting (among others) a pcbgroup / RSS
>> hash style work load distribution across CPUs to minimise
>> per-connection lock contention, we really don't want the scheduler to
>> decide it can schedule things on other CPUs under enough pressure.
>> That'll just make things worse.

> True, though it is also not obvious that putting a second thread on a
> CPU run queue is better than executing it right now on another core.

Well, it depends on whether you're trying to optimise for "run all
runnable tasks as quickly as possible" or "run all runnable tasks in
contexts that minimise lock contention."
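
For the second case the mechanism is just CPU pinning. Roughly the
sketch below, untested, from inside a per-CPU timeout SWI thread; the
helper name is made up for illustration and isn't the actual patch:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/sched.h>

    /*
     * Illustrative only: pin the calling per-CPU timeout thread to
     * "cpu" so the scheduler can no longer migrate it.  sched_bind()
     * operates on curthread and needs the thread lock held.
     */
    static void
    pin_self_to_cpu(int cpu)
    {
            thread_lock(curthread);
            sched_bind(curthread, cpu);
            thread_unlock(curthread);
    }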

The former sounds great as long as there's no real lock contention
going on. But as you add more opportunities for contention (something
like 100,000 concurrent TCP flows), you may end up with the TCP timer
firing interfering with transmit or receive work on the same
connection.

Chasing this stuff down is a pain, because it only really shows up
when you're doing lots of concurrency.

I'm happy to make this a boot-time option and leave it off for the
time being. How's that?
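
Loosely, a loader tunable that defaults to off and gates the bind; the
tunable name here is invented for the example and isn't necessarily
what the patch will end up using:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    /*
     * Hypothetical knob: off by default, settable only from
     * loader.conf (CTLFLAG_RDTUN), checked before the per-CPU
     * timeout threads are started.
     */
    static int pin_pcpu_swi = 0;
    TUNABLE_INT("kern.pin_pcpu_swi", &pin_pcpu_swi);
    SYSCTL_INT(_kern, OID_AUTO, pin_pcpu_swi, CTLFLAG_RDTUN,
        &pin_pcpu_swi, 0, "Pin per-CPU timeout threads to their CPU");

    /* At per-CPU thread start-up, something like: */
    static void
    maybe_pin(int cpu)
    {
            if (pin_pcpu_swi)
                    pin_self_to_cpu(cpu);   /* from the earlier sketch */
    }

Then "kern.pin_pcpu_swi=1" in /boot/loader.conf turns it on for people
who want to test it, and everyone else keeps the current behaviour.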



-a


