Date:      Tue, 6 Dec 2011 22:06:25 +0100
From:      Luigi Rizzo <rizzo@iet.unipi.it>
To:        Daniel Kalchev <daniel@digsys.bg>
Cc:        Jack Vogel <jfvogel@gmail.com>, current@freebsd.org
Subject:   Re: datapoints on 10G throughput with TCP ?
Message-ID:  <20111206210625.GB62605@onelab2.iet.unipi.it>
In-Reply-To: <F5BCA7E9-6A61-4492-9F18-423178E9C9B4@digsys.bg>
References:  <20111205192703.GA49118@onelab2.iet.unipi.it> <2D87D847-A2B7-4E77-B6C1-61D73C9F582F@digsys.bg> <20111205222834.GA50285@onelab2.iet.unipi.it> <4EDDF9F4.9070508@digsys.bg> <4EDE259B.4010502@digsys.bg> <CAFOYbcmVR_K0iZU_Z4TxDVzPzx6-GZuzfCxUZbf6KQn4siF2UA@mail.gmail.com> <F5BCA7E9-6A61-4492-9F18-423178E9C9B4@digsys.bg>

On Tue, Dec 06, 2011 at 07:40:21PM +0200, Daniel Kalchev wrote:
> I see a significant difference in the number of interrupts on the Intel and the AMD blades. When performing a test between the Intel and AMD blades, the Intel blade generates 20,000-35,000 interrupts, while the AMD blade generates under 1,000 interrupts.
> 

Even in my experiments there is a lot of instability in the results.
I don't know exactly where the problem is, but the high number of
read syscalls, and the huge impact of setting interrupt_rate=0
(the default is 16us on ixgbe), make me think that there is
something in the protocol stack that needs investigation.
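
For reference, the knob in question can be poked from userland with
sysctlbyname(); a minimal sketch, assuming the per-queue OID is
dev.ix.0.queue0.interrupt_rate (the exact name varies with the
driver version, and writing it needs root):

/* ixrate.c - read (and optionally set) an ixgbe per-queue interrupt rate */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char **argv)
{
	/* Assumed OID; adjust to whatever your driver actually exposes. */
	const char *oid = "dev.ix.0.queue0.interrupt_rate";
	int rate;
	size_t len = sizeof(rate);

	if (sysctlbyname(oid, &rate, &len, NULL, 0) == -1) {
		fprintf(stderr, "read %s: %s\n", oid, strerror(errno));
		return (1);
	}
	printf("%s = %d\n", oid, rate);

	if (argc > 1) {
		int newrate = atoi(argv[1]);	/* 0 disables mitigation */
		if (sysctlbyname(oid, NULL, NULL, &newrate,
		    sizeof(newrate)) == -1) {
			fprintf(stderr, "set %s: %s\n", oid, strerror(errno));
			return (1);
		}
	}
	return (0);
}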

Of course we don't want to optimize specifically for the one-flow-at-10G
case, but devising something that makes the system less sensitive
to short timing variations, and that can propagate interrupt
mitigation delays up the stack, would help.
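
One way to see these effects from userspace is to count how many
bytes each read() returns on the receiving side, which roughly shows
how much data the stack coalesces per wakeup. A toy receiver sketch
(port and buffer size are arbitrary; this is not the tool used for
the numbers in this thread):

/* rcv.c - accept one TCP connection and report bytes per read() */
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define PORT	12345		/* arbitrary test port */
#define BUFSZ	(256 * 1024)	/* large buffer: fewer, bigger reads */

int
main(void)
{
	char *buf = malloc(BUFSZ);
	struct sockaddr_in sin;
	int s, c, on = 1;
	ssize_t n;
	unsigned long long bytes = 0, reads = 0;

	s = socket(AF_INET, SOCK_STREAM, 0);
	setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(PORT);
	sin.sin_addr.s_addr = htonl(INADDR_ANY);
	if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1) {
		perror("bind");
		return (1);
	}
	listen(s, 1);
	c = accept(s, NULL, NULL);

	while ((n = read(c, buf, BUFSZ)) > 0) {
		bytes += n;
		reads++;
	}
	/* average read size hints at how much the stack batches per wakeup */
	printf("%llu bytes in %llu reads (%.1f KB/read)\n", bytes, reads,
	    reads ? (double)bytes / reads / 1024.0 : 0.0);
	close(c);
	close(s);
	free(buf);
	return (0);
}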

I don't have a solution yet.

cheers
luigi


